
Is the new Mac Pro a failure for traditional Mac creative and professional customers?


Actually, if you have issues with build time, it's usually more complex than core count. Build performance often depends more on disk access than on CPU/RAM. Thread count is secondary too, since few compiler tasks are multithreaded (each source file is compiled by a single thread), not all source files can be built at the same time (some have to wait for others to compile first), and unless you do a full rebuild, the IDE only compiles the modified files and links against the previously built binaries. So I still don't see the hurry for a multi-core, multi-socket developer system; the biggest impact comes from the storage system. I do remember that the fastest Xcode development machine right now is the late 2015 iMac Retina 27" with full SSD and the 4.2 GHz i7. It leaves the previous iMac Retina far behind, and the Mac Pro far, far behind, because the new iMac 27" SSD is more than twice as fast and that i7 has the fastest single-thread speed (long integers or double precision).

Then I'm really hoping for that iMac type of storage in the next Mac Pro :) The 2013 is a code-crunching monster for me. It does have more cores available than the iMac, so I wonder how the two compare when compiling. I'm really excited for the next version of the Mac Pro, since it will have such fast storage and hopefully E5 v4s - which kind of makes this post a bit of a moot point (for most developers), considering it's a single CPU with so many more cores available. Though what Apple will make available is a different question entirely.
 
I have two pieces of news for you, one good and one bad. The good first: the 2016 nMP should arrive with NVMe storage, even faster than the PCIe SSD in the late '15 iMac 27".

Now the bad news: Xeon E5 v4 single-thread performance is even worse than E5 v2. The 4-core E5 v2 is faster on single thread than the 6-core E5 v4 (there is no 4-core E5 v4), while the 6-core E5 v4 will be faster on multi-thread than the 4-core E5 v2 at the same TDP. This is not good news for programmers, since in Xcode many compilation tasks are by definition single-threaded and multiple cores don't speed them up.

Further, consider that at WWDC Apple could also launch a truly new iMac line supporting USB-C and Target Display Mode (along with a USB-C 5K Cinema Display). Some analysts consider that Apple will also upgrade the iMac to DDR4 (not enough was available for the late '15 launch). Also, the new AMD R9 Nano has a lower TDP than the much slower M395X, so it could be added to the iMac Retina in mid '16 or late '16.
 
oh.. i said that in regards to jmo's post in which he mentioned the need for cpu power.. those types of scenarios can go to the cloud for much faster (and cheaper) performance than any personal computer can offer.

re: 'nobody here cares about developers'..
hmm.. maybe so but personally, i care about developers more than anything else with regards to enhancing my computing experience.. the design of an application has much more impact on how fast/fluid i can work than any piece of hardware will be able to do.

that aside, the developers i'm in contact with don't use the most awesome hardware out there to write/test their programs on.. Rhino for Mac is being done on a mbp.. the head dev for indigo renderer is testing the openCL rewrite on an HD7770..

as far as i can gather, developers generally don't write the apps with the idea of 'my users are going to need the latest & greatest hardware in order for this to work well for them'.. they write so that their app works well on most people's systems.. not so their users will be required to drop $10k in order to use the program.

I hesitated to put my two cents in, because I'm really here on my own as a personal enthusiast, but it just started to really annoy me that nowhere in any of this discussion was there any attention paid to developers, and even just sticking my hand up and pointing out a few things related to developers got ignored. Didn't mean to bang on you particularly.

The end users of the stuff I write don't need a high end machine. But the test suite has to exercise each and every function the user might choose, as well as any internal code.

I'll go back in my hole now, and those unconvinced that some developers really could use dual CPU's can carry on as usual.
 
I have two pieces of news for you, one good and one bad. The good first: the 2016 nMP should arrive with NVMe storage, even faster than the PCIe SSD in the late '15 iMac 27".

Now the bad news: Xeon E5 v4 single-thread performance is even worse than E5 v2. The 4-core E5 v2 is faster on single thread than the 6-core E5 v4 (there is no 4-core E5 v4), while the 6-core E5 v4 will be faster on multi-thread than the 4-core E5 v2 at the same TDP. This is not good news for programmers, since in Xcode many compilation tasks are by definition single-threaded and multiple cores don't speed them up.

Further, consider that at WWDC Apple could also launch a truly new iMac line supporting USB-C and Target Display Mode (along with a USB-C 5K Cinema Display). Some analysts consider that Apple will also upgrade the iMac to DDR4 (not enough was available for the late '15 launch). Also, the new AMD R9 Nano has a lower TDP than the much slower M395X, so it could be added to the iMac Retina in mid '16 or late '16.

As long as new things come out and they keep getting faster it's all good news for me. If it's an iMac so be it. As a developer I've got skin in this game so the fastest platforms will not only pay for themselves but make my life easier. My old Mac Pro made me want to scream in Xcode, it would take SO LONG to do anything, it was frustrating - especially with the regular development frustration that comes with the daily life of a programmer. I had a choice to either drop $500 into a machine worth 1k or get the 2013 - which is why I think it's not a failure b/c it kicks butt for me.

I'm not sure where I read it, but I believe Xcode will use as many threads as are made available by OS X. I think things like SourceKit must be done in the background.

Does the e5 v4 have the ability to scale clock speed based on the number of active cores?
 
I hesitated to put my two cents in, because I'm really here on my own as a personal enthusiast, but it just started to really annoy me that nowhere in any of this discussion was there any attention paid to developers, and even just sticking my hand up and pointing out a few things related to developers got ignored. Didn't mean to bang on you particularly.

The end users of the stuff I write don't need a high end machine. But the test suite has to exercise each and every function the user might choose, as well as any internal code.

I'll go back in my hole now, and those unconvinced that some developers really could use dual CPU's can carry on as usual.

It's a worthy topic!
 
As long as new things come out and they keep getting faster it's all good news for me. If it's an iMac so be it. As a developer I've got skin in this game so the fastest platforms will not only pay for themselves but make my life easier. My old Mac Pro made me want to scream in Xcode, it would take SO LONG to do anything, it was frustrating - especially with the regular development frustration that comes with the daily life of a programmer. I had a choice to either drop $500 into a machine worth 1k or get the 2013 - which is why I think it's not a failure b/c it kicks butt for me.

I'm not sure where I read it, but I believe Xcode will use as many threads as are made available by OS X. I think things like SourceKit must be done in the background.

Does the e5 v4 have the ability to scale clock speed based on the number of active cores?
I only read the Geekbench (or similar) scores, and the single-thread score on the baseline E5 v4 is slower. Presumably, if the Xeon has clock-scaling features for single-threaded activity, those were enabled during the test.

I don't know where desktop computing will go from here. The 64-bit architecture is quite old; we should have 128- or 256-bit systems by now (as GPUs have had for a long time - there are even GPUs with 384-bit words). The "post-PC" era has been nothing more than a marketing fiasco; tablets are still just oversized phones (and phablet phones have actually slashed their market).

OS X evolution lacks true innovation - just small improvements every year and adoption of technology already available in standalone apps.

Let's see. At least workstations still allow us to get the work done, but I see no real innovation - hence the decline of the PC market (to say nothing of the awful market for "post-PC" devices).
 
I hesitated to put my two cents in, because I'm really here on my own as a personal enthusiast, but it just started to really annoy me that nowhere in any of this discussion was there any attention paid to developers, and even just sticking my hand up and pointing out a few things related to developers got ignored. Didn't mean to bang on you particularly.

The end users of the stuff I write don't need a high end machine. But the test suite has to exercise each and every function the user might choose, as well as any internal code.

I'll go back in my hole now, and those unconvinced that some developers really could use dual CPU's can carry on as usual.

Yeah, I’m not sure why some here just brushed you off as they did. We all only really know our own situations. People that just basically say, “no, you’re doing it wrong”, clearly just want to stick to their narrative. I’ve known developers in your situation at very prominent tech companies; they had, and some still have, fleets of dual-socket cMPs. They would develop one segment of the code, but then need to essentially run a simulation of how the new code works with everything else on the site. That’s no small task for large web services, and a lot of the time the company wanted it done independently on workstations, not on the big cluster actually running the real service.

Anyway, every use case is different, and “mago” knows exactly squat about you or me and chooses to bury his/her head in the sand when we try to discuss it. So just ignore the fool.
 
you have easy access to supercomputers these days.
way faster and cheaper to use than a pc 'crammed' full of cpu cores.


edit-
i put crammed in quotes because 32cores sounds funny when you compare it to one of these things..
for example, the computer i render on sometimes is this:

-------
btw we have new hardware
650 new dual 16core V3 xeons and 64gb ram each

in total "just" 1500 rendernodes :p
-------

..and it's cheaper than me buying a 12-core computer for rendering.

It never hurts to have extra CPU cores. Where can you lease supercomputer time from? Cloud-based rendering?
I've seen some production houses set up servers to queue video transcoding and uploads. The bottleneck is the LAN and the disk access times.
Cool hardware upgrade! :)
 
You guys do realize that Mago is talking almost complete nonsense, right?

(and I know that kind of thing is said figuratively a lot around here, but he's literally writing nonsense)
Too bad that Mago doesn't realize that the E5 is already a 256-bit architecture, and that there is no need for more address space.
 
It never hurts to have extra CPU cores. Where can you lease supercomputer time from? Cloud-based rendering?
I've seen some production houses set up servers to queue video transcoding and uploads. The bottleneck is the LAN and the disk access times.
Cool hardware upgrade! :)

https://us.rebusfarm.net/en/

that's one place to go for anything animation/3d rendering.. they support all major players (modeling/rendering apps)..

i've also used ranch:
http://www.ranchcomputing.com/en/

great results but rebus has made it a little easier because they integrate into the applications themselves.. i.e.- there's a button to push while in the app window itself which sends then receives the images.. like- literally.. one mouse click.
 
https://us.rebusfarm.net/en/

that's one place to go for anything animation/3d rendering.. they support all major players (modeling/rendering apps)..

i've also used ranch:
http://www.ranchcomputing.com/en/

great results but rebus has made it a little easier because they integrate into the applications themselves.. i.e.- there's a button to push while in the app window itself which sends then receives the images.. like- literally.. one mouse click.

Informative - thank you for sharing this with the community. Cloud farming could be a cost-effective solution for smaller businesses.
 
But the test suite has to exercise each and every function the user might choose, as well as any internal code.

I'll go back in my hole now, and those unconvinced that some developers really could use dual CPU's can carry on as usual.

As a developer user of the 2013 myself, I'm not unconvinced that some developers could use dual CPUs for testing. For me, the two main reasons a developer would want an nMP over the convenience of an iMac or a MBP are running tests and hosting VMs. There are plenty of other reasons too, but if you try to do those on a lesser model, they slow to a crawl and the fans go into overdrive. However, I personally haven't needed to go over 6 cores, and if it had been an issue, I could have gone up to 12 cores before needing a dual-CPU model.

I'd also say, as a developer, the form factor, the quietness, and the ability to run multiple 4K displays are very compelling. An interesting thing I've seen over the years is that, unless they're gamers, there's less and less interest among developers in opening up a computer and doing any sort of upgrades.

In any event, I believe that Apple intended the nMP to be as much, if not more, for developers as for other pros, which is why it was introduced at WWDC. Also, as much as I love the nMP, I suspect that Apple might more successfully address the power-developer market, from a unit-volume standpoint, with a high-end MBP that has the mobile Xeon, or with an iMac Pro, and I think that Apple may be grappling with whether to just cancel the Pro altogether and simply do that. (I really doubt an expandable tower is ever coming back. We can't even get Apple to reliably test their latest OS X releases with third-party 4K monitors - what makes you think they have any appetite whatsoever for a computer that requires testing with third-party GPU PCIe cards, drives, etc.?)
 
Too bad that Mago doesn't realize that the E5 is already a 256-bit architecture, and that there is no need for more address space.
Right - the 64-bit memory address space will likely be enough for our lifetime, and with the 128/256/512-bit AVX extensions, etc., areas like FP calculations are already well beyond 64-bit.

But trying to refute all the nonsense just gives him another opening to talk more nonsense… it’s pointless.
 
As a developer user of the 2013 myself, I'm not unconvinced that some developers could use dual CPUs for testing. For me, the two main reasons a developer would want an nMP over the convenience of an iMac or a MBP are running tests and hosting VMs. There are plenty of other reasons too, but if you try to do those on a lesser model, they slow to a crawl and the fans go into overdrive. However, I personally haven't needed to go over 6 cores, and if it had been an issue, I could have gone up to 12 cores before needing a dual-CPU model.

Great statement! One added advantage of a second CPU is the additional memory and memory bandwidth. The cores may not be necessary, but there are advantages, especially with VMs or other multithreaded apps.
 
I don't know where desktop computing will go from here.
new input/output devices. (die mouse die! :) )
new paradigms of working (not so reliant on cursor pointing/icons/menus/command line).
new display technology (moving away from flat 2D display)

etcetc..

the type of stuff apple is referring to when saying things like 'customers don't know what they want'.

(not meaning to say apple are the only people capable of bringing new tech to the table.. for the most part, apple's method in this realm is simply recognizing potential in a smaller company then buying that company/technology with the means to put a lot more money and creative minds behind the project)
 
Too bad that Mago doesn't realize that the E5 is already a 256-bit architecture, and that there is no need for more address space.
Xeons still have a 64-bit architecture (ia64); it has nothing to do with buses, but with programming. Of the 64-bit address space, no CPU actually uses more than 48 bits, but programmatically 64 bits are used. My claim is for a 128-bit instruction set (as on GPUs), which allows manipulating twice the data with each instruction.
 
Xeons still have a 64-bit architecture (ia64); it has nothing to do with buses, but with programming. Of the 64-bit address space, no CPU actually uses more than 48 bits, but programmatically 64 bits are used. My claim is for a 128-bit instruction set (as on GPUs), which allows manipulating twice the data with each instruction.

ia64 is the Itanium instruction set. Xeons, along with AMD's 64-bit chips, use the x86-64 instruction set.
You're correct that the Xeon is a 64-bit processor, though.

http://ark.intel.com/products/83351/Intel-Xeon-Processor-E5-2618L-v3-20M-Cache-2_30-GHz

You can go to the equivalent page for any of the E5-2600 processors, and they state 64-bit.
No doubt some people here will disagree with the manufacturer of the product, but there's not a lot you can do about that.

Some additional features such as AVX2 have 256-bit registers, and AVX-512 will expand this to 512-bit registers, but the general-purpose registers are still 64-bit - hence why Intel, who actually design and make the processors, call them 64-bit (but what does Intel know).
 
new input/output devices. (die mouse die! :) )
new paradigms of working (not so reliant on cursor pointing/icons/menus/command line).
new display technology (moving away from flat 2D display)

etcetc..

the type of stuff apple is referring to when saying things like 'customers don't know what they want'.

(not meaning to say apple are the only people capable of bringing new tech to the table.. for the most part, apple's method in this realm is simply recognizing potential in a smaller company then buying that company/technology with the means to put a lot more money and creative minds behind the project)
VR is coming on strong as a development area, as is AI, but I still see those as specialized fields rather than mainstream.

We'll see.

I expect from the next Mac Pro:

* Unchanged form factor.
* 128 GB DDR4 ECC as a factory option.
* Xeon E5 v4, from 6 to 12/16 cores.
* Fiji-based GPUs with twice the power of the current D700, 12 GB each (hopefully ECC).
* NVMe SSD at 2.5 GB/s, maybe a dual SSD socket.
* 6 USB-C/Thunderbolt 3/DP ports.
* 4 legacy Thunderbolt 2 ports.
* 1 HDMI 2.0.
* 600 W PSU and a more powerful fan.
* 10 Gbit Ethernet instead of dual gigabit, or 1x10Gb plus 1x1Gb.
* Updated Wi-Fi and Bluetooth as on the iMac line.

Also I hope to see that 5K USB-C Cinema Display.
 
VR is coming on strong as a development area, as is AI, but I still see those as specialized fields rather than mainstream.

We'll see.

I expect from the next Mac Pro:

* Unchanged form factor.
* 128 GB DDR4 ECC as a factory option.
* Xeon E5 v4, from 6 to 12/16 cores.
* Fiji-based GPUs with twice the power of the current D700, 12 GB each (hopefully ECC).
* NVMe SSD at 2.5 GB/s, maybe a dual SSD socket.
* 6 USB-C/Thunderbolt 3/DP ports.
* 4 legacy Thunderbolt 2 ports.
* 1 HDMI 2.0.
* 600 W PSU and a more powerful fan.
* 10 Gbit Ethernet instead of dual gigabit, or 1x10Gb plus 1x1Gb.
* Updated Wi-Fi and Bluetooth as on the iMac line.

Also I hope to see that 5K USB-C Cinema Display.

You are probably not far off, though I foresee a few changes. I think Apple waits for new GPUs from AMD, which are rumored for release in the summer of 2016. Maybe Fiji makes an appearance at the low end, but it is not a suitable high-end option due to its 4 GB VRAM limitation; more likely would be Tonga, or just the D700 (Tahiti) again. The PSU will be unchanged, since increasing the maximum power would require the fan to spin faster and louder. That shouldn't be that big of a deal, because AMD has already started claiming that their next-gen GPUs will achieve 2x performance per watt.

I think they will replace the Thunderbolt 2 ports with USB-C/Thunderbolt 3 ports. Due to PCIe bandwidth restrictions it will be impossible to use the full bandwidth of all the ports at the same time, but Apple will bank on that scenario being rare. The bandwidth limitation is unlikely to change until Skylake-EP comes out.

Lastly, I don't think 10 gigabit Ethernet will make an appearance, since it is not standard on the X99 platform that Broadwell will use.
 
I wouldn't expect to see any TB2 on the next Mac Pro at all. Especially with passive adapters for TB3 to TB2. Why bother including TB2?
 
Xeons still have a 64-bit architecture (ia64); it has nothing to do with buses, but with programming. Of the 64-bit address space, no CPU actually uses more than 48 bits, but programmatically 64 bits are used. My claim is for a 128-bit instruction set (as on GPUs), which allows manipulating twice the data with each instruction.
The E5 has 256-bit data words in its instructions - you quote 384-bit for GPUs, but those are memory bus widths, not data word widths.

So the E5 is a 64-bit (virtual address [really only 48-bit implemented]) CPU with 256-bit registers and instructions that operate on 256-bit (512-bit in the next CPU) operands. That's twice (soon four times) as much as your unsupported claim about GPU operand widths.

Intel programmers have 256-bit (soon 512-bit) operands at their disposal, what operand sizes does the deprecated OpenCL spec support?
 
You can go to the equivalent page for any of the E5-2600 processors, and they state 64-bit.

Traditionally, the "bitness" of a CPU refers to the size of its native memory address pointers. For obvious reasons, the integer register size is usually the same size as the pointer size - pointer arithmetic is very common.

Mago is confusing the width of the memory bus with the size of the pointers. (Not surprising, Mago confuses a lot of things with other unrelated things.)

The E5 is fundamentally a 256-bit CPU. Although memory pointers are 64-bit (no need for more), the four-channel memory moves 256 bits of RAM per operation, and a full suite of instructions (with v3/v4) operates on 256-bit registers - 256 bits manipulated in a single instruction. Its native virtual address pointers are 64-bit.

Of course Intel calls it a 64-bit processor. Intel is also quick to point out the SIMD instructions and wide memory paths.
 
The nMP fits in real nice with that model.

Your local storage and maybe add-in cards are in an external enclosure. Pop along with a nice light nMP, disconnect the keyboard, mouse, monitor, and the TB cables. Connect to the new one and you're up and running.

Now compare

Lug a 5,1 over, start swapping out add-in cards etc. (unless you're carrying all of those as spares as well), start transferring the storage disks out of the existing one into the new one, etc. You can see where I'm going with this.

These days, in the enterprise, the data is more likely back on the storage network rather than local, but you get the idea.

This is an astute observation about the new methods of "pro" hardware, and it's something that the spec geeks on this forum don't understand, because they don't do actual enterprise-level work (most of them are, at best, freelance) and aren't involved with multi-user storage networks and the new paradigm of work environments that the nMP caters to.

They just want to chase bigger spec sheet numbers for smaller prices. It's a race to the bottom, and it's no wonder why they are so obsessed with gamer-level video cards, hackintoshing, and all that other PC trash. Let them have it, I say.

The projects that I've been able to take on because of my nMP have allowed the machine to pay for itself multiple times over - not even counting the implicit cost savings of having complicated timelines render so much faster than on my old machine. That in itself makes this thing totally NOT a "failure".
 