
Is the new Mac Pro a failure for traditional Mac creative and professional customers?


Actually, digital video is just plain binary data, so that's another biased argument debunked.

FYI, USB-C without Thunderbolt also implements this and other bus "multi-use" protocols (following Intel's "bad idea"...).

There is nothing wrong with TB2/DP; the bandwidth sharing is transparent. When you plug in a DP device it takes one of the two data channels and leaves the other for TB (PCIe) data. It works well, but many people don't realize that plugging a monitor into the TB2 daisy chain uses half the bandwidth (or the full bandwidth when dealing with 5K monitors).

My response failed to update in time: "When I was introduced to computers, video ports were for video, data connections were for data."
 
What was biased about my facts? I'm doing three exports right now! Only the Apple Devices preset is utilizing the GPU.
Read the section at the bottom of this case study about how FCPX uses GPGPU.
http://create.pro/blog/open-cl-vs-c...upport-gpgpugpu-acceleration-real-world-face/


Another biased argument debunked: the link you provided is out of date, since Apple added this feature to FCP this year: https://www.apple.com/pr/library/2015/04/13Apple-Updates-Final-Cut-Pro-X-Motion-and-Compressor.html (your article is from 2014).

Final Cut Pro 10.2 natively supports even more video formats, including Panasonic AVC-Ultra and Sony XAVC-S, and makes working with RED RAW files faster than ever with GPU-accelerated transcoding, playback and rendering—including support for dual GPUs on Mac Pro®.
 
Another biased argument debunked: the link you provided is out of date, since Apple added this feature to FCP this year: https://www.apple.com/pr/library/2015/04/13Apple-Updates-Final-Cut-Pro-X-Motion-and-Compressor.html (your article is from 2014).

Yes, we are all aware of the 10.2 update. I stated in my previous posts that rendering, transcoding and exporting are supported by GPGPU. Those formats listed utilize variants of h.264. What does this have to do with the 6,1 Mac Pro being a failure?
 
My response failed to update in time: "When I was introduced to computers, video ports were for video, data connections were for data."
The TB/DP merge is an evolution: since video devices are peripherals and data devices are also peripherals, it's a good idea to link both over peripheral buses.

FYI, this concept still discriminates video from other data; IEEE is working on a post-USB peripheral port where every signal (data, video, LAN) is routed over a single bi-directional cable, effectively sharing bandwidth the way it is done on network cables.
 
Yes, we are all aware of the 10.2 update. I stated in my previous posts that rendering, transcoding and exporting are supported by GPGPU. Those formats listed utilize variants of h.264. What does this have to do with the 6,1 Mac Pro being a failure?

As far as I remember, you based your nMP failure argument on the unavoidable need for multiple CPUs for transcoding...

CPU transcoding has never been efficient; Xeons can't compete against GPU transcoding on a watt/data ratio.
 
As far as I remember, you based your nMP failure argument on the unavoidable need for multiple CPUs for transcoding...

CPU transcoding has never been efficient; Xeons can't compete against GPU transcoding on a watt/data ratio.

My original statement was that the lack of a dual CPU option was not ideal for many workloads/users. "The lack of a second CPU limits the nMP's audience. Video encoding, 3D renders, simulations, all benefit from more CPU power." No disagreement, you win! :)
 
My original statement was that the lack of a dual CPU option was not ideal for many workloads/users. "The lack of a second CPU limits the nMP's audience. Video encoding, 3D renders, simulations, all benefit from more CPU power." No disagreement, you win! :)
you have easy access to supercomputers these days.
way faster and cheaper to use than a pc 'crammed' full of cpu cores.


edit-
i put crammed in quotes because 32 cores sounds funny when you compare it to one of these things..
for example, the computer i render on sometimes is this:

-------
btw we have new hardware
650 new dual 16core V3 xeons and 64gb ram each

in total "just" 1500 rendernodes :p
-------

..and it's cheaper than me buying a 12-core computer for rendering.
 
Then sadly they have three choices

1.) What Apple offers
2.) Keep upgrading those 4,1 and 5,1 machines until they can't do any more, finding ways to keep OS X going on them like people have with the 1,1 and 2,1
3.) Fully-fledged Hackintosh
Hackintoshes are hardly a solution for professionals...
The first option would have been the best... if Apple had updated its MP 6,1 earlier this year as expected. I really don't know why they didn't.
 
you have easy access to supercomputers these days.
way faster and cheaper to use than a pc 'crammed' full of cpu cores.

I realize that nobody in here cares about developers, but this is not an option for us. The lack of a second CPU option (plus the second GPU that is useless to most developers) was pretty shocking with the 6,1.
 
I realize that nobody in here cares about developers, but this is not an option for us. The lack of a second CPU option (plus the second GPU that is useless to most developers) was pretty shocking with the 6,1.
May I ask you why a developer absolutely needs a dual CPU system?
What's wrong with a 6-8 core solution?
 
May I ask you why a developer absolutely needs a dual CPU system?
What's wrong with a 6-8 core solution?
Not every developer does. Which is why the old option of single or dual CPU was ideal. "Absolutely need" is kind of strong, it's not like a developer is going to die without one. But the cost of developer time dwarfs any cost of equipment, even with a totally tricked out BTO system, so if your work benefits from having that second CPU, you *really* want to have it.
 
Not every developer does. Which is why the old option of single or dual CPU was ideal. "Absolutely need" is kind of strong, it's not like a developer is going to die without one. But the cost of developer time dwarfs any cost of equipment, even with a totally tricked out BTO system, so if your work benefits from having that second CPU, you *really* want to have it.
I'm not exactly a developer, so I can't see what kind of workflow requires a dual CPU system...
Would you care to give me an example?
 
I'm not exactly a developer, so I can't see what kind of workflow requires a dual CPU system...
Would you care to give me an example?

Well, if you go look at the build options for the 6,1, here are the CPU options:
  • 3.5GHz 6-core with 12MB of L3 cache
  • 3.0GHz 8-core with 25MB of L3 cache
  • 2.7GHz 12-core with 30MB of L3 cache
So if I want 12 cores, I have to drop from 3.5GHz to 2.7GHz, because I can't run dual 6-core CPUs and maintain that speed. Let's say I've finished a series of changes, and want to run the test suite to make sure everything remains correct. Each core can be running one test since they are independent, so more cores is great, and because a lot of the tests are compute bound, I want fast cores. That's probably the easiest example to understand.
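To make that concrete, here's a rough sketch of the kind of fan-out I'm describing; run_test_case is a made-up stand-in, and I'm assuming the cases really are independent. GCD spreads the iterations across however many cores the machine has, which is why both core count and per-core clock matter:

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

/* Hypothetical, purely CPU-bound test case with no shared state. */
static void run_test_case(size_t index) {
    /* ... compute-heavy checks for test #index ... */
    printf("test %zu finished\n", index);
}

int main(void) {
    const size_t test_count = 500;

    /* dispatch_apply fans the iterations out over a concurrent queue and
       returns once they have all run; GCD decides how many execute at a
       time based on the available cores. */
    dispatch_apply(test_count,
                   dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                   ^(size_t i) { run_test_case(i); });

    return 0;
}
```

Wall-clock time is roughly (number of tests / number of cores) times the per-test time, so 12 fast cores beat 12 slow ones, and more cores beat fewer.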

Other developers may want the extra memory capacity from dual CPUs. It's all very highly dependent on what kind of work is being done. Things that are nearly all UI like Twitter clients and such probably don't have that need, so they'd be happy with a single CPU option. We have no option like a renderfarm to use alternate hardware other than Macs. That option has been talked about a lot, but you're basically talking Hackintosh there, and no vendor is going to do that.

Incidentally, I am a fan of the new form factor, although I can see why others might not be. It means very little downtime for things like software upgrades, or hardware problems. The internal drive holds a standard configuration, and everything that is user specific is on external storage. Swapping in a new can takes seconds, and is a helluva lot lighter for one person to carry. The cost of a developer twiddling their thumbs far outweighs the cost of stuff like TB enclosures. I just wish we could have the option of dual CPU and only one GPU.

That being said, my home machine is a 5,1 and you can take it when you pry it from my cold, dead fingers.
 
Well, if you go look at the build options for the 6,1, here are the CPU options:
  • 3.5GHz 6-core with 12MB of L3 cache
  • 3.0GHz 8-core with 25MB of L3 cache
  • 2.7GHz 12-core with 30MB of L3 cache
So if I want 12 cores, I have to drop from 3.5GHz to 2.7GHz, because I can't run dual 6-core CPUs and maintain that speed. Let's say I've finished a series of changes, and want to run the test suite to make sure everything remains correct. Each core can be running one test since they are independent, so more cores is great, and because a lot of the tests are compute bound, I want fast cores. That's probably the easiest example to understand.

Other developers may want the extra memory capacity from dual CPUs. It's all very highly dependent on what kind of work is being done. Things that are nearly all UI like Twitter clients and such probably don't have that need, so they'd be happy with a single CPU option. We have no option like a renderfarm to use alternate hardware other than Macs. That option has been talked about a lot, but you're basically talking Hackintosh there, and no vendor is going to do that.

Incidentally, I am a fan of the new form factor, although I can see why others might not be. It means very little downtime for things like software upgrades, or hardware problems. The internal drive holds a standard configuration, and everything that is user specific is on external storage. Swapping in a new can takes seconds, and is a helluva lot lighter for one person to carry. The cost of a developer twiddling their thumbs far outweighs the cost of stuff like TB enclosures. I just wish we could have the option of dual CPU and only one GPU.

That being said, my home machine is a 5,1 and you can take it when you pry it from my cold, dead fingers.
I'm not convinced. I was a full-time software developer years ago, and right now I still code a few things myself on both iOS and OSX; I'm not using Swift yet, I'm old-school C and Objective-C, and I build complex numeric models that take an eternity to complete, not the kind of thing you'll see in a consumer iOS or OSX app. Except when I run those models with production data, I don't need more than 16GB to build and maintain complex apps. In fact, sometimes I download open-source apps and compile them myself just to be sure they don't have a backdoor, and I repeat, I've never needed more than 16GB to develop in Xcode. And while I do have a multithreaded app that uses tons of CPU and memory, it only does so on production data; on development data it doesn't use more than a few MB.

Furthermore, I read something really strange in your argument: do you need dual sockets for your app? To test what? Multithreaded programming is transparent to multiple CPU sockets; each core is managed individually and is agnostic of which socket it resides on. In fact it's complicated just to detect that you have dual sockets, and as far as I know there is no API in Objective-C or the OSX SDK to tell you where a CPU core physically sits. Threads are likewise agnostic of which core they run on, and what's the logical benefit of knowing or controlling which thread runs on which core of which CPU anyway? Even on systems with mixed high/low-power CPU cores (typically Android octa-cores), where a thread runs is handled by the chipset's power manager; even the OS can't control it, it can only ask the CPU to go high or low.

So I don't see why a software developer would need multiple CPU sockets, when what is needed is multiple cores.
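For what it's worth, about all you can ask OS X from user space is how the hardware is laid out, not where a given thread runs. A minimal sketch with sysctlbyname; hw.physicalcpu and hw.logicalcpu are standard keys, and I'm assuming hw.packages is available for the socket count:

```c
#include <stdio.h>
#include <sys/sysctl.h>

/* Read a 32-bit integer sysctl value; returns -1 if the key is missing. */
static int read_sysctl_int(const char *name) {
    int value = -1;
    size_t len = sizeof(value);
    if (sysctlbyname(name, &value, &len, NULL, 0) != 0)
        return -1;
    return value;
}

int main(void) {
    printf("CPU packages (sockets): %d\n", read_sysctl_int("hw.packages"));
    printf("physical cores:         %d\n", read_sysctl_int("hw.physicalcpu"));
    printf("logical cores (w/ HT):  %d\n", read_sysctl_int("hw.logicalcpu"));
    return 0;
}
```

Beyond counts like these, the scheduler decides placement; GCD and NSThread expose no core or socket affinity.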
 
Don't you understand that dual sockets can double the number of cores? Or give faster cores? Or both?
You'll never get faster cores on the Xeon platform; both CPUs have to be the same.

Speaking as a real programmer: if you test a 12-thread application, it behaves the same on a dual 6-core system as on a single 12-core system.
 
Turbo Boost brings the clock speed delta between the 6-core and 12-core E5 v2 Xeons a lot closer; IIRC it's 400 or 500MHz. Lol, not like I'd turn down an extra 400-500MHz, but depending upon the computation taking place the number of cores may in fact be the bottleneck - so really you'd want dual 12-cores for large projects.

Edit: Now that I say this, I'm sure there'd be some type of throughput bottleneck - not in the amount of data, but if Xcode - or your IDE or build script of choice - is compiling one class or file per thread, I'm not sure it'd be able to concurrently consume 48 files (two 12-core CPUs with Hyper-Threading).
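I don't know what Xcode actually does internally, but the usual pattern for a build driver is just a job pool capped at the logical core count. A toy sketch, with compile_file as a made-up stand-in for compiling one translation unit:

```c
#include <dispatch/dispatch.h>
#include <stdio.h>
#include <sys/sysctl.h>

/* Made-up stand-in for compiling a single source file. */
static void compile_file(int index) {
    printf("compiling file %d\n", index);
}

int main(void) {
    int jobs = 0;
    size_t len = sizeof(jobs);
    /* e.g. 48 on a dual 12-core machine with Hyper-Threading */
    if (sysctlbyname("hw.logicalcpu", &jobs, &len, NULL, 0) != 0 || jobs < 1)
        jobs = 1;

    dispatch_semaphore_t slots = dispatch_semaphore_create(jobs);
    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    for (int i = 0; i < 500; i++) {
        /* Wait for one of the 'jobs' slots to free up, then launch. */
        dispatch_semaphore_wait(slots, DISPATCH_TIME_FOREVER);
        dispatch_group_async(group, queue, ^{
            compile_file(i);
            dispatch_semaphore_signal(slots);
        });
    }
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    return 0;
}
```

Whether a real project ever has 48 files that are actually ready to compile at the same moment is a separate question, of course.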

Well, if you go look at the build options for the 6,1, here are the CPU options:
  • 3.5GHz 6-core with 12MB of L3 cache
  • 3.0GHz 8-core with 25MB of L3 cache
  • 2.7GHz 12-core with 30MB of L3 cache
So if I want 12 cores, I have to drop from 3.5GHz to 2.7GHz, because I can't run dual 6-core CPUs and maintain that speed. Let's say I've finished a series of changes, and want to run the test suite to make sure everything remains correct. Each core can be running one test since they are independent, so more cores is great, and because a lot of the tests are compute bound, I want fast cores. That's probably the easiest example to understand.

Other developers may want the extra memory capacity from dual CPUs. It's all very highly dependent on what kind of work is being done. Things that are nearly all UI like Twitter clients and such probably don't have that need, so they'd be happy with a single CPU option. We have no option like a renderfarm to use alternate hardware other than Macs. That option has been talked about a lot, but you're basically talking Hackintosh there, and no vendor is going to do that.

Incidentally, I am a fan of the new form factor, although I can see why others might not be. It means very little downtime for things like software upgrades, or hardware problems. The internal drive holds a standard configuration, and everything that is user specific is on external storage. Swapping in a new can takes seconds, and is a helluva lot lighter for one person to carry. The cost of a developer twiddling their thumbs far outweighs the cost of stuff like TB enclosures. I just wish we could have the option of dual CPU and only one GPU.

That being said, my home machine is a 5,1 and you can take it when you pry it from my cold, dead fingers.

I'm not convinced. I was a full-time software developer years ago, and right now I still code a few things myself on both iOS and OSX; I'm not using Swift yet, I'm old-school C and Objective-C, and I build complex numeric models that take an eternity to complete, not the kind of thing you'll see in a consumer iOS or OSX app. Except when I run those models with production data, I don't need more than 16GB to build and maintain complex apps. In fact, sometimes I download open-source apps and compile them myself just to be sure they don't have a backdoor, and I repeat, I've never needed more than 16GB to develop in Xcode. And while I do have a multithreaded app that uses tons of CPU and memory, it only does so on production data; on development data it doesn't use more than a few MB.

Furthermore, I read something really strange in your argument: do you need dual sockets for your app? To test what? Multithreaded programming is transparent to multiple CPU sockets; each core is managed individually and is agnostic of which socket it resides on. In fact it's complicated just to detect that you have dual sockets, and as far as I know there is no API in Objective-C or the OSX SDK to tell you where a CPU core physically sits. Threads are likewise agnostic of which core they run on, and what's the logical benefit of knowing or controlling which thread runs on which core of which CPU anyway? Even on systems with mixed high/low-power CPU cores (typically Android octa-cores), where a thread runs is handled by the chipset's power manager; even the OS can't control it, it can only ask the CPU to go high or low.

So I don't see why a software developer would need multiple CPU sockets, when what is needed is multiple cores.

Don't you understand that dual sockets can double the number of cores? Or give faster cores? Or both?
 
Turbo Boost brings the clock speed delta between the 6-core and 12-core E5 v2 Xeons a lot closer; IIRC it's 400 or 500MHz. Lol, not like I'd turn down an extra 400-500MHz, but depending upon the computation taking place the number of cores may in fact be the bottleneck - so really you'd want dual 12-cores for large projects.
This is a performance consideration, not a programmer/developer issue; it could be a requirement for a production system, but it is not a requirement for a development system.

There is nothing you can do on a dual socket that you can't do on a single socket. Dual sockets may give you a faster system in some configurations, at the expense of more heat dissipated per gigaflop, and a dual-socket system (at the same core count and speed) costs more than a single-socket system with the same core count.
 
Further, if you develop a multithreaded app and those threads don't overlap (each can finish asynchronously of the others), you can validate an app destined for a 12-core production system on a dual-core system.

Overlapping threads are more complex to develop, but you can still simulate all the cores you need; the difference is that it takes two or three times as long to finish, since overlapping threads have to wait for the threads they depend on before continuing. So in no way does a developer need a dual-socket development machine; it may be necessary for production, but definitely not for development.
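To illustrate the overlapping case: the dependent stage just waits on the group of threads it depends on, so the same code validates on two cores or twenty-four - it only takes longer. A small sketch with GCD groups, where produce_part is a placeholder for an independent chunk of work:

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

/* Placeholder for an independent chunk of work. */
static void produce_part(int index) {
    printf("part %d done\n", index);
}

int main(void) {
    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    /* Independent threads: they can finish in any order. */
    for (int i = 0; i < 8; i++) {
        dispatch_group_async(group, queue, ^{ produce_part(i); });
    }

    /* The "overlapping" stage has to wait for everything it depends on.
       On 2 cores this simply takes longer than on 24; the logic is the same. */
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    printf("combine parts\n");
    return 0;
}
```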
 
Well, if you go look at the build options for the 6,1, here are the CPU options:
  • 3.5GHz 6-core with 12MB of L3 cache
  • 3.0GHz 8-core with 25MB of L3 cache
  • 2.7GHz 12-core with 30MB of L3 cache
So if I want 12 cores, I have to drop from 3.5GHz to 2.7GHz, because I can't run dual 6-core CPUs and maintain that speed. Let's say I've finished a series of changes, and want to run the test suite to make sure everything remains correct. Each core can be running one test since they are independent, so more cores is great, and because a lot of the tests are compute bound, I want fast cores. That's probably the easiest example to understand.

Other developers may want the extra memory capacity from dual CPUs. It's all very highly dependent on what kind of work is being done. Things that are nearly all UI like Twitter clients and such probably don't have that need, so they'd be happy with a single CPU option. We have no option like a renderfarm to use alternate hardware other than Macs. That option has been talked about a lot, but you're basically talking Hackintosh there, and no vendor is going to do that.

Incidentally, I am a fan of the new form factor, although I can see why others might not be. It means very little downtime for things like software upgrades, or hardware problems. The internal drive holds a standard configuration, and everything that is user specific is on external storage. Swapping in a new can takes seconds, and is a helluva lot lighter for one person to carry. The cost of a developer twiddling their thumbs far outweighs the cost of stuff like TB enclosures. I just wish we could have the option of dual CPU and only one GPU.

That being said, my home machine is a 5,1 and you can take it when you pry it from my cold, dead fingers.

I understood your point, but I still can't see the need for a dual-socket system for a developer.
In the aforementioned case an 8-core / 3GHz setup seems to be a good compromise, but not knowing exactly what software is being developed, I can't weigh the performance loss in such a setup.
 
This is a performance consideration, not a programmer/developer issue; it could be a requirement for a production system, but it is not a requirement for a development system.

There is nothing you can do on a dual socket that you can't do on a single socket. Dual sockets may give you a faster system in some configurations, at the expense of more heat dissipated per gigaflop, and a dual-socket system (at the same core count and speed) costs more than a single-socket system with the same core count.

Further, if you develop a multithreaded app and those threads don't overlap (each can finish asynchronously of the others), you can validate an app destined for a 12-core production system on a dual-core system.

Overlapping threads are more complex to develop, but you can still simulate all the cores you need; the difference is that it takes two or three times as long to finish, since overlapping threads have to wait for the threads they depend on before continuing. So in no way does a developer need a dual-socket development machine; it may be necessary for production, but definitely not for development.

I am really referring to build/compile time and not the multithreaded app itself.

Personally, I've mostly worked with my own queues, waiting for things to finish on threads other than the main thread, but I don't manage anything beyond asynchronous dependencies - as long as it's not blocking the interface!

I don't know how to estimate what other developers do but when I talk to others in my circle we all tend to use these concepts in the same way. I can create different queues but I don't literally control what threads they're being executed on as the abstractions I work with don't really support that (GCD & NSOperation).
 
I am really referring to build/compile time and not the multithreaded app itself.

Personally, I've mostly worked with my own queues, waiting for things to finish on threads other than the main thread, but I don't manage anything beyond asynchronous dependencies - as long as it's not blocking the interface!

I don't know how to estimate what other developers do but when I talk to others in my circle we all tend to use these concepts in the same way. I can create different queues but I don't literally control what threads they're being executed on as the abstractions I work with don't really support that (GCD & NSOperation).
Actually, if you have issues with build time, it's usually about more than core count. Build performance often depends more on disk access performance than on CPU/RAM, and thread count is secondary too, since few compiler tasks are multithreaded (each code file is compiled by a single thread). Also, not all code files can be built at the same time; some have to wait for others to compile first. And unless you do a full rebuild, the compiler/IDE only rebuilds the code files that were modified and links them against the previously built binaries. So I still don't see the urgency for a multi-core, multi-socket developer system; the biggest impact comes from the storage system. I do remember now that the fastest Xcode development machine right now is the 2015 27-inch Retina iMac with the full SSD and that 4.2GHz i7. It leaves the previous Retina iMac far behind, and the Mac Pro far, far behind, because the new iMac's SSD is more than twice as fast and its i7's single-thread speed is the fastest (long integers or double precision).
 
Actually, if you have issues with build time, it's usually about more than core count. Build performance often depends more on disk access performance than on CPU/RAM, and thread count is secondary too, since few compiler tasks are multithreaded (each code file is compiled by a single thread). Also, not all code files can be built at the same time; some have to wait for others to compile first. And unless you do a full rebuild, the compiler/IDE only rebuilds the code files that were modified and links them against the previously built binaries. So I still don't see the urgency for a multi-core, multi-socket developer system; the biggest impact comes from the storage system. I do remember now that the fastest Xcode development machine right now is the 2015 27-inch Retina iMac with the full SSD and that 4.2GHz i7. It leaves the previous Retina iMac far behind, and the Mac Pro far, far behind, because the new iMac's SSD is more than twice as fast and its i7's single-thread speed is the fastest (long integers or double precision).
Absolutely agree.
That's the reason why I can't really understand the need for multiple sockets for developers (while I can understand it in some other fields).
 
I am really referring to build/compile time and not the multithreaded app itself.

Personally, I've mostly worked with my own queues, waiting for things to finish on threads other than the main thread, but I don't manage anything beyond asynchronous dependencies - as long as it's not blocking the interface!

I don't know how to estimate what other developers do but when I talk to others in my circle we all tend to use these concepts in the same way. I can create different queues but I don't literally control what threads they're being executed on as the abstractions I work with don't really support that (GCD & NSOperation).

I don't know, maybe people are tired, or just skimmed my post and aren't getting the point. With the 6,1 as it stands now, 12 cores comes at 2.7GHz. With a dual CPU system, I can get 12 cores at 3.5GHz. Did you not ever run tests, or perhaps not ever have a test suite that took significant time to complete?

As for compromising, that's what you do when you have to, but we only have to compromise because Apple doesn't offer the option. It's not a matter of compromising because of budget or anything else, but solely because we have to use Apple hardware.
 
I realize that nobody in here cares about developers, but this is not an option for us. The lack of a second CPU option (plus the second GPU that is useless to most developers) was pretty shocking with the 6,1.
oh.. i said that in regards to jmo's post in which he mentioned the need for cpu power.. those types of scenarios can go to the cloud for much faster (and cheaper) performance than any personal computer can offer.

re: 'nobody here cares about developers'..
hmm.. maybe so but personally, i care about developers more than anything else with regards to enhancing my computing experience.. the design of an application has much more impact on how fast/fluid i can work than any piece of hardware will be able to do.

that aside, the developers i'm in contact with don't use the most awesome hardware out there to write/test their programs on.. Rhino for Mac is being done on a mbp.. the head dev for indigo renderer is testing the openCL rewrite on an HD7770..

as far as i can gather, developers generally don't write the apps with the idea of 'my users are going to need the latest & greatest hardware in order for this to work well for them'.. they write so that their app works well on most people's systems.. not so their users will be required to drop $10k on hardware in order to use the program.
 