Status
Not open for further replies.
About the upgradeability of the Mac Pro: unfortunately, the MP1 is a first-generation product. Future computers with new-generation GPUs that rely heavily on asynchronous compute will not require hardware upgrades, only expansion through external GPUs.

The problem here is the bandwidth of the Thunderbolt port. Gen 3 brings a big update to the technology and opens the door to new ways of expanding computers.

IMO, it is far better to have one CPU with dual GPUs inside the computer and connect it via Thunderbolt to a rack of 50 or even 100 GPUs than to have dual CPUs with limited internal space for GPUs. It is one possible future, and it looks like the Mac Pro is made for it. Will it play out like this? Time will tell, but I can assure you that designers were already discussing this idea a few years ago. Power efficiency is the key here, and it is the biggest factor shaping the design of computers, and the very idea of the computer, today.
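To put rough numbers on the Thunderbolt bottleneck mentioned above, here is a back-of-the-envelope comparison of nominal link rates (not measured throughput) between a Thunderbolt 3 eGPU connection and an internal PCIe 3.0 x16 slot. The figures are published link rates; real-world throughput is lower still.

```python
# Nominal host-to-GPU bandwidth: Thunderbolt 3 vs. an internal PCIe slot.
# TB3's 40 Gbit/s link tunnels only a PCIe 3.0 x4 connection (~32 Gbit/s)
# and shares the channel with DisplayPort traffic.

TB3_LINK_GBPS = 40          # Thunderbolt 3 total link rate, Gbit/s
TB3_PCIE_GBPS = 32          # PCIe 3.0 x4 tunnelled inside it, Gbit/s
PCIE3_X16_GBPS = 8 * 16     # PCIe 3.0: ~8 Gbit/s effective per lane, 16 lanes

def to_gigabytes_per_s(gbits: float) -> float:
    """Convert a Gbit/s link rate to GB/s."""
    return gbits / 8

print(f"TB3 link:        {to_gigabytes_per_s(TB3_LINK_GBPS):.1f} GB/s")
print(f"TB3 PCIe tunnel: {to_gigabytes_per_s(TB3_PCIE_GBPS):.1f} GB/s")
print(f"PCIe 3.0 x16:    {to_gigabytes_per_s(PCIE3_X16_GBPS):.1f} GB/s")
```

An internal x16 slot offers roughly four times the host-to-GPU bandwidth of a Thunderbolt 3 tunnel, which is why the "rack of external GPUs" idea works best for compute jobs whose data stays resident on the GPU.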
 
I think it really breaks down to "it has to start somewhere". And I suspect Apple consumers tend to be wealthier and more socially conscious. Similar to organics. Just ask Whole Foods how well that works. BS? Of course! It's marketing!

BTW Sigmadog. That signature is awesome. Nice LOL!

Personally, I was offering a larger cultural opinion about the "Going Green" ethos.

In this context, of course, the power-consumption ceiling seems to result in, from what I've read here, GPUs that are under-clocked (if that's the right term) to fit into the power and cooling window determined by a small, tubular design. Which came first, the tube or the power and cooling parameters?

My (uneducated) guess is that the design (the tube) dictated the parameters (lower power). This is exactly the opposite of the classic design aphorism: form follows function.

This fits into Apple's overall "Make it small" conceptual framework, but also plays into the cultural "Save Power = Save the Planet" (which is largely B.S. in my opinion). Two birds with one stone, as it were.

The fallacy underneath these arguments is the assumption that a Z-series or other system uses more energy (kWh) than the MP6,1.

The faster systems may use more peak power, but they finish faster and get back to idle.

Using 400 watts for an hour is the same as using 800 watts for 30 minutes.
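The point about peak power versus total energy can be made concrete with the kWh arithmetic (energy = power x time), using the same hypothetical 400 W and 800 W figures from the post:

```python
# Energy (kWh) = power (kW) x time (h). A faster machine that draws more
# peak power can still consume the same, or less, total energy for a job,
# because it returns to idle sooner.

def energy_kwh(watts: float, hours: float) -> float:
    """Total energy consumed at a constant power draw."""
    return (watts / 1000) * hours

slow = energy_kwh(400, 1.0)   # 400 W for 60 minutes
fast = energy_kwh(800, 0.5)   # 800 W for 30 minutes
print(slow, fast)  # 0.4 0.4 -- identical energy for the same job
```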

The MP6,1 isn't power-constrained for environmental reasons - it's constrained for aesthetic reasons.
 
  • Like
Reactions: tuxon86
Aiden, explain to me: why would you want 0.5 TFLOPS more compute on a single D700 while consuming 100 W more? The problem for GPUs is that you have to find the sweet spot of power efficiency. Efficiency is not "how little power your hardware draws while still getting appreciable performance"; it is maximizing performance for each watt consumed.
Tahiti in the D700 consumes at most 129 W at an 850 MHz core clock and gives 3.5 TFLOPS of compute. Tahiti at a 1000 MHz core clock consumes 240 W and gives 4 TFLOPS. Does the performance-to-power ratio bring a benefit here? I don't think so. It's all about balance.
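The efficiency argument above can be checked directly by computing performance per watt for the two Tahiti operating points quoted in the post (the TFLOPS and wattage figures are the poster's, not measurements):

```python
# Performance-per-watt for the two Tahiti operating points quoted above.

def gflops_per_watt(tflops: float, watts: float) -> float:
    """Peak compute throughput per watt of board power."""
    return tflops * 1000 / watts

d700 = gflops_per_watt(3.5, 129)   # Tahiti @ 850 MHz (D700)
full = gflops_per_watt(4.0, 240)   # Tahiti @ 1000 MHz

print(f"D700 (850 MHz):    {d700:.1f} GFLOPS/W")   # ~27.1
print(f"Tahiti (1000 MHz): {full:.1f} GFLOPS/W")   # ~16.7
```

By these figures, the last 0.5 TFLOPS costs nearly twice as much power per unit of compute, which is the efficiency argument in a nutshell.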
 
The MP6,1 isn't power-constrained for environmental reasons - it's constrained for aesthetic reasons.

Yeah, that was sort of my point.

Talk of GPUs and CPUs and power constraints is far above my pay grade, so I'll simply lean back and observe this thread for the time being.

But as for the question, "Will there be a new Mac Pro in 2015?"

My answer on October 1 is "Nah".
 
Certainly a lot of people. If you click "views" in the Mac Pro forum to sort threads by most views, you will see a staggering amount of activity here is related to upgrading. It adds up to the multi-millions. Upgrade threads are among the most popular posts here.

From running OS X on unsupported computers, to swapping CPUs/GPUs/memory/drives, to adding newer standards and features (USB 3.x, 802.11ac, Bluetooth 4.0, Continuity, Handoff), there are clearly many, many people who like to change their computer from stock.

Perhaps upgraders are the minority. Heck, I'm willing to concede that even though I don't think there's a way to prove that one way or the other. But regardless, just on the MR forums alone, there are a huge number of people that care about changing their computer from stock.

So "who cares"? Well, a lot of us do.

I never said it wasn't contentious.


This was pretty common back in the '80s. In modern times, no "constant fiddling" is needed.

Disagree. For a simple setup, perhaps not. Laptops have fine hardware (like a Mac, in fact), but when you have dual graphics cards, specialised audio cards, video codec cards like the Red Rocket, and various specialised controllers (3Dconnexion devices, shuttles, video grading decks, audio consoles), then things require constant fiddling.

It is the varied hardware in Windows machines that is both their best feature and their worst failing.
 
  • Like
Reactions: ixxx69
When the nMP was first introduced, I remember reading somewhere that Apple had user data showing that the majority of the cMP's were never upgraded. I wish I could dig up that article.

It's true. I've worked at 50+ companies in the last 10 years. Perhaps two actually had specced-up machines that were properly set up.

Companies with:

12-core Mac Pros and 6 GB of RAM running After Effects with multiprocessing turned off, unaware that they need 3+ GB per core anyway.

2008 Mac Pros with the stock ATI P.O.S. card, expected to run 3D work well. (I have a 685 GTX, which I think is the highest you can go in a 2008, and it works great.)

5400 rpm drives as the main drive, and they're shocked when I drop an SSD in.

Not actually understanding that they would probably be better off with a high-end iMac, for a lot of reasons.

Giving artists who ONLY use Photoshop or Illustrator 12-core machines, and the 3D guys four cores, because the former are more senior.

Putting in brand-new Mac Pros but not updating the monitors; I am talking about nMPs running on 1280x720 monitors.

Oh, my favourite (not really related): using a 2007 Mac mini as a server with a USB 2 connection to a 6 TB MyBook, and making all 10 of their staff open and run 3D/Photoshop files DIRECTLY from that machine/MyBook, then not understanding why it's slower than hell and crashes all the time.

Point is:
Studio managers think they know what will work, but they don't really have a clue.
IT guys don't understand Macs, or if they do, they don't understand how multiprocessing works or whether it's even used in a given bit of software.
Artists often have ZERO idea how an app might need to be set up to use that power anyway.
Editors are even worse, as they often use turnkey setups and NEVER muck around with any settings.
 
  • Like
Reactions: zephonic and ixxx69
Certainly a lot of people. If you click "views" in the Mac Pro forum to sort threads by most views, you will see a staggering amount of activity here is related to upgrading. [...] So "who cares"? Well, a lot of us do.

Sure, I should have added a big old IMO to that, but then I suppose anything written on the interwebs is an opinion... I never said it wasn't contentious.

I know there is a great Hackintosh community. BUT it's sometimes used in companies, which is (a) illegal and (b) utterly counterproductive, as it often just doesn't work right.

Of course there are a lot of people on here who want to do upgrades etc. It's a geek site. Most artists I work with wouldn't even know it exists, nor care, nor would they have an inkling of how to install a stick of RAM. Which is why they like Apple: for the most part, the limited hardware options give them a constantly working machine that they can just turn on, do their work on, and then go home.

I showed some people how to do some simple Automator stuff (that they never knew existed) to add right-click functionality, and they thought I was a witch doctor; I'm surprised they didn't burn me at the stake.
 
  • Like
Reactions: ixxx69
I dunno; the new Mac Pro is unbearably underpowered and outrageously expensive, and it creates an entirely new side investment to replace internal desktop storage. Back in 2009 I paid 2.5 grand for a machine with 8 cores, so what happened here?

By the way, is it actually possible to run an OS off the Thunderbolt interface just like an internal drive, and to boot Pro Tools / Final Cut sessions reliably?

And one thing I had not noticed until now: the new Mac Pro comes straight out of the box without a mouse and keyboard? So Apple patented a system that detects your brain waves as peripheral input? That's genius. Someone on MacRumors should put up an article about this.

If Apple doesn't come up with an impressive update or cut prices, I will probably get a 2012 with 12 cores, however much I'd really love to buy a brand-new system.

In my opinion, Apple still depends mostly on professional customers for the Mac Pro line, and they may have noticed that something's off with the new model due to dwindling sales. If Apple doesn't change course with the new model, sales will dwindle even more and eventually they'll drop the Mac Pro entirely.
 
Is everybody sure that the nMP form factor won't be open to modification? In principle, the design looks like it comes from an extruded cylinder, which could be made longer to host a wider thermal core while keeping the aesthetics close to the original.
I already said it once, and with the new Apple TV, I'm starting to suspect it even more. :)
That could open the door to a dual-socket architecture and the wider memory bandwidth necessary to keep four TB3 ports.
 
I dunno, the new Mac Pro is unbearably underpowered, outrageously expensive and creates an entirely new side investment for running a desktop internal storage. [...] If Apple won't change course with the new model, sales will dwindle even more and eventually they drop the Mac Pro entirely.

I'm not convinced sales are dwindling.
 
Here is a 1,1 from 2006 running the current OS and using Octane Render at completely current speeds. Note that the baseline numbers are for a GTX 980 at PCIe 3.0.

Now ask yourself how many 2013 nMPs will be able to run anything at all by 2022. If that is too hard to picture, realize that the 1,1 would have been locked out of new OSes at 10.8 if its GPUs were stuck in it the way the nMP's are. When 10.8 came out, its value would have plunged like a dead falcon.

As for the "I'm a pro and work at Big Company LLC; we sell everything as soon as it collects dust and buy new ones" argument, realize that a big part of why the 3,1, 4,1 and 5,1 held their value so well for so long was precisely that they remained relevant. How relevant would this 1,1 be with an X1900 or GT 7300 (no CUDA, no OpenCL)?

What is going to happen to the value of the 6,1 as time goes on? Really think about it (think "crater"; think "VW diesel").

I think we are talking about big tradeoffs for running recent cards in old Mac Pros. People usually pay big money for a workstation for reliability, stability and power. It's kind of ironic that you talk about being able to run the latest video cards in cMPs, but then warn people on your website not to upgrade to El Capitan yet because they could boot to a non-functioning screen. So all you're doing is trading a powerful video card for less reliability, something that does not evoke a warm and fuzzy feeling in many people.

As we both know, Apple does unofficially add driver support for some cards. You can also run some cards, but only with Nvidia's drivers installed. Using these unofficially supported cards does carry big risks, though, and for some people those risks are not worth it.

Both Apple and Nvidia could drop support at any time, at which point reliability would take another hit. That would leave the card either non-functioning or missing features. My GTX 570, for example, can now drive only one monitor with the Nvidia drivers, which decreases my productivity.

Saying that upgradability is the only solution for Macs is futile at best. It will never be a 100% viable solution when one company controls the entire platform.

You're just applying a band-aid in an attempt to keep really old Macs relevant.
 
  • Like
Reactions: ixxx69
I'm not convinced sales are dwindling.

Are there official numbers available somewhere we can refer to? Yeah, that's been just a wild guess of mine, but I can tell you that more than a handful of audio professionals who would really like to upgrade their Apple rigs are holding out because they dislike the new model.
 
Aiden, Explain to me, why would you want 0.5 TFLOPs of compute power more on single D700...
I'd use Maxwell... ;) ...and I'd worry about application performance, not the theoretical peak for individual components which may or may not even be used by the application.

And what's the idle wattage difference for the two?
 
I'd use Maxwell... ;) ...and I'd worry about application performance, not the theoretical peak for individual components which may or may not even be used by the application.

And what's the idle wattage difference for the two?
If you worry about application performance, why are you arguing about hardware performance?
 
No, but benchmarks tell a hell of a lot.

Disagree. Most benchmarks don't reflect real-world use. I have yet to see any good benchmark dedicated to workstation computing. Graphics card benchmarks are almost entirely for gaming, and workstations are not focused on gaming. Most won't do long-term testing, where consumer PCs might choke or throttle.
 
Both are linked.
If your application can exploit the hardware, then yes, they are. Sometimes applications cannot use all of the hardware's power and capabilities. Want proof? Look at the DX11 versus DX12 performance comparisons for both AMD and Nvidia. DX12 is essentially DX11 without the driver overhead, plus asynchronous-compute capability; in all fairness, it turns out that DX12 is Mantle AND DirectX 11 in one API. It reduces driver overhead and per-application driver optimization, and allows greater parallel use of the CPU and GPU, something that was not possible with DX11.

Under DX11, games got much of their performance from driver optimizations; under DX12, all you have is an API driver, not application-specific drivers. That is why Nvidia's Maxwell performed the way it did: a simple, linear architecture, with the optimization lying in the hands of Nvidia's driver engineers. Now look at what happens when you go to DX12. Two games have already been shown: Fable Legends and Ashes of the Singularity. Ashes adds about 20% compute work to the pipeline, with quite a lot of context switching between graphics and compute. What happens on AMD hardware? You get a massive increase in performance. What happens when your hardware cannot switch contexts in the pipeline? You get a decrease in performance, which is what happens to Nvidia hardware in that particular situation. Fable is a completely different story: about 5% compute, no asynchronous scheduling, and no context switching in the pipeline. The result is an engine that is streamlined and easy to code, and Nvidia hardware still flies on it. Why? Because it is effectively still a DirectX 11 engine, just without the driver overhead, and without asynchronous compute.

Why is it important? Because asynchronous compute will get much, much more attention in future software; the benefit is substantial, since far more work can be done in the same time with it than without it. The API that introduced it was Mantle, and it is now on every major platform. It even underpins a professional API in the Apple and AMD world: FireRays. Another thing it enables is running several different applications at the same time on AMD hardware, and adoption is growing extremely fast. So far the only hardware truly capable of asynchronous compute is AMD's GCN, and nothing yet suggests that Pascal will change that. DirectX 12.1 did not come about for no reason (Nvidia pushed Microsoft to create it). And asynchronous compute allows full utilization of the GPU by the application, whether it's a game or a professional app.
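The core claim about asynchronous compute, that independent graphics and compute work overlap instead of serializing, can be illustrated with a toy timing model. The millisecond figures below are made-up illustrative numbers, not measurements of any real GPU, and the model ignores contention for shared execution units:

```python
# Toy model of why asynchronous compute helps: with a single queue,
# compute work waits for graphics work; with separate queues, independent
# work runs concurrently and the frame takes as long as the slower part.

def serial_frame_ms(graphics_ms: float, compute_ms: float) -> float:
    """One queue: compute runs only after graphics finishes."""
    return graphics_ms + compute_ms

def async_frame_ms(graphics_ms: float, compute_ms: float) -> float:
    """Two queues: independent work overlaps (contention ignored)."""
    return max(graphics_ms, compute_ms)

g, c = 10.0, 4.0   # hypothetical per-frame graphics and compute work
print(serial_frame_ms(g, c))  # 14.0 ms per frame
print(async_frame_ms(g, c))   # 10.0 ms per frame, a 1.4x speedup here
```

The speedup depends entirely on how much genuinely independent compute work the engine schedules, which is why a compute-heavy title like Ashes shows a large swing while a lighter one like Fable barely moves.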
 
If your application can handle the hardware - then yes, they are. [...] And AC allows the full utilization of GPU for the Application. Regardless if its Game or Professional App.

Really? More AMD PR gibberish? If it were really what the world needed, you wouldn't need so many words.
 
  • Like
Reactions: tuxon86
Multi core 64-bit benchmark, a workstation for 3.5 grand.

You pretty much pay more than you did for the early-2009 machine and actually get something slower. Great job, Apple: minus internal storage upgradability, and no peripherals out of the box.

Even the iMac, with a non-workstation CPU and including a display and keyboard, is faster. So customers are effectively spending an extra 1 to 1.5 grand on the new tower design. There is no advantage over the iMac whatsoever; if anything, it's severely disadvantaged.

The message here, in comparison with all the other Macs, is that if you really want to waste your money, buy the new Mac Pro.

multi64bit.jpg
 
I'm sorry, but you're comparing a 4-core CPU with 6- and 8-core CPUs in a multi-core benchmark. Is that fair, really?
What about the 6-core nMP at the top? What about the 8- and 12-core models?
What about their single-core benchmarks? Aren't those good enough too? Or are they?

Anyway, you're buying a system, not just a single CPU, and these tests can't show you its true real-life performance.
You also have to take the other parts into account to get the real picture.
 
MP1.png MP2.png
Multi core 64-bit benchmark, a workstation for 3.5 grand. [...] There is no advantage over the Imac for example whatsoever, rather severely disadvantaged.

Nice spin on the benchmarks. For some reason you want to compare a 4-core to an 8-core Mac Pro and expect the 4-core to be faster... lol

The iMac (which also uses a mobile GPU, BTW), while faster in very short benchmarks, will be beaten by the Mac Pro in very long renders. Thermal throttling and a mobile GPU versus dedicated GPUs will tell the tale.

Here are some more realistic data with a better comparison if you really want to use benchmarks:
 