And with most devices trying to integrate Bluetooth now, who knows if we'll have ports at all in the future?

I don't think that I'll find myself in a computer room without cables anytime soon.
 
Let's see. I've said it before: Apple, even on the desktop, will come down to having only one type of port sooner rather than later. And no ports at all, all wireless, is just a matter of time. Some here jumped on me for saying it. We'll see how long it takes, though.

Anyway, doing a dual-CPU mMP is unlikely, at least with Intel. That would require a dual-socket motherboard based on Socket P, which is used even for 1S systems (nothing new here, but more expensive), Xeon Silver or Bronze CPUs (also more expensive than -W parts), extra memory sockets, and a higher-power PSU. OK, the die-hard guys here will say "I don't mind paying extra for it," but most will complain that it's already too expensive, and paying even more is out of the question.
Do the math and the die-hard crowd is a small percentage of users, hardly enough to make Apple even look into it.
I'm one of those who prefers to pay more to get what I really want, but I believe the vast majority will not.

Maybe with all these delays Apple will indeed start looking at Naples for the mMP. It would go nicely with what I believe is their strategy with HSA, although on the leaked roadmap I don't see Naples and Starship being HSA compliant.
The limitation here is the different sockets AMD uses for Naples and Snowy Owl, which would restrict the CPU options. Maybe it would be limited to Naples and that's it; 3 options is all we get now anyway, only with fewer cores.
Snowy Owl, being BGA and having 2 different sockets for MCM and SCM, doesn't seem to fit the bill.
 
I think Apple meant something different than the definition of "modular" being thrown around here.
Then use the feedback link in my sig to let Apple know what you want from "modular". Especially if you don't want proprietary form factor cards using proprietary bastard versions of EFI.
 
Then use the feedback link in my sig to let Apple know what you want from "modular". Especially if you don't want proprietary form factor cards using proprietary bastard versions of EFI.


TBH Aiden, I've gotten past Apple and their "pro" machines. My custom-built PC is great, as much as I dislike Windows.

If Jony Ive can make that trashcan computer, I don't see where he can go in design.
He screwed up and he should know it.
 
I think it's safe to say the vast majority of the potential mMP market is comprised of people who perform tasks beyond the mainstream user. Many of those tasks will require more of some resource(s) (RAM, shader cores, etc) than would make sense to put in the base configuration (TDP, cost). A truly modular design could offer power users the potential to BTO an elegant solution for their particular use case. IMO, that would include a way to exploit PCIe cards and other external assets without significant performance compromises.

Keeping the "central core" under 800W with high speed bus connections to other resources seems like the easiest way to underpin the system with enough muscle to to support whatever add ons a particular use case demands. I like the idea of a "partner" case for PCIe cards, flash storage, etc that has a very fast/wide connection to the mobo. The partner case could be offered with extreme cooling options for multiple big GPUs, etc.

Bottom line - the Mac Pro market is anything but one size fits all, so really embrace modularity and perhaps there will be a lot of happy campers...
 
Bottom line - the Mac Pro market is anything but one size fits all, so really embrace modularity and perhaps there will be a lot of happy campers...
If it's done right, yes.

I just put a new system online for my users today.
  • 44 cores / 88 threads (dual E5-2699 v4 @ 2.20GHz - turbo to 3.6 GHz)
  • 1 TiB RAM (easy upgrade to 1.5 TiB - only 16 of 24 DIMM slots in use)
  • quad GTX 1080Ti GPUs (3584 cores, 11 GiB VRAM per GPU)
  • 480 GB system SSD
  • 7.2 TB (usable) RAID-6 spinners with hot spare for work space
  • dual port SFP+ 10GbE converged NIC (10GbE + iSCSI offload + FCoE)
Will Apple even come close?

Or will it be horribly gimped with proprietary GPUs and BIOS?

Use the feedback link in my sig to tell Apple you want off-the-shelf PCIe cards.
 
Mac Pros and Power Macs have always been "supercomputers" according to Apple. In the good old days it was fun to see their comparisons, with apps (usually Adobe) running side by side on Windows and a Mac.

The Mac Pro needs to shed its skin to become something to talk about. How could Apple do it again? Not just by putting in the fastest hardware available (that would be boring from a marketing standpoint), but by making it do something better as well. That is why I've always thought that Apple is after HSA with Metal... now that would have made the difference. Same hardware, but much better results.

But from what they said in their recent mea culpa, it seems that Intel is on their future roadmap... so no HSA. I think an ARM co-processor chip à la the MBP 2016 is their way to make "the" difference in the coming years...

iOS does support HSA, and with Metal makes the Axx chips more powerful than the GFLOPS suggest.
 
It's kind of funny that the new AMD CPUs don't support HSA; at least the slides don't mention it.
It would fit nicely in the mMP with Naples. If HSA were available I believe Apple would jump on it, along with Vega. It would make the most sense.
 
It does seem like the timing is right to employ HSA in the mMP. A ground up redesign just might offer an opportunity for a variety of deeper architectural improvements than a simple iterative upgrade of an existing model. The type of efficiencies HSA provides (shared virtual memory access, zero copy, etc) would seem to be right in Apple's wheelhouse for extracting more performance per watt.
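To make the zero-copy idea concrete, here is a minimal sketch using Metal's shared storage mode on Apple's unified-memory hardware. This is only an illustration of the shared-memory/zero-copy benefit that HSA formalises, not an actual HSA runtime and not anything Apple has announced for the mMP; the kernel and buffer names are made up for the example.

```swift
import Metal

// The CPU fills a buffer, the GPU transforms it in place, and the CPU reads
// the result back from the very same memory. With .storageModeShared there is
// no staging buffer and no blit/copy step in either direction.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;
kernel void doubleValues(device float *data [[buffer(0)]],
                         uint id [[thread_position_in_grid]]) {
    data[id] = data[id] * 2.0f;
}
"""

guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("No Metal device available")
}

let library = try! device.makeLibrary(source: kernelSource, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "doubleValues")!)

// One shared allocation, visible to both the CPU and the GPU.
let count = 1024
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!
let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { values[i] = Float(i) }        // CPU writes directly

let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreadgroups(MTLSize(width: count / 64, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

// CPU reads the results from the same memory -- no copy back.
print(values[0], values[1], values[1023])          // 0.0 2.0 2046.0
```

On a discrete-GPU Mac Pro the picture is different: shared buffers live in system RAM and the GPU reaches them over PCIe, which is exactly the kind of penalty HSA-style unified memory is meant to remove.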
 
It does seem like the timing is right to employ HSA in the mMP. A ground up redesign just might offer an opportunity for a variety of deeper architectural improvements than a simple iterative upgrade of an existing model. The type of efficiencies HSA provides (shared virtual memory access, zero copy, etc) would seem to be right in Apple's wheelhouse for extracting more performance per watt.
And it would really speed up the embedded emoji bar. ;)

Seriously, though, do you think that the four amigos could even learn how to spell HSA, much less understand its significance?

And then add confusion by pitching Tensor cores and NVlink and ML and AI and VR and AR - instead of just making cat videos more quickly.

If the mMP is an ATI-only system it will fail just like the nMP has failed.
 
I still don't understand why a "Professional Workstation" would have onboard analog audio I/O ports. Even with an optical S/PDIF connection (TOSLINK), it's not as useful as ADAT or MADI.
 
The nMP failed because of lack of upgradeability, not the choice of GPU vendor.

The case Apple presented would suggest the lack of upgradability was because the chosen GPU vendor couldn't deliver an upgrade within the necessary power and heat budgets.
 
The case Apple presented would suggest the lack of upgradability was because the chosen GPU vendor couldn't deliver an upgrade within the necessary power and heat budgets.

High-end GPUs from both AMD and Nvidia moved away from multiple small/medium-sized GPUs towards single behemoth-sized GPUs. GK110, GM200, and GP102 from Nvidia are all large, with correspondingly high TDPs. The same goes for AMD's Hawaii and Fiji GPUs. Heck, even Tahiti (featured in the FirePro D700) was pretty big.

Apple expected GPU vendors to continue with their multi-GPU solutions. However, extracting maximum performance from two or more GPUs proved a lot more difficult than foreseen, so both Nvidia and AMD scaled up their single-GPU designs instead. This shift towards ever-bigger GPUs was also accelerated by Microsoft's DirectX 12 graphics API, which made an already difficult multi-GPU performance optimisation task even harder for game developers.
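As a rough illustration of where that difficulty lands, here is a hedged sketch of explicit multi-GPU bookkeeping using Metal on macOS (the D700s in the 2013 machine predate this API, and the actual kernel encoding is omitted): every GPU needs its own queue, its own copies of the resources, and its own explicitly assigned slice of the work, and the application has to merge the results itself.

```swift
import Metal

// Sketch: the driver does not mirror resources or balance load across GPUs;
// all of that bookkeeping is the application's problem.
let devices = MTLCopyAllDevices()                    // every GPU visible to macOS
precondition(!devices.isEmpty, "No Metal devices found")

let totalItems = 1_000_000
let slice = totalItems / devices.count               // naive even split

for (index, device) in devices.enumerated() {
    // Each GPU gets its own queue and its own private buffer.
    let queue = device.makeCommandQueue()!
    let buffer = device.makeBuffer(length: slice * MemoryLayout<Float>.stride,
                                   options: .storageModePrivate)!

    let commandBuffer = queue.makeCommandBuffer()!
    // ... encode this GPU's share of the kernel work here (omitted) ...
    commandBuffer.commit()

    print("GPU \(index) (\(device.name)): items \(index * slice) ..< \((index + 1) * slice)")
    _ = buffer   // results would still have to be read back and stitched together
}
```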

The Mac Pro (2013) just wasn't designed to accept a single, large ~300 W TDP GPU -- let alone two of them. So Apple didn't really have any upgrade options.
 
I've always seen SLI as an in-and-out trend, even back in the Voodoo days when everyone was doing it. It keeps going in and out of fashion, but a nice single GPU always seems to persevere. Apple made the wrong call, and I wish I could tell them "told you so".

It was a fad in 1999, it was a fad in 2008, and it's a fad now. JMO.

No doubt it has valid, VERY niche uses, but to architect the future of your desktops around a dual-GPU philosophy? LOL.
 
High-end GPUs from both AMD and Nvidia moved away from multiple small/medium-sized GPUs towards single behemoth-sized GPUs.

Nvidia delivered a 1080 for laptops. Go look at barefeats to see what a pair of them can do, and that's what the 2013 could be today. The failure of the 2013 is all down to AMD being unable to deliver a GPU that has performance AND power efficiency AND low heat output AND compact size.
 
Nvidia delivered a 1080 for laptops. Go look at barefeats to see what a pair of them can do, and that's what the 2013 could be today. The failure of the 2013 is all down to AMD being unable to deliver a GPU that has performance AND power efficiency AND low heat output AND compact size.

The GPUs in the nMP aren't exactly what I would call "compact size".
 
Nvidia delivered a 1080 for laptops. Go look at barefeats to see what a pair of them can do, and that's what the 2013 could be today. The failure of the 2013 is all down to AMD being unable to deliver a GPU that has performance AND power efficiency AND low heat output AND compact size.

Was this available back in 2014?

Secondly, GP104 (GTX 1080) features low FP64 performance, and Nvidia's woeful OpenCL compute support and performance leaves a lot to be desired.

For "G4merz!", yes, a pair of GP104 or even GM104 chips would be great. But Apple has never cared, nor will they ever care, about gaming performance. Apple has no interest in supporting CUDA, has no interest in supporting G-Sync, and has no interest in trying to win gaming benchmarks.

Finally, yes, there were chips available from AMD had Apple wished to lengthen the life of the current Mac Pro 6,1 platform. Fiji (in its R9 Nano configuration) and Tonga had amazing compute perf/W, and were more than able to fit into the thermal constraints.
 
Apple has no interest in supporting CUDA, has no interest in supporting G-Sync, and has no interest in trying to win gaming benchmarks.

Well, it seems that Apple has no interest in a lot of things lately regarding Macs, doesn't it? While, at the same time, they expect people to be interested in their machines. If they keep this way of thinking, one can safely predict that people will have no interest in their upcoming machine.
 
Was this available back in 2014?

There has been no point since 2012 at which AMD has offered consistently better performance than Nvidia across a wide variety of professional apps.

Secondly, GP104 (GTX 1080) features low FP64 performance, and Nvidia's woeful OpenCL compute support and performance leaves a lot to be desired.

...on macOS. The "woeful" performance of Nvidia seems to be specific to Apple software.

For "G4merz!", yes, a pair of GP104 or even GM104 chips would be great. But Apple has never cared, nor will they ever care, about gaming performance.

...aside from their "Pro" computer being built on gaming cards. You can't criticise Nvidia's options as being for "G4merz!" as if Apple is offering "workstation" hardware as an alternative. They're not. The choice is fast Nvidia gaming GPUs, or slow AMD gaming GPUs.

Apple has no interest in supporting CUDA, has no interest in supporting G-Sync, and has no interest in trying to win gaming benchmarks.

Yes, but they also don't win on actual getting work done benchmarks, either. That's why the 2013 has been such an unmitigated failure.

Apple tried only supporting the technologies they want to support, making expensive targeted FCPX appliances, and telling everyone who didn't fit that narrow product to go buy a Windows workstation. That was the 2013 strategy. Unfortunately for Apple, Nvidia is better at making GPUs than Apple is at making Pro hardware and software.

Here are a few things that aren't of critical importance to enough of the Pro market that you could fund the development of a workstation by fixating on them:
  • Small desk footprint
  • Dead silence when running outside of a sound insulated cabinet
  • Low power draw
  • OpenCL
  • macOS
  • Final Cut
Here are a few things that are of critical importance to enough of the Pro market that you can fund the development of a workstation by prioritising them:
  • Onboard, bulk data storage.
  • The ability to swap out and replace the GPU every 8-12 months, without replacing any other part of the machine.
  • Multiple Nvidia GPUs
  • CUDA
It turns out there aren't enough FCPX users to fund the development of a specialist workstation, and the rest of the pro app world is sufficiently Nvidia-based that it isn't going to go all-in on OpenCL unless OpenCL runs better on Nvidia hardware than CUDA does.

Finally, yes, there were chips available from AMD had Apple wished to lengthen the life of the current Mac Pro 6,1 platform. Fiji (in its R9 Nano configuration) and Tonga had amazing compute perf/W, and were more than able to fit into the thermal constraints.

You're suggesting Apple just let the machine sit idle for 4 (to 5, 6, 7?) years because...? Maybe the 100% thermal failure rate of the D700 indicates that there's something more to the story than "on paper this should fit into the thermal constraints".
 
If the mMP is an ATI-only system it will fail just like the nMP has failed.
It would fail because of a lack of software exposing HSA capabilities, not because of the vendor's name. Oh, right. Apple Metal already has HSA capabilities.

Yes, but they also don't win on actual getting work done benchmarks, either. That's why the 2013 has been such an unmitigated failure.
Yep. And that is the very reason why the RX 480 is faster in Blender using OpenCL than the GTX 1060 is using CUDA in the same application.

Why do people use outdated benchmarks as their proof of the current situation?

The only benefit of Nvidia over AMD hardware is the CUDA software stack. Even the performance argument may be outdated by now.

Nvidia delivered a 1080 for laptops. Go look at barefeats to see what a pair of them can do, and that's what the 2013 could be today. The failure of the 2013 is all down to AMD being unable to deliver a GPU that has performance AND power efficiency AND low heat output AND compact size.
What differentiates 110 W of thermal output from Nvidia from 110 W of thermal output from AMD? I am always triggered when I read that, somehow, a 110 W GPU from Nvidia will run at lower temperatures than a 110 W GPU from AMD. That is not how physics works.

A GTX 1080 in the iMac would heat up just as much as the R9 M395X does. Why? Because of the design of the iMac.
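A back-of-the-envelope version of that argument, with a purely illustrative thermal resistance (not a measured iMac value): at steady state the temperature rise is set by the power dissipated and the cooling path, and the vendor's name appears nowhere in the equation.

```latex
\Delta T \;=\; P \cdot \theta_{\mathrm{heatsink \to ambient}}
\qquad\Longrightarrow\qquad
110\,\mathrm{W} \times 0.3\,^{\circ}\mathrm{C/W} \;\approx\; 33\,^{\circ}\mathrm{C}
\quad\text{whether the die says Nvidia or AMD}
```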

Edit: I will give you a hint as to why Apple has put a black mark against both the 6,1 MP and AMD. AMD misled Apple by claiming that, in the future, smaller GPUs with lower TDPs would take over. That is true, but only for AMD. Nvidia can afford to design large dies, which cost a large amount of money; AMD cannot. AMD believes its future is in multiple GPUs on a single package, connected through an interposer or other forms of internal interconnect (Infinity Fabric...). Raja Koduri was at Apple before he went back to AMD. Where do you think they got this information? AMD co-developed the Polaris 11 GPU with Apple for the MBPs. It was because of Apple's request that GlobalFoundries and Samsung synced their processes for 14 nm LPP (GloFo has the worst process in the industry, but the best yields and the highest price per wafer, which is why everybody else in the industry is making everything on TSMC processes).

The reality about AMD is that the current upcoming architecture is also power hungry, if you consider a 200-250 W TDP power hungry. Are multi-chip GPUs a reality? Yes, with Navi, but that is not coming until 2018-2019. So goMac is correct that fingers are being pointed at AMD right now, but for a slightly different reason than most think.

If power really were the limiting factor, Apple would not be able to fit, for example, dual Nano GPUs in a MP 7,1; yet each fits within a 125 W TDP and still delivers almost 7 TFLOPS of compute performance (at around an 800 MHz core clock, after downclocking and undervolting).
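For reference, the arithmetic behind that figure, assuming the R9 Nano's full Fiji configuration of 4096 stream processors and 2 FLOPs per shader per clock (FMA):

```latex
\mathrm{FP32\ peak} \;=\; 2 \times 4096 \times 0.8\,\mathrm{GHz} \;\approx\; 6.55\ \mathrm{TFLOPS\ per\ GPU}
```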
 