You do not need the Mac Pro in your hands in order to judge it. The fact that it does not allow GPU cards makes it a failure. A year or two into ownership, the hot new GPU comes out, blowing the doors off everything, and you cannot use it in your "Pro" desktop. That is a failure. No need to purchase or see it in person to figure this out.
The hot new GPU won’t work anyway, even if the new Mac Pro had the slots for such cards, because Apple insists on locking its OS away from powerful new GPUs. At least on the 7,1 you can run Windows and put new GPUs in without problem, as long as they physically fit.

While they keep on with that, I won’t buy anything new from them. When the next workstation is needed it will be anything but Apple.
 
I understand what you say, but in the case of workstations, whether from Dell, HP or Lenovo, the case is often not large enough to install a gaming RTX card. Buyers of these workstations buy a turnkey system for two or three years, or even four (depending on the work required), and do not upgrade them themselves. Professional graphics cards (which cost more than consumer cards, partly because of their ECC VRAM) are replaced along with the machine.
So the expected upgradability mainly matters to the second-hand market, or to users who work at home, etc.
 
I understand what you say, but in the case of workstations, whether from Dell, HP or Lenovo, the case is often not large enough to install a gaming RTX card. Buyers of these workstations buy a turnkey system for two or three years, or even four (depending on the work required), and do not upgrade them themselves. Professional graphics cards (which cost more than consumer cards, partly because of their ECC VRAM) are replaced along with the machine.
So the expected upgradability mainly matters to the second-hand market, or to users who work at home, etc.
That is a narrow view. There are all kinds of users, and all kinds of cases and workstations that can also accommodate full-length PCIe cards.
 
I understand what you say, but in the case of workstations, whether from Dell, HP or Lenovo, the case is often not large enough to install a gaming RTX card. Buyers of these workstations buy a turnkey system for two or three years, or even four (depending on the work required), and do not upgrade them themselves. Professional graphics cards (which cost more than consumer cards, partly because of their ECC VRAM) are replaced along with the machine.
So the expected upgradability mainly matters to the second-hand market, or to users who work at home, etc.
With the success of the Mac Studio relative to the Mac Pro, I could imagine Dell/HP/Lenovo may start offering more desktop Xeon workstations with no PCIe slots, or at most one for a desktop dGPU.

[Images: StorageReview photos of the Intel NUC 9 Pro (size comparison and rear ports)]
 
Has anyone actually answered the question posed by the original poster?

If not, let me re-pose it with clarification.

  • The M2 Ultra Mac Pro is based around an SoC architecture that uses a shared RAM pool for CPU and GPU, providing extremely high memory speeds and bandwidth (in addition to the chiplet-to-chiplet CPU fabric architecture), with up to 76 GPU cores and other media/function accelerators. It has a maximum shared RAM pool of 192GB.
  • The previous Xeon Mac Pro 7,1 was based on a traditional Intel system architecture with separate system RAM and a variety of supported GPUs with their own VRAM, using the unique MPX module format (PCIe 3.0 plus additional power and data lanes) and the optional 'Afterburner' accelerator card. It can support up to 1.5TB of RAM.
  • Is there anyone with a Xeon Mac Pro 7,1 with 1TB+ of RAM installed who can run a benchmark suite to compare it against the M2 Ultra, so that we can see if RAM makes a difference to performance?
  • Obviously the graphics performance is not exactly apples-to-apples, so a bigger discrete GPU may be faster at some things than an iGPU, but it's not fair to compare to graphics cards that Apple doesn't offer or doesn't support.
 
Has anyone actually answered the question posed by the original poster?

If not, let me re-pose it with clarification.

  • The M2 Ultra Mac Pro is based around an SoC architecture that uses a shared RAM pool for CPU and GPU, providing extremely high memory speeds and bandwidth (in addition to the chiplet-to-chiplet CPU fabric architecture), with up to 76 GPU cores and other media/function accelerators. It has a maximum shared RAM pool of 192GB.
  • The previous Xeon Mac Pro 7,1 was based on a traditional Intel system architecture with separate system RAM and a variety of supported GPUs with their own VRAM, using the unique MPX module format (PCIe 3.0 plus additional power and data lanes) and the optional 'Afterburner' accelerator card. It can support up to 1.5TB of RAM.
  • Is there anyone with a Xeon Mac Pro 7,1 with 1TB+ of RAM installed who can run a benchmark suite to compare it against the M2 Ultra, so that we can see if RAM makes a difference to performance?
  • Obviously the graphics performance is not exactly apples-to-apples, so a bigger discrete GPU may be faster at some things than an iGPU, but it's not fair to compare to graphics cards that Apple doesn't offer or doesn't support.
I guess we’re just ignoring the examples mentioned where tasks requiring large amounts of memory won’t run at all on the M2.
 
Obviously the graphics performance is not exactly apples-to-apples, so a bigger discrete GPU may be faster at some things than an iGPU, but it's not fair to compare to graphics cards that Apple doesn't offer or doesn't support.
It absolutely is a fair comparison, because the apologists are soiling their panties yelling "real world performance". Which video editor would care if their computer had a dGPU or the M2 GPU? They only care what's faster, right? What loser would possibly want to upgrade their computer part by part based on their needs? They should only be able to upgrade their entire computer because a trillion-dollar company said it should be that way. By the way, unrelated: I also totally buy that Apple cares about the environment. So yeah, it is a fair comparison.
I guess we’re just ignoring the examples mentioned where tasks requiring large amounts of memory won’t run at all on the M2.
Yeah, I'm not sure what's so difficult about this. If I want to load a dataset that is >200GB into RAM, the 7,1 can do it no problem, while the M2 will just straight up not work.

Just another example of the Apple apologists trying to gaslight others into thinking less functionality is a better thing because Apple says so.
 
I guess we’re just ignoring the examples mentioned where tasks requiring large amounts of memory won’t run at all on the M2.
So far the use case mentioned has been large sampled orchestral instrument libraries, which, as someone who studied music production, I consider entirely valid (and I’m not being an Apple apologist here). Nobody who relies on that for film scoring or composition wants to go back to the “run the library as a plugin on another computer connected by Ethernet” days. I get it.

That said, I’m wondering less about what use cases are or aren’t enabled by the RAM limit, vs the performance benefits / deficits of one architecture vs the other.

If, say, a 2019 MP was set up with 192GB, one Afterburner card and one Vega Duo MPX card, how well would it perform at a suite of benchmark tasks vs the exact same machine with 1.5TB? And how do both of those configurations compare in terms of performance to a 2023 M2 Ultra MP with 192GB?

i.e., does the M2U’s architecture make up for things in certain cases (via memory bandwidth / low latency / integrated features), and where does it stumble (needing to swap memory to disk or do on-the-fly compression)?

Does the 2019’s RAM architecture allow for fewer disk reads / swaps? What are the tradeoffs?
 
It absolutely is a fair comparison, because the apologists are soiling their panties yelling "real world performance". Which video editor would care if their computer had a dGPU or the M2 GPU? They only care what's faster, right? What loser would possibly want to upgrade their computer part by part based on their needs? They should only be able to upgrade their entire computer because a trillion-dollar company said it should be that way. By the way, unrelated: I also totally buy that Apple cares about the environment. So yeah, it is a fair comparison.

Yeah, I'm not sure what's so difficult about this. If I want to load a dataset that is >200GB into RAM, the 7,1 can do it no problem, while the M2 will just straight up not work.

Just another example of the Apple apologists trying to gaslight others into thinking less functionality is a better thing because Apple says so.
I get that you’re disappointed about this. I think those use cases are valid, and I’m not saying Apple is doing a good thing by ignoring them.

I’m just literally trying to find the answer to the question OP posed, which is not “what use cases are enabled” but “does more RAM improve system performance,” and I would add, “and for which tasks?”

We don’t even need to bring the M2 into the equation. Theoretically, if we ran Geekbench on two base-spec 7,1s where the only difference was the amount of RAM, should we expect the one with 1.5TB to outperform the one with 128GB, and if so, how?
 
So far the use case mentioned has been large sampled orchestral instrument libraries, which, as someone who studied music production, I consider entirely valid (and I’m not being an Apple apologist here). Nobody who relies on that for film scoring or composition wants to go back to the “run the library as a plugin on another computer connected by Ethernet” days. I get it.

That said, I’m wondering less about what use cases are or aren’t enabled by the RAM limit, vs the performance benefits / deficits of one architecture vs the other.

If, say, a 2019 MP was set up with 192GB, one Afterburner card and one Vega Duo MPX card, how well would it perform at a suite of benchmark tasks vs the exact same machine with 1.5TB? And how do both of those configurations compare in terms of performance to a 2023 M2 Ultra MP with 192GB?

i.e., does the M2U’s architecture make up for things in certain cases (via memory bandwidth / low latency / integrated features), and where does it stumble (needing to swap memory to disk or do on-the-fly compression)?

Does the 2019’s RAM architecture allow for fewer disk reads / swaps? What are the tradeoffs?
If you have to load something into RAM that is >192GB, the 2023 MP will 1) start to use swap or 2) crash. Even if it does start using swap, it will be much slower, since you're still accessing secondary storage. More RAM helps when loading large datasets into memory, e.g., working with high-dimensional datasets and performing computation on them.

If you're dynamically accessing memory for CPU compute, the 2023 MP will be faster because of the M2's higher memory bandwidth and the M2 CPU's higher FLOPS compared to the 2019 Xeon CPU. With computer architecture there are always tradeoffs.
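The load-vs-swap tradeoff described above can be sketched in code. This is a minimal illustration: the `open_dataset` helper and its 50%-of-RAM headroom rule are my own hypothetical choices, not anything Apple or NumPy prescribes.

```python
import numpy as np

def open_dataset(path, shape, dtype=np.float32, avail_ram_bytes=192 * 10**9):
    """Load the whole array into RAM if it comfortably fits; otherwise
    memory-map it so pages are faulted in from storage on demand.

    Memory-mapping avoids the out-of-memory crash, but every cold access
    pays SSD latency instead of RAM latency, which is the "swap" cost
    described in the post above.
    """
    size = int(np.prod(shape)) * np.dtype(dtype).itemsize
    if size <= avail_ram_bytes // 2:  # leave headroom for the OS and the app
        return np.fromfile(path, dtype=dtype).reshape(shape)
    return np.memmap(path, dtype=dtype, mode="r", shape=shape)
```

On a 192GB machine, a 250GB array would take the `memmap` branch and page from storage; on a 1.5TB 7,1 it would load outright and every access stays at RAM speed.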
 
I just discovered a new justification for having ginormous amounts of RAM: running large AI models locally. I bring this up because it's easy to imagine more near-term demand for locally running large AI models compared with some other more esoteric uses for huge amounts of RAM in a desktop machine.
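As a rough back-of-the-envelope check (the function and the example model sizes are illustrative, not from any Apple spec): weights alone for an N-parameter model need about N times bytes-per-parameter of memory, before counting activations or KV cache.

```python
def model_weights_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate RAM (in decimal GB) needed to hold model weights only."""
    return n_params * bytes_per_param / 10**9

# A 70B-parameter model at 16-bit precision (~2 bytes/param):
print(model_weights_gb(70e9))   # 140.0 -> fits in a 192GB unified pool
# A 180B-parameter model at the same precision:
print(model_weights_gb(180e9))  # 360.0 -> would spill into swap
```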
 
Of course Apple is going to say "it's the best Mac Pro we ever created," because they want to sell you a computer. Why does this matter?

To answer your perfectly valid question (and I swear there isn't an ounce of sarcasm in that), the 2013 Mac Pro was largely considered a massive failure. Good for those that needed a machine that performant. Completely piss poor for just about everyone else.

Apple admitted this publicly in 2017 (as long after that machine's launch as we are currently from the 2019 Mac Pro's launch) and then took TWO YEARS ON TOP OF THAT to release the 2019 model that mostly rectified its 2013 predecessor's shortcomings. They had to introduce a one-off workstation iMac model as a stopgap for crying out loud!

The work and research into what their highest-end customers wanted that they did to push out the 2019 model was not insubstantial. It should also be noted that Apple doesn't admit it when they're wrong all that easily, nor do they do it all that often.

I'm not saying that Apple isn't out of touch with their highest-end customers. I'm saying that if you really think that, after the 2013 Mac Pro fiasco, Apple would lightly (a) remove the ability to upgrade RAM aftermarket, (b) remove the ability to upgrade and/or add GPUs, and (c) cut the RAM ceiling to 1/4 to 1/8 of its predecessor's maximum, then you have even less faith in them than I do (which says a lot, because these days, I don't have much faith in them).

Prior to releasing this 2023 Mac Pro model, they met with most of the key PCIe card manufacturers used by those who own 2019 (and earlier PCIe-based) Mac Pros to ensure day 1 compatibility. I cited that white paper from the 2019 model to state that Apple, still on the apology tour for the 2013 model, did a fair amount of research into what THEY thought was needed for those workflows.

Again (and I really shouldn't have to belabor this point), these are not my opinions or my views. Nor do I necessarily agree with them that 192GB of RAM is enough to cover the highest-end workflows. I KNOW that RAM is RAM, and that if you try to load more than 192GB of data into 192GB of RAM, the system will dig into swap. Furthermore, I can't imagine that those that need to load 256GB (or more) of instruments into Logic, or whatever other music production app, won't have serious performance issues doing so.

However, you would think that Apple, having suffered through a massive Mac Pro PR crisis last decade would be ever more vigilant to avoid doing so again this decade.

All that to say that they probably did the math and realized that the percentage of folks that needed more than 192GB of RAM in a Mac Pro was about as small as (if not smaller than) the percentage of Mac customers that buy a Mac Pro at all, and that they were safe alienating those users.

Not saying I agree with them doing that. (For the record, I don't.) Just that that's probably what happened here.


You do not need the Mac Pro in your hands in order to judge it.

It's a tool, not a toy. Whether or not it does what you purchase it to do is the only metric worthy of judgement at the end of the day.


The fact that it does not allow GPU cards makes it a failure.

While M2 Ultra doesn't seem to bench favorably compared to multi-GPU/multi-MPX GPU configurations of the 2019 in graphics, I don't know how it's a failure for anyone other than those that deem GPU expansion to be of the utmost importance. Incidentally, the Mac Pro has never been anywhere near as flexible when it comes to GPU expansion as a garden variety PC desktop, let alone a Dell Precision or HP Z workstation tower. Yes, you can't shove two GPUs in there. Yes, that is a performance downgrade over the highest end GPU configurations of the 2019 model. I'm not here to say that isn't the case. Apple clearly thought that wouldn't affect enough Mac Pro users to really matter.

A year or 2 of ownership and the hot new GPU comes out which is blowing the doors off everything and you cannot use it in your "Pro" desktop is a failure.

I know that there were many that did this to extend the life of their Mac Pro towers. It wouldn't surprise me that people will continue to do this with their Intel Mac Pros for several years to come. It's a legitimate bummer that the SoC is not socketed to allow for upgrades to GPU, RAM, and CPU that would've been possible before and a total misstep for Apple.

However, to say it's a failure makes the sweeping assumption that this makes it a no-sale for the vast majority of customers and I'm pretty sure that's not the case. Furthermore, I'm pretty sure Apple has done the research to verify just how many people out there would agree with you and have determined that the loss of Mac Pro customers with this model (which I'm sure is solidly non-zero) would still be acceptable enough to produce this machine and more just like it.

No need to purchase or see it in person to figure this out.
If GPU expansion is the only reason you buy a Mac Pro, then why are you buying a Mac Pro to begin with? I'm not saying the loss of aftermarket upgrades isn't a bummer. But the Mac platform in general has always been hostile in this department. You're lamenting the loss of GPU expansion and you never even got to put in the best PCIe GPUs out there to begin with.
 
If GPU expansion is the only reason you buy a Mac Pro, then why are you buying a Mac Pro to begin with? I'm not saying the loss of aftermarket upgrades isn't a bummer. But the Mac platform in general has always been hostile in this department. You're lamenting the loss of GPU expansion and you never even got to put in the best PCIe GPUs out there to begin with.
I'm not, and I am sure a lot of other people are looking elsewhere for something more robust.
Even if NVIDIA GPUs were off the table, at least you still had the option to put an AMD GPU or two in the 2019 MP.
Basically, Apple is now telling users: no aftermarket GPUs, period, going forward.
That's pretty big. Who else in the industry does this in the "Pro" market?
And if I were AMD and Apple came back someday and allowed GPUs, I would say no.

Anyhow, not sure why you are responding anyway; all you are trying to do here is downplay Apple's moves and spin this around.

Basically the MP is a dead Mac.
The Studio Ultra is a better buy for anyone needing more oomph, and nothing more, because there isn't anything else worth the purchase.
 
If you have to load something into RAM that is >192GB, the 2023 MP will 1) start to use swap or 2) crash. Even if it does start using swap, it will be much slower, since you're still accessing secondary storage. More RAM helps when loading large datasets into memory, e.g., working with high-dimensional datasets and performing computation on them.

If you're dynamically accessing memory for CPU compute, the 2023 MP will be faster because of the M2's higher memory bandwidth and the M2 CPU's higher FLOPS compared to the 2019 Xeon CPU. With computer architecture there are always tradeoffs.
Thanks. That is pretty much what I had understood.

I do wish they had SOCs with more RAM because that would be awesome, and I think that's what they would have liked to release, but maybe there was some production-related reason why they couldn't (yields, RAM shortages, supply lines).

At the very least, having the option to go up to 256, 512, 768 and 1TB would alleviate a lot of the large dataset issues.

And yeah, it would be nice to have support for graphics cards to do things the iGPU can't (dedicated raytracing etc). Hopefully the next generation M3 will start to match what high-end cards can do, and in the MP tower form factor allow a wider thermal envelope.

For my use case, the Studio is almost overpowered (running native plugins and synths in Waveform, and doing UX work in Figma which requires nearly no power at all), and I haven't bought into large sample libraries or DSP-powered plugins so I'm safe for now...
 
And yeah, it would be nice to have support for graphics cards to do things the iGPU can't (dedicated raytracing etc). Hopefully the next generation M3 will start to match what high-end cards can do, and in the MP tower form factor allow a wider thermal envelope.
That's the problem though. Everyone just keeps hoping and hoping and hoping... forget about the fact that Apple historically doesn't do what a lot of pros are looking for.

Time is passing and people have lives and work to accomplish, so the logical thing to do is just ditch Apple and move to Windows/Linux. I guess since they have so much money it doesn't matter to them, but it's still a shame. All these consumer products won't push the company forward; releasing a 15" MacBook Air isn't innovative at all.

I can guarantee you 90% of the research and dev work done for the Vision Pro was done on Linux computers. But I mean, whatever; they can do what they want. I've moved to a Linux box, I enjoy using it, so Apple doesn't get my money in this case.
 
Plenty of people are fully aware of the potential of access to 192 GB of video memory. The people complaining about losing 1.5 TB of RAM (LMAO) don't understand swap, and need to work on their workflows.
Why would anyone need 192 GB at all if the "high-speed" swap works fine for most use cases?
 
Approx 75,000 Mac Pros were shipped in 2022, out of 28.6 million Macs.

Approx 20% of those Mac Pros exceed the use cases of the 2023 model.

Apple attempted and failed to deliver an M2 Extreme with 384GB of unified memory to address the 192GB limit of the M2 Ultra.

They'll try again in Q1 2025 with the M3 Extreme with more than 384GB. Cross your fingers.

This will give the pro app devs more time to provide bug fixes found by 2023 users.

Other than more RAM, the M3 Extreme will likely have Thunderbolt 5 at 80Gb/s and maybe even PCIe 5.0 slots.

Sadly, the two USB-A ports may disappear by then. It was wrong of Apple to abruptly dump them in 2016, but approaching a decade later, are they still relevant at 5Gb/s or even 10Gb/s?

So what are use cases like yours to do?

"Buy/keep the 2019 until RAM aligns or move to AMD/Intel."

The 2019 should receive its final Security Update as late as 2027.

By then, the 2026 M4 Ultra/Extreme on 1.4nm (A14) will hopefully be out with ~1.5TB.
Where did you get the 20%/80% from?
 
All you're comparing there is the unified memory of the SoC and the SSD performance in the M2 Ultra. You're not comparing it to a machine with 1.5TB of much, much slower memory.
Depending on the workload, storage-to-memory bandwidth and latency become the bottleneck once the 192GB is exceeded.
 
Isn't a major selling point of the M2 architecture that apps and the OS in general use less RAM for the same tasks anyhow, and that the access speed is so much faster that even if a swap does occur it may still provide a performance benefit? I guess not, perhaps, if it hits the SSD.

I'm pretty sure people are forgetting that RAM amounts aren't directly comparable between x86 and Apple's RISC architecture, though. That being said, I'm sure that at this point, as in the past, Intel has had a fire lit under their ass and has made a major improvement in performance, to the point where Apple really needs to step it up again in workstation performance metrics.
 
Isn't a major selling point of the M2 architecture that apps and the OS in general use less RAM for the same tasks anyhow, and that the access speed is so much faster that even if a swap does occur it may still provide a performance benefit? I guess not, perhaps, if it hits the SSD.

I'm pretty sure people are forgetting that RAM amounts aren't directly comparable between x86 and Apple's RISC architecture, though. That being said, I'm sure that at this point, as in the past, Intel has had a fire lit under their ass and has made a major improvement in performance, to the point where Apple really needs to step it up again in workstation performance metrics.

It's not a major selling point and there really is no difference. RAM is RAM. I dunno how these things start, but those of us who use hundreds of gigabytes of memory for our tasks are under no illusions.

The main difference Apple has been touting is the CPU and GPU being able to access the same allotment of memory: a unified memory architecture. Combined with that are the fast SSDs they've put in these machines, which make accessing data more reliable, with consistent latency and bandwidth.

But those SSDs are not special. They put out the same kinds of numbers as other high-end SSDs in Windows machines. And in fact, Apple has been halving the number of NAND packages on some of their lower-end MacBooks, like the new MacBook Air with the lower-capacity SSD options, which performs slower than commercially available M.2 drives as a result.

Another thing to keep in mind is that we are all looking at these fantastically high memory bandwidth numbers, like 800GB/s on the M2 Ultra. But that entire memory bandwidth is not actually available to the CPU.

If you look at the research done on the M1 Max, for example (400GB/s), the CPU can only use 224GB/s maximum. That's still insanely fast, don't get me wrong. But it's in the realm of high-end workstation CPUs like Threadripper Pro.

When it comes to the M2 Ultra, I would say, based on what we know from the M1 Ultra, that you will be able to hit 224GB/s with up to 4 cores. And if you are utilising 8 cores, 4 from each of the M1 Max dies that make up the M1 Ultra, you can hit 450GB/s, a little over half the advertised 800GB/s.

To utilise all of it you would need to utilise the GPU. Again these numbers are great. 450GB/s for example would be class-leading and double a Threadripper Pro system. It's just not quite the 800GB/s Apple touts because that is only accessible under certain load scenarios, mainly ones that utilise the GPU.
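Claims like these can be sanity-checked with a crude single-threaded copy benchmark. This is a rough sketch, not a proper STREAM run: NumPy's copy runs on one thread, so it probes the per-cluster CPU regime rather than the full advertised bandwidth, and the measured figure depends heavily on the machine.

```python
import time
import numpy as np

def copy_bandwidth_gbs(n_bytes: int = 4 * 10**8) -> float:
    """Time one large memcpy-style copy and report effective GB/s.

    Counts each byte twice (one read plus one write), as the STREAM
    "copy" kernel does.
    """
    src = np.ones(n_bytes // 8, dtype=np.float64)
    dst = np.empty_like(src)
    t0 = time.perf_counter()
    np.copyto(dst, src)  # pure memory traffic, no arithmetic
    elapsed = time.perf_counter() - t0
    return 2 * src.nbytes / elapsed / 10**9

print(f"~{copy_bandwidth_gbs():.1f} GB/s single-threaded copy")
```

Running several iterations and taking the best time gives a steadier figure, since the first pass also pays page-fault costs for the freshly allocated destination buffer.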

But looking past all this memory bandwidth talk, the real crux is the quantity. There has been talk in this thread about people just not having the right workflow, that their setup is unoptimised, etc.

To that I say: kind of, but not really. See, if you're constantly reading things from the SSD into memory, you just halved your memory bandwidth, so straight away you lost performance. It's much more performant to only be reading from memory for your app, not performing reads and writes simultaneously. Memory is not magic; there is a penalty to simultaneous access in this manner.

Secondly, we in this thread are mostly talking about end users, not the software developers who are actually making the apps. If your workflow demands a specific application, and the developer is unwilling to rewrite how it functions to better make use of swap and instead expects you to have hundreds of gigabytes of memory for storing assets, then you're just out of luck.

In my opinion, the Mac Pro and the G5 PowerMac before it were great workstations that could be used to accelerate almost any task. They gave us dual processors and plenty of PCIe slots, with the ability to install multiple graphics cards so we could have lots of monitors and, later on, accelerate our computing needs. We gained 64-bit addressing and an operating system that could take advantage of it, which allowed for more than 4GB of system memory and opened up a new vista of computing for high-end and professional users.

It wasn't so long ago that you needed 8 hard disks in a kind of RAID 0 just to edit video, and scrubbing your timeline while editing was essentially watching a slide show of one frame here and there. This started the whole industry of proxy workflows: editing a much lower-resolution, more codec-efficient proxy of your real footage just to be able to edit it in real time.

As RAM and processors have gotten better and better, things like that have gone away. Now we look at the Mac Pro today: no upgradeable RAM, no upgradeable graphics, no upgradeable CPU. All of these things could be upgraded on the previous model (yes, even the CPU, though not supported by Apple, of course).

Not having upgrades can be acceptable, we have all relented on that issue when it comes to the MacBook Pro. But that's because Apple has delivered an actually very compelling laptop with insane performance, battery life and specs when it pertains to storage and memory. 96GB of RAM and 8TB of SSD inside the laptop is class-leading for a notebook.

But 192GB of RAM and an 8TB SSD is not class-leading in a workstation. I think 512GB of RAM and up to 32TB of SSD would have been enough to quell most people's concerns, that is, until we look at the GPU situation. No CUDA, no NVIDIA graphics options, no AMD graphics options for that matter. There is no possibility of buying the amount of compute you need today in this box when it comes to machine learning. Often when we talk about upgrades we're thinking about the longevity of a system, but in this case we can't even get the specs we need today, at the point of purchase, which was never the case with the G5 PowerMac, the 2006 Mac Pro or the 2019 Mac Pro.

What you've got here is a Mac Studio with severely gimped PCIe slots because the most useful card people want to put in there (graphics) doesn't work and the integrated GPU is not powerful enough for the use cases where people would want a dedicated graphics card or multiple cards.

In my next machine learning system, I'm aiming for four GPU's. That could be 4 x 4090's or 4 x A100's etc - The M2 Ultra isn't even the equivalent to one of those. But funnily enough, I can put those kinds of cards in the previous Mac Pro. Maybe not four of them but at least two.

If you think 192GB is enough and no one needs that much, or that the Mac architecture with M-series chips works differently and doesn't need so much memory, then why do the MacBook Pro 14" and 16" come with a 96GB RAM option? What laptop have you ever seen come with that much memory from the Intel/AMD world? There's really no difference: RAM is RAM. Your working set needs to be in memory to keep the CPU caches fed, or instruction processing stalls and you lose performance.

This is why they are offering such high amounts to begin with, but their SoC design has a physical constraint on how much memory they can place so close to the CPU: the farther away you go, the lower the frequency has to be to compensate. This is why there are no physical modules and the RAM sits millimeters from the die.

My final point: I think the Mac Studio and Mac Pro with M2 Ultra are great computers for the tasks they specifically target, like video editing. The Mac Pro is perhaps a little less great, because the PCIe slots it offers are mostly useless at this point, but if you do video editing I'm sure it's incredible with those media encoders, etc.
 
Do you have any experience in the field where Neil Parfitt is working? Because, as far as I know, he certainly knows what he is talking about and he is a highly regarded professional in his field. Unless you know much more than he does of course.....
I don't make those kinds of assumptions about people just because I personally don't have the exact same experience in the exact same field. That's "expertism," and I've found it to be wrong 90% of the time.
 

“While a solid-state drive (SSD) is a device used for data storage, Intel® Optane™ memory is a system acceleration solution installed between the processor and slower storage devices (SATA HDD, SSHD, SSD), which enables the computer to store commonly used data and programs closer to the processor. This allows the system to access this information more quickly, which can improve overall system responsiveness.”
Optane failed
 
I'm not, and I am sure a lot of other people are looking elsewhere for something more robust.
Even if NVIDIA GPUs were off the table, at least you still had the option to put an AMD GPU or two in the 2019 MP.

Right, but how many Mac Pro customers (whose workloads weren't already better suited by a Windows or Linux workstation with NVIDIA cards) NEEDED 2 AMD GPUs?

I'm not saying and never did say that the loss of being able to stuff in GPUs after the fact isn't a bummer. But I will completely challenge the notion that a computer NEEDS this ability in order to be considered a proper workstation. Especially since I'm sure that there were plenty of 2019 Mac Pro customers that only ever had one GPU in their Mac Pro (in which case M2 Ultra's GPU would outperform it).

Basically, Apple is now telling users: no aftermarket GPUs, period, going forward.

Yes. And the results tell us that barring extremely high end multi-GPU configurations, it's faster across the board.

I do agree that, for the few that needed more than one AMD GPU, this isn't an upgrade and that it's a bummer. I also DO think that the lack of aftermarket expandability inherent in Apple's SoC model of system architecture doesn't mesh well with your typical 2006-2012+2019 Mac Pro expandability.

So, either live with it, wait for better, or jump ship. Though "wait for better", at best, will only yield a socketable SoC and/or a higher-end configuration that finally beats out EVERY possible 2019 Mac Pro in every possible way.

That's pretty big. Who else in the industry does this in the "Pro" market?

You assume that "Pro" necessarily means "full expandability" rather than "performance you only get from a tower with workstation-grade specs".

Incidentally, again, you never got "full expandability" even from a 2019 Mac Pro. You never got NVIDIA support and, long after Apple drops support for the 2019 Mac Pro, you'll still have people hacking its EFI to put in cards that you wouldn't have to fight to get into any other 2019-vintage Xeon box.

Furthermore, MPX modules were the epitome of proprietary expansion modules; something that inherently goes against your definition of "Pro" level expansion.

And if I were AMD and Apple came back someday and allowed GPUs, I would say no.

Out of spite? These are businesses. I'm pretty sure AMD would be jazzed to get Apple's business again. Though, it's never going to happen.

Anyhow, not sure why you are responding anyway; all you are trying to do here is downplay Apple's moves and spin this around.

I'm not downplaying Apple's moves. I admit repeatedly that I think some of those moves are bummers. I am, however, completely downplaying your concern that the inability of the 2023 Mac Pro to address the most niche segment of customers of Apple's most niche Mac model makes it a failure for the rest of the Mac Pro customer base.

The dude needing to load 200GB+ of instruments into Logic or the chick needing two W6900X GPUs or one to two W6800X Duo MPX modules to do serious 3D FX work...those people are going to be in trouble. The person who bought a 2019 Mac Pro with a 16-core Xeon, 128GB of RAM, and a single W6800X GPU and thought they might beef it up later, but never did? This is not their bummer.

Basically the MP is a dead Mac.

So, you've used one then, I take it?

The Studio Ultra is a better buy for anyone needing more oomph, and nothing more, because there isn't anything else worth the purchase.
You know that PCIe expansion isn't JUST for graphics cards, yeah?
 