I can also speak to spending weeks (6?) in 2020 solely dedicated to re-working a level art toolset to fit down into 256 GB of RAM for an AAA game. We started with 64 GB, but that was too unstable to work in Houdini and Unreal at the same time, upgraded to 128 GB, and then to 256 as the toolset matured. (For all those that say IT at studios never upgrades leased computers: they also put in more SSDs.) It would have been a waste of my time, and possibly the project would have had to pivot, if we were stuck at 128 and I had to spend 6 months figuring out how to do what we needed in just 128 GB. Not to mention it likely would have made computation many times slower and reduced quality.

It was nice to be able to just throw some money at the problem and make the problem less of a mountain to climb. Games are a pretty dynamic space; you need flexibility, and you have no idea where a project will go over the course of 3-5 years of development.

We also had an Epyc machine with 2 TB of RAM to run several 256 GB jobs at once.
 
Thanks for the reply. Those are absolutely incredible numbers. Just a mind-boggling amount of data that needs to all be accessed in RAM at the same time. I struggle to even think of what a dataset that runs into the hundreds of gigabytes would even be (outside of massive video or sound files with incredible amounts of detail).
The dataset in question had absolutely nothing to do with video or sound; if memory serves, the model/simulation was operating on GIS data. I think individual runs of the model/simulation generated up to 6TB of output data each (and there were many runs). I don’t remember exactly how big the input dataset was.

The issue, as with any scientific model, is how much of the dataset and modeling window needs to be in RAM and accessible by the CPUs at one time in order to get good performance. For some things (or with careful tuning) you might be able to get it to where the model only needs a little piece of the overall dataset in RAM to do its calculations, and can be busy loading the next set from slow storage while it’s number-crunching on the current one.

In others--like this model--it only ran efficiently if it was able to access ~400GB of RAM at once, which I assume was a combination of the input dataset, intermediate working memory, and output results that hadn’t been flushed to disk yet.
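
To make the first case concrete, here's a rough sketch (Python, with made-up chunk file names and a stand-in computation, not the actual model) of that "load the next chunk from storage while number-crunching the current one" pattern:

[CODE]
# Rough sketch of overlapping I/O with compute: keep only one chunk of a big
# dataset in RAM, and prefetch the next chunk from slow storage while the
# current one is being processed. File names, chunk count, and process_chunk()
# are all hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

CHUNK_FILES = [f"chunk_{i:04d}.npy" for i in range(100)]  # hypothetical chunk files on disk

def load_chunk(path):
    """Read one chunk of the dataset from storage."""
    return np.load(path)

def process_chunk(data):
    """Stand-in for the real per-chunk number-crunching."""
    return float(data.sum())

def run():
    total = 0.0
    with ThreadPoolExecutor(max_workers=1) as io_pool:
        pending = io_pool.submit(load_chunk, CHUNK_FILES[0])   # kick off the first read
        for next_path in CHUNK_FILES[1:]:
            current = pending.result()                         # wait for the chunk we need now
            pending = io_pool.submit(load_chunk, next_path)    # start loading the next one
            total += process_chunk(current)                    # compute while that I/O runs
        total += process_chunk(pending.result())               # last chunk
    return total
[/CODE]

Whether a model can be restructured that way depends entirely on its access pattern; the ~400GB case above is exactly the kind where it can't, which is why the RAM ceiling matters.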

I wonder if Apple Silicon can ever be set up to access auxiliary RAM, like a RAM module. Though how that would be much different from an SSD storage chip being accessed, I'm not sure.
“Can” is the wrong question to ask. It should be fairly obvious that it can be set up to access outboard RAM--there’s nothing magic about on-die memory chips, it’s just lower latency and potentially a more tuned architecture. It’ll presumably be somewhat slower than on-die RAM, and if the GPUs can’t efficiently use external RAM I could imagine it being a non-trivial engineering problem dividing up RAM between that which can and can’t be used by the GPU cores (or a severe performance hit for some workloads if it’s not handled well).

The question is whether building and getting an outboard RAM controller working is worth it to Apple, to keep the small number of customers who need more than 192GB of RAM (plus a chunk of customers who need less, but will opt not to buy, either because it’s expensive or because they refuse to do without future upgradability).

I have no idea what the answer to that question is. Could be it’s a challenge that they’re working on but haven’t yet satisfactorily solved and the current Mac Pro is a “let’s get rid of Intel” stopgap while they get the kinks worked out, or it could be they’ve decided the number of additional sales just isn’t worth the cost.
 
The dataset in question had absolutely nothing to do with video or sound; if memory serves, the model/simulation was operating on GIS data. [...]

“Can” is the wrong question to ask. It should be fairly obvious that it can be set up to access outboard RAM [...]
Thanks for the reply. Ah, GIS I'm somewhat familiar with, as I use it in my work (or more precisely, I have folks who use it that report to me). But our layers of data aren't remotely at the scale of 6 TB. And when we use the data they make our 16 GB RAM laptops chug a bit, but ultimately they continue to work. I do have one guy on my team who I got a 32 GB RAM laptop for this process (this is all PC at work). And I think I may push for 32 GB of RAM to be our baseline for new laptops. But still, ultimately just laptops and much simpler stuff.

Yes, the analysis on that second point is a "can". I suspect the answer will be that Apple won't take that step. Instead they will just keep doubling up their Silicon, there will be a fast chip that can be equipped with 384 GB on die in the not-too-distant future, and Apple will call that good enough.
 
I can also speak to spending weeks (6?) in 2020 solely dedicated to re-working a level art toolset to fit down into 256 GB of RAM for an AAA game. [...]

It was nice to be able to just throw some money at the problem and make the problem less of a mountain to climb.
That’s a good example of a professional studio use case most people wouldn’t think of (certainly didn’t occur to me), but also a great point: it’s often not that it’s impossible to get the job done more efficiently in terms of resource use (RAM in particular), but that it takes time and resources to do so, and there are situations where it’s just much cheaper and faster to throw hardware at the problem instead of engineering hours.

This is particularly true with in-house toolsets or things like the modeling in my example. Yes, there are ways to make it more efficient, but figuring those out and implementing them takes time, sometimes a lot of it, which is expensive even if you aren’t on a deadline. If your game needed 256GB on the consumer machine, that would be a deal-breaker, but for a handful of people using internal tools on very expensive computers, that’s an acceptable compromise.

Ah, GIS I'm somewhat familiar with, as I use it in my work (or more precisely, I have folks who use it that report to me). But our layers of data aren't remotely at the scale of 6 TB. And when we use the data they make our 16 GB RAM laptops chug a bit, but ultimately they continue to work.
This may have been clear already, but I did want to note that this was more of a math modeling/simulation task than what comes to mind when I think of GIS; the input dataset was GIS, but the actual task was running an involved simulation. And it was the output datasets that were in the TB range; I actually don’t know how big the input dataset was, but it may have been much smaller than that.
 


Upon the launch of the latest Mac Pro, Apple's transition to Apple silicon across its entire Mac lineup is complete. The new Mac Pro features the M2 Ultra chip – the same chip offered in the refreshed Mac Studio – so why should some prospective customers buy the Mac Pro, despite its $6,999 starting price, and which performance-focused desktop Mac is best for you?

The Mac Studio starts at $1,999, substantially less than the $6,999 starting price of the Mac Pro. When configured with the same M2 Ultra chip as the Mac Pro, the Mac Studio starts at $3,999. There are several crucial differences between the Mac Studio and Mac Pro that justify their different price points and designs:
Mac Studio | Mac Pro
Integrated, non-upgradeable design with sealed casing | Modular design with openable casing and potential for SSD upgrades
No PCI Express expansion slots | Seven PCI Express expansion slots (six available slots; one slot comes with Apple I/O card installed)
Two impeller fans | Three impeller fans
Apple M2 Max or M2 Ultra chip | Apple M2 Ultra chip
Up to 24-core CPU | 24-core CPU
10Gb Ethernet | Dual 10Gb Ethernet
Up to six Thunderbolt 4 ports | Eight Thunderbolt 4 ports
Two USB-A ports | Three USB-A ports
HDMI port | Two HDMI ports
SDXC card slot (UHS-II) | No SD card slot
No rack-mounted version | Rack-mounted version available
Starts at $1,999 | Starts at $6,999


The main reason to buy the Mac Pro is to be able to use its seven PCIe expansion slots to add the likes of digital signal processing (DSP) cards, serial digital interface (SDI) I/O cards, additional networking, and built-in storage. This also allows a user to change some of their Mac Pro's hardware over time, and Apple is offering additional do-it-yourself SSD upgrade kits and wheels for the device.

If you require multiple Ethernet ports, more than six Thunderbolt ports, or more than two USB-A ports to connect a large number of peripherals, only the Mac Pro can facilitate this. Otherwise, since the Mac Studio can be configured with the same M2 Ultra chip as the Mac Pro, there is no reason to buy the more expensive desktop machine, and most users will be better off buying the Mac Studio and saving $3,000.


Buy a Mac Studio if...
  • You prefer a smaller desktop machine that takes up significantly less space
  • The M2 Max chip offers sufficient performance for your needs and you do not need the M2 Ultra chip
  • You need a versatile, high-performance machine below the Mac Pro's $6,999 starting price


Buy a Mac Pro if...
  • You need the ability to upgrade the internal SSD
  • You need more than six Thunderbolt ports, more than two USB-A ports, more than one HDMI port, or more than one Ethernet port
  • You need PCIe expansion slots


If you don't need the performance and number of ports that the Mac Studio offers, it is worth noting that Apple offers the Mac mini with the M2 Pro chip for $1,299. This high-end Mac mini offers a good balance of price and performance that should be more than sufficient for many users looking for a desktop Mac.


The Mac Pro is targeted at professionals with distinct hardware requirements and complicated workflows, often in production environments. These customers will know they need a Mac Pro to meet their needs. Considering that the base model Mac Studio is $5,000 cheaper than the Mac Pro, the Mac Studio is now the best "Pro" desktop Mac for the overwhelming majority of prospective customers, with more than enough performance and versatility for most users.

Article Link: Mac Studio vs. Mac Pro Buyer's Guide
The article's statement that you need a Mac Pro if you need "more than two USB-A ports" is absurd. A) The MP only adds one more USB-A port anyway; B) many of us need more than three USB-A ports; and most importantly C) USB-A uses minimal bandwidth and is therefore easily dealt with via one of the cheap, ubiquitous powered USB hubs available everywhere.
 
I used to cart my 40 lb. Mac Pro back and forth between work and home - so yeah, it's important to me.
At one point I hauled around an iMac, setting it up as a kiosk, which worked well and took advantage of the iMac's clean lines as a presentation I/O device. That is literally the only good reason I can see for the iMac form factor.
 
It boils down to this: the Mac Studio is for fun (unless your definition of 'fun' involves games) and the Mac Pro is for people who actually have work to do
Nonsense! Where are you and the 20+ likes you got coming from, saying the Mac Studio is for fun, not work? Do you even use Macs for real work?

Any of the boxes with an M2 Max chip or two in them are superb boxes for all of us folks doing Adobe-type work (even though many of us left Adobe): art directors, photogs, graphic artists, etc. The M2 Studio, even without the bump up to Ultra, is ideal for such workers. And it is overkill for the office-worker types.
 
The question is whether building and getting an outboard RAM controller working is worth it to Apple, to keep the small number of customers who need more than 192GB of RAM [...]
The M2 Max supports 96 GB of RAM per chip, an Ultra 192 GB, so configuring two Ultras in some fashion would reach 384 GB of RAM. And we do not yet know the math of the M3 architecture (at least I don't). Certainly the 3nm process will facilitate higher densities. One way or another, I expect M3 Mac Pros to allow at least 384 GB of RAM.
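
To spell the arithmetic out (with the caveat that the two-Ultra/M3 numbers are my own assumption of the same doubling pattern, not anything Apple has announced):

[CODE]
# Back-of-the-envelope unified memory ceilings. The 384 GB figure assumes the
# Max -> Ultra doubling carries forward one more step; it is speculative.
m2_max_gb = 96                     # shipping M2 Max maximum
m2_ultra_gb = 2 * m2_max_gb        # 192 GB, shipping M2 Ultra maximum
dual_ultra_gb = 2 * m2_ultra_gb    # 384 GB, hypothetical two-Ultra (or denser M3) configuration
print(m2_ultra_gb, dual_ultra_gb)  # -> 192 384
[/CODE]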
 