I’m writing an N-body gravitational program to model the Milky Way. I want to use at least 100,000 “heavy” stars, which means 100,000**2 = 10 billion pairwise interactions; at 4 double-precision variables per interaction, that comes to 320 GBytes. I’ve implemented this on a Mac Studio with 32 GBytes of RAM and a 1 TByte SSD (~370 GBytes free).

With multi-threading disabled in my IDL code, and using small data chunks, the program successfully generates a rotating galaxy, but it takes far too long to execute. Using the IDL engine, which supports multi-threading, and re-engineering the code to feed very large data chunks to the 12 CPU cores, the program crashes after processing about 80 GBytes of data (it runs OK with 50,000 stars). Activity Monitor shows about 60% CPU usage before the crash, reasonable memory pressure, and up to 94 GBytes of memory in use by the IDL engine. I presume swapping to virtual memory is somehow the cause.

I understand that RAM cannot be upgraded in the Mac Studios, right? Any suggestions on the source of the problem and a fix? Thanks!
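For what it’s worth, since the crash seems tied to peak memory growing with chunk size, here is a minimal sketch of the bounded-memory tiling idea I’m describing, written in NumPy for illustration (the function name, `block` parameter, and softening length `eps` are my own choices, not from any library; the same row-blocking carries over to IDL array operations). Each tile only ever materializes a `block × N` set of separations, so peak temporary memory is O(block × N) instead of O(N²) no matter how many stars are used:

```python
import numpy as np

def accelerations_blocked(pos, mass, block=1000, G=6.674e-11, eps=1e-3):
    """Pairwise gravitational accelerations computed in row blocks.

    pos  : (N, 3) array of positions
    mass : (N,)   array of masses
    block: number of target stars processed per tile; peak temporary
           storage scales as O(block * N) rather than O(N**2).
    eps  : softening length, avoids the self-interaction singularity.
    """
    n = len(mass)
    acc = np.zeros_like(pos)
    for start in range(0, n, block):
        stop = min(start + block, n)
        # (b, N, 3) separation vectors from this block of targets to all stars
        d = pos[np.newaxis, :, :] - pos[start:stop, np.newaxis, :]
        # softened squared distance; the i == j term contributes zero
        # because its separation vector is exactly zero
        r2 = np.sum(d * d, axis=-1) + eps**2
        inv_r3 = r2 ** -1.5
        # a_i = G * sum_j m_j * d_ij / |d_ij|^3, summed over the j axis
        acc[start:stop] = G * np.einsum('bj,bjk->bk', mass * inv_r3, d)
    return acc
```

The trade-off is that `block` can be tuned to whatever fits in physical RAM (e.g., a few thousand rows at N = 100,000 keeps each temporary in the low GBytes), while still giving the threading layer large contiguous arrays to parallelize over.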