
satcomer

Suspended
Feb 19, 2008
9,115
1,977
The Finger Lakes Region
Well, I'm not retired. An M1 SoC Mini is not a competitive supercomputer node. Sorry. Too few cores, inadequate memory, limited to Ethernet... I don't think HPE is worried. Vastly different market. I thought you were joking.

If I'm parsing "mine own eyes walking back to my section for a new Unix crypto device worked!" correctly, that's not a supercomputer.

The server farm was well over 250 Mac minis put together in a server room in a top secret class at No Such Agency!
 

jjcs

Cancelled
Oct 18, 2021
317
153
The server farm was well over 250 Mac minis put together in a server room in a top secret class at No Such Agency!
That is not what most HPC users would consider a "supercomputer". This is: Aurora. Along with the Cray systems using AMD Epyc2 chips and the IBM systems using POWER9.

Apple's M1 architecture could be competitive in that world if they wanted to and maybe they do, but a Mac Mini isn't remotely in that market. Perhaps a multi-CPU M1 Max Xserve with substantially more RAM and a decent communications fabric would be. The last "Mac" supercomputer worthy of the term was the 2nd Virginia Tech cluster with G5 Xserves (the first deployment with G5 PowerMacs lacked ECC and wasn't reliable, as I recall). Long term support would be a problem based on past history, however, and Apple has pretty much zero current experience selling to that market. In fact, the scientific computing market isn't something they seem to care one bit about anymore. Not flashy enough, I guess.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
@deconstruct60 Thank you!

AMD has two types of GPU: CDNA for scientific computing and RDNA for gaming/rendering.

Which type of GPU should Apple's server chips have: CDNA-like GPU or RDNA-like GPU?

What part of the data center systems?

Apple likely has a cluster for Siri language training (could even be Nvidia DGX nodes just bought and racked up). Apple's NPU is for "inference", not "training". For training, Apple would need something more like a CDNA-class GPU (to use AMD's terminology), and it would also need some outsized memory capacities. There are some other areas where they are running large training models (same issue).
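
To put rough numbers on that "outsized memory" point, here is a back-of-the-envelope sketch (illustrative assumptions only: FP32 everywhere, a plain Adam-style optimizer, activation memory ignored, and a made-up parameter count):

```swift
// Back-of-the-envelope memory footprint: inference vs. Adam-style training.
// Illustrative assumptions only: FP32 weights, gradients, and two optimizer
// moments; activation memory ignored; parameter count is hypothetical.
let parameterCount = 1_000_000_000.0   // hypothetical 1B-parameter model
let bytesPerFP32 = 4.0

let inferenceGB = parameterCount * bytesPerFP32 / 1e9        // weights only
let trainingGB = parameterCount * bytesPerFP32 * 4.0 / 1e9   // weights + grads + 2 moments

print("Inference (weights only): \(inferenceGB) GB")         // 4.0 GB
print("Training (Adam, no activations): \(trainingGB) GB")   // 16.0 GB
```

So even before activations, checkpoints, or data-parallel replicas, training carries several times the memory of inference, which is why an inference-oriented NPU plus a consumer-sized memory pool doesn't cover it.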


A vastly bigger block of systems (and energy consumption) for their cloud services, though, is iCloud data/backup, iCloud app back ends, the Messages transaction back end, authentication, and the AppleTV+ video "back end". None of these need a GPU at all. File servers don't need GPUs. Neither do telecom services (again, mainly data transport from point A to point B). While there are video data files that AppleTV+ is serving, they are handled more as files than as video (the conversion to actual video is primarily done on the client).
[Apple's unified CPU+GPU buys lots of nothing for these workloads... which are the vast majority of their data center.] Some, probably large, fraction of the static video file serving for AppleTV+ is farmed out to edge providers (everyone gets the same movie, so it can be cached closer on the internet). So Akamai, Cloudflare, etc.

The iCloud+ internet address "masking" (pseudo VPN-ish) that they are ramping up now? Same thing. No significant GPU task present.

Apple has multimillion-dollar systems for supply chain (SAP and other tools), Fortune 500-level worldwide bookkeeping, and corporate-level stuff that isn't GPU bound either. (Not a big chunk of the data centers, but an expensive part.)


Some others are on a more slippery slope.

Apple probably has a non-trivial cluster devoted to batch EDA simulations (computational support for their chip design). If they had some "smart" (AI-ish) tools, then there could be some CDNA there. [I'd suspect Apple has more "we're smarter than the AI" chip designers, so probably not much there at the moment.] If Apple's EDA simulations are all very loosely node quasi-clustered, then they probably are not leaning on GPUs all that much now (and would be looking more for FPGA cards as "accelerators" here than GPUs).

Xcode Cloud... yes, but that is just "end user Macs" in a cloud. It is more of an RDNA-like GPU workload (which is the direction Apple's GPU is heavily skewed toward). Again, as I have said previously, they really don't "need" a server chip here at all. The same thing shipping to end users can be used here, especially if they are not doing concurrent, multi-tenant hosting.


P.S. Back in the last century, Apple had a Cray supercomputer to do thermal, mechanical, and some other modeling. (And Seymour Cray had a Mac to create some of the basic components of the Cray systems.) There is probably still some lab server doing finite element analysis for Apple now. Apple's GPU doesn't even believe in FP64, so they are really detached from the high-fidelity, real-world modeling solution space. Similarly with the no-ECC GPU stance (the same essential disqualifier that kept their Mac Pros/Xserves from serving as nodes when they lacked it).
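
On the FP64 point, a quick illustration of why single precision is a non-starter for the long accumulations that finite element and other physical-modeling codes do constantly (illustrative sketch only, not any particular solver):

```swift
// Illustrative only: why FP64 matters for long accumulations.
// In FP32, once the running sum hits 2^24 (16,777,216), adding 1.0 no
// longer changes it -- the increment is below half a ULP at that magnitude.
let n = 20_000_000

var sum32: Float = 0
var sum64: Double = 0
for _ in 0..<n {
    sum32 += 1.0
    sum64 += 1.0
}

print("FP32 sum: \(sum32)")   // 16777216.0 -- stuck, ~16% low
print("FP64 sum: \(sum64)")   // 20000000.0 -- exact
```

Real solvers mitigate this with compensated summation and mixed precision, but the baseline expectation in that market is hardware FP64 (and ECC), which is exactly what Apple's GPUs don't offer.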

In the supercomputer node space, Apple was/is dead in the water, and has been for more than several years. With Milan-X plus MI250X coupling now, and Intel Xeon SP + CXL to Ponte Vecchio coupling next year... even more dead in the water.

If Apple needs a supercomputer in a subset of their data center(s), they can just buy one, like back when they bought the previous ones. Folks are probably only fooling themselves if they think Apple is going to get into the "everything for everybody" server SoC business. They probably are not.
 

Xiao_Xi

macrumors 68000
Original poster
Oct 27, 2021
1,628
1,101
AWS explains how the chips it has designed have helped it improve its services.

It seems that Apple has more room to innovate in the server market than just making a gigantic M1 SoC.
 