@deconstruct60 Thank you!
AMD has two types of GPU: CDNA for scientific computing and RDNA for gaming/rendering.
Which type of GPU should Apple's server chips have: CDNA-like GPU or RDNA-like GPU?
What part of the data center systems?
Apple likely has a cluster for Siri language training (it could even be Nvidia DGX nodes just bought and racked up). Apple's NPU is for "inference", not "training". For training, Apple would need something more like a CDNA part (to use AMD's terminology), and also some outsized memory capacities. There are some other areas where they are running large training models (same issue).
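(To put rough numbers on that inference-vs-training gap, here is a back-of-envelope sketch in Python. The bytes-per-parameter figures are generic mixed-precision / Adam-optimizer rules of thumb, and the 10B-parameter model is a made-up illustration, not anything Apple-specific.)

```python
# Rough sketch of why training needs far more memory than inference.
# The per-parameter byte counts below are common rules of thumb for
# mixed-precision training with an Adam-style optimizer, not measured
# figures for any real deployment.

def inference_bytes(params: int, bytes_per_weight: int = 2) -> int:
    # Inference mostly just has to hold the weights (fp16/bf16 here),
    # plus a comparatively small amount of activation memory.
    return params * bytes_per_weight

def training_bytes(params: int) -> int:
    # Mixed-precision training typically keeps, per parameter:
    #   2 bytes fp16 weights + 2 bytes fp16 gradients
    #   4 bytes fp32 master weights
    #   8 bytes Adam first/second moments (fp32)
    # ~= 16 bytes per parameter, before activations are even counted.
    return params * 16

if __name__ == "__main__":
    n = 10_000_000_000            # hypothetical 10B-parameter model
    gib = 1024 ** 3
    print(f"inference: ~{inference_bytes(n) / gib:.0f} GiB")  # ~19 GiB
    print(f"training : ~{training_bytes(n) / gib:.0f} GiB")   # ~149 GiB
```

That is where the "outsized memory capacities" point comes from: a model that fits comfortably on an inference accelerator can need an order of magnitude more memory (and HBM-class bandwidth) once you try to train it.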
A vastly bigger block of systems (and energy consumption) in their cloud services, though, is iCloud data/backup, the iCloud app back end, the Messages transaction back end, authentication, and the AppleTV+ video back end.
None of these need a GPU at all. File servers don't need GPUs. Neither do telecom services (again, mainly data transport from point A to point B). While there are video data files that AppleTV+ is serving, they are handled more as files than as video (the conversion to actual video is primarily done on the client).
[Apple's unified CPU+GPU buys lots of nothing for these workloads... which are the vast majority of their data center.] Some, probably large, fraction of the static video file serving for AppleTV+ is farmed out to edge providers (everyone gets the same movie, so the cache can be distributed closer on the internet). So Akamai, Cloudflare, etc.
The iCloud+ internet address "masking" (pseudo VPN-ish) that they are ramping now? Same thing. No significant GPU task present.
Apple has multimillion-dollar systems for supply chain (SAP and other tools), Fortune 500-level worldwide bookkeeping, and corporate-level stuff that isn't GPU bound either. (Not a big chunk of the data centers, but an expensive part.)
Some other workloads are on a more slippery slope.
Apple probably has a non-trivial cluster devoted to doing batch EDA simulations (computational support for their chip design). If they had some "smart" (AI-ish) tools, then there could be some CDNA there. [I'd suspect Apple has more "we're smarter than the AI" chip designers, so probably not much there at the moment.] If Apple's EDA simulations are all loosely coupled, quasi-clustered jobs, then they probably are not leaning on GPUs all that much now (and are more likely looking at FPGA cards as "accelerators" here than GPUs).
Xcode Cloud... yes, but that is just "end user Macs" in a cloud. It is more of an RDNA-like GPU workload (which is the direction Apple's GPU is heavily skewed toward anyway). Again, as I have said previously, they really don't "need" a server chip here at all. The same thing shipping to end users can be used here, especially if they are not doing concurrent, multi-tenant hosting.
P.S. Back in the last century, Apple had a Cray supercomputer to do thermal, mechanical, and some other modeling. (And Seymour Cray had a Mac to create some of the basic components of the Cray systems.) There is probably still some lab server doing finite element analysis for Apple now. Apple's GPU doesn't even believe in FP64, so they are really detached from the high-fidelity, real-world modeling solution space. Similarly with the no-ECC-GPU stance (the same essential disqualifier for their Mac Pros/Xserves serving as nodes back when those lacked it).
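(A minimal illustration of the FP64 point, not an FEA solver: naively accumulating a million small values in float32 drifts visibly, while float64 stays essentially exact. Ill-conditioned stiffness matrices and implicit solvers amplify exactly this kind of rounding.)

```python
import numpy as np

# Naive sequential accumulation in each precision; no compensated
# summation, on purpose, so the raw rounding drift is visible.
acc32 = np.float32(0.0)
acc64 = np.float64(0.0)
for _ in range(1_000_000):
    acc32 += np.float32(0.1)
    acc64 += np.float64(0.1)

print(acc32)  # roughly 100958.34 -- visibly off from the exact 100000
print(acc64)  # ~100000.000001  -- error only in the trailing digits
```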
In the supercomputer node space, Apple was/is dead in the water, and has been for more than several years. With Milan-X plus MI250X coupling now, and Intel Xeon SP + CXL to Ponte Vecchio coupling next year... even more dead in the water.
If Apple needs a supercomputer in a subset of their data center(s), they can just buy one, like back when they bought the previous ones. Folks are probably only fooling themselves if they think Apple is going to get into the "everything for everybody" server SoC business. They probably are not.