
Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
A good server chip might be something like an M1 Ultra, but with all the P-cores replaced with 2x as many E-cores.
Would it make sense to split M1 Ultra into three chips: one for the E-cores, one for the GPU and the last one for the neural engine/hardware encoders?
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Would it make sense to split M1 Ultra into three chips: one for the E-cores, one for the GPU and the last one for the neural engine/hardware encoders?
You mean for servers? Or just as a general rule, so that P-cores could be mixed and matched? Any time you split there is a penalty, even with these new chiplet schemes. So you want to partition in as few places as possible, at places where there doesn’t need to be a lot of communication.

I saw a rumor today that Ultra times two is still a thing, using a tall skinny interposer chip (I’ve commented here on the need for something like that), but it would only be able to access 128GB of RAM. An example of why you can’t always just slice and glue your way to the perfect chip.
 
  • Like
Reactions: Xiao_Xi

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Yes. For instance, E-core chips for Apache Cassandra, GPU-core chips for TensorFlow training, and Neural Engine chips for Core ML inference.

If Apple were to decide it wanted its own chips in servers, I assume they’d design monolithic SoCs for that purpose and not try to assemble chiplets. It would be more efficient, and there are probably things they’d want to do to optimize for that use case.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I’m wondering if what Apple is planning for their next Big Chip is really designed to be their server chip. To make money on something that’s supposed to go into a low-volume product like the Mac Pro wouldn’t make financial sense in terms of R&D, unless they already have plans to make a whole lot more of them for another purpose.

An M2/M3-generation Ultra could rather straightforwardly go into a "half sized" Mac Pro with a few modest changes to the I/O die space allocation (swap some DisplayPort, SSD controller, Secure Enclave, and/or Thunderbolt controllers for an x16 PCIe v4/v5 controller). There are "extra", completely unused controllers on the second Max die in an Ultra, if Apple is just not dogmatically committed to completely identical twins. Apple could possibly stuff a PCIe controller into the UltraFusion interposer die, if that worked technology-wise, and continue to waste the extra area (cheaper R&D to just do one die that serves multiple roles).

Once that low-end Ultra covers the 'lower half' Mac Pro, there is not really much left for this "server" SoC to cover. The other "bigger" solution for the Mac Pro would more likely be even more dependent upon the design choices of the rest of the M-series lineup. Once Apple gets to TSMC N3 they could just make a bigger ~550mm^2 die that subsumes the Ultra onto one die and then couple two of those with a modified UltraFusion connector.

Does Apple really need to go higher than 40-48 cores for the Mac Pro? Probably not. macOS/Mach tops out at 64 threads. The rest of the iOS/iPadOS/macOS lineup doesn't need more than 64 threads. Apple is pushing loads of workload onto AMX/NPU/ProRes/GPU/image-camera/etc. specialized processors that don't need more OS threads. Those P/E cores would need updates over time so each one does more, but a core-count 'war'? No. P/E core performance updates flow across the rest of the Apple product space, so there are plenty of R&D funds to spend on that. If the Mac Pro is depending upon the rest of the product space to fund P/E evolution, there is zero funding problem. Apple needs a server chip to pay for the Mac Pro SoC like they need another hole in the head.

The M1 Ultra is a beast of a chip already and when you look at the theoretical peak performance, it’s basically an i9 12900K + W6900X hooked up to insane memory bandwidth and speed. Would be easy enough to just add two of them for the top-of-the-line SKU and call it a day.

So if they just double up the beast, then what is the pressing need for more than that? Not much. If they just wait for a fab shrink, they can do that with just two chips/tiles also.

Apple is trading scale-up/scale-out for performance per watt. UltraFusion is super fast, but it will probably have trouble scaling past two chips and still delivering "one GPU" unity.

Apple's W6900X comparison is likely those Affinity Photo benchmarks that have been floating around. That is a skewed, cherry-picked benchmark. A substantive factor there is that the W6900X is kneecapped by the Mac Pro 2019's PCIe v3 bus (the W6900X is capable of twice that speed, and the benchmark turns it into a question of how quickly you can move data back and forth; a 50%-reduced bus versus a full-speed bus, duh). That is not really saying much about competing with modern 2022-era server or workstation packages with PCIe v4/5 buses.

Apple's unified-memory GPUs are a double-edged sword. They give up scale (going to multiple GPUs on embarrassingly parallel workloads, at higher power consumption) to achieve higher performance per watt. It isn't a free lunch (Apple is giving up features/value-adds to get there).


If anybody has any server knowledge here, how many would they need to replace whatever they have now from Intel or AMD? Considering how AMD can’t keep up with demand with their Epyc Systems,

AMD is selling more server packages these days, but they don't dominate the server market. The most easily economically substitutable good for an AMD server package is an Intel package. Intel has kneecapped W-3300 volume just to keep Xeon SP volume high enough to partially contain AMD's expansion.

If someone wants to live-migrate their Intel VM image to another system, Intel has an advantage that Apple can't touch.

A substantial number of the higher-end server systems are critically dependent on some low-level hypervisor (ESXi/vSphere, Xen, Hyper-V, etc.). Apple doesn't have a type-1 hypervisor, nor do they support booting off of those either (hackery that allows an "at your own risk (don't call us for support)" boot is not technical support in these kinds of server contexts).

Multi-tier virtual machines are a useful tool at times too in the general server market.


would be nice for Apple to explore this market, even if for their own internal use.

Internal use? For iCloud Drive, video/audio serving, Apple ID authentication, and the bulk of iCloud services? Corporate finance, supply chain, customer service? First, much of the scale-out of those is done by contractors, so not really "internal".

For the critical stuff that is inside Apple's data centers, the bulk of it is all Linux, not macOS. There is no big leverage here for a macOS-focused SoC.

Apple also sometimes serves users on multiple continents out of their data centers. That means they can't really shun 40/50/100GbE networking in their servers. There are tons of data to get out to tens of thousands of users concurrently, and they have to be able to fill big pipes to internet backbone providers. (10GbE isn't really doing much in that context.)


If Apple wants to lower power consumption they can just order Ampere Altra Max boxes for those workload zones. It runs Linux fine and is gathering more virtual machine support from other type-1 vendors.

The only "internal" service where Apple needs some bigger scale-out is the Xcode Cloud service that is still evolving out of beta. Macminicolo, MacStadium, and others have offered very similar services with Minis and the MP2013 for years. If a 40-core/128-GPU-core Mac Pro comes along, they would only need it for corner cases. Minis and Studios tilted 90 degrees and mounted on rack shelving would be all the energy-efficient scale-out Apple would need for most of the Xcode Cloud service.

macOS remote virtualization licensing, with per-day rates charged to a single user/organization, means it is mostly geared toward being a rent-a-Mac-for-a-day/week/month/year service. It has a large 'sell more Macs' driver to it. Apple can do exactly the same thing internally. There is no mega-server driver present if there isn't large multi-tenant hosting going on.

They are growing their services business massively and having everything running on Apple chips could yield benefits we haven’t even thought of for them as a completely vertically integrated company.

Making users/companies buy more Macs makes Apple more money. Services for non-multi-tenant hosting will be higher revenue (and likely margin) for Apple on a per-user-served basis.

For the Linux services workload there are bigger savings in riding off the shared R&D that all the rest of the folks in that market are pouring in. There are already ARM server options there with a shared R&D cost base.

P.S. There is a ton of "echo chamber" about the rumored '40-core' M-series and calls for a server SoC. Part of this is a hope that a 'server' focus will drive Apple into a higher-modularity SoC: that Apple will toss its laser focus on performance per watt aside to worship at the altar of hyper-modularity, and that you can get DIMM slots and discrete GPU drivers if Apple does a server Apple Silicon SoC.

Trying to herd Apple into more modularity isn't really a good win/win for Apple. Apple has been very clear that perf/watt is one of the highest priorities they have for their internal SoC work. This is an attempt to create an expensive server chip and then have the Mac Pro help pay for it. The Mac Pro has the rest of the Mac lineup to help pay the R&D for a Mac Pro SoC.

A highly optimized individual user workstation SoC isn't necessarily going to be a good , well rounded "server" chip.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Apple's SoC architecture is not a good match for cloud servers. Apple makes balanced systems, while servers are often customized for a specific purpose. For example:

RAM: Cloud servers have 256 GB to 24 TB memory these days. A system with 2x M1 Ultra would only be suitable for tasks where the user does not care about memory.

Not only capacity. Apple is punting on ECC also. That is another likely contributing reason why they are trying very hard not to get into a "max capacity" contest.

Going over 512 GB with no ECC is dubious. At the tens-of-TB range, even more dubious. It is actually a complete non-starter for any serious cloud HPC server.

Apple's "poor man's HBM" with super-wide LPDDR memory is a double-edged sword. It gets them higher perf/watt, but it comes with trade-offs in capacity and long-term data integrity. The bulk of Apple's SoC is dedicated to an iGPU, which doesn't really care about those two downsides.

At the Ultra's introduction Apple made multiple comments about how 128GB of RAM was an extraordinarily large/enormous/amazing capacity. If a later "double Ultra" goes to 256GB, it seems doubtful they'll be losing tons of sleep over the size of the user base they 'cut off'.

'At rest' ECC runs counter to LPDDR data path widths. Apple is running so many independent LPDDR memory channels that they probably don't want to add ECC side channels, as that would add significant die area to the memory controllers.

SSD: Systems with 8 internal SSDs are common in applications where the user needs fast local storage. Each of them is comparable to the SSDs used in the Mac Studio.

Networked NVMe SSDs and fast NAS/SAN storage are a factor in server contexts also (>20GbE). Those 8 SSDs come with PCIe lane provisioning (e.g., Epyc 128 lanes, Xeon SP Gen 4 80 lanes, Ampere Altra Max 128 lanes). It's not only the internal storage; servers often need to hook up at higher bandwidths to storage and user data that is out on the network also.

GPU: Either the GPU is a completely unnecessary waste of money, or the user is going to need a lot of GPU power. 8x Nvidia A100, for example.

Again, PCIe lane provisioning is at the core of that. If Apple is primarily interested in a "half sized" Mac Pro, then they are probably cutting slot count (e.g., retreating back to 3-4) rather than trying to build as big a box as possible with the maximum legal power draw from the wall.
 
  • Like
Reactions: Xiao_Xi

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
For file storage? Who is using cloud storage for files?

A relatively large chunk of the iOS/iPadOS/macOS/Android folks do. Google Drive? iCloud Drive backups? Nobody is using those?

As primary drives on home LAN networks? No. In server contexts, on data-center-level LAN networks? Happens all the time.
 

ahurst

macrumors 6502
Oct 12, 2021
410
815
For file storage? Who is using cloud storage for files? It’s expensive and very slow. Cloud storage in this context is usually delegated to backups.
Especially for any sensitive data. For research and data collection (at least in universities), ethics boards are pretty strict about storing data locally (even for anonymized data to share publicly we're only allowed to host it on servers within the country). I'm sure it's a similar deal for industrial research and medical facilities.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Especially for any sensitive data. For research and data collection (at least in universities), ethics boards are pretty strict about storing data locally (even for anonymized data to share publicly we're only allowed to host it on servers within the country). I'm sure it's a similar deal for industrial research and medical facilities.

Cloud can cover NAS/SAN storage run by a company/institution that isn't limited to a single department or subdivision. A university supercomputer's data center isn't going to store data inside of individual nodes. (One mile across campus in a data center run by another part of the organization, or 100 miles away in AWS, isn't all that materially different.)

Different regions have different levels of compliance requirements, but in a number of cases even sensitive data is moving. You cannot use generic virtual machine / container instances copied from some open-source repository, but extremely large cloud vendors like AWS have infrastructure and procedures to deal with sensitive data.

For example, HIPAA data on AWS:

https://aws.amazon.com/compliance/hipaa-compliance/


Another example was the US DoD JEDI contract, which imploded but would have moved a decent chunk of processing into a "cloud" for them. (Contractors and subcontractors have run some DoD data sites for decades. Not a sexy, new-age Linux cloud, but old-school mainframe consolidation centers.)


No commercial software projects at all inside GitHub? (Microsoft paid $7B just to host open-source projects? Err, no.)

You have to go through gyrations (postpone Apple ID login as long as possible) to keep the standard macOS install / new-machine boot process from turning on iCloud and stuffing personal data into it. Apple turns it on by default.


Is everyone stuffing everything into AWS-like clouds? No. Is it "extremely" rare usage at this point? Again no. There are lots of folks using Salesforce.com, Oracle Apps cloud, AWS, Azure, etc.

P.S. There are hybrid setups also.

Other folks' "containers"/"VMs" run inside the internal network, but pragmatically on a subnetwork of their own, with outside maintainers of the 'soft' instances but limited access to the data. Azure hybrid is one such setup, and one that Apple ran/runs.

 
Last edited:
  • Like
Reactions: Fomalhaut

Rickroller

macrumors regular
Original poster
May 21, 2021
114
45
Melbourne, Australia
The question I raised may have been limiting in its scope since I was struggling to find expanded use cases for what comes next for Apple silicon. Also we seem to define limitations on how these chips can be configured based on what we’ve seen so far and are extrapolating for the next ones.

The supposed Mac Pro chip would be untouchable performance wise with double the M1 Ultra.

Are we happy to say that’s it…Apple will end it here at this last chip…?
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
The question I raised may have been limiting in its scope since I was struggling to find expanded use cases for what comes next for Apple silicon. Also we seem to define limitations on how these chips can be configured based on what we’ve seen so far and are extrapolating for the next ones.

The supposed Mac Pro chip would be untouchable performance wise with double the M1 Ultra.

Are we happy to say that’s it…Apple will end it here at this last chip…?

Well, they say they have ended the M1 family with the M1 Ultra. So presumably what comes next is M2. And so it begins again…
 
  • Like
Reactions: Tagbert

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
It would make sense for Apple to release servers for small businesses/companies.

A server that preserves your company's data & privacy locally instead of using data-harvesting cloud hosting.

And the software support is right there, as macOS can run almost anything made for Linux with little tweaking.
Unfortunately, Apple wound down their Server software years ago….
 

Rickroller

macrumors regular
Original poster
May 21, 2021
114
45
Melbourne, Australia
Well, they say they have ended the M1 family with the M1 Ultra. So presumably what comes next is M2. And so it begins again…
Unless the Mac Pro has a different enough setup that it gets a new name with maybe a longer/slower product cycle. We expect M1 Ultras in some form, but again we know this would mean some limitations without serious modifications to improve expansion capabilities.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Synology is the king of NAS's. I have 2 at home.

I'd never buy an Apple NAS.

I also have one. It works OK most of the time, but the software quality is poor. I also had a file system failure once that completely destroyed all my data: neither RAID1 nor btrfs helped, and the support just shrugged and said these things can happen. I also experience regular issues with users and connecting to the server. And performance is rather poor, despite using fast HDDs.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I also have one. It works OK most of the time, but the software quality is poor. I also had a file system failure once that completely destroyed all my data: neither RAID1 nor btrfs helped, and the support just shrugged and said these things can happen. I also experience regular issues with users and connecting to the server. And performance is rather poor, despite using fast HDDs.
I have 2 Synology 3612’s and 1 24xx (can’t recall the year), fully populated, and only crashed the file system once, due to my own stupidity and a mishap with a UPS. Been rock solid for a decade now. That said, their new stance re: not supporting hard drives except for their own branded ones is really a non-starter for me, and unless the situation improves I’d avoid their big boxes from 2022 onward.
 

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482

I'm sure Apple is testing/planning their chips for server use at some point.

They could be testing their own chips on parts of their internal services like iCloud, iMessage, etc. Then their own data scientists could switch to using Apple Silicon to train ML models instead of Nvidia. Next, you could see them releasing a real-world service such as macOS emulation in the cloud, powered by M3, M3 Pro, M3 Max, M3 Ultra, and M3 Ultra Duo.
 
Last edited:

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
their own data scientists could switch to using Apple Silicon to train ML models instead of Nvidia.
Deep learning frameworks for the Apple GPU are not yet ready. PyTorch is not compatible with Apple's GPU, and TensorFlow on Apple's GPU is not reliable. However, the Neural Engine, used via Core ML, makes inference very fast.
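For reference, a minimal sketch of what Core ML inference looks like from Python via coremltools; the model file name, the "image" input name, and the tensor shape are hypothetical placeholders, and whether the Neural Engine actually runs the model depends on its ops:

```python
# Minimal sketch: Core ML inference from Python via coremltools.
# "MyClassifier.mlpackage", the "image" input name, and the shape are
# hypothetical placeholders for whatever model has been converted.
import numpy as np
import coremltools as ct

# Ask Core ML to use all available compute units (CPU, GPU, Neural Engine);
# Core ML decides per-layer where the model actually executes.
model = ct.models.MLModel("MyClassifier.mlpackage",
                          compute_units=ct.ComputeUnit.ALL)

# Run one prediction on a dummy input tensor.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
print(model.predict({"image": dummy}))
```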

Next, you could see them releasing a real-world service such as macOS emulation in the cloud, powered by M3, M3 Pro, M3 Max, M3 Ultra, and M3 Ultra Duo.
Apple could copy AWS and offer macOS as a service when professional software fully embraces Apple's GPU. AWS currently offers Windows virtual machines with Nvidia GPUs for 3D software.
 

Kierkegaarden

Cancelled
Dec 13, 2018
2,424
4,137
Excellent question, and one I posed recently — though not as succinctly.

It does seem like they have something else planned besides the consumer market with the incredible power they’re flexing right now.

The article brought up the possibility that their services could run on their own infrastructure built on AS — is this possible, would it be cost effective, and would this have an effect on security?
 

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
Deep learning frameworks for the Apple GPU are not yet ready. PyTorch is not compatible with Apple's GPU, and TensorFlow on Apple's GPU is not reliable. However, the Neural Engine, used via Core ML, makes inference very fast.
Apple has created adapters for TensorFlow. Of course, they've used their own Apple Silicon for training internally, or at least for testing.
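Those adapters ship as the tensorflow-metal PluggableDevice plugin on top of tensorflow-macos. A minimal sketch, assuming those packages are installed, that checks the Apple GPU is visible and runs a toy training loop (the tiny model and random data are purely illustrative):

```python
# Minimal sketch: verify the tensorflow-metal plugin exposes the Apple GPU
# and run a toy training loop on it.
# Install (per Apple's instructions): pip install tensorflow-macos tensorflow-metal
import numpy as np
import tensorflow as tf

# With tensorflow-metal installed, the Apple GPU appears as a 'GPU' device.
print(tf.config.list_physical_devices("GPU"))

# Toy regression model on random data, just to confirm training executes.
x = np.random.rand(256, 32).astype("float32")
y = np.random.rand(256, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, batch_size=32)
```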

Their data scientists could be internally training/testing with a future M3 Ultra Duo engineering sample with 512GB of unified RAM right now, which is impossible to get with an Nvidia A100. Who knows.

Internal Apple data scientists don't have to wait for M2/M3 like the general public.
 

Rickroller

macrumors regular
Original poster
May 21, 2021
114
45
Melbourne, Australia
Excellent question, and one I posed recently — though not as succinctly.

It does seem like they have something else planned besides the consumer market with the incredible power they’re flexing right now.

The article brought up the possibility that their services could run on their own infrastructure built on AS — is this possible, would it be cost effective, and would this have an effect on security?
This is the reason I started this thread in the first place. There are very few companies out there that have a level of ambition that can be met with the resources to bear. I couldn’t see the limitations that the consumer market placed on the products lining up.
 
  • Like
Reactions: Kierkegaarden

Kierkegaarden

Cancelled
Dec 13, 2018
2,424
4,137
This is the reason I started this thread in the first place. There are very few companies out there that have a level of ambition that can be met with the resources to bear. I couldn’t see the limitations that the consumer market placed on the products lining up.
Absolutely agree. Outside the possible enterprise ambitions, I do have another theory — that the power they are putting out is laying the foundation for future VR/AR hardware.
 
  • Like
Reactions: singhs.apps

Rickroller

macrumors regular
Original poster
May 21, 2021
114
45
Melbourne, Australia
Absolutely agree. Outside the possible enterprise ambitions, I do have another theory — that the power they are putting out is laying the foundation for future VR/AR hardware.
Well, you have a point. On more than a few occasions already, Tim Cook has been very clear that AR/VR is very important to Apple. You would think there is a strategy for that, backed up by real effort from the whole company to execute at the level Apple is known for.

Putting the amount of computing power in their machines that they do could be a clue as to what they feel they need to achieve a certain level of QoS. The GPU in the iPhone 13 Pro being 50% faster than the previous model's is a lot for a single model-year progression; 25% would have already been respectable. Even the hints their tech guys let slip about lossless wireless audio add a lot to this picture.
 
  • Like
Reactions: Kierkegaarden

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
Apple has created adapters for TensorFlow. Of course, they've used their own Apple Silicon for training internally, or at least for testing.
Other than the small team developing the TensorFlow plugin, I doubt that any other Apple engineers use TensorFlow on Apple hardware. TensorFlow support for Apple hardware is incomplete and buggy.

Their data scientists could be internally training/testing with a future M3 Ultra Duo engineering sample with 512GB of unified RAM right now, which is impossible to get with an Nvidia A100. Who knows.
Training a deep learning model on multiple GPUs is a solved problem. Apple needs to do more to shake up the current deep learning status quo.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
I also have one. It works OK most of the time, but the software quality is poor. I also had a file system failure once that completely destroyed all my data: neither RAID1 nor btrfs helped, and the support just shrugged and said these things can happen. I also experience regular issues with users and connecting to the server. And performance is rather poor, despite using fast HDDs.
I've never had problems like that, but I run RAID5. We have Synology at work too.

As for performance, yes, it's not blazingly fast, but compared to other NASes, it's the fastest I've used. You'd have to go to a real SAN to get anything better. Anyway, it's mainly for nearline storage and speed isn't the most important thing. I've never lost a byte with a Synology NAS.

The reason I said I wouldn't buy an Apple NAS is that everything in their current makeup screams they have no idea at all what a NAS needs. They're far too much into soldering their RAM and SSDs to the motherboard rather than having anything upgradeable or fixable, and that's not going to work in the NAS product category at all.

Could someone do better for a NAS? Sure, Windows on an Intel-based small server can do it already, but that's even more expensive.
 