
leman

macrumors Core
Oct 14, 2008
19,521
19,674
Probably not, although I do have benchmark data for SGEMM. This is going off on a tangent, because my original point was that the hardware could run things like OpenMM and JAX, but the software ecosystem prevents you from fully utilizing an M1 Mac you already own. It's not the optimal solution for a computer you don't yet own but want to buy for HPC/AI.

For my best comparison, the following: M1 Ultra extrapolated from the M1 Max GPU (e = emulated); then the CPU, GPU, and ANE used simultaneously, extrapolated from M1 Max or M1; then the RTX 3090 Ti, whose purpose is closest to the M1 Ultra's and which is the same generation as the A100; finally, a single A100, not using sparsity. The data does reflect the shortfalls of real-world performance, since hardware is never fully utilized. Regarding cost: the M1 Ultra is a hybrid between consumer hardware and overpriced special-purpose hardware. The GPU's design and lower clock speed also require more silicon for the same performance.

Utilized TFLOPS    M1 Ultra (GPU)   M1 Ultra (System)   RTX 3090 Ti   A100 80 GB
Vector FP64        ~0.26-0.53e      0.62                0.61          9.57
Matrix FP64        ~0.26-0.53e      1.40                0.61          19.14
Vector FP32        20.80            22.03               39.29         19.14
Matrix FP32        16.87            22.36               157.16        153.23
Vector FP16        20.80            23.26               39.29         76.59
Matrix FP16        16.28            43.06               314.32        306.47
Power (W)          96               184                 450           250
Bandwidth (GB/s)   750              750                 1008          2039
Cost (USD)         5000             5000                2000          16300
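For anyone who wants to reproduce utilization numbers like these, here is a minimal sketch (assuming Python with numpy; the achieved figure depends heavily on which BLAS backend numpy links against, e.g. Accelerate on macOS):

```python
import time
import numpy as np

def gemm_tflops(n: int = 2048, dtype=np.float32, iters: int = 10) -> float:
    """Time repeated n x n matrix multiplies and report achieved TFLOPS.

    A dense GEMM performs roughly 2*n^3 floating-point operations.
    """
    a = np.random.rand(n, n).astype(dtype)
    b = np.random.rand(n, n).astype(dtype)
    a @ b  # warm-up so one-time initialization isn't timed
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    elapsed = time.perf_counter() - start
    flops = 2 * n**3 * iters
    return flops / elapsed / 1e12

print(f"~{gemm_tflops():.2f} TFLOPS FP32")
```

This measures only what the linked BLAS exposes, so it will not show ANE or GPU contributions; it is the CPU column of a table like the one above.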

Sources:

https://github.com/pytorch/pytorch/files/10250248/Matrix.Multiplication.Performance.xlsx

https://github.com/philipturner/metal-benchmarks

https://web.eece.maine.edu/~vweaver/group/green_machines.html

https://www.anandtech.com/show/17024/apple-m1-max-performance-review/3

https://github.com/philipturner/met...BlitEncoderAlternative/MainFile.swift#L27-L36

https://www.nvidia.com/en-us/geforce/news/geforce-rtx-3090-ti-out-now/

https://www.servethehome.com/nvidia-geforce-rtx-3090-review-a-compute-powerhouse/3/

https://www.ebay.com/itm/333991727955

I am surprised at the matrix scores. I would have thought that AMX on the Ultra would have higher FP32 performance. Also, the disparity between FP16 and FP32 is huge! AMX should run FP16 at double the FP32 rate, right? Or is that a contribution from the ANE?

P.S. Does your matrix implementation on Apple GPU use the matmul intrinsics?
 

Philip Turner

macrumors regular
Dec 7, 2021
170
111
https://github.com/philipturner/met...rformanceTests/ClockCycleTests.swift#L55-L252

I was surprised too, but for some reason FP16 is never faster than FP32 in Accelerate, and it's inaccessible from BLAS. Apple saw little need to make FP16 accessible on the CPU, since it's mostly used on the Neural Engine. NEON shows more of the expected disparity (0.6 TFLOPS FP64, 1.3 TFLOPS FP32, 2.6 TFLOPS FP16).

Matrix scores are CPU + GPU combined. This goes for both FP32 and FP16, each with ~4 TFLOPS from AMX and ~16 TFLOPS from the GPU. The ANE (22 TFLOPS) is only added to the FP16 matrix figure, although in the real world you'll rarely use it (much less concurrently with the GPU).
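You can observe the lack of a fast CPU FP16 GEMM path from Python, too: in most BLAS builds, a float16 matmul falls back to a slow conversion path. A rough timing sketch (assuming numpy; exact results vary by backend):

```python
import time
import numpy as np

def time_matmul(dtype, n: int = 1024, iters: int = 5) -> float:
    """Return seconds taken by `iters` n x n matrix multiplies in `dtype`."""
    a = np.random.rand(n, n).astype(dtype)
    b = np.random.rand(n, n).astype(dtype)
    a @ b  # warm-up
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    return time.perf_counter() - start

t32 = time_matmul(np.float32)
t16 = time_matmul(np.float16)
# On typical CPU BLAS builds, FP16 is no faster (often much slower) than FP32
print(f"FP32: {t32:.3f} s, FP16: {t16:.3f} s")
```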
 
  • Like
Reactions: maflynn

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
Your claims have no basis in reality.
Well, I always claimed that the M1 Extreme rumor was nonsense. So far, reality is backing me. Until Apple releases such a four-in-one chip, you don't get to say, "See, I was right". Instead you get to make funny excuses for why your predictions didn't come to pass.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
An Apple datacenter CPU would be miles ahead of anyone else. With their performance per core and low power usage per core, they could easily build a huge chip that would outperform everyone else by a factor of two while using half the power.
Do you think Apple could create a data center CPU twice as fast and efficient as Graviton 3? How much more efficient is an M2 core than a Graviton 3 core?
 

maflynn

macrumors Haswell
May 3, 2009
73,682
43,740
Do you think Apple could create a data center CPU twice as fast and efficient as Graviton 3?
Maybe, but at what cost? And of course, as many folks have been saying, the CPU is just one small piece of the puzzle; they would need to develop a whole new infrastructure around it. There's speed, but there also has to be redundancy and reliability. There are other factors to be sure, but overall it's a steep mountain to climb for Apple, and what is the reward? It seems high risk and low reward, IMO.
 

Zest28

macrumors 68030
Jul 11, 2022
2,581
3,933
Honestly, Apple is very overrated.

Let's take cloud gaming, for example. Microsoft's blade servers are based on the Xbox Series X, and a $500 Xbox Series X is more powerful than a $2,000 M1 Max Mac Studio.

Likewise, Nvidia uses the $700 RTX 3080 for GeForce Now, which is much more powerful than a $4,000 M1 Ultra Mac Studio.

And I'm not even going into the additional technologies that RTX cards and the Xbox Series X have that are missing from the M1 GPUs.

I have also never heard of anyone buying warehouses full of M1 Ultras to mine cryptocurrency. You'd think the efficiency of the M1 chip would be amazing here, but nope.
 
  • Haha
Reactions: jdb8167

Philip Turner

macrumors regular
Dec 7, 2021
170
111
And I'm not even going into the additional technologies that RTX cards and Xbox Series X has that is missing on the M1 GPU's.
Let's wait for Apple's M2 Pro with hardware ray tracing. Apple has been taking its time to get it just right: power-efficient enough to run on a mobile AR headset. If the A16 chip hadn't had that failure, Apple would have been the first major company in the industry to sell products with ray-sorting hardware. They failed, so Nvidia's SER became the first.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
Do you think Apple could create a data center CPU twice as fast and efficient as Graviton 3? How much more efficient is an M2 core than a Graviton 3 core?
I was not able to find SPEC2017 results for Graviton3. Graviton2 achieves 170/160 on the int and fp tests, respectively (that's the 64-core 100 W model). A 4x M1 Max (40+8 cores) would achieve 200/320 at a total CPU power of 120 W. Graviton3 is what, 20-30% faster than Graviton2? And that's keeping the Firestorm cores at their 2.9 GHz clock. So yeah, I'd say that even Firestorm can be more efficient than Graviton3 if you clock it a bit lower and pack in a lot of cores (64 Firestorm cores at 2.5 GHz should give you around 300/500 in SPEC at under 150 watts).
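The arithmetic behind that extrapolation is easy to check. All inputs below are the estimates quoted in the post, not independent measurements:

```python
# SPEC2017 (int, fp) estimates and package power as quoted in the post
systems = {
    "Graviton2 (64 cores)":   {"int": 170, "fp": 160, "watts": 100},
    "4x M1 Max (40+8 cores)": {"int": 200, "fp": 320, "watts": 120},
}

for name, s in systems.items():
    print(f"{name}: {s['int'] / s['watts']:.2f} int/W, "
          f"{s['fp'] / s['watts']:.2f} fp/W")

# Even granting Graviton3 a ~25% uplift over Graviton2 at similar power,
# the extrapolated M1 configuration keeps its fp-per-watt lead:
g3_fp_per_watt = 160 * 1.25 / 100
m1_fp_per_watt = 320 / 120
print(f"Graviton3 est. {g3_fp_per_watt:.2f} fp/W vs 4x M1 Max {m1_fp_per_watt:.2f} fp/W")
```

Note that on the int side the quoted numbers are nearly a wash (1.70 vs 1.67 int/W); the efficiency advantage in this extrapolation comes mostly from fp.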

Still doesn't mean that Apple could deliver a meaningful offering. As @maflynn says, it's a big investment and risk for a very questionable reward. I mean, would one suggest that Ferrari make taxis?
 
  • Like
Reactions: maflynn and Xiao_Xi

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I was not able to find SPEC2017 results for Graviton3. Graviton2 achieves 170/160 on the int and fp tests, respectively (that's the 64-core 100 W model). A 4x M1 Max (40+8 cores) would achieve 200/320 at a total CPU power of 120 W. Graviton3 is what, 20-30% faster than Graviton2? And that's keeping the Firestorm cores at their 2.9 GHz clock. So yeah, I'd say that even Firestorm can be more efficient than Graviton3 if you clock it a bit lower and pack in a lot of cores (64 Firestorm cores at 2.5 GHz should give you around 300/500 in SPEC at under 150 watts).

SPECint is a somewhat dubious benchmark for a multiuser, multitenant cloud server system: a single user, a single app, on a single thread. Who hires Amazon for that? People ship backend workloads to AWS, not single-user GUI apps.

SPECrate, perhaps.


[Chart: AWS Graviton3 SPEC test results]


https://www.nextplatform.com/2022/01/04/inside-amazons-graviton3-arm-server-processor/

The other part is that you'd want to do this with not just one client soaking up the whole SoC, but multiple different apps running against the L3/L4 subsystem, each client confined to a subset vCPU cluster. At that point you'd have something that represents what the general AWS workload is about (users consuming and sharing fractional slices at varying rates, with multiple workloads aggregated to run the servers at 70-90% utilization).

Hot-rod, single-threaded drag racing is something else. It minimizes cache and memory bandwidth contention issues.


Graviton3 has V1-derivative cores, so the 'rate' (parallel) FP results are a bigger jump over the N1 design baseline. Depending on the workload, there can be a very big jump from Graviton2 to Graviton3. See the summary graph toward the end of Phoronix's review.

[Chart: Phoronix Graviton3 benchmark summary]




There are Java and computational workloads there that see far better than a 30% uplift.



Still doesn't mean that Apple could deliver a meaningful offering. As @maflynn says, it's a big investment and risk for a very questionable reward. I mean, would one suggest that Ferrari make taxis?

There are a number of uncore parts they are missing as well: no ECC memory, no native UEFI boot support, etc. Just slapping some M-series CPU cores onto a new die isn't even the biggest missing piece, expense-wise.
Apple has nothing like a Nitro/DPU/IPU (Data Processing Unit / Infrastructure Processing Unit) solution, which also has a software/firmware aspect to the holistic picture.
 
  • Like
Reactions: Oculus Mentis

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
...The data does reflect the shortfalls of real-world performance, since hardware is never fully utilized. Regarding cost: the M1 Ultra is a hybrid between consumer hardware and overpriced special-purpose hardware. The GPU's design and lower clock speed also require more silicon for the same performance.

Utilized TFLOPS    M1 Ultra (GPU)   M1 Ultra (System)   RTX 3090 Ti   A100 80 GB
Vector FP64        ~0.26-0.53e      0.62                0.61          9.57
Matrix FP64        ~0.26-0.53e      1.40                0.61          19.14

Errr, the M1 Ultra GPU is supposed to be credible for HPC work? And the A100 is two years old; some folks are running H100s in labs now, and the uplift is pretty large. In a GPU-vs-GPU contest, that is a beatdown.

Two major problems keep the Apple M-series from being generally credible for broad HPC workloads. First, Apple thinks the FP64 datatype is superfluous on the GPU (Metal doesn't cover it particularly well), so there is no FP64 uplift there at all. It is a very mobile-centric viewpoint: the GPU is primarily there to drive a GUI display, FP64 is not very useful for a GUI display, and it takes up a disproportionate amount of die space that they'd rather spend on tile memory or faster graphics (not compute).

Apple adds GPU cores to larger dies at a higher rate than CPU cores. So even if you try to chase A100/H100 FP64 performance with CPU cores, there is a huge amount of silicon that is completely 'dark' in that "solution". The $/performance will likely trail (unless bailed out by Nvidia's huge markup).


Apple is happily scurrying along the trend line of throwing away as much precision as you can get away with: 32, 16, 8, etc. That is OK if a particular HPC workload doesn't need the precision; if it does, then that is a 'fail'. The A100/H100 cover a substantively different set of HPC customers than what Apple is covering.

The other issue being tap-danced around here is data integrity. Apple is also tossing ECC out the window,
again in stark contrast to Nvidia's A100/H100 offerings.



If you go back to post 37 in the thread https://forums.macrumors.com/thread...rket-with-apple-silicon.2375390/post-31842460

The 200,000 hours on an A100 to train isn't just a TFLOPS issue; it is an 'hours' issue as well. The API mismatches are more misdirection than a critical factor. Apple's high-end compute is aimed far more at ephemeral data (a screen image that flashes by at 60 Hz), or perhaps a couple of hours of user attention time.


Apple is not seriously in the hard-core HPC market. They dabble at the edges they find convenient, where it happens to be synergistic with their primary objectives. So they have some coverage, but they are probably not going to be a major player.
 
  • Like
Reactions: Oculus Mentis

Philip Turner

macrumors regular
Dec 7, 2021
170
111
First, Apple thinks the FP64 datatype is superfluous on the GPU.
Well, so does Nvidia. Look at the emulated FP64 performance on the M1 Ultra GPU (via Int64 multiply) versus the native FP64 performance on the RTX 3090 (via "Int53" multiply). Many consumer-grade CPUs outperform both in FP64. Does anyone sell a GPU with 10 TFLOPS of native FP64 that doesn't cost $20,000?
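For the curious: one classic family of tricks for recovering extra precision on hardware that only has fast low-precision arithmetic is error-free pair arithmetic (Knuth's two-sum, Dekker's splitting). It is conceptually related to, though not identical with, the integer-multiply emulation mentioned above. A toy sketch (Python floats are FP64, but the identity is precision-agnostic, which is what makes FP64-from-FP32 emulation work):

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    """Knuth's error-free transformation: returns (s, e) such that the pair
    (s, e) represents a + b exactly, where s is the rounded sum and e is the
    rounding error a plain addition would have discarded."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

# 1.0 is below the rounding granularity of 1e16, so a naive sum loses it;
# the pair keeps it around:
s, e = two_sum(1e16, 1.0)
print(s, e)  # 1e+16 1.0
```

Chaining such pairs through a computation is how "double-single" libraries squeeze extended precision out of FP32-only GPUs, at a substantial throughput cost.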

Errr, the M1 Ultra GPU is suppose to be creditable for HPC work?
A lot of applications can make do performing 90% of their calculations in FP32 and a small amount in FP64. Some examples are GROMACS (100% FP32) and OpenMM in mixed precision (98% FP32). The more pressing use case is medium-scale physics simulations you'd fit on your personal computer. That's part of the field of high-performance computing, even though an M1 isn't an exascale supercomputer.

takes up a disproportionate amount of die space
Have you looked at the die space consumed by the AMX, which contains the majority of the FP64 compute power?
 

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
That seems to be a consensus. What's Mac Pro's market share for similarly powerful PCs/workstations? Close to 0%. If Apple stopped producing them, nobody would notice.
A lot of the work of a workstation has been moved to the cloud. Hence, an Extreme cloud version could provide a big enough market to make the economics of building an Extreme chip work.
 
  • Like
Reactions: dgdosen

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
Have you looked at the die space consumed by the AMX, which contains the majority of the FP64 compute power?
[Attached image]


A lot of the work of a workstation has been moved to the cloud. Hence, an Extreme cloud version could provide a big enough market to make the economics of building an Extreme chip work.
To make financial sense, Apple needs a cloud service where it has a competitive advantage through hardware or software, similar to what Nvidia does with GeForce Now. For example, Apple could create a renderer so realistic that it would become a standard and Hollywood would be "forced" to use it and render on Apple's hardware-based render farm.
 
Last edited:

Zest28

macrumors 68030
Jul 11, 2022
2,581
3,933
Apple will never have a hardware advantage.

Suppose the SSD breaks, or you want to upgrade the GPU or CPU in your server; have fun doing that with Apple. You have to buy a whole new, super expensive machine. Even among consumer computers, Apple generally scores the worst in repairability and upgradability.

And to make it funnier, suppose you need a huge amount of storage. Then you have to buy Thunderbolt enclosures for your Apple computers. That is not good at all.

And Apple "geniuses" aren't geniuses at all. So if they have to come to your datacenter to fix any hardware issue (something you cannot do yourself with Apple), that is going to be super hilarious. And I highly doubt it would even be allowed in datacenters, given the huge security risk of letting external people into the building with physical access to the hardware.
 
Last edited:
  • Like
Reactions: Oculus Mentis

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I agree, and from what rumors I've seen, I'm guessing M2 Extreme yields were so low that it didn't make financial sense. If history is an indicator, the M2 Extreme was going to be even larger than the M1 Ultra; if that is the case, then each wafer produces fewer M2s, driving the unit cost up.

There weren't lots of details about yields. According to the rumors, Apple was using four "M_ Max"-sized dies to compose the Extreme. The component die yields of the "Max", "Ultra", and "Extreme" would all be the same; it would be the same die used in three different ways. (I don't believe it would be the laptop "Max" die. It would need to be a different die in a similar size range; i.e., it wouldn't magically shrink to M1 Pro or plain-M1 size, and it would still be > 300 mm^2.) The Ultra is still coming, and there's a pretty decent chance it will be the same die it was going to be if there had been an Extreme. It would not have made much sense to construct them with completely different dies.
If either one was going to support provisioning multiple high-bandwidth PCIe slots, the basic building block couldn't be the laptop "Max"-class die. (Nor does it make much wafer-utilization sense to print millions of UltraFusion connectors on an increasingly expensive fab process if they are never going to be used. Once there are multiple desktops using chiplets, they can fork that off the laptop "Max" die.)


From the descriptions, it was more likely a wafer shortage. If Apple can only get XXXX wafers and needs to make M2 Pro and Max for laptops, and Max and Ultra for Studios, they would have to "rob Peter to pay Paul" to also churn out Extreme SoCs. It wasn't that the Extreme dies were horrible and the others insanely great; Apple had constraints on all of them. Does Apple make four $3,299 laptops ($13,196) or one $12,999 Mac Pro Extreme? The first makes them more money. It gets worse if you're talking about fourteen $1,199 phones ($16,786) versus one $12,999 machine.
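The opportunity-cost arithmetic in that paragraph is trivial to verify (the "same wafer allocation" equivalence is the post's assumption, not a die-area calculation):

```python
# List prices quoted above
laptops = 4 * 3299    # four MacBook Pros from the same wafer allocation
mac_pro = 12999       # one hypothetical M2 Extreme Mac Pro
phones = 14 * 1199    # fourteen iPhones

print(f"4 laptops = ${laptops:,}; 1 Mac Pro = ${mac_pro:,}; 14 phones = ${phones:,}")
# 4 laptops = $13,196; 1 Mac Pro = $12,999; 14 phones = $16,786
```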

It has little to do with the performance.

The primary way the M2 Extreme package yields would get substantially lower, using the same 'chiplets' as the M2 desktop Max or Ultra, is if the 3D packaging technology were destroying good dies during assembly. There are likely some small losses, but probably not huge ones. (The Ultra uses 3D packaging technology, and there are no big rumors about tons of good M1 Max dies being tossed in the trash.)

Or if Apple was using an even bigger "chiplet" to construct an Extreme (try to mash the "Ultra" into a monolithic die and it balloons into the > 500 mm^2 zone). The M1 Max is already too big and chunky as a chiplet; going bigger would just give worse 'chiplet' characteristics (although it could get better perf/watt).

If Apple is effectively the only volume customer on N3B (N3), then there is probably a maximum number of wafers TSMC will want to allocate to N3B. If all the other customers are off waiting on N3E, which gets allocated from a different (and bigger) wafer pool, then Apple has to juggle a cap.

That juggling gets even more protracted if the A17 is on N3B as well. (Flipping the A17 to N3E may or may not have happened; reportedly that switch requires a non-trivial redesign.)


The volume-sales problem with the > $11K Mac Pro isn't the chips inside; it is the price. Fewer folks have that much money to spend; at $20K, fewer still; at $30K, even fewer. It isn't the "unit" cost of the M2 Extreme. If the M2 Extreme costs $1,000 to make and Apple charges the customer $4K, Apple is clearly making money. And if the Mac Pro costs $12K, again, the customer is clearly covering the $1K unit cost. The more pressing issue is the number of people paying, versus the number of other people who also want to buy very profitable Apple stuff. If there are 10x as many of them, the balance can get tipped in their favor.



Apple and TSMC making choices based on production constraints means there are wafer limitations, not that the defect density is very high (or the yields extremely bad).

I think there is a set of folks who don't want to pay $3-4K for an Apple GPU and who spun that production limitation into an "Extreme is bad" story, as a backdoor way to force Apple to do third-party discrete GPU drivers. Good luck with that (I don't think it's going to work).

There's a pretty good chance this is more a case of Apple having 'bet the farm' on N3, and it isn't going to pay off as well as they hoped. Not completely bad, but they are going to have to take some lumps on the upper niche of the Mac Pro lineup.
 
  • Like
Reactions: Oculus Mentis

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Nah! By far the likeliest "explanation" is that the entire rumor was fabricated by idiots who thought that just because two M1 Max dies can be fused together to work as one M1 Ultra, you could also glue four of them together. It was a ridiculous idea to begin with.

The problem is more that the M1 Max die is a lousy chiplet (far too chunky and bulky) than that UltraFusion couldn't be scaled to four. The root-cause problem was excessive reuse of a laptop-optimized monolithic die as a chiplet.

What the M2 building block was isn't really clear yet. Refactored to remove the excess monolithic baggage, it could have worked with some modest changes. They didn't have to massively change the CPU/GPU/NPU core groupings, but they would need to factor at least a subset of the I/O (SSD, Thunderbolt, secure element) off, and shift the memory-controller layout away from laptop logic-board constraints. The M1 Max die has large memory-controller layout problems, more so than UltraFusion problems. To dense-pack four dies together, you need two 'inner' adjacent sides to be available; the M1 Max die only has free sides on opposite ends.

[ The four-chip M1 building block didn't have to be an exact copy of the Max die, either. ]

The M1 Max design can be iterated into the M1 Pro design with a 'chop': remove a predefined chunk in a structured way, clean up the new boundary with limited design work, and get a smaller die. A similar approach could pull the I/O off and rebalance the memory controllers along the edges.
 

Philip Turner

macrumors regular
Dec 7, 2021
170
111
Not going to be completely bad but they are going to have to take some lumps on the upper niche of the Mac Pro line up.
Apple has always been a company for consumers. That's why they focus so intensively on power efficiency, something that actually matters, and not on absolute performance; at this point in Moore's Law, everything is fast. I've seen a PC laptop user with a 165 Hz display, but they always have to throttle to 60 Hz on battery, since it's a static refresh rate. I can shamelessly say that I have the best display (4K @ 120 Hz) even on battery. Wielding 120 Hz like this (and it is noticeable) comes at a cost, but Apple's GPU design is one of many reasons it's possible.

On top of that, the GPU is still powerful enough to run some good simulations. The M1 Max (which I use) is no better than a GTX 1080 Ti from five years ago, but it's also not much worse, and given that I already own this MBP, it's cheaper than buying yet another computer just for HPC. I strongly considered getting an AMD 7900 XTX (350 W, 60 TFLOPS), but the hand-me-down PC I'm hopefully getting has a 150 W budget for the GPU. That's much greater than the M1 Max's, but not even an AMD 7700 XT (200 W, 20 TFLOPS) would fit. And that's on the same N5 node as the M1: their 20 TFLOPS GPU consumes 2-3x as much energy as Apple's (70 W sustained, 96 W fluctuations), even though RDNA 3 is the most power-efficient PC architecture.

So in the end I'm limited to ~10 TFLOPS (which I already have), from any vendor. No sense buying a dedicated PC GPU - even in a few years - for dismal performance gain. Finally, although my current machine has zero FP64 power, emulation is about as fast as native on NVIDIA.
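Put as perf-per-watt, the comparison above looks like this (the TFLOPS and wattage figures are those quoted in the post: nominal peaks plus the poster's sustained-power estimate, not measured throughput):

```python
# (FP32 TFLOPS, watts) as quoted in the post
gpus = {
    "M1 Max":       (10.4, 70),
    "AMD 7700 XT":  (20.0, 200),
    "AMD 7900 XTX": (60.0, 350),
}

for name, (tflops, watts) in gpus.items():
    print(f"{name}: {1000 * tflops / watts:.0f} GFLOPS per watt")
```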
 
Last edited:
  • Like
Reactions: leman

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
To make financial sense, Apple needs a cloud service where it has a competitive advantage through hardware or software, similar to what Nvidia does with GeForce Now. For example, Apple could create a renderer so realistic that it would become a standard and Hollywood would be "forced" to use it and render on Apple's hardware-based render farm.
No, Apple only needs a cloud service that provides macOS and Apple Silicon hardware. I'm not talking about sending raw compute to the cloud but actually remote accessing an Apple Silicon based macOS instance.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
No, Apple only needs a cloud service that provides macOS and Apple Silicon hardware. I'm not talking about sending raw compute to the cloud but actually remote accessing an Apple Silicon based macOS instance.
Currently, Apple hardware can be accessed in the cloud at an absurd price, and that market must be much smaller even than macOS's overall share. Other than compiling an application for Apple hardware, what would be the use case for a macOS virtual machine?

So in the end I'm limited to ~10 TFLOPS (which I already have), from any vendor. No sense buying a dedicated PC GPU - even in a few years - for dismal performance gain. Finally, although my current machine has zero FP64 power, emulation is about as fast as native on NVIDIA.
If you are happy with your MBP for your "HPC" scientific computing, we do not share the same definition of HPC.
 

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
The problem is more that the M1 Max die is a lousy chiplet (far too chunky and bulky) than that UltraFusion couldn't be scaled to four.
I'm not Taiwanese enough to call any of Apple's chip designs lousy. And I'm also not entitled enough to expect that it must always be possible to double the doubled performance again and again.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
Suppose the SSD breaks or you want to upgrade the GPU or CPU in your server, have fun doing that with Apple. You have to buy a whole new super expensive machine for that. Even amongst consumer computers, Apple generally scores the worst in terms of repairability and upgradability.

What a ridiculous argument. Obviously a datacenter system would feature replaceable storage. As to upgrading CPU or GPU, well, good luck doing that with upcoming system like Nvidia Grace supercomputer either.

In fact, I will let you in on a secret. Datacenter computers don't get upgraded really. They are expected to serve for some time (usually a couple of years) after which they are written off. At least this was the case with pretty much any supercomputer I ever worked with.
 

Zest28

macrumors 68030
Jul 11, 2022
2,581
3,933
What a ridiculous argument. Obviously a datacenter system would feature replaceable storage. As to upgrading CPU or GPU, well, good luck doing that with upcoming system like Nvidia Grace supercomputer either.

In fact, I will let you in on a secret. Datacenter computers don't get upgraded really. They are expected to serve for some time (usually a couple of years) after which they are written off. At least this was the case with pretty much any supercomputer I ever worked with.

What replaceable storage with Apple? You need to use Thunderbolt enclosures with Apple if you want that.

With GeForce Now, Nvidia can, and probably will, upgrade all their RTX 3080s to RTX 4080s at some point, and they can do it easily by just swapping the GPU. With Apple, you cannot do this without buying a whole new system for an insane amount of money, and with inferior performance too.

Apple is simply too expensive. You've got to be joking if you think that buying a whole new machine when one component breaks is a viable business strategy. It's no accident that Apple is one of the richest companies in the world.
 
Last edited:

Oculus Mentis

macrumors regular
Original poster
Sep 26, 2018
144
163
UK
Apple will never have a hardware advantage.

Suppose the SSD breaks, or you want to upgrade the GPU or CPU in your server; have fun doing that with Apple. You have to buy a whole new, super expensive machine. Even among consumer computers, Apple generally scores the worst in repairability and upgradability.

And to make it funnier, suppose you need a huge amount of storage. Then you have to buy Thunderbolt enclosures for your Apple computers. That is not good at all.

And Apple "geniuses" aren't geniuses at all. So if they have to come to your datacenter to fix any hardware issue (something you cannot do yourself with Apple), that is going to be super hilarious. And I highly doubt it would even be allowed in datacenters, given the huge security risk of letting external people into the building with physical access to the hardware.
All valid points, but the question in this thread is about the sale of Apple Silicon CPUs (not even the current M lines, but future ones, purpose-designed yet binary-compatible) to server and motherboard manufacturers, rather than the sale of whole systems. Apologies for the confusion.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
What replaceable storage with Apple? You need to use thunderbolt enclosures with Apple if you want that.
Do you think Apple will create a cloud service by stacking Mac Minis like other companies do now?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
What replaceable storage with Apple? You need to use thunderbolt enclosures with Apple if you want that.

We are talking about the enterprise market and you bring up how Apple builds laptops? By this logic, Intel is unsuitable for enterprise because some Intel-based laptops have soldered-on storage. Don't be ridiculous.

With Geforce Now, Nvidia can and probably will upgrade all their RTX 3080 to RTX 4080 at some point and they can do it easily by just swapping the GPU. With Apple you cannot do this without buying a whole new system for insane amounts of money and with inferior performance too.

I hope you don't suggest running cloud-based gaming services on Apple hardware. Because I don't see any relevance of this post otherwise.
 