
Ethosik

Contributor
Oct 21, 2009
7,820
6,724
I am not in charge of Apple's upgrade cycle; the 2019 should have been updated a month after the new Xeons were announced by Intel. But Apple Silicon was coming, so that was a no-go for obvious Apple reasons.

Ummm, the last time I checked the Puget site for After Effects benchmarks (last month), the M3 Max was beating everything.

After Effects is a huge part of my workflow, so performance in it is very important to me, but I also kind of hate Windows. I have a Windows box anyway.
Yes, AE is faster on Apple Silicon. Even the M2 Ultra beats out my 13900k and 4090 Windows PC.
 

MRMSFC

macrumors 6502
Jul 6, 2023
341
352
No. We need more competition in GPUs, not less. Some things are too reliant on NVIDIA as it is.
I agree, but in terms of AI, Apple isn’t concerned with competing at the enterprise level, and NVidia is.

Apple, since Steve took over, has always been an end-user-first company. Education and everything else was secondary.

Any solution that Apple provides for generative AI is (probably) going to be focused on AI training for individual users and individual devices. I doubt that we'll ever see racks of Mac Pros doing AI training in corporate labs, if only because Dell, HP, and Lenovo throw deals left and right at those companies and can provide an order of magnitude more workstations and support than Apple is interested in.

(And truthfully, Apple still gets a cut, because most of the software is written on laptops provided to the engineers and then run on the servers themselves; if the engineers want to use a Mac, they still can.)

If you’re looking for competitors to NVidia’s AI dominance, you need to look elsewhere. If I had to make a prediction, I would bet on chip designers eventually providing add-in cards with nothing but ASICs designed to train AI.
 

avkills

macrumors 65816
Jun 14, 2002
1,182
985
Yes, AE is faster on Apple Silicon. Even the M2 Ultra beats out my 13900k and 4090 Windows PC.
It is a welcome change on the Mac front. For whatever reason, before they went to Apple Silicon, Apple always seemed to pick the worst CPUs to use if you wanted good AE performance.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
772
1,652
2) The slotted SSDs on the Pro and the Studio have to do with the Ultra chip. Even if SSD failure under warranty is rare, throwing an entire motherboard away for it when that motherboard has a very expensive Ultra chip is just not an acceptable cost (why it's acceptable for a still quite expensive Max chip not in a Studio, or any of the others even if they don't cost as much, is a different question, or maybe the same question asked louder and with greater exasperation*)
I think there's an even more primal explanation than failures. Consider the number of logic board variants Apple has to build for the M2 Mac Pro, M2 Mac Studio, and 16" M2 MBP - now that they are putting repair manuals online we can easily look up the orderable parts.


6 M2 Mac Pro logic boards
11 M2 Mac Studio logic boards
35 M2 16" MBP logic boards

Apple doesn't sell many Studios or Pros, and I would be willing to bet that Studio/Pro buyers tend to BTO their computers more in order to tailor the SSD size to their needs. If their internal supply chain had to deal with 40 or more logic boards for a low-volume product like the Studio, with much less focus on a few mass-produced configs (as happens with MBPs), it'd be a bit of a nightmare. Socketing the flash makes matching supply to demand for each BTO variant much easier.
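
To make the supply-chain point concrete, here's a rough back-of-the-envelope sketch. The chip/RAM/SSD option counts are illustrative assumptions, not Apple's actual BTO matrix; it just shows how soldered flash multiplies board SKUs while socketed flash collapses them:

```python
# Rough SKU-count sketch: option counts below are made up for illustration,
# not Apple's real BTO matrix.

def board_variants(chip_options: int, ram_options: int, ssd_options: int,
                   ssd_socketed: bool) -> int:
    """Number of distinct logic boards the supply chain must stock.

    If the SSD is soldered, every SSD capacity multiplies the board count.
    If it's socketed, the flash module becomes its own part and drops out.
    """
    base = chip_options * ram_options
    return base if ssd_socketed else base * ssd_options

# Hypothetical Studio-like configuration space: 2 chips, 3 RAM tiers, 5 SSD sizes.
soldered = board_variants(chip_options=2, ram_options=3, ssd_options=5, ssd_socketed=False)
socketed = board_variants(chip_options=2, ram_options=3, ssd_options=5, ssd_socketed=True)

print(f"soldered flash: {soldered} logic board SKUs")                     # 30
print(f"socketed flash: {socketed} logic board SKUs + 5 flash modules")   # 6
```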
 

Moncler

macrumors member
Apr 4, 2024
40
23
Apple has not given Nvidia permission to support their newest MacBook Pro models with the latest NVIDIA Verde drivers. Nvidia's notebook program is opt-in, in that they support notebook models which the notebook manufacturer has allowed them to support, so that if the user has to go back to the notebook maker for support, they will be assured the company will not turn them away for using a driver the maker did not itself release from its website.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,299
997
I think there's an even more primal explanation than failures. Consider the number of logic board variants Apple has to build for the M2 Mac Pro, M2 Mac Studio, and 16" M2 MBP - now that they are putting repair manuals online we can easily look up the orderable parts.


6 M2 Mac Pro logic boards
11 M2 Mac Studio logic boards
35 M2 16" MBP logic boards

Apple doesn't sell many Studios or Pros, and I would be willing to bet that Studio/Pro buyers tend to BTO their computers more in order to tailor the SSD size to their needs. If their internal supply chain had to deal with 40 or more logic boards for a low-volume product like the Studio, with much less focus on a few mass-produced configs (as happens with MBPs), it'd be a bit of a nightmare. Socketing the flash makes matching supply to demand for each BTO variant much easier.
Makes sense, but it only serves to illustrate that they've also made the supply chains for their other products harder, with only device volume making it feasible. My larger point was that this one choice has had rippling, confounding effects across their lineup. I'm sure they've done the math and that it's "worth" it, but I do wonder if they've factored cumulative customer annoyance into that calculation.
 

avkills

macrumors 65816
Jun 14, 2002
1,182
985
Apple has not given Nvidia permission to support their newest MacBook Pro models with the latest NVIDIA Verde drivers. Nvidia's notebook program is opt-in, in that they support notebook models which the notebook manufacturer has allowed them to support, so that if the user has to go back to the notebook maker for support, they will be assured the company will not turn them away for using a driver the maker did not itself release from its website.
What?

Apple laptops have not had nVidia hardware in them for quite some time, so why should we care about or expect nVidia drivers on Apple laptops, other than the fact that "some" older ones could possibly use an external Thunderbolt chassis? The only other Apple-centric demographics that care are pre-2023 Mac Pro owners and people with Hackintosh machines.

There isn't really any incentive for nVidia to do it. Besides that, the path Apple is on right now is a unified memory design, which doesn't really support the notion of standalone GPUs. I hope Apple finds a way around this, because a ~$10k machine really needs to have its GPU, RAM, and storage user-serviceable for upgrades.
 
  • Like
Reactions: Chuckeee

ChrisA

macrumors G5
Jan 5, 2006
12,601
1,737
Redondo Beach, California
Apple Silicon itself has proven that it's not good for professionals with high-spec, workstation-class needs. The Mac Pro will die, and I don't think Apple is interested in professional markets at all. Sadly, it will affect the Mac's major markets, such as video and music, due to the hardware limitations.

Truth be told, Apple CAN create their own markets, but their GPU performance is dramatically poor and many software vendors aren't even interested in the Mac at all. There are so many issues with macOS and the Mac itself, and as long as Apple is being stubborn, I don't think they're gonna be solved. They are limiting themselves too much, and it won't work like the iPhone's iOS.

At this point, the pro market will die slowly.
This is exactly true. Apple Silicon is designed for the iPhone. It happens to also work in consumer-facing notebook computers. But it begins to be pointless in desktop computers, where battery life is a non-issue. Few people care about saving $5 per year on the power used by their desktop computer; they would prefer it to be faster, or cheaper.

At the high end, Apple Silicon simply fails. It does not perform well enough. There are ARM-based CPUs that run circles around Apple Silicon. Yes, these other ARM chips use a LOT more power and cost more.

The trouble is that Macs are sold in such low numbers that even Apple is not interested in spending the R&D money on them, so they try to repurpose their phone chips, and this falls flat when building high-end workstations.

My guess is that Apple will abandon that market. The Mac Studio will be the top-of-the-line Mac. Today the ONLY reason for buying a Mac Pro over the Mac Studio is if you need a specialized network card to access a media library.

If you need more, buy a PC and put Linux on it. That is what Google, Microsoft, Amazon, and I bet even Apple do in their data centers.

What do I do? I have an M2 Pro-based Mini and, on the same Ethernet, a Xeon-based Linux workstation with 16 CPU cores, 64 GB of RAM, and a couple of Nvidia cards, but no monitor, mouse, or keyboard attached. When I need to, I can log in remotely and run on that.
 
  • Like
Reactions: sunny5

leman

macrumors Core
Oct 14, 2008
19,302
19,284
At the high end, Apple Silicon simply fails. It does not perform well enough.

The Ultra chips perform quite well compared to similarly priced workstation systems, at least where the CPU is concerned. They do indeed fall short in GPU workloads.

There are ARM-based CPUs that run circles around Apple Silicon. Yes, these other ARM chips use a LOT more power and cost more.

What CPUs? I hope you are not talking about the ARM server chips. That’s an entirely different type of device, and their per-core performance is pitiful compared to Apple’s.

The trouble is that Macs are sold in such low numbers that even Apple is not interested in spending the R&D money on them, so they try to repurpose their phone chips, and this falls flat when building high-end workstations.

Their patent list begs to differ. Yes, they started from a smartphone platform. The last few years, however, have seen some big changes on that front, especially when we look at the GPU, and their patents give you a glimpse of what is coming.

My guess is that Apple will abandon that market. The Mac Studio will be the top-of-the-line Mac. Today the ONLY reason for buying a Mac Pro over the Mac Studio is if you need a specialized network card to access a media library.

That is possible. However, we also know that Apple is actively researching ways to scale performance to very high levels. A lot of their recent patents don’t make sense unless they are targeting the high-end desktop.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
772
1,652
This is exactly true. Apple Silicon is designed for the iPhone. It happens to also work in consumer-facing notebook computers. But it begins to be pointless in desktop computers, where battery life is a non-issue. Few people care about saving $5 per year on the power used by their desktop computer; they would prefer it to be faster, or cheaper.
This is a powerfully wrongheaded paragraph.

In modern silicon design, performance is limited by power. It's pretty easy to design a chip which can draw multiple kilowatts of electrical power, but it's very hard to cool it. So when you design a chip, you choose a power budget based on what is economical and practical to cool in the type of device you're targeting, then you try to get as much performance out of that power budget as you can.

This means that the path to better performance is often all about reducing power. If your CPU core uses half the power for the same performance as the competition? Well, you can run twice as many cores at full power. Boom, double the multithreaded throughput in the same class of chip.
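
As a toy illustration of that power-budget arithmetic (the wattage and performance numbers below are made up for the example, not measurements of any real chip):

```python
# Toy power-budget model: all numbers are illustrative, not real chip measurements.

PACKAGE_POWER_BUDGET_W = 60.0  # what the chassis/cooler can realistically dissipate

def cores_and_throughput(watts_per_core: float, perf_per_core: float) -> tuple[int, float]:
    """How many cores fit in the budget, and the resulting multithreaded throughput."""
    cores = int(PACKAGE_POWER_BUDGET_W // watts_per_core)
    return cores, cores * perf_per_core

# Two hypothetical designs with equal per-core performance (1.0 arbitrary units),
# one burning half the power per core.
competitor = cores_and_throughput(watts_per_core=6.0, perf_per_core=1.0)
efficient = cores_and_throughput(watts_per_core=3.0, perf_per_core=1.0)

print(f"competitor: {competitor[0]} cores, throughput {competitor[1]:.0f}")  # 10 cores
print(f"efficient:  {efficient[0]} cores, throughput {efficient[1]:.0f}")    # 20 cores, 2x
```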

Apple Silicon cores were originally designed for the iPhone, but that's not a bad thing. Apple set a goal for themselves of delivering desktop-class performance per core on a phone's power budget, and they achieved it. No, iPhones aren't as powerful as desktops, but note my emphasis on "per core": when you look at one CPU core in isolation, it's close! So when Apple decided to build Mac chips from the same technological base, they were in a good spot. The performance of the building blocks (CPU and GPU cores) was already there; they just needed to add more I/O and more copies of the cores.

At the high end, Apple Silicon simply fails. It does not perform well enough. There are ARM-based CPUs that run circles around Apple Silicon. Yes, these other ARM chips use a LOT more power and cost more.
I'm calling your bluff. Which chips are these, please? I haven't heard of anything which "runs circles around" Apple Silicon.

Note: there are indeed server ARM chips out there which are much bigger than anything Apple's building, with as many as 192 cores on a single very big chip. A not-so-bright person might see this as evidence that Apple's way behind. But that's silly; giant server chips are simply a market Apple chooses not to be in. If they wanted to enter it, they have a superior building block: Apple's Arm CPU cores are the fastest in the world, as far as I know, and very power efficient.
 

sunny5

Suspended
Jun 11, 2021
1,712
1,581
This is a powerfully wrongheaded paragraph.

In modern silicon design, performance is limited by power. It's pretty easy to design a chip which can draw multiple kilowatts of electrical power, but it's very hard to cool it. So when you design a chip, you choose a power budget based on what is economical and practical to cool in the type of device you're targeting, then you try to get as much performance out of that power budget as you can.

This means that the path to better performance is often all about reducing power. If your CPU core uses half the power for the same performance as the competition? Well, you can run twice as many cores at full power. Boom, double the multithreaded throughput in the same class of chip.

Apple Silicon cores were originally designed for the iPhone, but that's not a bad thing. Apple set a goal for themselves of delivering desktop-class performance per core on a phone's power budget, and they achieved it. No, iPhones aren't as powerful as desktops, but note my emphasis on "per core": when you look at one CPU core in isolation, it's close! So when Apple decided to build Mac chips from the same technological base, they were in a good spot. The performance of the building blocks (CPU and GPU cores) was already there; they just needed to add more I/O and more copies of the cores.


I'm calling your bluff. Which chips are these, please? I haven't heard of anything which "runs circles around" Apple Silicon.

Note: there are indeed server ARM chips out there which are much bigger than anything Apple's building, with as many as 192 cores on a single very big chip. A not-so-bright person might see this as evidence that Apple's way behind. But that's silly; giant server chips are simply a market Apple chooses not to be in. If they wanted to enter it, they have a superior building block: Apple's Arm CPU cores are the fastest in the world, as far as I know, and very power efficient.
SoC design is already a failure, especially for large chips. If not, Apple wouldn't even have considered putting the M2 Ultra into the Mac Pro. They knew it was a failure from the beginning.
 
  • Haha
Reactions: Romain_H

Carrotstick

macrumors member
Mar 25, 2024
82
184
SoC design is already a failure, especially for large chips.
Is it? Apple chooses not to make a server SoC but instead glues together two Mx Maxes, which are laptop SoCs.

Nvidia uses an SoC design for their data centre chips. The Mac Pro market is so minimal.
 
  • Haha
Reactions: sunny5

sunny5

Suspended
Jun 11, 2021
1,712
1,581
Is it? Apple chooses not to make a server SoC but instead glues together two Mx Maxes, which are laptop SoCs.

Nvidia uses an SoC design for their data centre chips. The Mac Pro market is so minimal.

It's not just gluing; it actually requires two Max chips to manufacture, and since the die size is double, the risk is also double what it was before. Besides, the Max chip itself is already big, comparable to the RTX 4080's die size. A big die means more expensive. UltraFusion also has its own yield, which costs multiple times what the Max does. The Ultra chip is twice as big and therefore much harder to mass-produce thanks to UltraFusion. Someone explained this from 2:35. Besides, a big die means low yield, which is basic logic. The M1/M2 Ultra is way bigger than the 4090.

[Image attachment: Nvidia Grace]
Nvidia uses a separate CPU and GPU, not an SoC. If you are talking about the Nvidia Jetson Xavier, those are not even for servers but for embedded use.
 
Last edited:
  • Like
Reactions: Chuckeee

theorist9

macrumors 68040
May 28, 2015
3,710
2,812
Ok maybe I'm reading this wrong, but that thread actually confirms what I was saying earlier: there is no "software block" (in fact such a thing would be impossible given the nature of the upgrade), it's an economic block on these slotted SSDs by making it difficult to reverse engineer and get the necessary components.....
I never said it was a software block. While I suppose that's a possibility, I don't know how Apple actually blocks others from making third-party modules, whether it's special firmware code or special hardware. Instead, I've repeatedly focused on the end result, which is that, regardless of how Apple does it, it is a de facto economic block, as evidenced by the fact that others, like OWC, haven't been able to reverse-engineer them (as you should have understood perfectly well, since you've quoted the very posts in which I said just that; see below).

I even quoted an interchange with OWC tech support in which they confirmed they weren't able to produce the slotted SSD modules!

So given that you've agreed with me all along (i.e., that you, like me, ALSO believed "it's an economic block on these slotted SSDs by making it difficult to reverse engineer and get the necessary components"), why were you repeatedly, and maddeningly, arguing against me at every turn? That's just wasting both your time and mine.

I was responding specifically to the below:
[Attached screenshot of the quoted post]

Plus, more broadly, look at the way Apple operates. I think you're looking at this from a technical perspective, but it's really about the money. Apple has been moving increasingly to lock down all sources of profit; that's why it configures its products, as much as it can, so that upgrades can only be purchased from Apple.
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,302
19,284
It's not just gluing; it actually requires two Max chips to manufacture, and since the die size is double, the risk is also double what it was before. Besides, the Max chip itself is already big, comparable to the RTX 4080's die size. A big die means more expensive. UltraFusion also has its own yield, which costs multiple times what the Max does. The Ultra chip is twice as big and therefore much harder to mass-produce thanks to UltraFusion. Someone explained this from 2:35. Besides, a big die means low yield, which is basic logic. The M1/M2 Ultra is way bigger than the 4090.

What you say doesn't make much sense to me. The Ultra is exactly as difficult to produce as two Max chips. No idea why you think that UltraFusion costs multiples of the Max; it's just a small passive bridge that glues together two Max dies. The Ultra is not a monolithic chip, so the issue of progressively worse yields in large chips does not apply here.

[Image attachment: Nvidia Grace]
Nvidia uses a separate CPU and GPU, not an SoC. If you are talking about the Nvidia Jetson Xavier, those are not even for servers but for embedded use.

Nvidia uses two dies. So does Apple. SoC or not SoC does not matter. There is no inherent difference in the amount of risk no matter what kind of multi-chip system you are looking at, be it Apple's UltraFusion, Nvidia's Grace or Blackwell, AMD's Navi 3, or Intel's Xeons or Meteor Lake.
 

theorist9

macrumors 68040
May 28, 2015
3,710
2,812
It's not just gluing; it actually requires two Max chips to manufacture, and since the die size is double, the risk is also double what it was before. Besides, the Max chip itself is already big, comparable to the RTX 4080's die size. A big die means more expensive. UltraFusion also has its own yield, which costs multiple times what the Max does. The Ultra chip is twice as big and therefore much harder to mass-produce thanks to UltraFusion. Someone explained this from 2:35. Besides, a big die means low yield, which is basic logic. The M1/M2 Ultra is way bigger than the 4090.
First, doubling the die size does not double the risk of defects. If the defect rate is z, and the defect rate is uniform, then doubling the die size increases the defect rate by a factor of 2-z.*

Second, you don't make an Ultra by doubling the die size; you make it by fusing two Max's. And there's a fundamental mathematical difference in the risk from fusing two chips of a given die size, vs. that from doubling the die size (and the attendant economic consequences thereof).

Let's use some arbitrary numbers to illustrate:

Suppose a wafer costs $30,000, and has room for either 200 Max chips or 100 monolithic Ultras (probably a bit less than 100, since the tiling ratio for squares onto a fixed circle decreases with the size of the squares, but we'll ignore that here).

Further suppose the critical defect rate is uniform, and that 70% of the Max chips are critical-defect-free (CDF). Then to get a CDF monolithic Ultra, you'd need both "Max halves" to be CDF, and the chance of that is 70% x 70%= 49%.

Thus you can get 200 x 70% = 140 Max chips per wafer, resulting in a cost of 30,000/140 = $214/Max chip. Hence if you make an Ultra by fusing two Max's, it's 2 x 214 = $428/fused Ultra chip plus the cost of fusing (which would need to incorporate the failure rate for this step, which is entirely separate from the chip defect rate).

But if you're making a monolithic Ultra, then you can get 100 x 49% = 49 chips/wafer, and the cost would be $30,000/49= $612/monolithic Ultra chip.

*We see this in the above example: It went from a 1-70% = 30% chance of a critical defect at Max size, to 1-49% = 51% at monolithic Ultra size, and 51%/30% = 1.7 = 2 – 0.3.

*Here's how you derive it: If the defect rate is z, then the chance of having zero defects is 1-z. And the chance of having zero defects over double the area is (1-z)^2. Then the chance of having a defect in a double-sized chip is:
1 – (1-z)^2. Thus the ratio of the defect rate in a double-sized chip to a regular-sized chip is:
(1 – (1-z)^2)/z = 2 – z.

To those who are wondering why we can't just take the defect rate in the Max chip and square it to get the defect rate in a monolithic Ultra: That would be calculating the wrong thing. There, we'd be calculating the chance that both halves of the monolithic chip have defects. We're not looking for that; we're instead looking for the chance that either half of the monolithic chip has defects. And the simplest way to calculate that is to look at the chance of the opposite case (that neither half has defects) and subtract it from 1: a single half is defect-free with probability 1-z, so neither half has a defect with probability (1-z)^2, giving 1 – (1-z)^2 for the chance of a defect.
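
For anyone who wants to play with the numbers, here's a small script that reproduces the arithmetic above. The $30,000 wafer cost, 200/100 dies per wafer, and 70% yield are the same illustrative assumptions used in the example, not real foundry figures:

```python
# Reproduces the illustrative yield/cost arithmetic above.
# Wafer cost, dies per wafer, and yield are the example's assumed numbers, not real TSMC data.

WAFER_COST = 30_000.0
MAX_PER_WAFER = 200        # Max-sized dies per wafer
ULTRA_PER_WAFER = 100      # monolithic Ultra-sized dies per wafer (ignoring tiling loss)
MAX_YIELD = 0.70           # fraction of Max dies that are critical-defect-free

# Fusing two known-good Max dies (ignoring the cost/failure rate of the fusion step):
good_max_per_wafer = MAX_PER_WAFER * MAX_YIELD            # 140
cost_per_max = WAFER_COST / good_max_per_wafer            # ~$214
cost_fused_ultra = 2 * cost_per_max                       # ~$428

# A hypothetical monolithic Ultra: both "Max halves" must be defect-free.
ultra_yield = MAX_YIELD ** 2                              # 0.49
good_ultra_per_wafer = ULTRA_PER_WAFER * ultra_yield      # 49
cost_monolithic_ultra = WAFER_COST / good_ultra_per_wafer # ~$612

# Defect-rate ratio for a double-sized die: (1 - (1-z)^2) / z = 2 - z
z = 1 - MAX_YIELD
defect_ratio = (1 - (1 - z) ** 2) / z                     # 1.7 = 2 - 0.3

print(f"cost per good Max die:      ${cost_per_max:,.0f}")
print(f"cost per fused Ultra:       ${cost_fused_ultra:,.0f} (plus fusion cost)")
print(f"cost per monolithic Ultra:  ${cost_monolithic_ultra:,.0f}")
print(f"defect-rate ratio (2 - z):  {defect_ratio:.2f}")
```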
 
Last edited:

sunny5

Suspended
Jun 11, 2021
1,712
1,581
First, doubling the die size does not double the risk of defects. If the defect rate is z, and the defect rate is uniform, then doubling the die size increases the defect rate by a factor of 2-z.*

Second, you don't make an Ultra by doubling the die size; you make it by fusing two Max's. And there's a fundamental mathematical difference in the risk from fusing two chips of a given die size, vs. that from doubling the die size (and the attendant economic consequences thereof).

Let's use some arbitrary numbers to illustrate:

Suppose a wafer costs $30,000, and has room for either 200 Max chips or 100 monolithic Ultras (probably a bit less than 100, since the tiling ratio for squares onto a fixed circle decreases with the size of the squares, but we'll ignore that here).

Further suppose the critical defect rate is uniform, and that 70% of the Max chips are critical-defect-free (CDF). Then to get a CDF monolithic Ultra, you'd need both "Max halves" to be CDF, and the chance of that is 70% x 70%= 49%.

Thus you can get 200 x 70% = 140 Max chips per wafer, resulting in a cost of 30,000/140 = $214/Max chip. Hence if you make an Ultra by fusing two Max's, it's 2 x 214 = $428/fused Ultra chip plus the cost of fusing (which would need to incorporate the failure rate for this step, which is entirely separate from the chip defect rate).

But if you're making a monolithic Ultra, then you can get 100 x 49% = 49 chips/wafer, and the cost would be $30,000/49= $612/monolithic Ultra chip.

*We see this in the above example: It went from a 1-70% = 30% chance of a critical defect at Max size, to 1-49% = 51% at monolithic Ultra size, and 51%/30% = 1.7 = 2 – 0.3.

*Here's how you derive it: If the defect rate is z, then the chance of having zero defects is 1-z. And the chance of having zero defects over double the area is (1-z)^2. Then the chance of having a defect in a double-sized chip is:
1 – (1-z)^2. Thus the ratio of the defect rate in a double-sized chip to a regular-sized chip is:
(1 – (1-z)^2)/z = 2 – z.

To those who are wondering why we can't just take the defect rate in the Max chip and square it to get the defect rate in a monolithic Ultra: That would be calculating the wrong thing. There, we'd be calculating the chance that both halves of the monolithic chip have defects. We're not looking for that; we're instead looking for the chance that either half of the monolithic chip has defects. And the simplest way to calculate that is to look at the chance of the opposite case (that neither half has defects) and subtract it from 1: a single half is defect-free with probability 1-z, so neither half has a defect with probability (1-z)^2, giving 1 – (1-z)^2 for the chance of a defect.
[Attached screenshot]

False, it does double the risk, because of UltraFusion and the die size. Bigger die = higher risk. Do you really think it's as simple as just connecting two dies? Hell no. The YouTuber I posted already mentioned that. You still need to put the Max chips on a silicon wafer again in order to connect them to each other, and yields still matter, which means the Ultra chip carries double the risk thanks to the chip size. If not, how come they are not making Extreme chips so far? UltraFusion itself still requires a silicon wafer.

The video I attached clearly explained it, and yet you are ignoring it. If not, tell me: why can't they easily make an Extreme chip by just connecting 4x Max chips?

This only explains why Apple keeps failing to make an Extreme chip by connecting 4x Max chips.
 
Last edited:

sunny5

Suspended
Jun 11, 2021
1,712
1,581
If Apple is going to take AI seriously from now on, I think they should migrate to NVIDIA too.
Not really.

If they ARE interested in AI, then they should make their own AI chip and workflow, just like Amazon, Tesla, Microsoft, and more. They already have their own. The only problem is that GPUs are way more common and widely used, making up 90% of all AI servers. That's how Nvidia dominates AI with its CUDA workflow. But other than that, Nvidia sucks at performance per watt.

But Apple can't even make a workstation-grade CPU and GPU with upgradability and expandability, so it's almost impossible for them to compete in AI markets.
 
  • Sad
Reactions: Chuckeee

theorist9

macrumors 68040
May 28, 2015
3,710
2,812
[Attached screenshot]

False, it does double the risk, because of UltraFusion and the die size. Bigger die = higher risk. Do you really think it's as simple as just connecting two dies? Hell no. The YouTuber I posted already mentioned that. You still need to put the Max chips on a silicon wafer again in order to connect them to each other, and yields still matter, which means the Ultra chip carries double the risk thanks to the chip size. If not, how come they are not making Extreme chips so far? UltraFusion itself still requires a silicon wafer.

The video I attached clearly explained it, and yet you are ignoring it. If not, tell me: why can't they easily make an Extreme chip by just connecting 4x Max chips?

This only explains why Apple keeps failing to make an Extreme chip by connecting 4x Max chips.
I see your response to my serious post was to laugh at it. That's hardly polite or collegial. But if you want to be that way, fine. Let me reply in kind:

I can't respond substantively to you for three reasons:

1) You would need to understand math, which you don't. And I can't teach you enough math for us to have a substantive discussion.

2) You would need to have basic English reading comprehension skills, which you don't (in my first paragraph I was clearly referring to the change in risk in going to a monolithic Ultra die, not an Ultra formed by fusing two Max chips, yet you thought it was the latter). And I can't teach you enough English for us to have a substantive discussion.

3) You would need to have sufficient social skills not to act like a child in response to something you disagree with. And I can't teach you enough social skills for us to have a substantive discussion.
 
Last edited:
  • Haha
  • Like
Reactions: sunny5 and Chuckeee

boswald

macrumors 65816
Jul 21, 2016
1,311
2,187
Florida
Not really.

If they ARE interested in AI, then they should make their own AI chip and workflow, just like Amazon, Tesla, Microsoft, and more. They already have their own. The only problem is that GPUs are way more common and widely used, making up 90% of all AI servers. That's how Nvidia dominates AI with its CUDA workflow. But other than that, Nvidia sucks at performance per watt.

But Apple can't even make a workstation-grade CPU and GPU with upgradability and expandability, so it's almost impossible for them to compete in AI markets.
It's almost like they shoot themselves in the foot before the race even starts. I just don't get it. Why would a company make such a bold claim (investing in AI), then sabotage their own efforts by a) being beyond late to the party and b) being unable to manufacture the necessary components to compete?
 