
senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482

No one is discussing this?

I'm hearing there is a strong possibility that the chip in the new iPad Pro will be the M4, not the M3. Better yet, I believe Apple will position the tablet as its first truly AI-powered device, and that it will tout each new product from then on as an AI device. This, of course, is all in response to the AI craze that has swept the tech industry over the last couple of years.
 

thenewperson

macrumors 6502a
Mar 27, 2011
992
912
Wild if true. The MacBook Air hasn't been flagship since the M3, and now potentially the iPhone isn't introducing the (likely) updated cores? If this is true, I'm surprised they didn't keep it until WWDC to pair with software improvements. Then again, they could just have so much to announce software-wise.
 

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
Wild if true. The MacBook Air hasn't been flagship since the M3, and now potentially the iPhone isn't introducing the (likely) updated cores? If this is true, I'm surprised they didn't keep it until WWDC to pair with software improvements. Then again, they could just have so much to announce software-wise.
I think WWDC will be extremely busy with all the AI changes.

I suspect that they want to announce M4 and the new iPad first and let them have their moment.
 
  • Like
Reactions: iPadified

Chuckeee

macrumors 68040
Aug 18, 2023
3,062
8,722
Southern California
Showing my ignorance [again].

I understand (at least sort of) the difference between CPU and GPU cores, but what is the difference between GPU and NPU cores? Seems like there's a lot of overlap.
 

leman

macrumors Core
Oct 14, 2008
19,520
19,671
I understand (at least sort of) the difference between CPU and GPU cores, but what is the difference between GPU and NPU cores? Seems like there's a lot of overlap.

A modern GPU is a massively parallel programmable vector processor optimized for simultaneously running a large number of data-parallel programs, with a focus on tasks in the graphics domain. The NPU (on Apple hardware at least) is a more limited-function processor optimized for calculating convolutions at small area and power cost.

The overlap is that convolutions can be expressed as data-parallel programs. But a general-purpose vector processor like a GPU is not the most efficient way to do these kinds of calculations. That's why GPUs that are actually fast at ML have some dedicated circuitry for these tasks.

The primary reason why Apple's NPU exists is efficiency. It can only perform limited types of jobs, and it's not particularly fast, but it uses much less energy than the GPU, and it also frees the GPU up for other tasks.

I am very curious to see where Apple will take this. Last year they had a flurry of patents describing a more advanced NPU. At the same time, the GPU is the largest processor in the system (by area), and it would make sense to improve its ML capabilities (especially since Apple could achieve major speedups with only minor die area investment).
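
For what it's worth, this split is already visible at the API level. Here is a minimal Swift sketch, assuming a compiled Core ML model at a made-up path, of steering the same model toward the NPU (Neural Engine) for efficiency or toward the GPU for generality via MLComputeUnits:

```swift
import Foundation
import CoreML

// Hypothetical model path; any compiled .mlmodelc would do.
let modelURL = URL(fileURLWithPath: "/path/to/ConvNet.mlmodelc")

// Prefer the Neural Engine (NPU): lower power, and it leaves the
// GPU free for other work. Core ML falls back per-layer if needed.
let efficient = MLModelConfiguration()
efficient.computeUnits = .cpuAndNeuralEngine

// Prefer CPU + GPU instead: runs anything, at a higher energy cost.
let general = MLModelConfiguration()
general.computeUnits = .cpuAndGPU

do {
    let model = try MLModel(contentsOf: modelURL, configuration: efficient)
    print("Loaded:", model.modelDescription)
} catch {
    print("Failed to load model:", error)
}
```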
 

dgdosen

macrumors 68030
Dec 13, 2003
2,817
1,463
Seattle
Speculating:

If the M3 series was built on N3B,
- maybe the M4 series is built on N3E?
- maybe it's ready to go?
- maybe the benefit of the M4 will be its lower power draw and more attractive use in an iPad?
 

DaniTheFox

macrumors regular
Nov 24, 2023
198
146
Switzerland
A modern GPU is a massively parallel programmable vector processor optimized for simultaneously running a large number of data-parallel programs, with a focus on tasks in the graphics domain. The NPU (on Apple hardware at least) is a more limited-function processor optimized for calculating convolutions at small area and power cost.

The overlap is that convolutions can be expressed as data-parallel programs. But a general-purpose vector processor like a GPU is not the most efficient way to do these kinds of calculations. That's why GPUs that are actually fast at ML have some dedicated circuitry for these tasks.

The primary reason why Apple's NPU exists is efficiency. It can only perform limited types of jobs, and it's not particularly fast, but it uses much less energy than the GPU, and it also frees the GPU up for other tasks.

I am very curious to see where Apple will take this. Last year they had a flurry of patents describing a more advanced NPU. At the same time, the GPU is the largest processor in the system (by area), and it would make sense to improve its ML capabilities (especially since Apple could achieve major speedups with only minor die area investment).
Does this mean you can run AI on the GPU (fast but power-hungry) or on the NPU (slower but efficient)? And only the M4 (and the A18) will be both fast and efficient? That would be very good for handheld devices like iPads and iPhones. And for desktop Macs, like the Mac mini, Mac Studio/Pro, and the Apple TV, does it not matter that AI runs on the GPU? So they will stay on the M2/M3 series a little longer and have no disadvantage except your power bill.
 

MayaUser

macrumors 68040
Nov 22, 2021
3,177
7,196
Speculating:

If the M3 series was built on N3B,
- maybe the M4 series is built on N3E?
- maybe it's ready to go?
- maybe the benefit of the M4 will be its lower power draw and more attractive use in an iPad?
A18 Pro and the M3 family will be on N3E.
No excuses not to be.
 

leman

macrumors Core
Oct 14, 2008
19,520
19,671
Does this mean you can run AI on the GPU (fast but power-hungry) or on the NPU (slower but efficient)?

It's a bit more complicated than that. The M3 NPU is nominally faster than even the M3 Max GPU, but it only works for some specific tasks. So you can potentially run some models relatively fast and cheap on the NPU, or you can run any task slower and with more power on the GPU.

And only the M4 (and the A18) will be both fast and efficient?

We don’t know anything about the NPU or GPU performance of future Macs, so it’s difficult to speculate. Your take is one of the possibilities. Or it could be that both the NPU and the GPU get much better at ML. Who knows.
 

Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command
A18 Pro and the M3 family will be on N3E.
No excuses not to be.

To move the M3-series of SoCs to N3E would require Apple to "redo" the SoCs for N3E; it is not a simple cut & paste from N3B. With that in mind, would it not make more sense for Apple to just go with the M4-series of SoCs on N3E...?

Monolithic M4 Ultra please...!
 
  • Like
Reactions: Chuckeee

WC7

macrumors 6502
Dec 13, 2018
427
318
Looks like LLMs will be mostly off Apple devices ... ELMs will be on devices. Back to servers for LLMs? Training is another issue. M4 and later chips will be tuned to some of these throughputs as AI models evolve, and each application will acquire more AI as these 'helpful' changes occur. I guess.
 

Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command

Remove any redundant subsystems, optimize more for desktop high-performance usage, etc.; also no 4-way UltraFusion needed to make a Mn Extreme SoC, just a single UltraFusion to mate two monolithic Mn Ultra SoCs together...?

Looks like LLMs will be mostly off Apple devices ... ELMs will be on devices. Back to servers for LLMs? Training is another issue. M4 and later chips will be tuned to some of these throughputs as AI models evolve, and each application will acquire more AI as these 'helpful' changes occur. I guess.

LLM = Apple iCloudAI subscription service on Apple AI server farm...?

ELM = Localized on end-user device; iPhone, iPad, laptops, desktops...?
 

WC7

macrumors 6502
Dec 13, 2018
427
318
Remove any redundant subsystems, optimize more for desktop high-performance usage, etc.; also no 4-way UltraFusion needed to make a Mn Extreme SoC, just a single UltraFusion to mate two monolithic Mn Ultra SoCs together...?



LLM = Apple iCloudAI subscription service on Apple AI server farm...?

ELM = Localized on end-user device; iPhone, iPad, laptops, desktops...?
A rumor I heard was that Apple was trying to 'enlist' Google or Microsoft for the LLM 'service'? And ELM was on Apple's machines and devices. Not sure of any of this. I think the most interesting aspect for users is how any of this will affect current applications and the features to be added.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
A rumor I heard was that Apple was trying to 'enlist' Google or Microsoft for the LLM 'service'?

Why would they have to 'enlist' them any more than they do to outsource Internet search requests?
I doubt anyone is going to pay Apple for it (it's more expensive than a simple web search), but it makes about zero sense to lock the whole operating system into one single LLM backend. People should pick what they want to pick. Besides, the DoJ is about eyeball-deep into Apple pimping out the user base to make money from web search. Why would Apple want to dig an even deeper hole with 'user option LLM usage'? All the same problems.


When Siri didn't 'know' something, it would 'punt' the answer to the web search engine. Again, same stuff, different day. Even a 'smarter' Siri still won't know everything and could punt to whatever the user wanted. Apple could stick themselves into the revenue pile by offering to set you up with a 'for pay' Copilot/etc. cloud LLM engine through ApplePay. That would generally need the same basic API on the Mac user side and just some documented 'glue' layer on the cloud service side. If a company had their own 'private' LLM backend, or the user already has an account... the user points to that (and Apple doesn't take a commission).
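
To make that 'glue layer' idea concrete, here is a purely hypothetical Swift sketch (none of these types exist in any Apple SDK): the OS codes against one small protocol, and a paid cloud service or a company's private backend is just another conformer the user points the system at:

```swift
import Foundation

// Hypothetical: the one small API the OS would code against.
protocol LLMBackend {
    var name: String { get }
    func complete(prompt: String) async throws -> String
}

// A cloud backend the user pays for (Copilot-style)...
struct CloudBackend: LLMBackend {
    let name: String
    let endpoint: URL   // user- or company-supplied service URL
    func complete(prompt: String) async throws -> String {
        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        request.httpBody = prompt.data(using: .utf8)
        let (data, _) = try await URLSession.shared.data(for: request)
        return String(decoding: data, as: UTF8.self)
    }
}

// ...and the 'Siri punt': the system hands off to whichever backend
// the user pointed it at, with no commission taken either way.
func punt(_ prompt: String, to backend: LLMBackend) async throws -> String {
    try await backend.complete(prompt: prompt)
}
```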


And ELM was on Apple's machines and devices. Not sure of any of this. I think the most interesting aspect for users is how any of this will affect current applications and the features to be added.

Siri can already rummage through your application data. There are existing APIs that apps have and have not leveraged, as are APIs for scripting control of certain actions.
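
As one concrete flavor of those existing APIs, a minimal Core Spotlight sketch (identifiers and content strings are made up) of an app exposing a piece of its data to system-level search and suggestions:

```swift
import CoreSpotlight
import UniformTypeIdentifiers

// Describe a piece of app data the system is allowed to see.
let attributes = CSSearchableItemAttributeSet(contentType: .text)
attributes.title = "Q3 planning notes"
attributes.contentDescription = "Notes from the Q3 planning meeting"

// Wrap it in a searchable item with app-chosen identifiers.
let item = CSSearchableItem(uniqueIdentifier: "note-42",
                            domainIdentifier: "notes",
                            attributeSet: attributes)

// Hand it to the system index; Spotlight and suggestions can now find it.
CSSearchableIndex.default().indexSearchableItems([item]) { error in
    if let error { print("Indexing failed: \(error)") }
}
```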
 

vanc

macrumors 6502
Nov 21, 2007
489
154
A modern GPU is a massively parallel programmable vector processor optimized for simultaneously running a large number of data-parallel programs, with a focus on tasks in the graphics domain. The NPU (on Apple hardware at least) is a more limited-function processor optimized for calculating convolutions at small area and power cost.

The overlap is that convolutions can be expressed as data-parallel programs. But a general-purpose vector processor like a GPU is not the most efficient way to do these kinds of calculations. That's why GPUs that are actually fast at ML have some dedicated circuitry for these tasks.

The primary reason why Apple's NPU exists is efficiency. It can only perform limited types of jobs, and it's not particularly fast, but it uses much less energy than the GPU, and it also frees the GPU up for other tasks.

I am very curious to see where Apple will take this. Last year they had a flurry of patents describing a more advanced NPU. At the same time, the GPU is the largest processor in the system (by area), and it would make sense to improve its ML capabilities (especially since Apple could achieve major speedups with only minor die area investment).
Good point. Here is more info from an AI chatbot. :)

What can an NPU do to accelerate neural networks?
  • Matrix Multiplication: This is the workhorse of neural networks. It involves multiplying large matrices, which are essentially grids of numbers. NPUs are optimized for performing these calculations very efficiently in parallel.
  • Vector Operations: Many AI algorithms involve manipulating large vectors, which are one-dimensional arrays of numbers. NPUs can perform vector additions, subtractions, and other operations much faster than CPUs.
  • Convolutional Operations: Convolution is a mathematical operation used extensively in image and video recognition. It involves applying a filter (also a matrix) to an input image to extract features. NPUs are specifically designed to handle convolutions efficiently.
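
To make that last bullet concrete, here is a toy 2D convolution written as plain Swift loops (image and kernel values are made up); an NPU hard-wires this multiply-accumulate pattern instead of stepping through it one instruction at a time:

```swift
// Naive 2D convolution in plain Swift: the operation an NPU
// implements in dedicated hardware. Toy sizes for illustration.
func convolve(_ image: [[Float]], _ kernel: [[Float]]) -> [[Float]] {
    let k = kernel.count                      // square kernel, k x k
    let outH = image.count - k + 1
    let outW = image[0].count - k + 1
    var out = [[Float]](repeating: [Float](repeating: 0, count: outW),
                        count: outH)
    for y in 0..<outH {
        for x in 0..<outW {
            var acc: Float = 0
            for i in 0..<k {
                for j in 0..<k {
                    // multiply-accumulate: the core NPU primitive
                    acc += image[y + i][x + j] * kernel[i][j]
                }
            }
            out[y][x] = acc
        }
    }
    return out
}

// 3x3 edge-detection kernel applied to a 4x4 image.
let image: [[Float]] = [[1, 2, 3, 4], [5, 6, 7, 8],
                        [9, 10, 11, 12], [13, 14, 15, 16]]
let kernel: [[Float]] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
print(convolve(image, kernel))
```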
 
  • Wow
  • Like
Reactions: Burnincoco and fp99

altaic

macrumors 6502a
Jan 26, 2004
711
484
Looks like LLMs will be mostly off Apple devices ... ELMs will be on devices. Back to servers for LLMs? Training is another issue. M4 and later chips will be tuned to some of these throughputs as AI models evolve, and each application will acquire more AI as these 'helpful' changes occur. I guess.

No, you're completely off base. Also, as someone in the ML field, wtf is ELM other than some TLA?
 

altaic

macrumors 6502a
Jan 26, 2004
711
484
Why would they have to 'enlist' them any more than they do to outsource Internet search requests?
I doubt anyone is going to pay Apple for it (it's more expensive than a simple web search), but it makes about zero sense to lock the whole operating system into one single LLM backend. People should pick what they want to pick. Besides, the DoJ is about eyeball-deep into Apple pimping out the user base to make money from web search. Why would Apple want to dig an even deeper hole with 'user option LLM usage'? All the same problems.


When Siri didn't 'know' something, it would 'punt' the answer to the web search engine. Again, same stuff, different day. Even a 'smarter' Siri still won't know everything and could punt to whatever the user wanted. Apple could stick themselves into the revenue pile by offering to set you up with a 'for pay' Copilot/etc. cloud LLM engine through ApplePay. That would generally need the same basic API on the Mac user side and just some documented 'glue' layer on the cloud service side. If a company had their own 'private' LLM backend, or the user already has an account... the user points to that (and Apple doesn't take a commission).




Siri can already rummage through your application data. There are existing APIs that apps have and have not leveraged, as are APIs for scripting control of certain actions.
“Anon influencer entered the chat.”
 

DaniTheFox

macrumors regular
Nov 24, 2023
198
146
Switzerland
I think we can agree that the chip in the next iPad Pro (and the next Ultra) will be based on the N3E process. Since this needs some redesign, why not greatly improve the NPU for AI? You can easily call the Ultra a chip from the M3 family. But this new chip in the next iPad Pro must not only have a new serial number; it must also have a "catchy" name for us mortals. Your guess is as good as mine.
 

Carrotstick

Suspended
Mar 25, 2024
230
418
Nope.

Since Apple will continue to make tons of M3 chips for the MacBook Air and iMac (on the current N3B line), many of us believe that a small portion of that ongoing M3 production will be used for the new iPad Pros.
So you think the iPad Pro next week will use the existing M3?
 