It wouldn't be significantly cheaper considering that there are no 9 GB memory modules.

4, 6, 8, 12, or 16 GB per 64-bit module. That's it. With those memory modules the only possibilities are 6x6 or 3x12, for 36 GB of RAM.

Funnily enough, the cheaper version is 6x6 GB. And if that's the case, the M3 Pro has a 384-bit bus. So expect 36 GB as the base, with 48 GB and 72 GB as CTO configs.

That would be an insane generational increase in RAM capacity, which would also come with an appropriate increase in price. Already that makes me skeptical. Anyway, why 36GB and not 24GB (6x4) for the base? That would be more reasonable.

Regarding the available modules: is 9 GB precluded by the standard, or is it just that nobody makes them? Apple is probably big enough to place a custom order if they wanted to.

Some other possible explanations: a) tiered RAM with a weird configuration of fast and slow RAM b) 384-bit bus as you say, but with two RAM slots empty in base configuration for even more differentiation between models...
 
Before anyone asks, Apple uses Micron memory chips. And Micron manufactures 64-bit chips in 16 Gb (2 GB), 32 Gb (4 GB), 48 Gb (6 GB), 64 Gb (8 GB), 96 Gb (12 GB) and 128 Gb (16 GB) capacities.
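Those densities are quoted in gigabits; dividing by eight gives the gigabyte figures used in the rest of the thread. A quick check:

```python
# Micron die densities from the post, gigabits -> gigabytes (divide by 8)
densities_gbit = [16, 32, 48, 64, 96, 128]
print([d // 8 for d in densities_gbit])  # [2, 4, 6, 8, 12, 16]
```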

They also manufacture 6400 MHz LPDDR5 memory chips, so the only way for Apple to increase memory bandwidth for the GPU cores is a wider bus. And I'm sure the GPU not only grew in core count, to 24 for the Pro from 19, but also increased in per-core throughput - which will require massively increased bandwidth.

So Apple had a choice: switch to more expensive LPDDR5X or LPDDR5T chips for up to 50% higher bandwidth on the same memory bus, or simply use the cheaper commodity part and swallow the higher package complexity and manufacturing costs - which will come down over time, unlike a niche solution like LPDDR5X/LPDDR5T.

A 384-bit bus with 6400 MHz LPDDR5 gives us ~300 GB/s of SYSTEM memory bandwidth, which is insane. The 36 GB of RAM is only a bonus on top.
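The arithmetic behind that number, for anyone who wants to check it (note "6400 MHz" in the post is really 6400 MT/s, i.e. LPDDR5-6400):

```python
# Peak bandwidth = (bus width in bytes) x (transfer rate)
def peak_gb_per_s(bus_bits: int, mt_per_s: int) -> float:
    return (bus_bits / 8) * mt_per_s / 1000  # GB/s

print(peak_gb_per_s(256, 6400))  # 204.8 -> matches the M1/M2 Pro's ~200 GB/s spec
print(peak_gb_per_s(384, 6400))  # 307.2 -> the "~300 GB/s" figure above
```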

IMO, the full M3 Pro die will have this spec: 8 performance cores / 8 efficiency cores, 36 GB of RAM at 6400 MHz on a 384-bit bus, and 24 GPU cores.

That will be the CTO option.

This leaked spec opens the door to speculation about what the plain M3 will be, and IMO we are looking at 6P/4E, a 192-bit bus, 18 GB of RAM in the base spec, and 12 GPU cores.
 
That would be an insane generational increase in RAM capacity, which would also come with an appropriate increase in price. Already that makes me skeptical. Anyway, why 36GB and not 24GB (6x4) for the base? That would be more reasonable.

Regarding the available modules: is 9 GB precluded by the standard, or is it just that nobody makes them? Apple is probably big enough to place a custom order if they wanted to.

Some other possible explanations: a) tiered RAM with a weird configuration of fast and slow RAM b) 384-bit bus as you say, but with two RAM slots empty in base configuration for even more differentiation between models...
The reason for 6 GB chips per memory module may be very simple.

4 GB memory chips may be ending production, and we may only have 6 GB and higher capacities available from 2024, when M3-series chips will be manufactured.

I answered about memory capacities in the post above. And no, Apple will not place a custom order with anybody :). That would be economically unfeasible - I mean, manufacturing 9 GB memory chips just for Apple would cost more than manufacturing 16 GB memory chips for everybody.

DRAM is a commodity item. Nobody in their right mind would go for custom solutions here, because it would drive manufacturing costs through the roof.
 
“Ray tracing” is a rendering method; it cannot “run hot”. What can run hot or not is a concrete hardware implementation, and there is no reason why an RT implementation has to run hot.

It would make the SoC hot. You know it does regardless of this semantic blah. Everyone knows raytracing does that. Even hot Nvidia RTX GPUs throttle.

Where did you get this notion anyway? From Nvidia? Because it's like saying “internal combustion engines are slow because this one company makes a big tractor”. We literally have consumer chips that output hundreds of watts of heat. Apple is nowhere near that. They could increase the power consumption of their SoC by 2x and still be below mainstream desktop CPUs.

Apple ships mostly laptops, and they don't want to ship laptops with monstrous energy consumption like Razer or Alienware. C'mon man, don't play word games.

Ray tracing will come to the M series when MacBooks can still maintain something near their current power consumption and heat. It would be completely unlike Apple to ship hot, power-hungry laptops again when users and their own marketing are opposed to it.
 
It would make the SoC hot. You know it does regardless of this semantic blah. Everyone knows raytracing does that.



Apple ships mostly laptops, and they don't want to ship laptops with monstrous energy consumption like Razer or Alienware. C'mon man, don't play word games.

Ray tracing will come to the M series when MacBooks can still maintain something near their current power consumption and heat. It would be completely unlike Apple to ship hot, power-hungry laptops again when users and their own marketing are opposed to it.
Nobody knows it.

It's a hilarious thought. Ray tracing only consumes a lot of resources; it does not magically make hardware exceed its thermal DESIGN limits.

A 35W TDP is still a 35W TDP no matter the workload you put on it. If the M3 series runs hotter, it will be because those chips have a higher TDP than the previous gen, again. Not because of ray tracing.
 
Nobody knows it.

It's a hilarious thought. Ray tracing only consumes a lot of resources; it does not magically make hardware exceed its thermal DESIGN limits.

A 35W TDP is still a 35W TDP no matter the workload you put on it. If the M3 series runs hotter, it will be because those chips have a higher TDP than the previous gen, again. Not because of ray tracing.

Oh sure, you can HAVE ray tracing in a 35W CPU today, but you know very well that you will get some people, like gamers, complaining that the ray tracing doesn't perform as fast as the competition at 120-200W. It becomes a headache.

Better to wait until more respectable ray tracing performance can fit in that power envelope. This is what I said.

That could happen within a year, or maybe a couple more.
 
It would make the SoC hot. You know it does regardless of this semantic blah. Everyone knows raytracing does that.

Any kind of "compute" makes the chip hot, and how hot it gets depends on the frequency, kind, and number of transistors involved, not on what they are actually doing.

Now, ray tracing does require more "compute" than rasterization, so yeah, a chip doing it will either produce slower results or consume more power. But that's also true for every other optional feature.
 
Any kind of "compute" makes the chip hot, and how hot it gets depends on the frequency, kind, and number of transistors involved, not on what they are actually doing.

Now, ray tracing does require more "compute" than rasterization, so yeah, a chip doing it will either produce slower results or consume more power. But that's also true for every other optional feature.

Yes that’s what I said. Anyway I answered koyoot just now.
 
Oh sure, you can HAVE ray tracing in a 35W CPU today, but you know very well that you will get some people, like gamers, complaining that the ray tracing doesn't perform as fast as the competition at 120-200W. It becomes a headache.

Better to wait until more respectable ray tracing performance can fit in that power envelope. This is what I said.

That could happen within a year, or maybe a couple more.
Do you even know what is required to perform ray tracing in hardware? Or did you get the conclusion that RT magically makes everything run hot because COMPUTE-heavy Nvidia GPUs consume north of 400W during gaming?
 
It would make the SoC hot. You know it does regardless of this semantic blah. Everyone knows raytracing does that. Even hot Nvidia RTX GPUs throttle.

That's some argument you've got there. Care to explain it a bit more? Ray tracing is just memory reads and compute. Why would it be "more hot" than other memory reads and compute that have no problem running on Apple SoCs?

Apple ships mostly laptops, and they don't want to ship laptops with monstrous energy consumption like Razer or Alienware. C'mon man, don't play word games.

Who is talking about the monstrous energy consumption? Apple can implement hardware RT without significantly increasing the needed power. They published plenty of patents that describe solutions to the problems you mention.
 
Who is talking about the monstrous energy consumption? Apple can implement hardware RT without significantly increasing the needed power. They published plenty of patents that describe solutions to the problems you mention.
Especially considering that Apple's architectures are culling-heavy, based on tile-based rasterization. It would be no different for RT to offload plenty of work from the cores to make it "current gen" viable, despite the low thermal envelopes that Apple loves for its chips.
 
That's some argument you've got there. Care to explain it a bit more? Ray tracing is just memory reads and compute. Why would it be "more hot" than other memory reads and compute that have no problem running on Apple SoCs?

If ray tracing incurs a high percentage of cache misses and has to go 'off-die' for data access, then it will run hotter. Going off-die is incrementally hotter. It's a similar issue if it thrashes the cache and makes other work go off-die.
Not all memory read requests have to travel the furthest possible distance.

On Nvidia setups, though, the ray tracing feature is generally confined to bigger dies. There is just more stuff.

Dedicated ray tracing hardware should require less compute, though. Fixed-function hardware should win versus doing it with general cores. I think Apple is going to add ray tracing, but primarily to get better perf/watt as opposed to just "warp speed" performance.

Otherwise you are kind of comparing apples to oranges: no ray tracing at all (generating a different picture) versus having it on. Turning down the rendering resolution would also reduce power. Just adding "more reads and more compute" is going to make it run hotter than not doing it at all.
 
If ray tracing incurs a high percentage of cache misses and has to go 'off-die' for data access, then it will run hotter. Going off-die is incrementally hotter. It's a similar issue if it thrashes the cache and makes other work go off-die.
Not all memory read requests have to travel the furthest possible distance.

On Nvidia setups, though, the ray tracing feature is generally confined to bigger dies. There is just more stuff.

Dedicated ray tracing hardware should require less compute, though. Fixed-function hardware should win versus doing it with general cores. I think Apple is going to add ray tracing, but primarily to get better perf/watt as opposed to just "warp speed" performance.

Otherwise you are kind of comparing apples to oranges: no ray tracing at all (generating a different picture) versus having it on. Turning down the rendering resolution would also reduce power. Just adding "more reads and more compute" is going to make it run hotter than not doing it at all.

Yes, I can sort of agree. Apple will pursue ray tracing hardware because it makes ray tracing more efficient than the slow software compute route. They will probably try to lead on performance per watt while ray tracing, but leave absolute ray tracing beast performance to Nvidia, as they just won't go that high on wattage.
 
"Could be the base-level M3"? Pretty good chance that is just arm-flapping by Gurman - comparing apples to oranges to present some significant core-count jump just to build buzz and clickbait. The M2 Pro can do a 12-core CPU and an 18-core GPU. The binned entry model isn't primarily a yield issue; it is a make-fatter-profits issue. Apple is still going to want fat profits during the M3 generation. That is highly unlikely to go away.

"Could be" isn't a leak from Apple. More likely it's Gurman seeing what he wants to see, as opposed to something someone at Apple showed or told him.

The N3 wafers cost more. So if Apple keeps the core counts the same there is a good chance they can control the cost increases for the more expensive process.

Had a chance to read some other articles and got a link to what went out on Bloomberg's site. Gurman seems to be pointing more at the increase of E cores in the "entry" lineup than at the total core count.

M1 Pro 6P and 2E
M2 Pro 6P and 4E
M3 Pro 6P and 6E

The significant change there over time is the E cores, not the P cores. So there's not another two- or four-core P cluster coming, but probably another E-core cluster (and a pretty decent chance it is a full 4-core cluster). Space-wise that is much easier to do (lower cache die-space overhead, much heavier AMX facility sharing, etc.).

If Apple is chasing higher price points for the MBP, then not only are the P cores binned in that M3; there is a decent chance the E cores are binned there also, and the full die would be a 16-core package. If Apple is trying to 'keep up with the Joneses' in the x86-64 laptop core-count wars, then it might want to market 16-core laptops to compete with the 16-core Intel/AMD offerings. Folks doing superficial spec comparisons just count 'cores', not what type of cores.

The notion that this Ars Technica article goes into:

"...
Though Apple has (mostly) ditched Intel, the two companies have taken a similar approach to improving their processors' performance in recent years: lean on architectural upgrades and small clock speed boosts to improve single-threaded performance on the big CPU cores while adding an increasing number of small high-efficiency cores to bolster multi-threaded performance for pro-level workloads that can use every CPU core you throw at them.
..."



The M1 and M2 Pro were both on N5, so the partial-to-full E-core cluster came on a single node (and grew the die size). N3 would be an opportunity to reverse the die-size bloat that M2 generally rolled out across the lineup. If Apple keeps the die size the same, then their package costs will go up. To offset that, they will likely look for more mark-up in BTO configs. If two more P cores is $100-200, then two more E cores could easily be another $50-100. The M3 Pro/Max would have an even longer pricing ladder, which gooses the full-die price up even further. Incrementally thinner margins on the entry SoC, but incrementally fatter margins on the top half of the configurations - that could balance out the cost increases.


M3 Pro variants just on CPU core counts could be:

12 ( 6P 6E)
14 ( 6P 8E)
16 ( 8P 8E)

and then whatever permutations they want to add in on GPU cores.

If they can price-ladder the CPU cores more, then Apple doesn't have to price-ladder the GPU cores as much. It may be that the entry GPU is less binned down than the M1/M2 GPUs were. (Given that AMD/Intel iGPUs are getting much better in upcoming generations - AMD Strix and Intel's Adamantine-boosted iGPU - Apple has much less room to play overpriced incremental GPU-core cost games there.) It could start at 18 but cap at 20 for the Pro, and just be bigger-transistor-budget GPUs with more capabilities (some perf/watt-effective ray tracing) and overall performance. [The M2 Pro has one GPU core binned down all the time: clusters of 10 topping out at 19 means something is just off all the time (similar to A14X -> A14Z, only that was the exact same die). Going to N3 could uncork that.]


Also, with a bigger E-core budget, there's probably a better "save power mode" where the vast majority of P cores are turned off and most non-foreground work runs on E cores. Or even all of it on E cores (watching a movie full screen on the single embedded screen).



The 36GB memory likely isn't an entry configuration; rather, this particular system being tested wasn't the 'entry' model.
It is a useful leak for Apple, though, that the Pro isn't 'stuck' at a 32GB max capacity. (The M2 went up but the M2 Pro didn't - pretty good chance that was a component supply constraint and/or cost issue. It also helps 'walk' more users into buying a Max variant in the short term... so Apple makes more, too.)
 


It is a useful leak for Apple, though, that the Pro isn't 'stuck' at a 32GB max capacity. (The M2 went up but the M2 Pro didn't - pretty good chance that was a component supply constraint and/or cost issue. It also helps 'walk' more users into buying a Max variant in the short term... so Apple makes more, too.)

I don’t think it’s any more complicated than “we don’t want to design an M2 Pro Package with room for three or more memory chips when 1) most customers who need that much RAM will buy the Max anyway, and 2) most of those who don’t need the Max but do need its RAM will be willing to upgrade”.

(If the M3 Pro does come with a 36 GiB config, I guess that means it’ll suddenly feature six memory chips instead of two?)
 
I don’t think it’s any more complicated than “we don’t want to design an M2 Pro Package with room for three or more memory chips when 1) most customers who need that much RAM will buy the Max anyway, and 2) most of those who don’t need the Max but do need its RAM will be willing to upgrade”.

The plain M2 didn't need more chips; the M2 Pro wouldn't either. Same dies stacked in a slightly different external package. From the M1 to the M2 generation, the number of memory controllers and channels didn't change.
 
The 36GB memory likely isn't an entry configuration; rather, this particular system being tested wasn't the 'entry' model.
It is a useful leak for Apple, though, that the Pro isn't 'stuck' at a 32GB max capacity. (The M2 went up but the M2 Pro didn't - pretty good chance that was a component supply constraint and/or cost issue. It also helps 'walk' more users into buying a Max variant in the short term... so Apple makes more, too.)
It actually has everything pointing to the possibility that it really is the base configuration.

The M2 Pro was an 8P/4E CPU config, but the base was 6P/4E. Going to 6P/6E for the base config is very simple if you just increase the number of efficiency cores and slap the new architecture on the performance cores.

And 36 GB of RAM? If everything available to manufacturers in 2024 is 6 GB and up per 64-bit chip of 6400 MHz LPDDR5, it's a no-brainer that the base config would have 36 GB of RAM if it has a 384-bit bus.

So base config: 6P/6E, 36 GB of RAM, 18 GPU cores. CTO option: 8P/8E + 24 GPU cores, with 48 and 72 GB of RAM.
 
If 36 is indeed the base, I imagine the next 14-inch will start at $2399. So you only really get 4 Gigs more.
 
It actually has everything pointing to the possibility that it really is the base configuration.


And 36 GB of RAM? If everything available to manufacturers in 2024 is 6 GB and up per 64-bit chip of 6400 MHz LPDDR5, it's a no-brainer that the base config would have 36 GB of RAM if it has a 384-bit bus.

The M1/M2 Pro has a 256-bit-wide memory bus. The M3 pretty likely has the same bus width. N3 isn't going to change that; if anything it is more "set in stone" (there's no shrink on off-die connections and bus width - analog, much of I/O, and cache aren't shrinking much at all).

256 / 64 = 4 modules; 4 x 6 GB = 24 GB, not 36 GB.

Second, I don't think Apple is using off-the-shelf generic LPDDR packaging. Apple has more active memory controllers/channels than most folks attach to generic off-the-shelf LPDDR. If it is minimally 6 GB and completely non-banked, then fine. If the semi-custom stuff gets delivered to Apple at the old 16GB aggregate prices, then yeah, Apple could go that path. But if it costs more... then I'm somewhat skeptical. (Apple isn't going to take a margin hit.) Have RAM vendors cut their prices in half with the new packages?

Similarly, a wider bus that requires more packages isn't going to hold costs in check either. A 384-bit bus doesn't make it any easier to control costs.
 
The M1/M2 Pro has a 256-bit-wide memory bus. The M3 pretty likely has the same bus width. N3 isn't going to change that; if anything it is more "set in stone" (there's no shrink on off-die connections and bus width - analog, much of I/O, and cache aren't shrinking much at all).

256 / 64 = 4 modules; 4 x 6 GB = 24 GB, not 36 GB.

Second, I don't think Apple is using off-the-shelf generic LPDDR packaging. Apple has more active memory controllers/channels than most folks attach to generic off-the-shelf LPDDR. If it is minimally 6 GB and completely non-banked, then fine. If the semi-custom stuff gets delivered to Apple at the old 16GB aggregate prices, then yeah, Apple could go that path. But if it costs more... then I'm somewhat skeptical. (Apple isn't going to take a margin hit.) Have RAM vendors cut their prices in half with the new packages?

Similarly, a wider bus that requires more packages isn't going to hold costs in check either. A 384-bit bus doesn't make it any easier to control costs.
You do realize that 36 GB is an IMPOSSIBLE configuration on a 256-bit bus?

The only possibilities for that capacity are 192-bit and 384-bit buses.

The cheapest one from a manufacturing-cost point of view is the 384-bit bus.

The 384-bit bus is there solely to feed the new performance CPU architecture and the GPU cores. If it has ray tracing, you need all the bandwidth you can get to feed those capabilities.
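A quick brute-force check of that claim, using the Micron module capacities listed earlier in the thread (a sketch; it assumes one 64-bit module per channel and a uniform capacity across modules):

```python
# Which bus widths can reach exactly 36 GB with the available
# per-module (64-bit) capacities: 2, 4, 6, 8, 12, 16 GB?
capacities_gb = [2, 4, 6, 8, 12, 16]
target_gb = 36

for bus_bits in range(64, 513, 64):
    modules = bus_bits // 64
    for cap in capacities_gb:
        if modules * cap == target_gb:
            print(f"{bus_bits}-bit bus: {modules} x {cap} GB")
# Only 192-bit (3 x 12 GB) and 384-bit (6 x 6 GB) hit 36 GB;
# a 256-bit bus would need nonexistent 9 GB modules.
```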
 
We know the Mac Pro is coming; Apple has told us. We also know it is going to be M3, since TSMC confirmed this through the financial disclosure requirements that oblige them to share information with their shareholders. There is no way Apple can prevent that disclosure, though I am sure the respective legal departments have a tug of war over what gets disclosed.

What Mark Gurman may have uncovered was a compute module for the upcoming Mac Pro. The module will have to have ECC memory, and the ECC controller will be within the Apple silicon. So, externally, it looks like there is 36 GB of RAM, but 4 GB is used for ECC. This also suggests there will be a unique Mac Pro SiP; I have come to realize this is necessary if they use ECC. If they ditch ECC, then they could use the same modules. Is there any reason they would ditch ECC? Does the memory being tightly coupled give them the same reliability?
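For what it's worth, the 36 GB / 4 GB split lines up exactly with classic SECDED ECC, which adds 8 check bits for every 64 data bits. A quick sanity check (my own back-of-the-envelope math, not anything confirmed about Apple's design):

```python
# SECDED ECC stores 8 check bits per 64 data bits,
# so check bits are 8/72 = 1/9 of the total capacity.
total_gb = 36
ecc_gb = total_gb * 8 / (64 + 8)   # 4.0 GB held for check bits
usable_gb = total_gb - ecc_gb      # 32.0 GB visible to software
print(ecc_gb, usable_gb)           # 4.0 32.0
```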
 
I have a hunch, an educated guess, that these devices with storage on the chip package will start dying after 4-5 years. The age of 10-year-old Macs still kicking is coming to an end.
What makes you assume that? It's not like machines without onboard chips are failing at that point. It doesn't really matter whether they are on the package or not. Macs have had non-replaceable SSDs for several years anyway.
 