
cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
There’s no amount of money I wouldn’t bet that this isn’t the case. New microarchitecture for higher end Mac chips? There is zero evidence for this. If there were a new microarchitecture about to be released we would know by now and have codenames.

Zero. Chance.



I’m talking about die sizes when using new process nodes. New nodes tend to start off with smaller chips. Just look at AMD waiting years to put their new very large GPU dies on 7nm when they released smaller ones before that. It’s more economical to wait for yields to improve.

That’s not how chip making works? That’s just not true. Intel always releases a new microarchitecture and process node with something like their smaller Y-series laptop chips before their huge Xeon chips. That’s just how it works.

Yes… theoretically they could debut a new microarchitecture on the Mac before the iPhone... doesn’t mean they will. iPhone is the priority and (in my opinion) is always going to get the latest node and microarch, just like it always has. Unless some reliable reporting disputes this, I will assume it remains the case instead of baselessly speculating otherwise.

First of all, by the time these macs ship it will have been a year since firestorm/ice storm. How long do you expect them not to innovate on those?

Second, no, new nodes don’t “tend to start off with smaller chips.” Sometimes they do, sometimes they don’t. Again, I worked at AMD and designed many chips, and I gave the example of Sledgehammer and Clawhammer. (Hint: Sledgehammer big, Clawhammer small.) Third, N5P does not really qualify as a new node. It’s the same node with some minor changes to the front end.

The fact that Intel does things a certain way doesn’t mean anyone else has to - unless they want to end up where Intel is.

As for “just like it always has,” do you mean the ONE TIME that happened? I mean, only ONCE did a core microarchitecture appear on iPhone prior to Mac. From that one data point you claim to have proven a trend?
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
True. But of course there is no evidence for anything else. All there is are rumors. Given @cmaier’s background I tend to assume he’s in possession of certain information… which in turn makes me assume he might be correct.

I can neither confirm nor deny. I will say that if I had any information, it would not be coming from marketing people or people who are responsible for final products (i.e., Macs), and that people designing CPUs know when they are done with their own work, but might not know when a product using that work is coming out. So, for example, they may know that the new cores they designed were fabbed in a big (Mac) chip before they were fabbed in a little (phone) chip. Could Apple nonetheless hold those Mac chips in their pocket for a year for whatever reason? Sure.

I also know that if yield were a problem on N5P (it’s not), you’d rather fab big chips first: you need far fewer of them, it lets you debug the issues, and the extra cost caused by the poor yield is easier to absorb in the price of a Mac than in the price of a phone.
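To put rough numbers behind that intuition, here is a minimal sketch using the textbook Poisson die-yield model, Y = exp(-A * D0). Every figure in it (defect density, wafer cost, die areas, gross die counts) is a made-up illustrative assumption, not anything from TSMC or Apple:

Code:
import math

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Classic Poisson die-yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

# All numbers below are made-up illustrative values -- not TSMC data.
defect_density = 0.001   # defects per mm^2 (hypothetical)
wafer_cost = 17000.0     # USD per 300mm wafer (hypothetical)

chips = {
    # name: (die area in mm^2, rough gross dies per 300mm wafer)
    "small phone-class SoC": (100.0, 600),
    "big Mac-class SoC": (400.0, 140),
}

for name, (area, gross) in chips.items():
    y = poisson_yield(area, defect_density)
    good_dies = gross * y
    print(f"{name}: yield {y:.1%}, ~{good_dies:.0f} good dies/wafer, "
          f"${wafer_cost / good_dies:,.0f} per good die")

Even with markedly worse yield on the big die, you need far fewer of them, and the extra dollars per good die are much easier to bury in a Mac's price than in a phone's.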

But, of course, it’s hard for those facts to compete with the fact that Intel does something different, or with our one data point where Apple did iPhones before Macs.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,521
19,678
There’s no amount of money I wouldn’t bet that this isn’t the case. New microarchitecture for higher end Mac chips? There is zero evidence for this. If there were a new microarchitecture about to be released we would know by now and have codenames.

As others already said, there is not much evidence for anything. Apple is quite good at secrecy. However, we do have some evidence. We "know" that the new Mac chips are codenamed Jade Die. We know that Apple has been working on a more powerful GPU core (Lifuka) that is supposed to launch this year. We know that the prosumer chips must be faster than the entry-level chips. And finally, we know that Apple is already making chips on the N5P node. So at this point we could really get anything.

That’s a lot of money invested for no real reason. The current cores are already world class.

They are world class in their bracket, true. But what about prosumer hardware? In single-threaded performance M1 can match the fastest x86 cores out there, but that's about it. Intel just brought out a Tiger Lake refresh with a higher single-core turbo. They are bringing a new CPU architecture (Alder Lake) this fall. AMD just showed a chip with 3D-stacked 192MB cache the other day. New entry-level Nvidia and AMD GPUs are coming to laptops, with 30% improvements over the last generation.

It's not enough for Apple to produce a CPU that consumes much less power. It's great for a MacBook Air, not so much for a Pro. They have to show that their product is faster, better, more efficient. They need to be faster than anything that Intel or AMD can put in a laptop (or even in a compact desktop). Can they do it with Firestorm? I don't know. We have seen no evidence that Firestorm can reach more than 3.2 GHz. A new architecture, designed for performance, with more execution units, more cache, higher clocks, a deeper execution window: that could do the trick, though.
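For a rough sense of what "matching at 3.2 GHz" implies, here is a trivial back-of-the-envelope sketch; the 5 GHz competitor turbo is an illustrative assumption, not a measured figure:

Code:
# Single-thread performance ~ IPC * clock. Illustrative assumptions only.
firestorm_ghz = 3.2   # highest clock Firestorm has been observed at (per the post)
x86_turbo_ghz = 5.0   # assumed competitor single-core turbo (hypothetical)

# If both land at roughly the same single-threaded score:
print(f"Implied IPC advantage for Firestorm: ~{x86_turbo_ghz / firestorm_ghz:.2f}x")

# What clock alone would buy, holding IPC constant:
for ghz in (3.5, 3.8, 4.2):
    print(f"{ghz} GHz -> ~{ghz / firestorm_ghz:.0%} of today's single-thread performance")

The same arithmetic cuts the other way: a large enough IPC gain could deliver the "clearly faster" result without touching the clock at all.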
 

Fomalhaut

macrumors 68000
Oct 6, 2020
1,993
1,724
There’s no amount of money I wouldn’t bet that this isn’t the case. New microarchitecture for higher end Mac chips? There is zero evidence for this. If there were a new microarchitecture about to be released we would know by now and have codenames.

Zero. Chance.



I’m talking about die sizes when using new process nodes. New nodes tend to start off with smaller chips. Just look at AMD waiting years to put their new very large GPU dies on 7nm when they released smaller ones before that. It’s more economical to wait for yields to improve.

That’s not how chip making works? That’s just not true. Intel always releases a new microarchitecture and process node with something like their smaller Y-series laptop chips before their huge Xeon chips. That’s just how it works.

Yes… theoretically they could debut a new microarchitecture on the Mac before the iPhone... doesn’t mean they will. iPhone is the priority and (in my opinion) is always going to get the latest node and microarch, just like it always has. Unless some reliable reporting disputes this, I will assume it remains the case instead of baselessly speculating otherwise.
I would be careful arguing about chip design and fabrication with @cmaier. He worked for 10 years as a designer at AMD, and at other places. He very probably still has industry contacts connected to Apple, and may well have some inside information (which he, quite properly, isn't sharing).

He sounds like a learned fellow, and we are fortunate to have his experience on this forum.

 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I would be careful arguing about chip design and fabrication with @cmaier. He worked for 10 years as a designer at AMD, and at other places. He very probably still has industry contacts connected to Apple, and may well have some inside information (which he, quite properly, isn't sharing).

He sounds like a learned fellow, and we are fortunate to have his experience on this forum.


Who’s that guy? Handsome.
 

Serban55

Suspended
Oct 18, 2020
2,153
4,344
@cmaier
Tell us your prediction for these new MBP chips that will probably also make an entrance in the next Mac mini and bigger iMac.
 

Serban55

Suspended
Oct 18, 2020
2,153
4,344
Kind of what we all predict. By core design, do you mean it will be a bigger SoC?

Now I want to hear predictions from Johny Srouji :)
 
Last edited:

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Kind of what we all predict. By core design, do you mean it will be a bigger SoC?

Now I want to hear predictions from Johny Srouji :)

Well I think the SoC will definitely be bigger as it will certainly have more cores. Each core may be bigger than the existing cores, though they may not be. Depends on whether Apple is emphasizing higher IPC or higher clock rate. I think they will emphasize higher IPC, since that’s more power efficient, in which case each core would likely be bigger. But the clock rate will probably be higher too (especially since they can get a little bit better cooling out of these entirely-new case designs). The GPU may be on a separate die, though. Not sure - it’s not really necessary given the size of the reticle, but if they want to offer a range of graphics capabilities it may be easier for them to do it that way.
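A toy model of why the IPC route tends to win on power; the voltage and capacitance numbers below are purely illustrative assumptions, not Apple data:

Code:
# Dynamic power ~ C * V^2 * f. Reaching a higher clock generally needs a
# higher supply voltage, so the clock route costs power super-linearly.
# The IPC route (a wider core) is modeled as extra switched capacitance
# at the same voltage and frequency. All numbers are illustrative.

def dynamic_power(cap: float, volts: float, freq_ghz: float) -> float:
    return cap * volts ** 2 * freq_ghz

base = dynamic_power(cap=1.0, volts=1.0, freq_ghz=3.2)

# Option A: +20% performance from clocks, assuming ~10% more voltage is needed.
clock_route = dynamic_power(cap=1.0, volts=1.1, freq_ghz=3.2 * 1.2)

# Option B: +20% performance from IPC, assuming ~20% more switched capacitance.
ipc_route = dynamic_power(cap=1.2, volts=1.0, freq_ghz=3.2)

print(f"clock route: {clock_route / base:.2f}x power for 1.2x performance")
print(f"IPC route:   {ipc_route / base:.2f}x power for 1.2x performance")

In practice a wider core does not buy IPC linearly either, but the asymmetry (a V-squared penalty on one side, roughly linear capacitance on the other) is the usual argument for going wider rather than faster.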
 

Colstan

macrumors 6502
Jul 30, 2020
330
711
Who’s that guy? Handsome.
I admit that it's a guilty pleasure of mine to watch people come in here (especially AMD fanboys) who then start arguing with cmaier over chip design and performance, having no idea that they are talking to the chief architect of Opteron and a primary driving force behind x86-64. I'm perfectly happy to sit under the learning tree and just listen sometimes, because I often find out something I didn't know. I would have loved to pick his brain back in the early Athlon days. (It also doesn't hurt that he "may or may not" have source(s) within Apple's semiconductor design team, from his previous life at AMD and related companies.)
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
I admit that it's a guilty pleasure of mine to watch people come in here (especially AMD fanboys) who then start arguing with cmaier over chip design and performance, having no idea that they are talking to the chief architect of Opteron and a primary driving force behind x86-64. I'm perfectly happy to sit under the learning tree and just listen sometimes, because I often find out something I didn't know. I would have loved to pick his brain back in the early Athlon days. (It also doesn't hurt that he "may or may not" have source(s) within Apple's semiconductor design team, from his previous life at AMD and related companies.)
So in a way it is @cmaier's fault that AMD didn’t adopt IA-64 and kept x86 alive?
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,521
19,678
I admit that it's a guilty pleasure of mine to watch people come in here (especially AMD fanboys) who then start arguing with cmaier over chip design and performance, having no idea that they are talking to the chief architect of Opteron and a primary driving force behind x86-64. I'm perfectly happy to sit under the learning tree and just listen sometimes, because I often find out something I didn't know. I would have loved to pick his brain back in the early Athlon days. (It also doesn't hurt that he "may or may not" have source(s) within Apple's semiconductor design team, from his previous life at AMD and related companies.)

Not to diminish our @cmaier's achievements, but I think in this particular passage you might be confusing him with Jim Keller ;) If my memory serves me right though, Cliff was the manager in charge for AMD Bulldozer. Hope that he can clarify.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
Well, thank god for that. IA-64 was a train wreck. Although I agree that AMD64 was a missed opportunity that has stalled x86's potential.
Why was it a train wreck? And what could AMD have done to make x86's potential shine?

Note, some of my comment about @cmaier was tongue in cheek. But now I am genuinely curious about it. I know IA64 was slow and the compilers were not great, but I am not sure why those things couldn't have improved over time.
 

Colstan

macrumors 6502
Jul 30, 2020
330
711
Not to diminish our @cmaier's achievements, but I think in this particular passage you might be confusing him with Jim Keller ;) If my memory serves me right though, Cliff was the manager in charge for AMD Bulldozer. Hope that he can clarify.
I was vaguely familiar with the AMD team structure of the time, but I generalized in that post; I regret the inaccuracy. Some of that was based upon my recollection of what @cmaier said in a past post, but I wasn't attempting to convey the exact structure of the team, just a generality.

Regardless, when I posted about the ARMv9 announcement a few months ago, I specifically tagged him and yourself in that post. I knew that the two of you would be both knowledgeable and honest. Too many people appear to fall into the Dunning-Kruger trap, but neither of you suffers from that blight. You gave me a truthful and honest response without bloviating or embellishment. My point being, I include @leman under that same learning tree that you share with @cmaier.
 

Colstan

macrumors 6502
Jul 30, 2020
330
711
So in a way it is @cmaier's fault that AMD didn’t adopt IA-64 and kept x86 alive?
It's unlikely that AMD would have adopted IA-64. To use a highly technical legal term, Intel patented the hell out of the instruction set and related technologies, assuming I recall correctly. I believe AMD's contract with Intel only covers x86. It was another one of Intel's monopolistic tactics of that era, so AMD had no choice but to go with x86-64. For them, it was the right decision, because the promise of Itanium and the reality were two very different things. I'm sure others could elaborate more on the issue.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,521
19,678
Why was it a train wreck?

It appears that most experts agree that Itanium was designed based on a flawed premise and that the architecture would not be scalable. It put too much burden on the compiler and limited potential advancements in the CPU backend. In the end, out-of-order execution advanced far beyond what Itanium's architects thought would be possible and made Itanium's approach obsolete. Interestingly enough, the same thing happened in the GPU space. Both Nvidia and AMD abandoned VLIW designs in favor of simpler scalar-like programming models with latency hiding.
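A very schematic sketch of the "burden on the compiler" point: a statically scheduled (VLIW/EPIC-style) machine has to bake an assumed load latency into the schedule, while an out-of-order core can keep executing independent work under an unexpected cache miss. All cycle and op counts below are made up for illustration:

Code:
# Toy comparison of static (VLIW/EPIC-style) scheduling vs. out-of-order
# execution under unpredictable load latency. Not a real pipeline model.

LOAD_HIT, LOAD_MISS = 4, 200   # load latency in cycles (assumed values)
DEPENDENT_OPS = 10             # ops that need the load result (1 op/cycle)
INDEPENDENT_OPS = 50           # ops that do not depend on the load

def statically_scheduled_cycles(load_latency: int) -> int:
    # The compiler scheduled for a 4-cycle load: it hid 4 independent ops
    # under the load, but any latency beyond that is an in-order stall.
    filled = min(LOAD_HIT, INDEPENDENT_OPS)
    stall = max(0, load_latency - LOAD_HIT)
    return LOAD_HIT + stall + DEPENDENT_OPS + (INDEPENDENT_OPS - filled)

def out_of_order_cycles(load_latency: int) -> int:
    # An OoO core keeps issuing independent ops under the miss
    # (assuming the reorder window is big enough to hold them).
    hidden = min(load_latency, INDEPENDENT_OPS)
    return load_latency + DEPENDENT_OPS + (INDEPENDENT_OPS - hidden)

for latency, label in ((LOAD_HIT, "cache hit"), (LOAD_MISS, "cache miss")):
    print(f"{label}: static {statically_scheduled_cycles(latency)} cycles, "
          f"out-of-order {out_of_order_cycles(latency)} cycles")

When the real latency matches the compiler's assumption the two come out the same; when it does not (and memory latency is inherently unpredictable), the hardware scheduler hides far more of the miss, which is roughly why out-of-order designs kept scaling while Itanium's approach did not.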

And what could AMD have done to make x86's potential shine?

You know, it's easy to criticize tough decisions in hindsight and I don't want to be that guy. AMD64 was the pragmatic choice; I just think it was a missed opportunity.

My point being, I include @leman under that same learning tree that you share with @cmaier.

Thanks, that's very kind of you! Please do take my effusions with a grain of salt, though; unlike many other users here, I am merely a hobbyist. My PhD is in theoretical language science, not engineering ;)
 

Joelist

macrumors 6502
Jan 28, 2014
463
373
Illinois
Well I think the SoC will definitely be bigger as it will certainly have more cores. Each core may be bigger than the existing cores, though they may not be. Depends on whether Apple is emphasizing higher IPC or higher clock rate. I think they will emphasize higher IPC, since that’s more power efficient, in which case each core would likely be bigger. But the clock rate will probably be higher too (especially since they can get a little bit better cooling out of these entirely-new case designs). The GPU may be on a separate die, though. Not sure - it’s not really necessary given the size of the reticle, but if they want to offer a range of graphics capabilities it may be easier for them to do it that way.
In a practical sense, how much wider can they get? They are already 8-wide when the closest competitor is 4-wide.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
It appears that most experts agree that Itanium was designed based on a flawed premise and that the architecture would not be scalable. It put too much burden on the compiler and limited potential advancements in the CPU backend. In the end, out-of-order execution advanced far beyond what Itanium's architects thought would be possible and made Itanium's approach obsolete. Interestingly enough, the same thing happened in the GPU space. Both Nvidia and AMD abandoned VLIW designs in favor of simpler scalar-like programming models with latency hiding.

Aye, people still research VLIW in the hope that more modern compilers will be up to the task, and for certain applications back in the day Itanium was actually quite good. But, in general, it was a dead end and so far remains so.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
First of all, by the time these macs ship it will have been a year since firestorm/ice storm. How long do you expect them not to innovate on those?

Second, no, new nodes don’t “tend to start off with smaller chips.” Sometimes they do, sometimes they don’t. Again, I worked at AMD and designed many chips, and I gave the example of Sledgehammer and Clawhammer. (Hint: Sledgehammer big, Clawhammer small.) Third, N5P does not really qualify as a new node. It’s the same node with some minor changes to the front end.

The fact that Intel does things a certain way doesn’t mean anyone else has to - unless they want to end up where Intel is.

As for “just like it always has,” do you mean the ONE TIME that happened? I mean, only ONCE did a core microarchitecture appear on iPhone prior to Mac. From that one data point you claim to have proven a trend?

Interesting ... previously, when we discussed this possibility months ago, you weren’t sure what their timing would be ... now you’re more certain ... very interesting ...

-----------

Personally, as I don't have any inside information, I'm unsure as to what cores we'll see (assuming, fingers crossed, that we get a release of new hardware). Given the timing and the various rumors, I could see either Firestorm or Avalanche (rumored to be the name of the next big cores). But more cores, more RAM, and a bigger GPU should be a given for the rumored form factors ... maybe LPDDR5 if the cores are next-gen.

-----------

[cmaier's avatar]

cmaier

Suspended


Edit: oh dear ... what did you do? ;)
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
As others already said, there is not much evidence for anything. Apple is quite good at secrecy. However, we do have some evidence. We "know" that the new Mac chips are codenamed Jade Die. We know that Apple has been working on a more powerful GPU core (Lifuka) that is supposed to launch this year.

The code words "Jade", "Tonga", and "Lifuka" seem to be consistently names for dies, not "cores" or core microarchitecture names. Pretty decent chance that Lifuka could be a GPU-focused chiplet. So more of the same cores with some cache and an inter-die communication mechanism.

Or a die that is just waaaaaaaaay skewed toward GPU cores (if Apple is highly focused on monolithic dies to incrementally save power and 2D/3D packaging costs).



We know that the prosumer chips must be faster than the entry-level chips. And finally, we know that Apple is already making chips on the N5P node. So at this point we could really get anything.

"Must be faster". Required? The Mini, Air, iMac 24", and lower-end MBP 13" all have just one chip. (Back in the Intel variants that would have been 9-12 different CPU speed variations.) Apple has shifted this more toward picking the better-fit container than buying performance. At the very least, it has shifted away from geeky CPU benchmark scores being the "guide".

What is more necessary is that these be more competitive with non-Mac competitors.


Already making doesn't necessarily mean shipping soon. If the "bake" time is 90-100 days, then making N5P now means having a sizable inventory in late August / early September ... right around yearly iPhone time.
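The calendar math behind that guess, as a quick sketch (the start date is an assumed "now" of mid-2021; the 90-100 day cycle time is the figure from the post):

Code:
from datetime import date, timedelta

start = date(2021, 6, 1)   # assumed wafer-start date (hypothetical)

for cycle_days in (90, 100):
    finish = start + timedelta(days=cycle_days)
    print(f"Wafers started {start} come out around {finish}")

That lands wafer-outs at the end of August through early September.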

To have stuff shipping in July, Apple would have had to start N5P production a month or so earlier, back in the spring, when the node was still solidly in the "at risk" (not "high volume") stage.

It is unlikely that Apple is going to pull N5P wafer starts from the iPhone SoC for the Mac SoC and cause a slide in the iPhone release. As long as the iPhone is on a strict 12-month release cycle, it is probably going to get higher priority. [Apple has shown no rigidly hard commitment to 12-month cycles for the Mac. Folks try to throw all the blame on Intel, but the MBA sliding for 2-3 years, the Mac Pro's 3- and 6-year snooze cycles, the Mini's snooze cycle, etc.: that wasn't all Intel.]

N5P is very highly likely the A15. The "don't know" part is whether there is a relatively small "side piece" of those wafers that got allocated to Mac SoCs.




They are world class in their bracket, true. But what about prosumer hardware? In single-threaded performance M1 can match the fastest x86 cores out there, but that's about it. Intel just brought out a Tiger Lake refresh with a higher single-core turbo. They are bringing a new CPU architecture (Alder Lake) this fall. AMD just showed a chip with 3D-stacked 192MB cache the other day. New entry-level Nvidia and AMD GPUs are coming to laptops,

So they are "winning" on single-thread performance and just need to add features to boost single thread and limit power (e.g., the obj_c dispatch prediction discussed earlier in the thread)? That isn't going to help much. If the core count is 10-16 on the AMD/Intel side, then the primary missing piece is "more area for more stuff", far more than fewer "more magical" cores.

The mainstream x86 market primarily leverages incrementally higher clock speeds to charge higher prices. If Apple is solidly on this "world best" iGPU kick, then they have another lever to charge higher prices on (and already do in the MBP 16" and iMac 27" with dGPUs). Customer needs a bigger GPU ... pay more. Need a bigger integrated screen ... pay more. More CPU cores ... well, pay more for more GPU cores too. Need more RAM ... pay more, only from us as a source (and buy it up front, because pragmatically you can't add more later).

In short, they have a huge 'hook' to generate more revenue in the "max integrated" aspects of the SoC design.

10 cores (split 8P/2E) probably stacks up quite well against most, if not all, of Alder Lake (which maxes out at 8 "big" cores). Apple isn't particularly behind there; the work is to scale up the memory subsystem to keep those cores fed with data (and keep the hit rate approximately the same).







It's not enough for Apple to produce a CPU that consumes much less power. It's great for a MacBook Air, not so much for a Pro.

Presuming you mean the MacBook Pro here. Leaks so far point to Apple "half sizing" the Mac Pro with the transition. That is less power (and will mean fewer high-power add-in cards).

Certainly not on the iMac 24" which had its cooling capability crippled relative to the 21.5" model.



They have to show that their product is faster, better, more efficient. They need to be faster than anything that Intel or AMD can put in a laptop (or even in a compact desktop). Can they do it with Firestorm? I don't know. We have seen no evidence that Firestorm can reach more than 3.2 GHz.

Cranking the clock higher isn't going to help if you can't keep the core fed. Apple probably isn't going to go after the desktop-replacement-luggable market. Most likely they are going to compete on "best performance solely on battery". Unplug your high-end AMD laptop and the performance takes a hit. And Alder Lake at max configuration, at max compute, is likely in a different zip code when it comes to battery consumption.

Apple didn't do the "move from desk to desk" laptop product before. Why would they try to do that now with the M-series?


A new architecture, designed for performance, with more execution units, more cache, higher clocks, a deeper execution window: that could do the trick, though.

As if the current one isn't designed for high single-threaded performance. Higher clocks aren't a good match for LPDDR4 (or even LPDDR5) memory. More cache and more execution units don't necessarily require a significant jump in core design. Going "wider" is more likely to require that kind of jump in the memory subsystem on that same "LPDDR" path.

Pretty likely these same cores are headed for the A15 with a different uncore on the die when Apple does do the N5P iteration.

Apple is quite unlikely to jump into the desktop, high-end CPU overclocker competition. They largely eschewed overclocking support with Intel chips; not likely they are putting windows in for that with their own stuff. So those "top fuel dragster" single-thread speed freaks ... are going to be able to push more out of x86 with edge-case, exotic setups.
 