
iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
I feel the multi-SoC ASi (Jade 2C / 4C) offerings will be reserved for the Mac Pro, Mac Pro Cube, and possibly the iMac Pro...

iMac Pro for those wanting the top-end all-in-one workstation, Mac Pro Cube for those who don't really need PCI slots & don't want the attached monitor of an all-in-one, full-size Mac Pro for those who do need PCI slots (but still don't want the attached monitor of an all-in-one)...?!?
On that I agree but there is a segment (or was) of middle range iMacs (current 27 inch) that is not pro. I meant Mac mini "pro", not Mac Pro mini ;).
 
  • Like
Reactions: Boil

Serban55

Suspended
Oct 18, 2020
2,153
4,344
On that I agree but there is a segment (or was) of middle range iMacs (current 27 inch) that is not pro. I meant Mac mini "pro", not Mac Pro mini ;).
So are you sure, guys, that we will have just 3 segments for the silicon Macs? I mean the Mn series for the MacBook Air, 24" iMac, iPad Pro etc., the middle-of-the-road MnX for the 14"/16" MacBook Pro and bigger iMac, and then the silicon for the Mac Pro?
So only 3 segments? I am OK with this too if it will be so, but just wondering.
I wonder if the Mac Pro will be very price-configurable, with 2x Mn or 3x Mn (for the base prices of 2 possible configurations), another step up in performance and price with 2x MnX or 3x MnX and so on, OR will it be a different SoC entirely?!

Mac Pro starting from $2000 - dual Mn SoC with up to 32 GB RAM, up to 4 TB SSD, etc.
$2500 - triple Mn SoC with up to 48 GB RAM, 4 TB SSD, etc.

Mac Pro - $2999 dual MnX SoC with up to 64 GB RAM, 8 TB SSD, etc.
- $3500 triple MnX SoC with up to 128 GB RAM, and so on

Is this possible? To be this configurable, from a cheap starting price up to over $5000, based on the SoC and the number of them?
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,522
19,679
So are you sure, guys, that we will have just 3 segments for the silicon Macs? I mean the Mn series for the MacBook Air, 24" iMac, iPad Pro etc., the middle-of-the-road MnX for the 14"/16" MacBook Pro and bigger iMac, and then the silicon for the Mac Pro?
So only 3 segments?

I believe there will be only two chips/platforms: vanilla M-series for low-power consumer Macs and a scalable professional chip for everything else. Bigger Macs will use multiple scalable chips to build more powerful systems.

Mac Pro starting from $2000 - dual Mn SoC with up to 32 GB RAM, up to 4 TB SSD, etc.
$2500 - triple Mn SoC with up to 48 GB RAM, 4 TB SSD, etc.

Mac Pro - $2999 dual MnX SoC with up to 64 GB RAM, 8 TB SSD, etc.
- $3500 triple MnX SoC with up to 128 GB RAM, and so on

Is this possible? To be this configurable, from a cheap starting price up to over $5000, based on the SoC and the number of them?

I am very skeptical about Apple changing their price structure this much. The Mac Pro will most likely remain at close to $5000 as it is now. But we might get a smaller new intermediate model.
 
  • Like
Reactions: ader42

playtech1

macrumors 6502a
Oct 10, 2014
695
889
I believe there will be only two chips/platforms: vanilla M-series for low-power consumer Macs and a scalable professional chip for everything else. Bigger Macs will use multiple scalable chips to build more powerful systems.
So, on balance, I think you are probably right.

But... playing devil's advocate for a bit, doing multiple chips in a single system is potentially quite hard. Apple will have multiple SoCs with a lot of stuff doubled up for no obvious benefit.

So I still wonder whether it would be cheaper/easier for Apple to have a 'big M' chip that adds extra cores.

A three chip line-up could have some merits in having one model that is 'desktop' oriented and not engineered with power-constraints in mind.

I think the barrier to this is economics - I have read that it costs $1bn to tape out a 7nm chip (maybe the same for N5?), but if it's just the 'same but different', perhaps it gets to a place that makes sense? The GPU makers seem to have halo models that can't sell in the millions, yet which seem to be profitable, so it must be at least possible.
 

cmaier

Suspended
Original poster
Jul 25, 2007
25,405
33,474
California
So, on balance, I think you are probably right.

But... playing devil's advocate for a bit, doing multiple chips in a single system is potentially quite hard. Apple will have multiple SoCs with a lot of stuff doubled up for no obvious benefit.

So I still wonder whether it would be cheaper/easier for Apple to have a 'big M' chip that adds extra cores.

A three chip line-up could have some merits in having one model that is 'desktop' oriented and not engineered with power-constraints in mind.

I think the barrier to this is economics - I have read that it costs $1bn to tape out a 7nm chip (maybe the same for N5?), but if it's just the 'same but different', perhaps it gets to a place that makes sense? The GPU makers seem to have halo models that can't sell in the millions, yet which seem to be profitable, so it must be at least possible.

It doesn’t cost a billion to tape out a 7nm chip. Not sure what is supposed to be included in that billion dollar figure, but, just, nope.

An inefficient team designing one of these, and taking a year to do it, might have, say, 300 people on it. If each one gets $600,000 in salary and benefits and overhead costs (they don’t), that’s only $180M. In reality, a variation of an existing SoC takes a lot fewer than 300 people a lot less than a year. (And the cost of each employee, on average, is a lot less than $600 grand)
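For what it's worth, the back-of-envelope math above is easy to check (all figures are the deliberately inflated hypotheticals from the post, not real payroll data):

```python
# Hypothetical worst-case design-team cost, using the post's own numbers:
# a deliberately inefficient 300-person team working one full year, at a
# deliberately inflated $600k fully-loaded cost per engineer.
engineers = 300
fully_loaded_cost = 600_000  # salary + benefits + overhead, per year
years = 1

total = engineers * fully_loaded_cost * years
print(f"${total / 1e6:.0f}M")  # -> $180M, well under the quoted $1bn figure
```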
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
But... playing devil's advocate for a bit, doing multiple chips in a single system is potentially quite hard. Apple will have multiple SoCs with a lot of stuff doubled up for no obvious benefit.

So I still wonder whether it would be cheaper/easier for Apple to have a 'big M' chip that adds extra cores.

I didn't mean multiple SoCs, I meant a single system on a package, stitched together from multiple "tiles", aka chiplets. As to redundant components (SSD controller, display engine etc.), they could be either off-loaded to a separate chip (on the same package of course), or just disabled. It's not like they are taking much space. Or maybe, multiple display engines can be used to enable more displays etc.

I think the barrier to this is economics - I have read that it costs $1bn to tape out a 7nm chip (maybe the same for 5N?), but if it's just the 'same but different', perhaps it gets to a place that makes sense? The GPU makers seem to have halo models that can't sell in the millions, yet which seem to be profitable, so it must be at least possible.

The problem with big chips is that they are getting progressively more and more expensive, not just because of the size, but also because it gets very hard to make a defect-free chip. Smaller chips simply have higher yields. At the current level of complexity, it really starts to be a problem. That's why the industry is moving towards scalable technologies, where you make small chips and connect them together to act as one big chip. AMD is a success story in this area, Intel is introducing their scalable tech next year, and it will probably come to GPUs sooner rather than later too.
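The yield argument can be sketched with a toy Poisson defect model (the defect density and die areas below are made-up illustrative numbers, not foundry data):

```python
import math

# Toy Poisson yield model to illustrate why small dies (chiplets) cost
# less per good part than one big monolithic die of the same total area.
def yield_poisson(area_cm2, d0_per_cm2):
    """Fraction of defect-free dies: Y = exp(-D0 * A)."""
    return math.exp(-d0_per_cm2 * area_cm2)

d0 = 0.1                   # defects per cm^2 (hypothetical)
chiplet, mono = 1.2, 4.8   # cm^2: one chiplet vs a monolith 4x the area

y_chiplet = yield_poisson(chiplet, d0)   # ~0.89
y_mono = yield_poisson(mono, d0)         # ~0.62

# Silicon cost per *good* part, assuming cost scales with area and
# chiplets are tested before packaging (known-good-die):
cost_chiplets = 4 * chiplet / y_chiplet  # four good chiplets
cost_mono = mono / y_mono                # one good monolith
print(f"relative cost: chiplets {cost_chiplets:.2f} vs monolith {cost_mono:.2f}")
```

Packaging and the die-to-die interconnect add back some cost, of course, which is why the trade-off only recently tipped in favor of chiplets.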

Apple is the owner of some very advanced chiplet-related patents, they have a focused product line and a lot of experience with economies of scale (just look at M1, it’s a marvelous product). Chiplets would be exactly their style as they would allow them to offer multiple configurations at various performance levels at a very reasonable amortized R&D cost. And of course, the leaks point towards this as well.
 

cmaier

Suspended
Original poster
Jul 25, 2007
25,405
33,474
California
I didn't mean multiple SoCs, I meant a single system on a package, stitched together from multiple "tiles", aka chiplets. As to redundant components (SSD controller, display engine etc.), they could be either off-loaded to a separate chip (on the same package of course), or just disabled. It's not like they are taking much space. Or maybe, multiple display engines can be used to enable more displays etc.



The problem with big chips is that they are getting progressively more and more expensive, not just because of the size, but also because it gets very hard to make a defect-free chip. Smaller chips simply have higher yields. At the current level of complexity, it really starts to be a problem. That's why the industry is moving towards scalable technologies, where you make small chips and connect them together to act as one big chip. AMD is a success story in this area, Intel is introducing their scalable tech next year, and it will probably come to GPUs sooner rather than later too.

Apple is the owner of some very advanced chiplet-related patents, they have a focused product line and a lot of experience with economies of scale (just look at M1, it’s a marvelous product). Chiplets would be exactly their style as they would allow them to offer multiple configurations at various performance levels at a very reasonable amortized R&D cost. And of course, the leaks point towards this as well.

Tell that to cerebras :)
 

nquinn

macrumors 6502a
Jun 25, 2020
829
621
I think the next 10 years will be insane in technology advancement.

Curious why you think that? Hardware improvements look to be slowing down as we hit really small lithography limits and transistor counts.

The biggest changes will most likely come from more feature specific silicon and improved algorithms (ML, cryptography, etc).

A couple of things coming that do look pretty awesome though:

- Wi-fi 7 up to 30Gbps (and wi-fi 6e now even looks capable of near 2Gbps)
- AV1 decoding should hopefully keep cpu utilization low for streaming

I really just want a cool/quiet machine. Soooooo tired of laptops burning my lap and fans spinning up.
 
  • Like
Reactions: JMacHack

cmaier

Suspended
Original poster
Jul 25, 2007
25,405
33,474
California
Curious why you think that? Hardware improvements look to be slowing down as we hit really small lithography limits and transistor counts.

The biggest changes will most likely come from more feature specific silicon and improved algorithms (ML, cryptography, etc).

A couple of things coming that do look pretty awesome though:

- Wi-fi 7 up to 30Gbps (and wi-fi 6e now even looks capable of near 2Gbps)
- AV1 decoding should hopefully keep cpu utilization low for streaming

I really just want a cool/quiet machine. Soooooo tired of laptops burning my lap and fans spinning up.

I agree with him, actually. Compare the period from the late ’70s to ’80s to what happened when the world settled on Wintel. Everything stagnated - everything became faster, better, more capable, but it was all still really the same thing over and over again.
 
  • Like
Reactions: altaic and AdamNC

Kpjoslee

macrumors 6502
Sep 11, 2007
417
269

Geekbench 5 CPU
A15 0% IPC change
Faster from 2.99->3.23GHz
Efficiency is worse at that higher clock
ST 8% faster at 17% more power
MT 15% faster at 28% more power
MT power similar to s888
A15 vs A12X/Z: ST+56%, MT+7%

In low power mode A15 more efficient than A14
(big cores 1.3->1.4GHz at slightly lower power, little cores pull 18.4% higher power than A14 for 14% higher score)
ST 8% faster at 9% less power
MT 12% faster at 2% more power

GFXBench Aztec Offscreen
A15-4GPU 12.5% faster than A14 at 1440p, 16.7% faster at 1080p
A15-5GPU 35/36% faster than A14 (20/16.7% faster than A15-4GPU)
A15-5GPU is between A12X/A12Z
A15-5GPU power similar to A14 ~8W
A15-4GPU pulls 18% less power than A15-5GPU
GPU efficiency therefore 35% better
Low power mode has 3W power cap, 5GPU only 10% faster than 4GPU

Nimian Legends: Bright Ridge
A15-5GPU: 50fps peak 32fps sustain 4.4W 38.3c
A15-4GPU: 40fps peak 30fps sustain
A14: 40fps peak 27fps sustain
All draw around the 4W mark

Genshin Impact High 25c ambient 300 nit
A15-5GPU still cannot maintain 60fps
A15-5GPU 50fps average (low 40fps) after 22mins (vs A14 ~43/32 fps)
A15-4GPU throttles earlier but maintains same 50/40fps
Screen dimming issue sub-300 nit after 7 mins
Like the A14, low power mode sustains better perf in games
Tcase remains mid-40c

Genshin Impact Medium 25c ambient 200 nit
A15-4GPU & 5GPU 60fps locked (A14 starts dropping after 22 mins)

Genshin Impact High 30c ambient 300 nit 5G enabled
A15-4GPU 20fps sustain, low power mode identical
A15-5GPU 20fps sustain, low power mode takes longer to throttle
A14 20fps sustain, low power mode takes longer to throttle

A write-up from the first benchmark video (in Chinese) came out.
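As a side note, the relative figures in that write-up fold into perf-per-watt ratios with a couple of lines of arithmetic (percentages taken from the summary above):

```python
# Convert the quoted A15-vs-A14 percentage deltas into perf-per-watt ratios.
def efficiency_ratio(perf_gain_pct, power_change_pct):
    """perf/watt of the new chip relative to the old, from % deltas."""
    return (1 + perf_gain_pct / 100) / (1 + power_change_pct / 100)

print(f"ST normal:    {efficiency_ratio(8, 17):.3f}")   # ~0.92 -> ~8% worse
print(f"MT normal:    {efficiency_ratio(15, 28):.3f}")  # ~0.90 -> ~10% worse
print(f"ST low power: {efficiency_ratio(8, -9):.3f}")   # ~1.19 -> ~19% better
```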
 

5425642

Cancelled
Jan 19, 2019
983
554
Starting to think that apple won’t release any new MacBooks this year. We don’t even have any date or similar for the event
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Starting to think that apple won’t release any new MacBooks this year. We don’t even have any date or similar for the event
They usually don’t announce any events until two weeks in advance. And last year’s event was end of October or into November iirc.
 
  • Like
Reactions: Tagbert

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
I agree with him, actually. Compare the period from the late ’70s to ’80s to what happened when the world settled on Wintel. Everything stagnated - everything became faster, better, more capable, but it was all still really the same thing over and over again.
I expect interesting things to come out of advanced packaging more than lithography advances. Of course, they'll both contribute. And there are interesting possibilities in GAA MBCFET, but from a layman's perspective the cool structures also look non-trivial to mass produce. Really non-trivial. So maybe holding one's breath is not recommended. :-D
 

Serban55

Suspended
Oct 18, 2020
2,153
4,344
Starting to think that apple won’t release any new MacBooks this year. We don’t even have any date or similar for the event
The date is always announced about 7 days before the event.
So keep an eye out after the first of October, around the 4th or 5th.
 
  • Like
Reactions: dustSafa

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
The problem with big chips is that they are getting progressively more and more expensive, not just because of the size, but also because it gets very hard to make a defect-free chip.
Tell that to cerebras :)

More like tell that to Cerebras's customers.

"... Current customers of CS-2 units include national laboratories, supercomputing centers, pharmacology, biotechnology, the military, and other intelligence services. At cost of several million each, it’s a large bite to take all at once, hence the announcement today. ...

.... however cloud rental costs at Cirrascale will run to $60k a week, or $180k a month,...."


Their WSE chips aren't defect-free. They build in extra redundant components on the wafer so they can work around the majority of the defects. But that just folds back into the chips being more expensive.

Apple would probably prefer to make $60K per day selling six $10K systems, rather than shrink the volume as low as Cerebras does.

As long as Apple can cover the whole laptop line-up with monolithic dies (that use less power), they will probably be willing to grow the die as big as necessary to do that. The desktops that are "left over" at this point (large-screen iMac, Mac Pro, current Mini) could have more "give" on power consumption, so Apple could switch over to "cheaper for us" there. If they can cover the desktop line-up with some small changes to a "big" laptop chip, that can mean more saved costs for Apple.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Hm, what’s the difference to A14 then? Weird…

Yeah, this is basically what you would expect from upclocked Firestorms like in the M1 - in some ways even worse: where did the power savings/extra performance from the new node go? They were expected to be small, but where are they?

I should state I'm not against them taking a year off in core designs - neither Intel nor AMD releases brand-new cores every year (well, Intel is starting to, reportedly because they have such a huge backlog of designs from their manufacturing woes). CPU designers deserve a good work-life balance too.

Well let’s see Andrei’s results when he gets them @anandtech.

Edit: low power mode is more efficient but why are there no savings anywhere else except when it’s downclocked?
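A quick sanity check on the upclock reading: if Avalanche is just Firestorm pushed from 2.99 to 3.23 GHz, classical dynamic-power scaling predicts roughly the power bump that was measured (simple f^2/f^3 assumptions, nothing A15-specific):

```python
# Does the reported A15 power bump look like a plain upclock?
# Dynamic power scales ~ f * V^2; if voltage rises roughly linearly with
# frequency this approaches f^3, with ~f^2 as a gentler assumption.
f_old, f_new = 2.99, 3.23  # GHz: Firestorm (A14/M1) vs A15 big core
ratio = f_new / f_old

print(f"clock ratio:      {ratio:.3f}")     # ~1.08
print(f"power if P ~ f^2: {ratio**2:.3f}")  # ~1.17
print(f"power if P ~ f^3: {ratio**3:.3f}")  # ~1.26
# The reported +17% power for +8% clock sits right on the f^2 line,
# i.e. what you'd expect from the same core simply pushed harder.
```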
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Hm, what’s the difference to A14 then? Weird…

1. Better energy profile when not single thread drag racing for bragging rights.
( vast majority of folks would rather have a phone with enough charge to last a busy day that is not primarily occupied with gaming )

2. Much higher synergy with cameras on most of the systems with A15's inside them.
( e.g., ProRes video content creation. )

3. Better ML/AI. ( which these benchmarks are almost completely opaque to. )

4. Fairly decent chance has higher synergy with VR goggles at the end of 2022 for ones with 5th GPU.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
1. Better energy profile when not single thread drag racing for bragging rights.
( vast majority of folks would rather have a phone with enough charge to last a busy day that is not primarily occupied with gaming )

I think he meant the CPU cores, so it should read: how is Avalanche different from Firestorm? Which means the other points don't really apply, and 1) is only true *in low power mode*, which most people don't keep their phones in constantly. Otherwise, the default is that it goes faster for more power - pretty much the same power increase you'd expect from an upclocked Firestorm core on N5.

Edit: isn’t this GB MT score for the A15 lower than what we were seeing previously? Wasn’t it closer to an 18-21% increase compared to the A14? I mean, I expect variation between models and units and even runs, but even so … that’s not insignificant.
 
Last edited:

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Okay ... am I going crazy or did the A15 geekbench scores get removed?

[Attached screenshot: Screen Shot 2021-09-22 at 1.06.58 PM.png]
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Yeah, this is basically what you would expect from upclocked Firestorms like in the M1 - in some ways even worse: where did the power savings/extra performance from the new node go? They were expected to be small, but where are they?

I should state I'm not against them taking a year off in core designs - neither Intel nor AMD releases brand-new cores every year (well, Intel is starting to, reportedly because they have such a huge backlog of designs from their manufacturing woes). CPU designers deserve a good work-life balance too.

Well let’s see Andrei’s results when he gets them @anandtech.

Edit: low power mode is more efficient, but why are there no savings anywhere else except when it's downclocked?

I meant the CPU cores, sorry. So far it looks just like Firestorm at 3.2GHz. Why claim new cores if you just ship old cores at a higher clock?
 