
leman

macrumors Core
Oct 14, 2008
19,523
19,679
The real "RAM limitations" is that almost all on-package smartphone RAM are modules at or below 8GB in capacity, which is why M1 launched at 8GB (2x4GB) and 16GB (2x8GB).

Production availability of 16GB, 24GB and 32GB modules is improving, so this should allow the "M1X" (and M1/M2, for that matter) to offer 32GB and 64GB capacities, but supply might be very constrained (especially for the 24GB and 32GB modules), which could impact the release of machines.

The larger chips will use more memory modules, however. For example, the iPhone uses one RAM module, the M1 uses two, and the prosumer chip will likely use four LPDDR5 modules.
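As a back-of-envelope illustration, the capacity tiers fall straight out of module count times per-module density. A minimal Swift sketch, where the module counts and die capacities are assumptions taken from this thread rather than confirmed Apple configurations:

```swift
// Hypothetical sketch: total on-package RAM = module count × per-module density.
// Module counts and die capacities are assumptions from this thread,
// not confirmed Apple configurations.
let moduleCounts = [1, 2, 4]              // iPhone, M1, rumored prosumer chip
let moduleSizesGB = [4, 8, 16, 24, 32]    // plausible LPDDR4X/LPDDR5 densities

for count in moduleCounts {
    let tiers = moduleSizesGB.map { "\($0 * count)GB" }.joined(separator: ", ")
    print("\(count) module(s): \(tiers)")
}
// Two modules yield 8GB/16GB (the shipping M1) and, with denser dies, 32GB;
// four modules with 16GB dies is how a 64GB tier falls out.
```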
 

Kpjoslee

macrumors 6502
Sep 11, 2007
417
269
AMD has to sell chips to customers. Apple doesn't. Any increase in cost (which could only be due to yield fall-out) is easily absorbed in the price of the devices Apple sells.

Well, as in scaling all the way up to CPUs for high-end datacenter servers. Rumors also point to Apple going with multiple chiplets on the high-end iMac and Mac Pro, so we will see.
 

dgdosen

macrumors 68030
Dec 13, 2003
2,817
1,463
Seattle
If Apple is building out new chips using the same cores as the existing M1, and then doing a die shrink for the 3nm node, you might say they're adopting a tick-tock strategy... :)
 

Homy

macrumors 68030
Jan 14, 2006
2,510
2,462
Sweden
Sounds promising:

"Apple sees big things ahead for Apple Silicon, both in terms of achieving new designs and perhaps appealing to the most demanding audience of all — gamers. After all, many of the engineers building Apple’s chips are gamers themselves. Apple is now setting a third goal for its M-series processors: Bringing gaming to the Mac.

“Of course, you can imagine the pride of some of the GPU folks and imagining, ‘Hey, wouldn't it be great if it hits a broader set of those really intense gamers,’” said Milet. “It's a natural place for us to be looking, to be working closely with our Metal team and our Developer team. We love the challenge.”"

 

Joe The Dragon

macrumors 65816
Jul 26, 2006
1,031
524
Sounds promising:

"Apple sees big things ahead for Apple Silicon, both in terms of achieving new designs and perhaps appealing to the most demanding audience of all — gamers. After all, many of the engineers building Apple’s chips are gamers themselves. Apple is now setting a third goal for its M-series processors: Bringing gaming to the Mac.

“Of course, you can imagine the pride of some of the GPU folks and imagining, ‘Hey, wouldn't it be great if it hits a broader set of those really intense gamers,’” said Milet. “It's a natural place for us to be looking, to be working closely with our Metal team and our Developer team. We love the challenge.”"

Games locked to the App Store with:
NO MODS
NO USER MAPS
NO USER CONTENT
a 30% cut of all DLC?
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,632
Another rumour for an August launch. Not sure how they will justify an M1X with a one-year-old core architecture.
I know how they’ll justify it.

"This absolutely STOMPS your current MacBook Pro, regardless of what your Intel specs are." All any Mac has to be is "faster than the previous one".
 
  • Like
Reactions: ader42

UBS28

macrumors 68030
Oct 2, 2012
2,893
2,340
Waiting more than a year (because the 14" and 16" MBP will not be readily available) for the M1X, which could have been released in Q1 2021, would be extremely disappointing from Apple.

So basically the latest iPhone with the A15 can then outperform any Mac in single-core performance when running at the same clock speed.

I am not buying the M1X rumour, Apple doesn’t need 1 year just to add a few cores to the M1. If Apple comes with the 14” and 16” MBP this late, it will probably be the M2X.

I think I will have to reconsider my purchase plan if Apple does show up with the M1X.
 

thenewperson

macrumors 6502a
Mar 27, 2011
992
912
Waiting more than a year (because the 14" and 16" MBP will not be readily available) for the M1X, which could have been released in Q1 2021, would be extremely disappointing from Apple.

So basically the latest iPhone with the A15 can then outperform any Mac in single-core performance when running at the same clock speed.

I am not buying the M1X rumour, Apple doesn’t need 1 year just to add a few cores to the M1. If Apple comes with the 14” and 16” MBP this late, it will probably be the M2X.
Based on all information regarding the delay, the chips were not the reason for it. It's mainly been down to the displays.
 
  • Like
Reactions: CWallace and ader42

UBS28

macrumors 68030
Oct 2, 2012
2,893
2,340
Based on all information regarding the delay, the chips were not the reason for it. It's mainly been down to the displays.

I see. The mini-LED display on my M1 iPad Pro is actually very nice. So I guess it might be worth moving my purchase plan to 2022 if this is the show-stopper.
 

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
At a lower cost. If cost wasn't the issue, then AMD wouldn't have gone chiplets and Intel wouldn't be hell-bent on working on Foveros for their future processors.
"Cost" is a very flexible word.
Chiplets have one success story: AMD.
Using chiplets allowed them to use a single design to address all x86 markets from mid-range desktop to the most powerful data center processors. They initially scaled from half a CCD to eight (now twelve).
At the time this decision was made, AMD was severely constrained in financials and engineering resources, and saved immensely in design time and cost by going this route, while still being able to compete for profit margin over an extremely wide range and to offer a multitude of tiers for all these markets (which the beancounters love).
This was the "cost" it saved - engineering resources and time to market.
(It had fringe benefits for AMD as well; for instance, it allowed them to keep using GlobalFoundries for certain parts of the processor, helping them fulfill their wafer purchase obligations, while still taking advantage of TSMC's process advantage for the CCDs themselves.)
Since power draw came up before, note where AMD still uses this - from mid/high-end desktop processors and up. All their laptop to mid-range desktop processors are monolithic today, and that part of their x86 offerings has expanded upwards in market range.

None of the above applies to Apple. They have no interest in being able to market a very wide range of server CPU:s as AMD and Intel. Given their formfactors they only need two or possibly three processors, and for everything but the Mac Pro (if it continues in its current form), power draw is a prime design consideration.

When the target silicon die area becomes really large, chiplets do offer yield advantages, but they carry costs in power draw, performance (such as latencies within vs. between CCXs and between CCDs), and packaging cost and complexity (which introduces a new point of failure).

We can safely assume Apple's engineers are in a much better position than any of us to balance these factors, but personally I can't see that the approach makes a lot of sense for their products.
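To make the yield argument concrete, here is a minimal Swift sketch of the classic Poisson yield model that argument rests on; the defect density and die areas are made-up illustrative numbers, not real foundry figures:

```swift
import Foundation

// Poisson yield model: yield ≈ exp(-defectDensity × dieArea).
// All numbers below are illustrative assumptions.
func dieYield(defectsPerCm2 d: Double, areaCm2 a: Double) -> Double {
    exp(-d * a)
}

let d = 0.1                        // assumed defects per cm²
let monolithicArea = 6.0           // one large die, cm²
let chipletCount = 4.0
let chipletArea = monolithicArea / chipletCount

let yBig = dieYield(defectsPerCm2: d, areaCm2: monolithicArea)   // ≈ 55%
let ySmall = dieYield(defectsPerCm2: d, areaCm2: chipletArea)    // ≈ 86%

// Silicon cost per good unit scales with area / yield. The chiplet route
// discards only the small dies that fail, at the price of packaging cost,
// power, and cross-die latency.
let costMonolithic = monolithicArea / yBig                       // ≈ 10.9
let costChiplets = chipletCount * chipletArea / ySmall           // ≈ 7.0 + packaging

print(String(format: "yield: %.0f%% vs %.0f%%; relative silicon cost: %.1f vs %.1f",
             yBig * 100, ySmall * 100, costMonolithic, costChiplets))
```

Under these assumptions the four small dies waste noticeably less silicon than the one big die; packaging cost, power draw, and cross-die latency have to pay that difference back.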
 

leman

macrumors Core
Oct 14, 2008
19,523
19,679
We can safely assume Apple's engineers are in a much better position than any of us to balance these factors, but personally I can't see that the approach makes a lot of sense for their products.

At the same time, Apple has filed patents on chiplet-based technology that is vastly superior to what AMD is doing. AMD's approach is relatively simple - they connect chips containing CPU cores to an interface/cache chip using traditional substrate routing. Apple's patents instead mention connecting individual chips directly using an interface bar.
 

Piggie

macrumors G3
Feb 23, 2010
9,192
4,154
Putting aside any brand loyalty feeling for the moment.

I've been thinking that the only real sensible way for the PC market to evolve is for Microsoft and the chip makers to get together and actually design new CPUs that are built from the ground up, with special hardware internally to basically run Windows.
All the code that Windows needs to run and accelerate itself would be hardware built into the CPUs themselves.

This is basically what Apple is doing, and it's an excellent idea.
Do you think it will ever dawn on Microsoft that this is what they are going to need to do?

I know Microsoft themselves, like Apple, won't physically build the chips, but they, along with other companies, could basically design "Windows CPUs".

Do you feel this is ever going to happen?
 

leman

macrumors Core
Oct 14, 2008
19,523
19,679
I've been thinking that the only real sensible way for the PC market to evolve is for Microsoft and the chip makers to get together and actually design new CPUs that are built from the ground up, with special hardware internally to basically run Windows.
All the code that Windows needs to run and accelerate itself would be hardware built into the CPUs themselves.

I feel that people tend to completely misunderstand how these things work. There is no reasonable way to build "hardware to run Windows". You speak about hardware-accelerating an OS - what does that even mean? It definitely makes sense to design the hardware with software applications in mind, and it makes sense to introduce new features that offer more value to the user. But what does that have to do with "hardware to run an OS"?


This is basically what Apple is doing and it's an excellent idea.

That's not at all what Apple is doing. Yes, Apple has introduced some hardware optimizations to execute common software patterns more efficiently. But there is no hardware in Apple CPUs to "accelerate macOS"; these optimizations work in any OS and with any application that uses similar abstractions.

Do you think it will ever dawn on Microsoft that this is what they are going to need to do? I know Microsoft themselves, like Apple, won't physically build the chips, but they, along with other companies, could basically design "Windows CPUs".

Windows is traditionally an OS that runs on common-denominator hardware. You choose components that support a certain standard, and you can use Windows. This is the cornerstone of their business. The way "acceleration" works is that Microsoft designs programming interfaces, and third-party hardware makers build hardware that is compatible with those interfaces. It's a good model, since it gives the user choice while allowing healthy competition, but there has to be a certain agreement between the main players (e.g. certain limitations in DirectX exist because either Nvidia or AMD GPUs simply work that way).

Apple has much more freedom to innovate because they cherry-pick the hardware. And now that they have completely abandoned standard components, they can go completely proprietary. And since they control the software stack as well, they can rapidly deploy new features and encourage developers to use them. Microsoft does not have this freedom - if they drop hardware compatibility they only anger users, and if they introduce optional features, developers won't rush to adopt them because only a small portion of hardware supports them anyway.
 

michalm

macrumors member
Apr 17, 2014
72
66
With those chiplet designs that may come, can those be power gated when not in use?
 

AgentMcGeek

macrumors 6502
Jan 18, 2016
374
305
London, UK
If Apple is building out new chips using the same cores as the existing M1, and then doing a die shrink for the 3nm node, you might say they're adopting a tick-tock strategy... :)
Well, they're limited by TSMC tech and availability. 3nm is harder to produce than expected (Samsung is struggling as well), so they've had to add a 4nm node (N4) between N5P and N3.

And yes, TSMC has had a tick-tock strategy for a while with its P variants.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Kind of ironic that when NetBurst hit the wall with thermals, the Core microarchitecture "saved the day" by performing as fast but running so much cooler.

Now Core has hit the wall with thermals,

You are deeply confusing marketing names with microarchitecture names. "Core" as a microarchitecture was left behind long ago. The mainstream CPU products are called "Core i__", but not because they use the 'Core' microarchitecture.

Intel is managing to stay within a couple of percentage points of AMD, who is on a substantively better process, in single- to moderate-core-count performance, even with fab densities that are lower. That isn't a failure of microarchitecture design. That is far more a matter of fabrication production skills. Intel doesn't have a 'design' problem, they have a 'making' problem (more of a hubris problem of trying to change too many factors in the fab process at the same time).


yet Intel does not appear to have anything to save them again.

That is just plain nonsense. Intel just can't use the "we are 2-3 years ahead on fab process" playbook like they did a decade ago. They also need to stop trying to go 'a bridge too far' to jump back into that old state. Incrementing the density increases at a manageable pace will put them back on track. They would need some kind of moonshot process improvement to jump back to a shared bleeding edge in 1-3 years, but they don't need to do that to survive. Nor do they need to hard-couple that to the main subset of their primary product line.





 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
With those chiplet designs that may come, can those be power gated when not in use?

Apple already power-gates subsystems in their monolithic designs. Not sure how they would "lose" that if they went to chiplets working off the same design foundations.

Some parts of the chiplets likely won't turn off, though. The interconnect between the chiplets probably won't. If Apple spreads the memory controllers over the chiplets, those won't either. A physically split but logically unified L3 cache... same thing... probably would not shut down. In short, if the other chiplet(s) are using services out of another chiplet, then those services are still in use and not really eligible to be shut down.

Chiplets will consume more power in aggregate. Apple will probably mitigate that by pushing them as close together as possible. In order to get them closer they will need to manage the power consumption (so no, they aren't going to throw the power-gating wins they already have completely out the window; they actually need them to reduce the physical gaps).
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
One worry I have is that Alder Lake is supposed to be a good tick faster than Tiger Lake, so if the prosumer Apple Silicon is not faster than the M1 in single-core, 2022 Intel multimedia laptops might take the lead. For multi-core performance and the like, it should be obvious that Apple will have a large advantage regardless.

However, is general multimedia processing gated by a single thread?

Is decoding RED RAW gated by a single core? Is a DAW with 5-20 instruments a single-threaded process? Is decoding/encoding 4 video streams single-threaded?

Mainstream video codecs (h.264, h.265) run through fixed-function logic; that's not particularly a 'general instruction stream core' thing. Alder Lake has AV1 decode coverage (not sure the M1 does, so that would be a gap, but not really a core function unit clock speed gap).


The notion that Apple is going to "lose out" on their major, modern multimedia apps (Logic, Final Cut Pro, Resolve, etc.) is relatively weak. Their relatively outsized L3 cache and memory bandwidth boost (to handle the higher-performance iGPU) substantively offset having a lower uber turbo-boost clock speed on the vast majority of workloads.

IMHO, I doubt Apple is going to chuck their core base design methodology to chase some relatively small corner cases. Adobe has some ancient infrastructure apps that are constipated on single-threaded critical sections, but I don't see Apple bending over backwards to make Adobe CS6 go faster.
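If you want to verify the fixed-function point on your own Mac, VideoToolbox will report whether a codec decodes in dedicated hardware rather than on the general-purpose cores. A small Swift sketch; the codec list is just an illustrative pick:

```swift
import VideoToolbox

// VTIsHardwareDecodeSupported reports whether decode for a given codec
// runs in dedicated hardware on this machine.
let codecs: [(name: String, type: CMVideoCodecType)] = [
    ("H.264", kCMVideoCodecType_H264),
    ("HEVC (H.265)", kCMVideoCodecType_HEVC),
    ("ProRes 422", kCMVideoCodecType_AppleProRes422),
]

for codec in codecs {
    let hw = VTIsHardwareDecodeSupported(codec.type)
    print("\(codec.name): hardware decode \(hw ? "yes" : "no")")
}
```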
 
  • Like
Reactions: PeterJP

leman

macrumors Core
Oct 14, 2008
19,523
19,679
However, is general multimedia processing gated by a single thread?

Is decoding RED RAW gated by a single core? Is a DAW with 5-20 instruments a single-threaded process? Is decoding/encoding 4 video streams single-threaded?

Mainstream video codecs (h.264, h.265) run through fixed-function logic; that's not particularly a 'general instruction stream core' thing. Alder Lake has AV1 decode coverage (not sure the M1 does, so that would be a gap, but not really a core function unit clock speed gap).


The notion that Apple is going to "lose out" on their major, modern multimedia apps (Logic, Final Cut Pro, Resolve, etc.) is relatively weak. Their relatively outsized L3 cache and memory bandwidth boost (to handle the higher-performance iGPU) substantively offset having a lower uber turbo-boost clock speed on the vast majority of workloads.

IMHO, I doubt Apple is going to chuck their core base design methodology to chase some relatively small corner cases. Adobe has some ancient infrastructure apps that are constipated on single-threaded critical sections, but I don't see Apple bending over backwards to make Adobe CS6 go faster.

Video content creation is only one of many uses for a powerful laptop. Software development, data analysis, academia - these are all examples where strong single-core performance is critical. And it's obvious that Apple recognizes this, since they have always focused on delivering a good balance between single-core and multi-core performance.
 
  • Like
Reactions: dustSafa

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I disagree. Apple has long made a best effort to deliver the fastest possible CPU in an ultra-compact form factor. Their new entry-level CPU core competes toe to toe with much more expensive fastest-available desktop designs. As I've said before, I see no world where Apple will be satisfied holding second place.

Regarding GPUs, yes, Apple will be ok with delivering mid-range to lower-high-end performance, but they will do so in a form factor and with power consumption others can only dream about.

I think you are mutating "fastest" into a monomaniacal focus on solely single-threaded performance. If in its laptop class the M1X is 80% faster on iGPU, 30% faster on multi-core, and 5% slower on single-thread, then that isn't a disaster from a balanced, overall system metrics perspective. That is winning two out of three by a large margin and coming in incrementally under on just one of the three.

Apple's single-thread progress has mostly fallen out of their perf/watt push as a side effect. I think you have the cart in front of the horse. Apple is highly unlikely to toss aside and de-prioritize perf/watt just to get better single-thread scores.

If the M1X has more cores it should also have a larger unified L3 cache. If you put the whole system into a benchmarking mode where the added performance cores are largely pushed into a sleep state, so that a single P core can leverage a 20-100% larger L3 cache just for its own data stream, then there isn't a huge drive to push clock speed far higher (or change the microarchitecture). Pretty sure I saw one M1 versus A14 floorplan analysis that said the M1's L3 was smaller than the A14's (partially offset on the M1 by going to twice as many memory controllers). If true, there is an extremely good chance we are not seeing the absolute maximum single-thread performance out of the core design already present in the M1 (there are probably a decent, but not overly large, number of cache-miss pipeline stalls that put bubbles into the instruction stream). Higher effective IPC can offset brute-force clock cranking.

There is a pretty good chance the M1 is die-size constrained more than the A14 is (otherwise it wouldn't have had to take a hit on L3 size, at least relative size); it had to fit into the iPad Pro. A bigger M1-generation die probably won't be on as tight a leash. If minimally shooting for a MBP 14", Mac mini, and MBP 16", those are much bigger board space targets.

"Tortise and the hare" . Slow and steady cores that get work done on every single cycle can get higher throughput on non micro-short timeline benchmarks than cores that out race the coupled memory hierarchy on a regular basis ( 1,000 cycles of work , 400 cycles of effective no-ops waiting for memory system to 'catch up'. 1,000 cycles of work , rinse and repeat). It is like sending a top fuel dragster down an Indy car / NASCAR oval track. burns rubber down the straights and performs badly in the turns.

Apple is extremely likely not switching gears to focus on top fuel dragsters in the M-series pro lineup. Attaching a better (more cache hits and higher bandwidth), power-efficient memory subsystem to the cores they have is a far more likely move. That will actually help in multi-core contexts as well (which includes the iGPU as well as concurrently active P cores). That multi-dimensional beneficial impact is the more likely path Apple's designs are going to follow. It is more efficient overall, and efficiency is one of the primary design objectives.

If bumping the L3 size gets them a 3-6% increase in cache hits, then a 2-5% increase in clock speed would effectively take advantage of that. Apple takes a modest power increase from running the larger L3 cache, but needed it anyway for the multiple-core stuff (which is also going to take more space and power).

On the next half or full node fab process improvement Apple can take a clock speed bump. If those come on a regular basis then they can take them as they come along without any fundamentally radical change to the commitment to perf/watt as the number one design objective.
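A toy model makes the cache-vs-clock equivalence concrete: treat throughput as clock divided by effective CPI, where misses add stall cycles. Every number in this Swift sketch is an illustrative assumption, chosen only to show the shape of the tradeoff:

```swift
// Throughput ∝ clock / effective CPI; cache misses add stall cycles.
// All parameters below are made-up illustrative assumptions.
func throughputGIPS(clockGHz: Double, baseCPI: Double,
                    missRate: Double, missPenaltyCycles: Double) -> Double {
    clockGHz / (baseCPI + missRate * missPenaltyCycles)
}

let baseline = throughputGIPS(clockGHz: 3.2, baseCPI: 0.5,
                              missRate: 0.020, missPenaltyCycles: 100)
// Option A: ~3% clock bump, same cache.
let clockBump = throughputGIPS(clockGHz: 3.3, baseCPI: 0.5,
                               missRate: 0.020, missPenaltyCycles: 100)
// Option B: same clock, bigger L3 trims the miss rate from 2.0% to 1.9%.
let cacheBump = throughputGIPS(clockGHz: 3.2, baseCPI: 0.5,
                               missRate: 0.019, missPenaltyCycles: 100)

print(clockBump / baseline, cacheBump / baseline)   // ≈ 1.03 and ≈ 1.04
// A few percent fewer pipeline bubbles buys roughly what a few percent of
// clock does, without the power cost of chasing frequency.
```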


Apple having to throw perf/watt out the window because Intel and/or AMD threw it out the window? Probably not going to happen. Especially for laptop SoCs (which are likely all Apple is primarily interested in, because that is primarily all they sell in the Mac product space).


P.S. The other part of "Apple sells the fastest Intel laptop CPUs" that is being missed here is that it isn't just because they are "fast". Another very substantive factor is that they are also the most expensive. Apple slaps another 15-20% tax on top of Intel's pricing. So:

A $300 Intel laptop CPU is a $345-360 contributor to system cost ($45-60 for Apple).
A $500 Intel laptop CPU is a $575-600 contributor ($75-100 for Apple).

Apple doesn't pick the lowest-cost Intel CPUs, not because they are slower, but far more because they don't want to sell laptops below a certain self-selected price threshold.

That whole "fastest Intel single thread speeds" is in part a spoon full of marketing sugar to help the Apple tax go down.

For the M1, Apple has largely left system prices the same while the Intel mark-up disappeared. They have also dropped the higher-profit BTO choices. That wasn't to take in less money overall; they are just spreading the higher-margin take across the whole product line instead of skewing it to the top-end BTO systems.

Intel offered overclockable CPUs in the desktop product mix, and Apple regularly skipped those for fixed-clock versions and never officially supported overclocking where they did happen to use those SKUs (used only because there was no other good option to push prices higher).

The notion that the MBP 16" system design started with the mobile i9 and that Apple then wrapped the whole overall system design around it is belied by the obvious mismatches between the chassis design, power supply, and other major system components. It is the most expensive option. It isn't the best-fit option.
[ And frankly, if Apple were OCD about max clock and max thermal loads during engineering acceptance tests, they would have caught the fact that the clock limiter settings were wrong before they shipped tens of thousands of units out the door. ]

The much-bigger-die M1 follow-on is also probably going to push higher prices out to all MBP 16" users, not just the upper BTO tiers. Get the extra BTO money without offering as many configuration options.
 
  • Like
Reactions: AgentMcGeek