There were mentions of a new GPU revision with the codename “Lifuka”. Also, I doubt very much that Apple will still be using the same A14-based tech a year later. The delays most likely mean that we will see a substantial hardware revision. On the GPU side, I expect at least hardware ray tracing and larger register files/caches, possibly also doubled FP16 throughput.
Not disagreeing with what you wrote here about the likely hardware changes, but:

1) Gurman wrote the delays were due to the screen (mini-LED production), not the silicon:
"Gurman notes that Apple had originally planned to release the new MacBook Pros earlier this year, but continued complications around mini-LED production delayed their launch."

[This wouldn't explain the delay in the Mini, unless Apple decided to launch the new Mini+MBP's at the same time.]

2) Not sure, but I had the impression that Lifuka was the GPU for the Mac Pro, not the MBP. Is that not the case?
 
Not disagreeing with what you wrote here about the likely hardware changes, but:

1) Gurman wrote the delays were due to the screen (mini-LED production), not the silicon:
"Gurman notes that Apple had originally planned to release the new MacBook Pros earlier this year, but continued complications around mini-LED production delayed their launch."

[This wouldn't explain the delay in the Mini, unless Apple decided to launch the new Mini+MBP's at the same time.]

2) Not sure, but I had the impression that Lifuka was the GPU for the Mac Pro, not the MBP. Is that not the case?

The Mac Pro is a low-volume product, so developing a GPU only for it probably makes little economic sense. Besides, Apple’s “secret sauce” is tight integration of heterogeneous processors within the same memory hierarchy. The best way to get there while achieving economy of scale is using chiplets. That’s where the industry is moving anyway. AMD has had a simpler version of chiplets for a while, Intel is basing its next-gen Xeons on more advanced “tiles”, and Apple is known to have had advanced research in this area for years. It all comes together when you look at rumored core counts for upcoming Mac Pros - these are likely the 14/16” MacBook chips stitched together into one SoC using fast bridges.
 
The Mac Pro is a low-volume product, so developing a GPU only for it probably makes little economic sense. Besides, Apple’s “secret sauce” is tight integration of heterogeneous processors within the same memory hierarchy. The best way to get there while achieving economy of scale is using chiplets. That’s where the industry is moving anyway. AMD has had a simpler version of chiplets for a while, Intel is basing its next-gen Xeons on more advanced “tiles”, and Apple is known to have had advanced research in this area for years. It all comes together when you look at rumored core counts for upcoming Mac Pros - these are likely the 14/16” MacBook chips stitched together into one SoC using fast bridges.
What about the side-band ECC communication channels the Mac Pro will (presumably) need between the RAM and CPU? Can Apple simply add this to stitched-together MBP chips?

And what if Apple wants to offer higher-end HBM memory in the Mac Pro? To what extent can it make use of the existing MBP chips?
 
What about the side-band ECC communication channels the Mac Pro will (presumably) need between the RAM and CPU? Can Apple simply add this to stitched-together MBP chips?

And what if Apple wants to offer higher-end HBM memory in the Mac Pro? To what extent can it make use of the existing MBP chips?

The chiplet memory controllers would need to support all these features, but I don’t see much of a problem here. Support for multiple memory standards is a common thing. For example, AMD’s Vega supports both GDDR5 and HBM2.
 
What about the side-band ECC communication channels the Mac Pro will (presumably) need between the RAM and CPU? Can Apple simply add this to stitched-together MBP chips?

And what if Apple wants to offer higher-end HBM memory in the Mac Pro? To what extent can it make use of the existing MBP chips?
Apple's been in the custom silicon game for over a decade, and the Apple Silicon Mac releases are just the latest incarnations.

Heck, their phones have custom NVMe controllers - and from all appearances it looks like Apple's been making preparations to leave Intel since they requested ARM put together ARMv8 and the 64 bit Instruction Set Architecture (AArch64) used in the A7 and the iPhone 5s released in 2013 (so planning would've had to have started around 2011).

Here's my standard blurb on the Apple Silicon Team for those unfamiliar with the evolution of Apple Silicon:

M1 has 4 high performance Firestorm and 4 high efficiency Icestorm cores - it was designed for the low-end MacBook Air (fanless) and 13" MacBook Pro models as part of their annual spec bump.

Rumor has it the M1X slated for release real soon now will have 8 Firestorm cores (depending on binning) and 2 Icestorm cores and will be targeted at machines like the 14" and 16" MacBook Pros and possibly the high-end Mac Mini.

In 2008, Apple acquired PA Semi and worked with cash-strapped Intrinsity and Samsung to produce a FastCore Cortex-A8; the frenemies famously split, with Apple using that IP and Imagination's PowerVR to create the A4 while Samsung took their tech to produce the Exynos 3. Apple acquired Intrinsity and continued to hire engineering talent from IBM's Cell and XCPU design teams, and hired Johny Srouji, who had worked on IBM's POWER7 line, to direct the effort.

Apple continued this divergence from standard ARM designs, nurturing and building their Silicon Design Team (capitalized out of respect) for a decade, ignoring stock ARM cores in favor of their own architecture and improving and optimizing it year after year.

Whereas other ARM processor makers like Qualcomm and Samsung now pretty much use standard ARM-designed cores, Apple has their own designs and architecture and has greatly expanded their processor acumen to the point where the Firestorm cores in the A14 and M1 are the most sophisticated processors in the world: an eight-wide design with a 690-entry instruction execution queue, a massive reorder buffer, and the arithmetic units to back it up - which means its out-of-order execution unit can execute up to eight instructions simultaneously.

x86 processor makers are hampered by the CISC design and a variable instruction length. This means that at most they can produce a three or four wide design for an instruction subset, and even for that the decoder would have to be fiendishly clever, as it would have to guess where one instruction ended and the next began.
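
To make the decode problem concrete, here's a toy sketch (my own illustration, not modelled on any real decoder) of why fixed-width instructions are so much friendlier to a wide front end: with 4-byte instructions every boundary is known up front, while a variable-length encoding forces the decoder to walk the byte stream to find where each instruction starts.

```swift
// Toy illustration only - not real AArch64 or x86 decoding.

// Fixed-width ISA: the start of instruction i is simply i * 4, so a wide decoder
// can carve up the next N instructions independently (i.e. in parallel).
func fixedWidthBoundaries(count: Int) -> [Int] {
    return (0..<count).map { $0 * 4 }
}

// Variable-length ISA: each instruction's length depends on its leading byte(s),
// so boundary i+1 isn't known until instruction i has been at least partially
// decoded - an inherently serial scan that real decoders have to speculate around.
func variableLengthBoundaries(stream: [UInt8], count: Int,
                              lengthOf: (UInt8) -> Int) -> [Int] {
    var offsets: [Int] = []
    var cursor = 0
    while offsets.count < count && cursor < stream.count {
        offsets.append(cursor)
        cursor += lengthOf(stream[cursor])  // must inspect instruction i before locating i+1
    }
    return offsets
}
```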

There's a problem shared by x86-64 processor makers and Windows - they never met an instruction or feature they didn't like. What happens then is you get a build-up of crud that no one uses, but that still consumes energy and engineering time to keep working.

AMD can get better single-core speed by pushing up clocks (and dealing with the exponentially increased heat, though chiplets are probably much harder to cool), and Intel by reducing the number of cores (the top of the 10-core, 20-thread 10900K actually had to be shaved to achieve enough surface area to cool the chip, so at 14nm it had reached the limits of physics). Both run so hot they are soon in danger of running into Moore's Wall.

Apple OTOH ruthlessly pares underused or unoptimizable features.

When Apple determined that ARMv7 (32 bit ARM) was unoptimizable, they wrote it out of iOS, and removed those logic blocks from their CPUs in two years, repurposing the silicon real estate for more productive things. Intel, AMD, and yes even Qualcomm couldn't do that in a decade.

Apple continues that with everything - not enough people using Force Touch - deprecate it, remove it from the hardware, and replace it with Haptic Touch. Gone.

Here's another secret of efficiency - make it a goal. Two years ago on the A13 Bionic used in the iPhone 11 line, the Apple Silicon Team introduced hundreds of voltage domains so they could turn off parts of the chip not in use. Following their annual cadence, they increased the speed of the Lightning high-performance and Thunder high-efficiency cores by 20% despite staying on the same 7nm process. As an aside, they made matrix multiplication and division (used in machine learning) six times faster.
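
As an aside to the aside: developers don't poke that matrix hardware directly - as far as anyone outside Apple can tell, you reach it through Accelerate or Core ML and let the framework route the work to whatever the chip provides. A minimal sketch using nothing beyond the public vDSP API:

```swift
import Accelerate

// Multiply an m x p matrix A by a p x n matrix B into an m x n matrix C.
func matmul(_ a: [Float], _ b: [Float], m: Int, n: Int, p: Int) -> [Float] {
    var c = [Float](repeating: 0, count: m * n)
    vDSP_mmul(a, 1,                      // A (m x p), unit stride
              b, 1,                      // B (p x n)
              &c, 1,                     // C (m x n), output
              vDSP_Length(m), vDSP_Length(n), vDSP_Length(p))
    return c
}

// Quick 2x2 check: [[1,2],[3,4]] * [[5,6],[7,8]] == [[19,22],[43,50]]
let c = matmul([1, 2, 3, 4], [5, 6, 7, 8], m: 2, n: 2, p: 2)
```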

Last year they increased the speed of the Firestorm high-performance and Icestorm high-efficiency cores by another 20% while moving from a 7nm to a 5nm process. That's a hell of a compounding rate and explains how they got to where they are. Rumor has it they've bought all the 3nm capacity from TSMC for the A16 (and probably M3) next year.

Wintel fans would deny the efficacy of the A series processors and say they were mobile chips, as if they used slower silicon with wheels on the bottom or more sluggish electrons.

What they actually were was high-efficiency chips, passively cooled and living in a glass sandwich. Remove them from that environment, let them breathe more easily, boost the clocks a tad, and they become a raging beast.

People say that the other processor makers will catch up in a couple of years, but that's really tough to see. Apple Silicon is the culmination of a decade of intense processor design financed by a company with very deep pockets - who is fully cognizant of the competitive advantage Apple Silicon affords. Here's an article in Anandtech comparing the Firestorm cores to the competing ARM and x86 cores. It's very readable for an article of its ilk:


Of course these are the Firestorm cores used in the A14, which are not as performant as the cores in the M1 due to the M1's higher 3.2 GHz clock speed.
 
Apple's been in the custom silicon game for over a decade, and the Apple Silicon Mac releases are just the latest incarnations.

Heck, their phones have custom NVMe controllers - and from all appearances it looks like Apple's been making preparations to leave Intel since they requested ARM put together ARMv8 and the 64 bit Instruction Set Architecture (AArch64) used in the A7 and the iPhone 5s released in 2013 (so planning would've had to have started around 2011).

Here's my standard blurb on the Apple Silicon Team for those unfamiliar with the evolution of Apple Silicon:

M1 has 4 high performance Firestorm and 4 high efficiency Icestorm cores - it was designed for the low-end MacBook Air (fanless) and 13" MacBook Pro models as part of their annual spec bump.

Rumor has it the M1X slated for release real soon now will have 8 Firestorm cores (depending on binning) and 2 Icestorm cores and will be targeted at machines like the 14" and 16" MacBook Pros and possibly the high-end Mac Mini.

In 2008, Apple acquired PA Semi and worked with cash-strapped Intrinsity and Samsung to produce a FastCore Cortex-A8; the frenemies famously split, with Apple using that IP and Imagination's PowerVR to create the A4 while Samsung took their tech to produce the Exynos 3. Apple acquired Intrinsity and continued to hire engineering talent from IBM's Cell and XCPU design teams, and hired Johny Srouji, who had worked on IBM's POWER7 line, to direct the effort.

Apple continued this divergence from standard ARM designs, nurturing and building their Silicon Design Team (capitalized out of respect) for a decade, ignoring stock ARM cores in favor of their own architecture and improving and optimizing it year after year.

Whereas other ARM processor makers like Qualcomm and Samsung now pretty much use standard ARM-designed cores, Apple has their own designs and architecture and has greatly expanded their processor acumen to the point where the Firestorm cores in the A14 and M1 are the most sophisticated processors in the world: an eight-wide design with a 690-entry instruction execution queue, a massive reorder buffer, and the arithmetic units to back it up - which means its out-of-order execution unit can execute up to eight instructions simultaneously.

x86 processor makers are hampered by the CISC design and a variable instruction length. This means that at most they can produce a three or four wide design for an instruction subset, and even for that the decoder would have to be fiendishly clever, as it would have to guess where one instruction ended and the next began.

There's a problem shared by x86-64 processor makers and Windows - they never met an instruction or feature they didn't like. What happens then is you get a build-up of crud that no one uses, but that still consumes energy and engineering time to keep working.

AMD can get better single-core speed by pushing up clocks (and dealing with the exponentially increased heat, though chiplets are probably much harder to cool), and Intel by reducing the number of cores (the top of the 10-core, 20-thread 10900K actually had to be shaved to achieve enough surface area to cool the chip, so at 14nm it had reached the limits of physics). Both run so hot they are soon in danger of running into Moore's Wall.

Apple OTOH ruthlessly pares underused or unoptimizable features.

When Apple determined that ARMv7 (32 bit ARM) was unoptimizable, they wrote it out of iOS, and removed those logic blocks from their CPUs in two years, repurposing the silicon real estate for more productive things. Intel, AMD, and yes even Qualcomm couldn't do that in a decade.

Apple continues that with everything - not enough people using Force Touch - deprecate it, remove it from the hardware, and replace it with Haptic Touch. Gone.

Here's another secret of efficiency - make it a goal. Two years ago on the A13 Bionic used in the iPhone 11 line, the Apple Silicon Team introduced hundreds of voltage domains so they could turn off parts of the chip not in use. Following their annual cadence, they increased the speed of the Lightning high-performance and Thunder high-efficiency cores by 20% despite staying on the same 7nm process. As an aside, they made matrix multiplication and division (used in machine learning) six times faster.

Last year they increased the speed of the Firestorm high-performance and Icestorm high-efficiency cores by another 20% while moving from a 7nm to a 5nm process. That's a hell of a compounding rate and explains how they got to where they are. Rumor has it they've bought all the 3nm capacity from TSMC for the A16 (and probably M3) next year.

Wintel fans would deny the efficacy of the A series processors and say they were mobile chips, as if they used slower silicon with wheels on the bottom or more sluggish electrons.

What they actually were was high-efficiency chips, passively cooled and living in a glass sandwich. Remove them from that environment, let them breathe more easily, boost the clocks a tad, and they become a raging beast.

People say that the other processor makers will catch up in a couple of years, but that's really tough to see. Apple Silicon is the culmination of a decade of intense processor design financed by a company with very deep pockets - who is fully cognizant of the competitive advantage Apple Silicon affords. Here's an article in Anandtech comparing the Firestorm cores to the competing ARM and x86 cores. It's very readable for an article of its ilk:


Of course these are the Firestorm cores used in the A14, which are not as performant as the cores in the M1 due to the M1's higher 3.2 GHz clock speed.
Sorry, but with a quick scan I wasn't able to find anything in here that responds directly to either of the two questions I asked. It might have been better to post it separately rather than as a reply to my post. Also, there are a lot of claims in here -- to make this more credible I'd recommend adding supporting references.
 
Sorry, but with a quick scan I wasn't able to find anything in here that responds directly to either of the two questions I asked. It might have been better to post it separately rather than as a reply to my post. Also, there are a lot of claims in here -- to make this more credible I'd recommend adding supporting references.
You make it sound like Apple has cobbled together their SoCs and is blowing away everyone else by accident, while the Apple Silicon Team is probably, at this moment, the premier silicon design team in the world.

Find your own references - it's all public record.

And to answer your question, if Apple wants to use ECC memory they'll use ECC memory.

It's not like Apple takes someone else's designs and cobbles something workable together for their SoCs - they design their own silicon from the ground up, like Intel, AMD, and ARM. It's how they've got the world's leading price/performance/efficiency package.
 
M1 is already gaming-worthy, and M1X will likely be more of the same (unless it gets RT and MLSS hardware). We really are going to end up circling back to why games that can actually take advantage of the hardware are so sparse.
 
It's all Unified Memory now anyway; assets are accessible by the CPU and the GPU indiscriminately. So whether you go for the 16GB or 32GB version, you should have more than enough for your graphics needs.
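
To put that in concrete terms, here's a minimal Metal sketch (my own, not from any Apple sample) of what "accessible by the CPU and the GPU indiscriminately" looks like: one .storageModeShared allocation that the CPU fills and the GPU reads, with no staging copy or blit in between.

```swift
import Metal

// One unified-memory allocation, visible to both CPU and GPU on Apple Silicon.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }

let positions: [Float] = [0.0, 0.5,   -0.5, -0.5,   0.5, -0.5]  // one triangle, (x, y) pairs
let byteCount = positions.count * MemoryLayout<Float>.stride

// Shared storage mode: the CPU writes straight into the memory the GPU will read.
let vertexBuffer = device.makeBuffer(length: byteCount, options: .storageModeShared)!
vertexBuffer.contents().copyMemory(from: positions, byteCount: byteCount)

// The same buffer can now be bound to a render or compute encoder, e.g.:
// renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
```

On a discrete-GPU machine you'd typically keep a private VRAM copy and blit into it; on Apple Silicon the single shared pool makes that split (and the "how much VRAM do I need" question) mostly go away.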
 
M1 is already gaming-worthy, and M1X will likely be more of the same (unless it gets RT and MLSS hardware). We really are going to end up circling back to why games that can actually take advantage of the hardware are so sparse.
Yeah, that's basically the TLDR answer to the OP's question. That the M1X (or even M2) hardware is gaming-worthy shouldn't really be in question. The bigger issue here will always be what Apple is doing to invite gaming companies (and indies) that have deserted the Mac to reconsider.
 
M1 fixed the hardware problem to a point where I could see PS4 / Xbox One games being ported and running well enough at 30fps and low resolutions. This will only get better, but it's a good start.

The indie software side of the equation has improved and I find it fairly rare now for an indie game to not eventually end up on Mac (thanks Unity and, I guess, Apple Arcade).

The AAA software problem remains though, and I can't see it being fixed unless Apple wades in waving lots of cash - either to incentivise devs to do a Mac port, or to simply buy up some studios and get some exclusives. The problem with the latter approach is that AAA games cost a ton of money, so you need that multiplatform release to get a decent return on investment. And I'm not sure whether AAA developers will be amenable to the cost of porting their engines to Mac.

I feel that when it comes to gaming Apple is a bit Google-like and seems to drop support for things that can be important (like OpenGL and 32-bit apps).
 
I feel that when it comes to gaming Apple is a bit Google-like and seems to drop support for things that can be important (like OpenGL and 32-bit apps).
Yeah, the push for Metal and 64-bit mode was simply to get compiling to a universal binary down to a single option in Xcode.

Apple had been secretly herding developers down the path to Apple Silicon for quite some time.
 
Yeah, the push for Metal and x86-64 was simply to get compiling to a universal binary down to a single option in Xcode.

Apple had been secretly herding developers down the path to Apple Silicon for quite some time.
It sure doesn't feel like they are working closely enough with Unity and Epic to make sure their platforms have full support for all the features their APIs provide.
 
It sure doesn't feel like they are working closely enough with Unity and Epic to make sure their platforms have full support for all the features their APIs provide.
I have no idea about that - I do know that WWDC has had a Metal track for quite some time (which like all the tracks is supposed to have Apple engineers available).
 
I have no idea about that - I do know that WWDC has had a Metal track for quite some time (which like all the tracks is supposed to have Apple engineers available).
My example: Apple says Metal has had ray tracing support since 2019, yet neither Unity nor Unreal Engine supports it. Lumen and Nanite are not supported on macOS in UE5 (still).
 
My example: Apple says Metal has had ray tracing support since 2019, yet neither Unity nor Unreal Engine supports it. Lumen and Nanite are not supported on macOS in UE5 (still).
It doesn't make sense for a feature to exist and not tell anyone about it - Apple certainly couldn't use such a feature in its own interface.

This is the Overview taken from the Metal Performance Shaders section of Apple Documentation on the Apple Developer site:

Overview​

The Metal Performance Shaders framework contains a collection of highly optimized compute and graphics shaders that are designed to integrate easily and efficiently into your Metal app. These data-parallel primitives are specially tuned to take advantage of the unique hardware characteristics of each GPU family to ensure optimal performance.

Apps adopting the Metal Performance Shaders framework achieve great performance without needing to create and maintain hand-written shaders for each GPU family. Metal Performance Shaders can be used along with your app’s existing Metal resources (such as the MTLCommandBuffer, MTLTexture, and MTLBuffer objects) and shaders.

The Metal Performance Shaders framework supports the following functionality:

  • Apply high-performance filters to, and extract statistical and histogram data from images.
  • Implement and run neural networks for machine learning training and inference.
  • Solve systems of equations, factorize matrices and multiply matrices and vectors.
  • Accelerate ray tracing with high-performance ray-geometry intersection testing.
Doesn't sound like much of a secret.
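
For what it's worth, the ray tracing path those docs describe is the MPSRayIntersector route, and on the host side it boils down to roughly this (a sketch from memory, not Apple's sample code - buffer setup and the shading that consumes the hits are omitted):

```swift
import Metal
import MetalPerformanceShaders

// Rough sketch of MPS ray/triangle intersection on the host side.
func encodeNearestHits(device: MTLDevice,
                       commandBuffer: MTLCommandBuffer,
                       vertexBuffer: MTLBuffer, triangleCount: Int,
                       rayBuffer: MTLBuffer, intersectionBuffer: MTLBuffer,
                       rayCount: Int) {
    // Acceleration structure over the triangle geometry (build once, reuse every frame).
    let accel = MPSTriangleAccelerationStructure(device: device)
    accel.vertexBuffer = vertexBuffer
    accel.triangleCount = triangleCount
    accel.rebuild()

    // Rays carry origin + direction; hits report distance, primitive index,
    // and barycentric coordinates.
    let intersector = MPSRayIntersector(device: device)
    intersector.rayDataType = .originDirection
    intersector.intersectionDataType = .distancePrimitiveIndexCoordinates

    // Encode the GPU-side intersection test.
    intersector.encodeIntersection(commandBuffer: commandBuffer,
                                   intersectionType: .nearest,
                                   rayBuffer: rayBuffer, rayBufferOffset: 0,
                                   intersectionBuffer: intersectionBuffer,
                                   intersectionBufferOffset: 0,
                                   rayCount: rayCount,
                                   accelerationStructure: accel)
}
```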
 
It doesn't make sense for a feature to exist and not tell anyone about it - Apple certainly couldn't use such a feature in its own interface.

This is the Overview taken from the Metal Performance Shaders section of Apple Documentation on the Apple Developer site:

Overview​

The Metal Performance Shaders framework contains a collection of highly optimized compute and graphics shaders that are designed to integrate easily and efficiently into your Metal app. These data-parallel primitives are specially tuned to take advantage of the unique hardware characteristics of each GPU family to ensure optimal performance.

Apps adopting the Metal Performance Shaders framework achieve great performance without needing to create and maintain hand-written shaders for each GPU family. Metal Performance Shaders can be used along with your app’s existing Metal resources (such as the MTLCommandBuffer, MTLTexture, and MTLBuffer objects) and shaders.

The Metal Performance Shaders framework supports the following functionality:

  • Apply high-performance filters to, and extract statistical and histogram data from images.
  • Implement and run neural networks for machine learning training and inference.
  • Solve systems of equations, factorize matrices and multiply matrices and vectors.
  • Accelerate ray tracing with high-performance ray-geometry intersection testing.
I can't tell if you are disagreeing or agreeing, lol. Let's put it this way: 4A Games did Metro Exodus for the Mac. No one at Apple who was helping them port the game to macOS thought to tell 4A Games that macOS supports ray tracing (Metal Performance Shaders) and that they could get the Enhanced Edition working on macOS as a showcase game.
 
I can't tell if you are disagreeing or agreeing, lol. Let's put it this way: 4A Games did Metro Exodus for the Mac. No one at Apple who was helping them port the game to macOS thought to tell 4A Games that macOS supports ray tracing (Metal Performance Shaders) and that they could get the Enhanced Edition working on macOS as a showcase game.
I don't know what to say.

Every year Apple sponsors the Worldwide Developers Conference and gives presentations on all their software developments and APIs. I would assume that any developer doing Apple development would have at least one person attending, especially to see if there were changes in their area of interest - or, failing that, at least check Apple's documentation on the developer website.

Even if you didn't win the lottery for personal attendance (pre-COVID), Apple's been offering online video of sessions for quite some time.

I know before I retired I was Senior Software Engineer for our mainframe systems, and we'd send someone to SHARE to see what was new with IBM mainframe systems and operating systems.

Maybe 4A Games didn't ask, and the Apple engineers thought they'd be aware of what was available from WWDC or Apple documentation. Did 4A Games ask the engineers about ray tracing and let them know they wanted to implement it in their game?
 
I don't know what to say.

Every year Apple sponsors the Worldwide Developers Conference and gives presentations on all their software developments and APIs. I would assume that any developer doing Apple development would have at least one person attending, especially to see if there were changes in their area of interest - or, failing that, at least check Apple's documentation on the developer website.

Even if you didn't win the lottery for personal attendance (pre-COVID), Apple's been offering online video of sessions for quite some time.

I know before I retired I was Senior Software Engineer for our mainframe systems, and we'd send someone to SHARE to see what was new with IBM mainframe systems and operating systems.

Maybe 4A Games didn't ask, and the Apple engineers thought they'd be aware of what was available from WWDC or Apple documentation. Did 4A Games ask the engineers about ray tracing and let them know they wanted to implement it in their game?
Maybe they did ask and determined it was a garbage API that wouldn't work for the Enhanced Edition. I would have to defer to @leman for the deets on MPS ray tracing viability.
 
Maybe they did ask and determined it was a garbage API that wouldn't work for the Enhanced Edition. I would have to defer to @leman for the deets on MPS ray tracing viability.

I don’t think there is much mystery in this. Apple does not have hardware RT support, so it boils down to performance. You can use real-time RT on M1 in some circumstances (e.g. shadows on simpler geometries), but it’s not nearly fast enough for demanding games.

The Metal RT API itself is solid. It seems feature-complete with DX12 Ultimate and it’s more capable for professional rendering applications (higher memory limits, built-in object animation for motion blur).
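
For context, here's roughly what that API looks like from the host side - building a primitive acceleration structure with the newer Metal ray tracing API (a rough sketch of mine, not Apple sample code; the MSL intersector that actually traces rays against it in a shader is omitted):

```swift
import Metal

// Build a primitive acceleration structure over a buffer of SIMD3<Float> vertex positions.
func buildAccelerationStructure(device: MTLDevice,
                                commandQueue: MTLCommandQueue,
                                vertexBuffer: MTLBuffer,
                                triangleCount: Int) -> MTLAccelerationStructure {
    // Describe the triangle geometry.
    let geometry = MTLAccelerationStructureTriangleGeometryDescriptor()
    geometry.vertexBuffer = vertexBuffer
    geometry.vertexStride = MemoryLayout<SIMD3<Float>>.stride
    geometry.triangleCount = triangleCount

    let descriptor = MTLPrimitiveAccelerationStructureDescriptor()
    descriptor.geometryDescriptors = [geometry]

    // Ask the device how much memory the structure and its build scratch space need.
    let sizes = device.accelerationStructureSizes(descriptor: descriptor)
    let accel = device.makeAccelerationStructure(size: sizes.accelerationStructureSize)!
    let scratch = device.makeBuffer(length: sizes.buildScratchBufferSize,
                                    options: .storageModePrivate)!

    // Encode the build on the GPU and wait for it (a real renderer would pipeline this).
    let commandBuffer = commandQueue.makeCommandBuffer()!
    let encoder = commandBuffer.makeAccelerationStructureCommandEncoder()!
    encoder.build(accelerationStructure: accel, descriptor: descriptor,
                  scratchBuffer: scratch, scratchBufferOffset: 0)
    encoder.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
    return accel
}
```

All the heavy lifting (BVH construction and traversal) happens behind those calls - and, since there's no hardware RT yet, that's exactly where current Apple GPUs pay the performance cost.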
 
Basically the entire argument hinges on whether the market share of the Mac increases enough to warrant attention from devs.

We’ve established that Macs have been “gaming capable” for a long time, yet still the platform is largely ignored.

Apple never understood gaming or the gaming demographic. And they’ve never put serious effort into courting developers. And the effort they have made has been mobile-focused.

Personally, I don’t think the situation will change much in the future. macOS will get ports about a year after a game releases, and not very well optimized. Prices for Macs aren’t going to come down, and even with the increased grunt of the M series it’ll be wasted on poorly optimized ports.
 
We’ve established that Macs have been “gaming capable” for a long time, yet still the platform is largely ignored.
The majority of Macs sold weren’t gaming capable until just recently. The Air, Mini, 24” iMac, and two-port MBP have all been using terrible Intel integrated graphics for years. Yet the Air and two-port Pro make up a substantial portion of Macs sold.

It wasn’t until M1 that the most common Macs sold actually had hardware that could handle decent gaming, but Apple Silicon is still relatively new. What I will be interested in is reassessing this after 2-3 years, when the entire lineup has transitioned over and Apple Silicon has had a few years to settle.
 
Any Intel Mac can benefit from an eGPU since TB3, so they are very capable of gaming.

I disagree. The current macs lack the 8.6 pound combination of cheap polycarbonate plastics, color changing LEDs around the venting and on the lid, Transformer-like industrial design, letter typefaces on the keys that appear to be done by a freshman design student, and various decals - such as a dragon, a cool Hotwheels-like flame, or everybody's favorite - who wouldn't want "Predator" written on their computer?

Fact is, it's been this way since day one and is NEVER going to change: if you want a gaming PC, go Windows. If you want to game at home on a Mac, the best solution is purchasing a $300 Xbox Series S. If you want to game on the go, get the $250 Switch and bring it along with you. If you want to game occasionally but the primary reason you have a device is to make money or just do computer things, Macs will do just fine.

Macs have never, will never, and can never be as capable at gaming as those three options.
 
