So no RT on Macs (so far) either.

So it seems, but we will need to wait for more detailed developer docs. I wouldn't expect RT to be priority for this release though, we only got some entry-level hardware. You won't be running RT games on that anyway.

Are they trying to hide the RAM type?

They are hiding a lot of hardware details: clocks, RAM type, TDP... RAM will most likely be LPDDR4/5 in some sort of wide multi-channel configuration, probably with around 80 GB/s of bandwidth, which will be plenty for a chip of this spec.
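As a sanity check on that ~80 GB/s estimate, here is a toy sketch of the usual peak-bandwidth arithmetic (transfer rate times bus width). The 128-bit total LPDDR bus width is my assumption for illustration, not a confirmed spec:

```python
def peak_bandwidth_gb_s(transfer_mt_s, bus_width_bits):
    """Theoretical peak bandwidth in GB/s: transfers/s * bytes per transfer."""
    return transfer_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

# Assumed 128-bit-wide (e.g. dual-channel 64-bit) configuration:
print(peak_bandwidth_gb_s(4266, 128))  # LPDDR4X-4266 -> 68.256 GB/s
print(peak_bandwidth_gb_s(5500, 128))  # LPDDR5-5500  -> 88.0 GB/s
```

Under those assumptions, the two memory types bracket the ~80 GB/s guess.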
 
Are they trying to hide the RAM type?
Yeah, I think so. I'm pretty sure this is dual channel LPDDR4X. Compare the image you saw today to this image of the A12X:

[Attached image: Apple_A12X.jpg]


The A12X uses LPDDR4X. Frankly, I think LPDDR5 supplies were more limited than they expected.
 
You can read the footnotes/fineprint on the GPU for performance comparisons:


"Testing conducted by Apple in October 2020 using preproduction 13-inch MacBook Pro systems with Apple M1 chip and 16GB of RAM using select industry-standard benchmarks. Comparison made against the highest-performing integrated GPUs for notebooks and desktops commercially available at the time of testing. Integrated GPU is defined as a GPU located on a monolithic silicon die along with a CPU and memory controller, behind a unified memory subsystem. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro."

The definition basically excludes Ryzen APUs, since the Ryzen chiplet is on a separate silicon die from the CPU and memory controller.
 
You can read the footnotes/fineprint on the GPU for performance comparisons:


"Testing conducted by Apple in October 2020 using preproduction 13-inch MacBook Pro systems with Apple M1 chip and 16GB of RAM using select industry-standard benchmarks. Comparison made against the highest-performing integrated GPUs for notebooks and desktops commercially available at the time of testing. Integrated GPU is defined as a GPU located on a monolithic silicon die along with a CPU and memory controller, behind a unified memory subsystem. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro."

The definition basically excludes Ryzen APUs, since the Ryzen chiplet is on a separate silicon die from the CPU and memory controller.
That should mean the comparison is against the 96 EU Tiger Lake part. Yet somehow I don't think that's what they used. An 8 core Apple GPU would probably squeak by a 96 EU Xe tuned to 28W, but it would be close.
 
"Testing conducted by Apple in October 2020 using preproduction 13-inch MacBook Pro systems with Apple M1 chip and 16GB of RAM using select industry-standard benchmarks. Comparison made against the highest-performing integrated GPUs for notebooks and desktops commercially available at the time of testing. Integrated GPU is defined as a GPU located on a monolithic silicon die along with a CPU and memory controller, behind a unified memory subsystem. Performance tests are conducted using specific computer systems and reflect the approximate performance of MacBook Pro."

Ahaha, sneaky Apple marketing :D Yeah, we desperately need some benchmarks. It's probably a 15W Tiger Lake, since I don't think the 30W parts are commercially available yet?
 
That should mean the comparison is against the 96 EU Tiger Lake part. Yet somehow I don't think that's what they used. An 8 core Apple GPU would probably squeak by a 96 EU Xe tuned to 28W, but it would be close.
I'm not even sure about that. It said testing was done in October 2020 against parts commercially available at the time of testing. I don't believe the 96 EU Tiger Lake part was available then.
 
Ahaha, sneaky Apple marketing :D Yeah, we desperately need some benchmarks. It's probably a 15W Tiger Lake, since I don't think the 30W parts are commercially available yet?
I believe you are correct. They weren't available yet - heck, Tiger Lake 15W was barely shipping in October 2020 and it said "at the time of testing" which could have been October 1st!
 
That should mean the comparison is against the 96 EU Tiger Lake part. Yet somehow I don't think that's what they used. An 8 core Apple GPU would probably squeak by a 96 EU Xe tuned to 28W, but it would be close.

Based on other indirect evidence (flops, comparisons to previous models), I'd expect the highest-performing M1 (estimated 15W) to be a bit faster than the 30W Tiger Lake Xe in synthetics, and 30-50% faster in real-world gaming.
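The flops estimate in the post above can be sketched with back-of-the-envelope arithmetic. Note the specific numbers here are my assumptions for illustration: the M1 GPU clock was not disclosed at launch, so ~1.25 GHz and 8 cores x 128 FP32 lanes are guesses, as is the 96 EU x 8-lane layout for Tiger Lake Xe:

```python
def peak_gflops(fp32_lanes, clock_ghz, flops_per_lane_per_clock=2):
    """Peak FP32 throughput in GFLOPS; an FMA counts as 2 FLOPs per lane per clock."""
    return fp32_lanes * flops_per_lane_per_clock * clock_ghz

# Assumed M1: 8 GPU cores x 128 lanes, guessed ~1.25 GHz clock
print(peak_gflops(8 * 128, 1.25))   # -> 2560.0 GFLOPS
# Assumed Tiger Lake Xe: 96 EUs x 8 lanes at ~1.35 GHz boost
print(peak_gflops(96 * 8, 1.35))    # ~2073.6 GFLOPS
```

Under those assumptions the two GPUs land in the same ballpark on paper, which is why real-world benchmarks matter more than the marketing claims.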
 
I believe you are correct. They weren't available yet - heck, Tiger Lake 15W was barely shipping in October 2020 and it said "at the time of testing" which could have been October 1st!

I think the important thing to consider is that this marketing copy is not written for us nerds; it's written for the average consumer. It is entirely possible that the real performance will disappoint, but based on what we've seen so far, I choose to be optimistic.
 
Yes, I would assume the comparison is against the same 28W Ice Lake chip in the Intel MacBook Pro, which I think was in fact the fastest shipping integrated GPU in early October (at least based on the careful wording they used). That meets expectations and is in line with what leman said. The Tiger Lake GPU is roughly 2x faster than Ice Lake, and that would make the M1 a bit faster than that.
 
So no RT on Macs (so far) either.
I wouldn't expect it. Right now RT is only available on mid-to-high-end GPUs (which have ridiculous TDPs and cost nearly as much as, or more than, these new Macs). Expecting Apple to come out with a laptop SoC with competitive RT is unrealistic.
 
I wouldn't expect it. Right now RT is only available on mid-to-high-end GPUs (which have ridiculous TDPs and cost nearly as much as, or more than, these new Macs). Expecting Apple to come out with a laptop SoC with competitive RT is unrealistic.
What’s the form factor of RT capable laptops nowadays?
 
Are they trying to hide the RAM type?

They are hiding a lot of hardware details: clocks, RAM type, TDP... RAM will most likely be LPDDR4/5 in some sort of wide multi-channel configuration, probably with around 80 GB/s of bandwidth, which will be plenty for a chip of this spec.

Grain of salt...?!?

Memory type: LPDDR4X-4266 / LPDDR5-5500
Max. memory: 16 GB
Memory channels: 2
ECC: No
 
Based on other indirect evidence (flops, comparisons to previous models), I'd expect the highest-performing M1 (estimated 15W) to be a bit faster than the 30W Tiger Lake Xe in synthetics, and 30-50% faster in real-world gaming.
Yep. As we said before, the A12Z is already faster than the 28W Intel Xe in 3DMark Wild Life. The M1 should beat any competing laptop iGPU with ease.
 
So no RT on Macs (so far) either.

Interesting observation: Apple does claim RT support in Metal for M1, but I doubt that it has dedicated RT hardware. Some limited RT for basic shadows etc. would still be possible in games; it would be interesting to run some benchmarks. This is similar to the approach Crysis Remastered uses, only that Metal has some advanced shader support to make it happen (like recursive shader functions etc.)
 
Interesting observation: Apple does claim RT support in Metal for M1, but I doubt that it has dedicated RT hardware. Some limited RT for basic shadows etc. would still be possible in games; it would be interesting to run some benchmarks. This is similar to the approach Crysis Remastered uses, only that Metal has some advanced shader support to make it happen (like recursive shader functions etc.)
Yeah, I saw that in the presentation. Sadly, none of the 3DMark benchmarks that support RT work on the Mac or iOS, so 🤷🏽‍♂️. I doubt GFXBench will ever add RT support, but I would love to be surprised.
SotTR has RT support on the PC; maybe Feral can release a patch to add it to the Mac.
 
Yeah, I too wonder why Apple specifically mentions ray tracing in their M1 video. If ray tracing is just implemented in compute shaders, how is it different from any other current Mac and AX GPU?
 
Yeah, I too wonder why Apple specifically mentions ray tracing in their M1 video. If ray tracing is just implemented in compute shaders, how is it different from any other current Mac and AX GPU?
Did they? I missed it, where is it in the video?
 
Yeah, I too wonder why Apple specifically mentions ray tracing in their M1 video. If ray tracing is just implemented in compute shaders, how is it different from any other current Mac and AX GPU?
It's not. I mean, A14 might be slightly more efficient with recursive shader invocation and async compute that one needs for this functionality, but as I mentioned before, I won't expect any fixed-function intersection hardware.

It's important to stress, however, that DX12 and Nvidia raytracing are also implemented in compute shaders — they just have some dedicated hardware to find ray-geometry intersections faster.
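To make that concrete, the ray-geometry intersection that RT hardware accelerates is typically a ray-triangle test like Möller–Trumbore. Here is a toy pure-Python sketch of that test, purely to show what the fixed-function units speed up; it is not Metal's or Nvidia's actual implementation:

```python
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def ray_triangle_t(origin, direction, v0, v1, v2, eps=1e-8):
    """Möller–Trumbore: distance t along the ray to the triangle, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:              # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)     # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > eps else None

tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(ray_triangle_t((0.2, 0.2, -1), (0, 0, 1), *tri))  # hit at t = 1.0
print(ray_triangle_t((2, 2, -1), (0, 0, 1), *tri))      # miss -> None
```

A GPU without dedicated RT units runs millions of tests like this (plus BVH traversal) in ordinary compute shaders, which is exactly why a compute-only implementation is so much slower.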

Did they? I missed it, where is it in the video?

At 2:40
 
It's not. I mean, A14 might be slightly more efficient with recursive shader invocation and async compute that one needs for this functionality, but as I mentioned before, I won't expect any fixed-function intersection hardware.

It's important to stress however that DX12 and Nvidia raytracing is also implemented in compute shaders — they just have some dedicated hardware to find ray-geometry intersection faster.



At 2:40
is supposed to isolate RT performance from regular rasterization performance. I think Quake II RTX and Minecraft RTX are also able to approximate RT performance while minimizing rasterization performance differences.

Of course, none of these are currently available on macOS, so I don't know how one can test the RT abilities.
 
is supposed to isolate RT performance from regular rasterization performance. I think Quake II RTX and Minecraft RTX are also able to approximate RT performance while minimizing rasterization performance differences.

Of course, none of these are currently available on macOS, so I don't know how one can test the RT abilities.

You can download the free source code of one of the WWDC RT demos and compile it in Xcode, for example ;)

I wouldn't expect UL to go through the trouble of porting these benchmarks to Apple Silicon, because... why would they? Nobody really cares except nerds (sorry, technology enthusiasts) like us. Once (and if) there is a sizable gaming market for Macs, then we will see these benchmarks popping up, but even then it's questionable (as there is no competing hardware).
 