Ha ha, being trained in chemistry I should have guessed. Yes...
You're thinking about this from the standpoint of it still being an Intel-based Mac. The Apple GPU doesn't work the same way your typical AMD, NVIDIA, or even Intel integrated GPU does. Apple's GPUs work more efficiently, so while their on-paper specs won't look impressive compared to, say, the AMD Radeon Pro 5000 series in the 16" MacBook Pro or the 27" iMac, they'll still match or exceed that performance because they run many times more efficiently by design. The caveat is that developers need to build around Metal and actually optimize for that kind of GPU. This way of designing GPUs and engineering GPU performance isn't terribly common, so developers may find it more difficult or annoying than targeting AMD GPUs, whose mode of operation has been around much longer.
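To make "optimizing for that kind of a GPU" a bit more concrete, here is a minimal sketch of how a Metal renderer can detect whether it is running on an Apple tile-based GPU or on a traditional immediate-mode part and branch accordingly. The branch bodies are placeholders; only the Metal calls themselves are real API.

```swift
import Metal

// Pick a render strategy based on the GPU family reported by Metal.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal-capable GPU found")
}

// .apple4 and later cover Apple's tile-based deferred renderers (A11 onward);
// .mac2 covers the immediate-mode AMD/Intel GPUs found in Intel Macs.
if device.supportsFamily(.apple4) {
    // On tile-based hardware, memoryless attachments and tile shaders keep
    // intermediate render passes in on-chip tile memory instead of DRAM.
    print("\(device.name): tile-based GPU, enabling TBDR-specific optimizations")
} else if device.supportsFamily(.mac2) {
    print("\(device.name): immediate-mode GPU, using the traditional render path")
}
```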
Apple has already stated that all GPUs in Apple Silicon Macs will be integrated.
True, but "performance" is not one dimensional. See my post above. I agree that is seems nearly impossible not to get better or equal performance (as in compute) of AS compared to Intel as things stands. However remember that GPU scaling is still an issue to consider.No - Apple has made it clear that they are going for higher performance AND much higher efficiency.
Apple's slide clearly shows this - greater than desktop performance while using less power than current laptop systems. We will see when they release at the end of the year.
In any case, what is the use of switching if they don't get higher performance? The A13 core, with no active cooling in a phone, already matches the top Intel processor - it isn't a great leap to imagine that an improved A14 core, given much more cooling, would exceed the best Intel has to offer.
That sounds like a pretty big gamble... it reminds me of the days of the Wii U and the PlayStation 3: two game consoles that failed to attract a significant portion of the game developer community. What resulted was a lack of games on both consoles and, in the case of the Wii U, a shortened lifecycle; Nintendo quickly replaced it with a much easier-to-develop-for console, the Switch.
There are downsides to making a bespoke and completely proprietary platform that developers have to spend time learning - the Wii U and PS3 are just two examples that spring to mind.
Yes, I know, but are these benchmarks running macOS on AS? I am not fully convinced that there is a 1:1 relation between iPadOS and macOS numbers in, for instance, Geekbench. If there is, Apple will have a hard time not outperforming Intel. However, I do not think that raw performance as such will be the primary goal for the first release.
There are numerous benchmarks of the A12X (which is basically the A12Z with one fewer GPU core enabled) running on iPadOS (so not emulated in Rosetta 2), and it wipes the floor with every contemporary 2018 Mac except the 8th-gen hexa-core Core i9 15" MacBook Pro and the iMac Pro. From today's standpoint, that mostly translates to the A12Z wiping the floor with anything that isn't a 16" MacBook Pro, 27" iMac (from either 2019 or 2020), iMac Pro, or 2019 Mac Pro. Pretty sure the quad-core 10th Gen Intel chips in the 2020 4-port 13" MacBook Pro aren't so much faster that they beat the native performance of either the A12X or A12Z, and certainly no CPU that has ever graced a MacBook Air can beat it.
So, yeah, we're there already! And Apple has all but said that we're not getting the A12Z in a Mac, but rather something newer and faster.
Yes, I know, but are these benchmarks running macOS on AS? I am not fully convinced that there is a 1:1 relation between iPadOS and macOS numbers in, for instance, Geekbench. If there is, Apple will have a hard time not outperforming Intel. However, I do not think that raw performance as such will be the primary goal for the first release.
No, it's not.
Absolutely not.
Most ARM designs are highly specific, built to do one particular thing really well. Those chips in those supercomputers would do nothing in a Mac Pro. Your Photoshop would probably underperform.
Not sure if this counts - it's macOS running the iPad version of Geekbench:
New benchmarks and details about iPhone and iPad apps emerge from Apple Silicon Macs - 9to5Mac
More or less a 1:1 mapping of Geekbench CPU results. It does look like the DTK is running Geekbench compute faster than an iPad though (about 25%).
In any case, Apple has already stated that the ASi chips coming out far exceed the DTK's A12Z chip - that's one of the reasons they don't allow benchmarking on the DTK - it is in no way representative of what is coming.
Apple doesn't want people judging the DTK by benchmarks run on it and concluding that this is what Apple will ship at the end of the year.
If you are using a Mac for a living and not just for web browsing, Office typing and whatever, you will jump to Windows a lot quicker than you think when ARM comes into play. The software gap will be huge for a very long time. 3D rendering has been the Mac's Achilles heel, and with ARM and the ditching of OpenGL it will become even more so. Metal is a joke.
People are fantasizing about some Mac Pro running ARM desktop silicone.. my god. That's all I'm gonna say.
Any ASi Mac needs to outperform its predecessor to be seen as a tangible replacement...
Agreed. The question is by how much. The first AS iMac will be the 21"/24", and it should outperform the current 21" Intel iMac, but not by so much that it outperforms the current 27" Intel iMac that was just released! Now, a year from now, when the next iteration of the 21"/24" comes out (which should be after the first 27"/30"/32" AS iMac), they can go to town with the improvement; at that point it can be as dramatic as all get-out.
They are very unlikely to release a machine that is slower than the current Mac Pro, and there is no indication that they will exclude the Mac Pro from the transition.
BTW, the semiconductor element used in microprocessors is "silicon". Silicone is used in breast implants.
I agree with this. New ASi Macs need to demonstrate an improvement over the equivalent Intel models, but not eclipse the flagship models. This is why the entry-level / mid-level machines will be the first to transition.
The ASi MacBook Pro 13 (maybe 14?) will need to beat the current one by a reasonable margin - maybe 20% single-core, 50% multi-core, 50-100% better GPU, and better battery. But it still needs to be somewhat less powerful than the MBP 16.
Same story with an ASi iMac 24" and the current Intel 8-10 core iMac 27"
Exactly - second to the performance/watt ratio, which is the king. Performance will follow from the improved performance/watt ratio.
If there is a differential, it's not substantial. It's mostly the same OS anyway.
Also, I'm not sure why you think that matching or exceeding performance of their Intel models won't be a main objective. It's THE objective, second only to performance per watt improvements.
Why is everyone assuming we will get a 13” MacBook Pro with Apple Silicon? Rumors have consistently claimed it will have a 14” screen.
You're arguing semantics at this point. Apple made it a point to keep much more supply in those channels than is usually done, for the reasons I stated. They didn't ordinarily do that (I think you agree with me on that, at least).
They showed all of the apps I use to make a living in the frickin' keynote. Better than on my current Intel MBP, I might add.
If you are using a Mac for a living and not just for web browsing, Office typing and whatever, you will jump to Windows a lot quicker than you think when ARM comes into play. The software gap will be huge for a very long time. 3D rendering has been the Mac's Achilles heel, and with ARM and the ditching of OpenGL it will become even more so. Metal is a joke.
People are fantasizing about some Mac Pro running ARM desktop silicone.. my god. That's all I'm gonna say.
I agree. As this thread points out, Apple seems to have carefully positioned its current offerings, and it's clear where an APU + LPDDR5 will cut it. It's the exceptions that are the most interesting.
What is required for Apple to be able to credibly ditch discrete GPUs is higher bandwidth memory subsystems.
The upcoming game consoles (PS5/Xbox Series X) show that you can achieve excellent graphics performance from an SoC - if you can feed it. The die area won't be a problem on TSMC 5nm. Memory bandwidth still needs to be addressed.
On the low end, I guess iPad Pro level is fine, which should mean a 128-bit bus to (upgraded) LPDDR5. That should allow pretty much twice the graphics performance of the A12Z.
The next step up, if you want to match the dGPU of the 16" MacBook Pro, is either a 256-bit bus to LPDDR5 or HBM, just as on the dGPU. HBM will be a bit restrictive as far as total RAM size goes, but obviously offers great bandwidth.
As far as iMacs go, Apple pretty much needs to go with GDDR6 (as in the new-generation consoles) or HBM to match the best build-to-order dGPUs Apple currently offers. And those dGPUs are what Apple offers right now, not what will be on the market at the time the new AS iMacs are introduced.
Matching dGPU performance is a much taller order than CPU performance, and absolutely requires higher bandwidth memory subsystems. It will still be difficult due to power draw concerns.
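To put some rough numbers on the options above (back-of-the-envelope only; the transfer rates are typical published figures for each memory type, not anything Apple has announced):

```swift
// Peak theoretical bandwidth = (bus width in bits / 8) * transfer rate in GT/s.
struct MemoryConfig {
    let name: String
    let busBits: Double
    let gigatransfersPerSecond: Double
    var peakGBps: Double { busBits / 8.0 * gigatransfersPerSecond }
}

// Illustrative configurations matching the tiers discussed above.
let configs = [
    MemoryConfig(name: "LPDDR4X, 128-bit (roughly A12Z class)", busBits: 128,  gigatransfersPerSecond: 4.266),
    MemoryConfig(name: "LPDDR5, 128-bit",                       busBits: 128,  gigatransfersPerSecond: 6.4),
    MemoryConfig(name: "LPDDR5, 256-bit",                       busBits: 256,  gigatransfersPerSecond: 6.4),
    MemoryConfig(name: "GDDR6, 256-bit (console-style)",        busBits: 256,  gigatransfersPerSecond: 14.0),
    MemoryConfig(name: "HBM2, one 1024-bit stack",              busBits: 1024, gigatransfersPerSecond: 2.0),
]

for c in configs {
    print("\(c.name): \(c.peakGBps) GB/s")
}
// Roughly: LPDDR4X 128-bit ≈ 68 GB/s, LPDDR5 128-bit ≈ 102 GB/s, LPDDR5 256-bit ≈ 205 GB/s,
// GDDR6 256-bit ≈ 448 GB/s, one HBM2 stack ≈ 256 GB/s.
```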
I'd also like to add to this that Unity and Unreal have already expressed that they'll support Apple Silicon.
If you want to get the best performance out of Apple GPUs (and simplify your code), yes, you need to use Metal and Apple-specific rendering techniques. At the same time, most developers use one of the popular game engines (Unity, UE, etc.) that take care of all the platform-specific stuff for you. There are also open-source wrappers that allow you to use standard APIs such as Vulkan on Apple's platforms. Finally, let's not forget WebGPU — an upcoming standard for high-performance GPU programming on the web, which is partly based on Apple's Metal.
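For a sense of what using Metal directly looks like (versus letting an engine or a Vulkan wrapper handle it), here is a minimal compute-kernel sketch. The kernel and the data are purely illustrative; only the Metal API calls themselves are real.

```swift
import Metal

// Illustrative kernel source: doubles every float in a buffer.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void doubleValues(device float *data [[buffer(0)]],
                         uint id [[thread_position_in_grid]]) {
    data[id] = data[id] * 2.0;
}
"""

guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("Metal is not available on this machine")
}

// Compile the kernel at runtime and build a compute pipeline for it.
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "doubleValues")!)

// Shared-storage buffer visible to both CPU and GPU (a natural fit for unified memory).
var input: [Float] = [1, 2, 3, 4]
let buffer = device.makeBuffer(bytes: &input,
                               length: input.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// Encode one dispatch covering the whole array and wait for the result.
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
let grid = MTLSize(width: input.count, height: 1, depth: 1)
encoder.dispatchThreads(grid, threadsPerThreadgroup: grid)
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

let results = buffer.contents().bindMemory(to: Float.self, capacity: input.count)
print((0..<input.count).map { results[$0] })  // [2.0, 4.0, 6.0, 8.0]
```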
1. Using HBM2E as all-system-memory. I think the MBP16 is going to be deeply uncomfortable with four stacks of 16GB HBM2E. Would it work if they used 2.4Gbps or 2Gbps to limit power consumption?
2. Using HBM2E as cache. Apple could just put a stack of HBM on the GPU / APU package, call it cache and stick with LPDDR5 as main memory? This seems like the efficient option.
There is no way in hell that ARM based CPUs will match the performance of high end x86 processors like Xeon within just two years. That would "break" Moore's law by such a wide margin that it is next to inconceivable. Even Apple's marketing is limited by physics, not to mention their engineers. And don't forget, high end x86 CPUs will make substantial progress in these two years as well. Not just Intel's Xeon, but also (and probably more importantly) AMD's Epyc.
Limiting bandwidth to save energy won't be necessary. Remember that the current MacBook Pro already has a GPU option that has two stacks of HBM, with an Intel CPU and memory subsystem in addition!
I agree. As this thread points out, Apple seems to have carefully positioned its current offerings, and it's clear where an APU + LPDDR5 will cut it. It's the exceptions that are the most interesting.
Do you think either of these solutions would be viable?
1. Using HBM2E as all-system-memory. I think the MBP16 is going to be deeply uncomfortable with four stacks of 16GB HBM2E. Would it work if they used 2.4Gbps or 2Gbps to limit power consumption?
I would tend to agree with leman that this is unnecessarily complex for an iMac. For a Mac Pro, I'm not quite as sure. The Mac Pro is far more difficult to assess unless you have Apple's data on how much memory is actually installed in the systems that are currently in use.
2. Using HBM2E as cache. Apple could just put a stack of HBM on the GPU / APU package, call it cache and stick with LPDDR5 as main memory? This seems like the efficient option.
You know, it escaped me that the MBP16 used DDR4 and not LPDDR4. I might have known it at one point and forgotten it again later...
From what I know, HBM2 uses less energy than DDR4, so that shouldn't be much of a factor (if a 16" can handle 64GB of DDR4, it should be able to handle 64GB of HBM). I assume that multiple HBM stacks can be connected via a single bus? And even at 2.0 Gbps you are looking at a whopping 260GB/sec of bandwidth — 5x faster than DDR4.
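A quick check of those figures, assuming one standard 1024-bit HBM2 stack against dual-channel (128-bit) DDR4:

```swift
// Peak bandwidth = (bus width in bits / 8) * per-pin data rate in Gb/s.
func peakGBps(busBits: Double, gbpsPerPin: Double) -> Double {
    busBits / 8.0 * gbpsPerPin
}

let hbm2Throttled = peakGBps(busBits: 1024, gbpsPerPin: 2.0)   // 256 GB/s per stack
let hbm2Faster    = peakGBps(busBits: 1024, gbpsPerPin: 2.4)   // ~307 GB/s per stack
let ddr4_2666     = peakGBps(busBits: 128,  gbpsPerPin: 2.666) // ~43 GB/s (what the 16" MBP ships with)
let ddr4_3200     = peakGBps(busBits: 128,  gbpsPerPin: 3.2)   // ~51 GB/s

print(hbm2Throttled / ddr4_3200)  // ≈ 5x, in line with the estimate above
print(hbm2Throttled / ddr4_2666)  // ≈ 6x against the DDR4-2666 actually in the 16" MBP
```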
I don't think that option 2 is viable. Using HBM for cache makes the entire system much more complex, and I am not sure the benefit would be that great compared to a large SoC-level cache. Once you are using HBM, you might as well go full HBM.
Limiting bandwidth to save energy won't be necessary. Remember that the current MacBook Pro already has a GPU option that has two stacks of HBM, with an Intel CPU and memory subsystem in addition!
Dual stacks of HBM2E offer a maximum of 2x24GB, or 48GB in total. That's passable for a top-end MacBook Pro or iMac, but it may be that capacities can double by the time a big iMac is ready to be introduced.
A price we have to pay for all these high bandwidth options is that RAM won’t be user installable, and there will be hard upper limits to capacity. I’d say the compromise is easily worth it though.
I would tend to agree with leman that this is unnecessarily complex for an iMac. For a Mac Pro, I'm not quite as sure. The Mac Pro is far more difficult to assess unless you have Apple's data on how much memory is actually installed in the systems that are currently in use.
That's manageable with a desktop GPU, since GPU cores don't get too hot and there's lots of active cooling. But it's really not ideal when you have CPU cores in the center whose performance is directly constrained by heat.