
diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
How does the W6900X compare to the RTX A6000 for GPGPU computing? Assuming linear scaling, a 128-core AS GPU (which is the largest size rumored for the Mac Pro) would have GPGPU processing capability approximately comparable to the latter.

So if the W6900X is comparable to the RTX A6000, the AS Mac Pro would need to effectively have dual 128-core AS GPUs to be comparable to the dual W6900X that's now available for the Intel Mac Pro.

Of course, they might have faster AS GPU cores when the AS Mac Pro is released. Then again, the AMD and NVIDIA cards will be faster as well.
As far as I know the A6000 is faster than the W6900X (because RDNA isn't actually that good at compute as I keep saying, lol).
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
The regular RX 6900 XT is roughly between the RTX 3080 and 3080 Ti; the W6900X could be slightly slower with double the VRAM, as was the case with the RX 5700 XT and W5700X.



Could you please elaborate on how you came up with that estimate? Because, if your GPGPU criterion is TFLOPS, then a 128-core Apple GPU, even with the current M1 architecture, would be just at RTX A6000 level. This is with perfect scaling, though. If the criterion is something else, I would like to know.
For the first part: I don't know about those other GPUs. My question was how the W6900X compares to the RTX A6000 (using a TFLOPS GPGPU criterion), since I have the calculation to compare the RTX A6000 to the AS GPU. Thus, by transitivity, I could use that to estimate how many AS GPU cores would be needed to equal dual W6900Xs.

For the second part: Yes, it was based on linear scaling and, IIRC, TFLOPS. I can find the calculation if you'd like (I forget if I did it or Leman did), but it seems you already arrived at the same numbers I have, based on the same assumption: "if your GPGPU criteria is TFLOPS, then a 128-core Apple GPU, even with the current M1 architecture, would be just on RTX A6000 level. This is with perfect scaling though."
 
Last edited:

jeanlain

macrumors 68020
Mar 14, 2009
2,461
954
Apple may have the capability to design a GPU solution that can beat four W6800 GPUs, but is it viable? Such a configuration would be sold to a tiny fraction of end users, and Apple won't sell it to other manufacturers.
Unless they have found a way to use the same (putative) dGPUs that equip MacBook Pros and high-end iMacs and combine them in Mac Pros.
 

aeronatis

macrumors regular
Sep 9, 2015
198
152
For the first part: I don't know about those other GPUs. My question was how the W6900X compares to the RTX A6000 (using a TFLOPS GPGPU criterion), since I have the calculation to compare the RTX A6000 to the AS GPU. Thus, by transitivity, I could use that to estimate how many AS GPU cores would be needed to equal dual W6900Xs.

That is my bad! I overlooked the dual W6900X. Then, the estimate is spot on.

Does a dual W6900X perform twice as well as a single W6900X, though? I have yet to see a detailed comparison of previous single and dual cards.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
That is my bad! I overlooked the dual W6900X. Then, the estimate is spot on.

Does a dual W6900X perform twice as well as a single W6900X, though? I have yet to see a detailed comparison of previous single and dual cards.
No problem! Since the question at hand was what Apple would need to do with an AS Mac Pro to give it GPU capability comparable to the best the Intel Mac Pro can offer, I compared it to the latter outfitted with dual W6900Xs.

Don't know how performance scales when you go to dual cards. But since, for the purposes of this rough estimate, we're assuming simple linear scaling for the Apple GPU cores, we might as well do the same for the two AMD GPU cards.

I did a little digging, and this is what I have (single precision):

128-core AS GPU = 42 TFLOPS
dual W6900X = 44 TFLOPS
single RTX A6000 = 39 TFLOPS.

So, based on this simple (simplistic?) model, a 128 core AS Mac Pro would be in the same GPGPU performance ballpark as either a dual-W6900X or a single RTX A6000.

But to equal the performance of a dual-RTX A6000 workstation, you'd need dual-128 core AS GPU's.
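For anyone who wants to reproduce the arithmetic, here's a quick back-of-envelope sketch. The per-card FP32 figures are the commonly cited spec-sheet numbers (M1 ≈ 2.6 TFLOPS for 8 GPU cores, W6900X ≈ 22.2 TFLOPS, RTX A6000 ≈ 38.7 TFLOPS); treat them as approximate, and remember the whole thing assumes perfect linear scaling:

```python
# Back-of-envelope check of the linear-scaling estimate above.
# Spec-sheet FP32 figures; real GPGPU throughput varies by workload.

M1_CORES = 8
M1_TFLOPS = 2.6          # M1's 8-core GPU, FP32

def as_gpu_tflops(cores: int) -> float:
    """Naive linear scaling of M1 GPU cores."""
    return cores * (M1_TFLOPS / M1_CORES)

W6900X_TFLOPS = 22.2     # AMD spec sheet, FP32
A6000_TFLOPS = 38.7      # NVIDIA spec sheet, FP32

print(f"128-core AS GPU : {as_gpu_tflops(128):.1f} TFLOPS")   # 41.6
print(f"dual W6900X     : {2 * W6900X_TFLOPS:.1f} TFLOPS")    # 44.4
print(f"single RTX A6000: {A6000_TFLOPS:.1f} TFLOPS")         # 38.7
```

Which rounds to the 42 / 44 / 39 figures above.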
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I know those are AMD cards, and I already mentioned that this was for the Intel Mac Pro. What I mean is that Apple updating the graphics options with current top cards from AMD, rather than keeping the Vega II cards, before they release their own graphics solutions indicates they are confident in what they are developing. Do you seriously believe a new model will be less powerful than the model it replaces, when their motto was "we want to use our own chip, because we want to make better products" when they announced the Apple Silicon transition at WWDC 2020?

You are making a huge leap in assuming that Apple is trying to do a one-for-one replacement with their systems.
The leaks through Bloomberg have already pointed to the M-series Mac Pro being "half sized" compared to the current one.
There is no way they are going to fit two GPUs in a volume that is half the size.

Apple would have to out-hustle four W6800s with one GPU. That probably is not going to happen. The half-sized model will perhaps out-compute on the CPU core side, but for workloads that are 90+% GPU (push the initial data into the GBs of VRAM and iterate), Apple's MB-magnitude cache isn't going to help them.

More likely, Apple is going to "move the goal posts": pick only single-GPU benchmarks; sidestep apps that can linearly apply GPU resources to solving problems (pick Adobe or Final Cut versus DaVinci Resolve); pick some H.265 4:2:2 codec processing that drops into the Apple media decoder. Several steps like that. Worst case, they'll position it as a replacement for the iMac Pro and get to slap around 4-5 year old configurations.


Edit: By their own cards they are currently developing, I didn't mean the rumoured 32-core GPU for the 16" MacBook Pro. That first paragraph was about the rumoured graphics solutions for the Apple Silicon Mac Pro (64 and 128-core GPU).

The rumors about 64- and 128-core GPUs aren't about "cards". Those are iGPUs. That's the issue: the iGPU isn't going to scale, because it is extremely likely not going to go past one. More effort has been put into "unifying" those GPU cores with the CPU cores than into modularization.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
In terms of bandwidth per GPU ALU, the difference between the A6000 and M1 is practically negligible. And M1 has larger cache to offset the rest.

Which ALUs are we counting? The CUDA "cores", or all of them (including the Tensor and RT cores)? In the latter case, Apple can't compete with both of those very well with just a cache "offset". In the former, counting only a subset of the cores directly present on the bandwidth pulls is a bit of misdirection. Aren't the P and E cores, NPU, and AMX sitting on the same memory bandwidth pipe? In a typical balanced app workload, the Apple GPU doesn't get the whole bandwidth, and nowhere near the whole L3 cache.




Of course they will be aiming for those levels - and beyond. Do you really think they will want to release a Mac Pro slower than Intel machines? You are perfectly correct with regards to bandwidth, but you are neglecting the effect the cache has on the total result.

Are they? On the Intel systems Apple isn't covering the Nvidia performance zone; Apple skipped those performance options. Threadripper versus Intel options in late 2022? It is cheaper for Apple to do small updates to their 2018-2019 motherboard, and it doesn't expose the thread limitations of macOS. It is not the best x86_64 baseline to build from if money is no object and only the topmost performance option is selected.

If selecting what is convenient is the current modus operandi, why is that going to change on Apple silicon?

Pretty good chance Apple is going to dump ECC along the transition way also.

FP64 on the GPU? Apple hasn't even "heard" of that. ( largely because they don't need it on iOS devices ).

As for peak bandwidth.....

The M1 Mini drives fewer monitors than the Intel ones do.
The top-end memory capacity is lower.

Apple has backslid on a couple of the non-laptop systems they have delivered.

More than likely, Apple is going to "move the goal posts" on the half-sized Mac Pro to narrow things down to just what they want. Worst case, they'll throw it out there against an iMac Pro so they can slap around some 4-5 year old hardware.



And by the way, Nvidia‘s next supercomputer platform utilizes an iGPU. They have publicly acknowledged that they have reached the wall with dGPU designs. Modern compute and ML workloads require a high level of horizontal interaction, so just throwing more and more high-latency RAM bandwidth is not working anymore.

That isn't really true. Nvidia's Grace CPU doesn't have a GPU.

"... The fourth-generation NVIDIA® NVLink® delivers 900 gigabytes per second (GB/s) of bidirectional bandwidth between the NVIDIA Grace CPU and NVIDIA GPUs. ..."

Some Nvidia GPUs with NVLink can talk to IBM Power CPUs. Those weren't iGPUs 3-4 years ago when that rolled out, and it isn't really going to be the case when these Grace CPUs roll out in the future. Primarily, Nvidia is trying to get back to what they had and lost when they dumped IBM Power to go with AMD EPYC in their last generation.

There is little indication that the GPUs don't have their own memory. It can be addressed in some contexts in a shared memory space, but it is a stretch to label that an iGPU. "iGPU" implies working out of the same working store. It is quite doubtful that Nvidia is moving their next-gen HPC GPUs to LPDDR5X.

"... Grace CPU is the first server CPU to harness LPDDR5x memory with server-class reliability through mechanisms like error-correcting code (ECC) to meet the demands of the data center while delivering 2X the memory bandwidth and up to 10X better energy efficiency compared to today’s server memory. ..."


There is a difference between sharing some limited memory and sharing all memory and cache. AMD (via Infinity Fabric), Intel (via PCIe v5/v6 and CXL), and Nvidia (via NVLink) aren't trying to share it all.

The "copy data" gap that Apple is leveraging will get narrowed, but the advantage for Apple isn't one-way, though. Especially when doing problems that require scalability across multiple nodes: Apple's approach doesn't scale across nodes, because it is focused on unification inside a single set of directly attached memory modules ("scale" limited to intra-package communication). The fabrics presented by IF/CXL/NVLink allow problems to grow while still keeping good contact with nearest neighbors, which Apple won't be able to touch with a ten-foot pole.
 

aeronatis

macrumors regular
Sep 9, 2015
198
152
I did a little digging, and this is what I have (single precision):

128-core AS GPU = 42 TFLOPS
dual W6900X = 44 TFLOPS
single RTX A6000 = 39 TFLOPS.

So, based on this simple (simplistic?) model, a 128 core AS Mac Pro would be in the same GPGPU performance ballpark as either a dual-W6900X or a single RTX A6000.

It is quite possible that, even though they will have no higher raw power, Apple could very well market them as "x2 faster in Final Cut Pro". They could just focus on the scenarios where it performs much more efficiently than the AMD cards.

You are making a huge assumption leap that Apple is trying to do a one-for-one replacement with their systems.
The leaks through Bloomberg have already pointed to the M-series Mac Pro being "half sized" compared to the current one.
There is no way they are going to fit two GPUs in a volume that is half the size.

Like I mentioned, I overlooked the multiple graphics cards part. I am pretty much talking about replacing a single card. I believe they can pull off a graphics option better than a single W6900X with less power draw. That alone would be "a much better product" for some and not for the others. We will see. Of course, anything we say is an assumption. I believe they could simply focus on how having multiple graphics cards is inefficient and how any software optimised for their hardware will provide a higher performance with a much lower power draw.

More likely, Apple is going to "move the goal posts": pick only single-GPU benchmarks; sidestep apps that can linearly apply GPU resources to solving problems (pick Adobe or Final Cut versus DaVinci Resolve); pick some H.265 4:2:2 codec processing that drops into the Apple media decoder. Several steps like that. Worst case, they'll position it as a replacement for the iMac Pro and get to slap around 4-5 year old configurations.

This part I agree with. They will probably market the new graphics solution as "x2 faster in FCP export" or something. On the iMac Pro part, I don't agree: for the use cases you cited above, the new M1X/X1 (whatever the name will be) chip will already be noticeably faster than the iMac Pro.

The rumors about 64- and 128-core GPUs aren't about "cards". Those are iGPUs. That's the issue: the iGPU isn't going to scale, because it is extremely likely not going to go past one. More effort has been put into "unifying" those GPU cores with the CPU cores than into modularization.

I know those aren't cards. That's why I keep saying "graphics solution" quite carefully. We still don't know whether the Mac Pro hardware will have the exact same design as the consumer/prosumer products, though. Even if it does, saying that they will not go past a certain point is also a huge leap of assumption, as we have no concrete clue about what they are coming up with.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
The leaks through Bloomberg have already pointed to the M-series Mac Pro being "half sized" compared to the current one.
There is no way they are going to fit two GPUs in a volume that is half the size.
They could if the thermals allow it. And others have said that Apple's GPUs are, for equal performance, much more thermally efficient than those from AMD and NVIDIA.

More importantly, with the trashcan model, Apple made the mistake of producing a Mac Pro whose thermals limited its GPU capability*, took enormous heat (no pun intended) from the pro community for it, and (very unusually) publicly admitted it was a mistake. Do you think they are going to turn around and make that same mistake again with the AS Mac Pro?

*More precisely, its future capability/upgradability; as Craig Federighi said "...we designed ourselves into bit of a thermal corner....the architecture, over time, proved to be less flexible to take us where we wanted to go to address that audience. In hindsight, we would’ve done that differently. Now we are....We need an architecture that can deliver across a wide dynamic range of performance and that we can efficiently keep it up to date with the best technologies over years."

I don't know how big the AS Mac Pro will be. I think the AS Mac Pro will be whatever size it needs to be to accommodate substantial GPU capabilities—not only comparable to what the current Intel Mac Pro offers, but also keep up with future increases in competitors' GPU capabilities. To do otherwise would be a step backwards.

Indeed, a key weakness of the Mac Pro has been Apple's divorce from NVIDIA, since it has limited it to AMD GPU's, which have been less capable than NVIDIA's. I'd like to think Apple is targeting the AS Mac Pro to equal or beat the GPU capability of a higher-end NVIDIA-equipped workstation, thus putting that limitation to rest for most of its customers (i.e., with the possible exception of those who, if they were getting a PC, would order the very highest-end NVIDIA configurations**) (at least on the hardware side; this won't address the lack of CUDA).

**According to Dell's website, at the extreme end, the Dell Precision 7920 Workstation can be ordered with either a triple Quadro RTX 6000, a dual Quadro RTX A6000, or a triple Quadro RTX 8000 (specialized for ray tracing). [These would be the types of cards media professionals would order, which is the principal market Apple is targeting (though ray tracing is also used in scientific visualization, e.g., for protein and viral structures); for specialized data science work, the Dell can be ordered with a triple NVLink GV100.]

A complication is that AS GPU's allow for Tile-Based Deferred Rendering, which may give efficiencies not available with AMD/NVIDIA. Thus it's possible that, for a broad class of rendering tasks, it won't need the same processing power to achieve the same effective performance.

That's the issue iGPU isn't going to scale because extremely likely not going to go past one. More effort put into "unifying" those GPU cores with the CPU cores than in the modularlization.
This is speculative, but it could maintain scalability and modularity by accommodating more than one CPU/GPU module. I.e., you buy it with one module, and then when you want to upgrade you add a second.
 
Last edited:

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
It is quite possible that, even though they will have no higher raw power, Apple could very well market them as "x2 faster in Final Cut Pro". They could just focus on the scenarios it performs much more efficiently than the AMD cards ?
Hopefully it won't only be faster for limited tasks! But they've certainly played that game—and worse—in the past. ;) In introducing the Power Mac G5 at the 2003 WWDC, Jobs used a Mathematica benchmark to support his claim that it was faster than the fastest Intel processor. What he actually did was cherry-pick a single Mathematica operation for which the PPC was faster (an integer calculation), ignoring the others (floating point calculations) for which Intel was faster.
 
  • Like
Reactions: aeronatis

leman

macrumors Core
Original poster
Oct 14, 2008
19,521
19,677
bad no CPU improvement.

We don't know that yet. My guess is that there are CPU improvements, but the CPU itself is clocked lower to have better battery life. A14 is already more than fast enough for a phone, seems reasonable to trade some performance wins for power efficiency at this point. A faster phone doesn't do much, but better battery life is more than welcome. Especially considering the more power-hungry display panels on the new phones...
 
  • Like
Reactions: jdb8167

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
bad no CPU improvement.
I’m not sure that matters for the ASi Mac SoC. Different priorities. I can’t think of a single instance where I thought my iPhone 12 Pro was too slow. A faster SoC doesn’t make me want an iPhone 13. I don’t have problems with battery life but as we all know, it is a perennial problem for a lot of people. It looks like Apple went with improved graphics performance on the A15 and lower power CPU cores.

I doubt that the A15 CPU performance core is going to represent the next ASi CPU performance.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,521
19,677
What are your thoughts on the A15 Bionic everyone? Looks like they're scaling up the GPU cores on some chips.

My thoughts are that we don't know anything. We need to wait for benchmarks and analysis to say more. Anyway, my predictions haven't changed. I still think we are going to get an entirely different chip for the prosumer hardware.
 
  • Like
Reactions: ader42

AgentMcGeek

macrumors 6502
Jan 18, 2016
374
305
London, UK
Anandtech has been crunching some numbers.

For the CPU:

Here, they’re claiming that the new A15 will be +50% better than the next-best competitor. The next-best competitor is Qualcomm’s Snapdragon 888 – if we look up our benchmark result set, we can see that the A14 is +41% more performant than the Snapdragon 888 in SPECint2017 – for the A15 to grow that gap to 50% it really would only need to be roughly 6% faster than the A14, which is indeed not a very large upgrade.

As for the GPU:

Taking GFXBench Aztec as a baseline, we see the A14 was around +18% faster than the Snapdragon 888. The slower A15 would need to be +10% faster than the A14 to get to that margin.

The faster 5-core A15 is advertised as being +50% faster than the competition, this would actually be a more sizeable +28% performance improvement over the A14 and would be more in line with Apple’s generational gains over the last few years.
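The quoted Anandtech reasoning is just ratio arithmetic: Apple's "+X% vs competition" claim divided by how far the A14 was already ahead of the Snapdragon 888. A rough sketch (the +30% figure for the slower 4-core GPU is Apple's keynote claim as I recall it, so treat that input as an assumption):

```python
# Implied A15-over-A14 gain from Apple's "vs competition" claims.

def needed_gain(apple_claim: float, a14_lead: float) -> float:
    """Gain over A14 implied by a claim of `apple_claim` over a
    competitor the A14 already led by `a14_lead`.
    Both arguments are fractional margins (0.50 == +50%)."""
    return (1 + apple_claim) / (1 + a14_lead) - 1

# CPU: A15 claimed +50% vs SD888; A14 was +41% ahead (SPECint2017).
print(f"CPU:        {needed_gain(0.50, 0.41):+.1%}")   # ~ +6%
# GPU, 4-core: claimed +30% (assumed); A14 was +18% ahead (GFXBench Aztec).
print(f"GPU 4-core: {needed_gain(0.30, 0.18):+.1%}")   # ~ +10%
# GPU, 5-core: claimed +50%.
print(f"GPU 5-core: {needed_gain(0.50, 0.18):+.1%}")   # ~ +27%
```

Which lines up with Anandtech's ~6%, ~10%, and ~28% figures.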

They also mentioned "double the cache" in the presentation.
 
  • Like
Reactions: ader42

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
I wonder what the over/under is on these new GPU cores having hardware Ray Tracing. Or if they are going to leave it out of the A Series until next year but add it in for the M Series.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,521
19,677
I wonder what the over/under is on these new GPU cores having hardware Ray Tracing. Or if they are going to leave it out of the A Series until next year but add it in for the M Series.

No information. Also, no new developer info on A15 from Apple. Personally, I think they will keep hush until the Mac event (if there is even anything new…).
 

satcomer

Suspended
Feb 19, 2008
9,115
1,977
The Finger Lakes Region
No information. Also, no new developer info on A15 from Apple. Personally, I think they will keep hush until the Mac event (if there is even anything new…).

I tend to think the A15 is a thing for new iOS devices, and Apple will have another Mac release before the holiday season, so probably late November!
 

Falhófnir

macrumors 603
Aug 19, 2017
6,146
7,001
M2 (next gen M1): same CPU design as M1 but with the new A15 Avalanche and Blizzard cores; 10 core GPU (up from 8 on the M1). Think they are somewhat limited with these chips if they're also still going to be used for the iPads going forward (they obviously can't go nuts and need them to run well with passive cooling). It will be interesting to see how much extra performance they have squeezed out of the new core designs when the iPhones and especially iPad mini start being tested.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
Is anyone else concerned that Apple didn't talk about the performance advantage A14 -> A15? They compared the A15 to "Competitor CPU/GPU" without actually specifying what competitor they were talking about. Do we think that the M series is going to be a completely different core design that isn't going to be used in the A series?
 

Falhófnir

macrumors 603
Aug 19, 2017
6,146
7,001
Is anyone else concerned that Apple didn't talk about the performance advantage A14 -> A15? They compared the A15 to "Competitor CPU/GPU" without actually specifying what competitor they were talking about. Do we think that the M series is going to be a completely different core design that isn't going to be used in the A series?
The increase is quite linear from generation to generation, so the percentage increase gets lower each time. Unfortunately, people think a 10% improvement is 'small', so Apple chose to talk about other parts of the chip and compare to what you get with Android (even though the A14 is also very much a competitor, being in the iPhone 12).
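To put numbers on that: if each generation adds a roughly constant absolute amount of performance (linear growth), the percentage gain shrinks every year. The scores below are made up purely to illustrate the effect:

```python
# Constant absolute improvement per generation => shrinking % gains.
# Scores are illustrative only, not real benchmark results.

score = 1000
step = 150                  # same absolute improvement each generation
for gen in range(1, 6):
    new = score + step
    print(f"gen {gen}: {score} -> {new} (+{(new / score - 1):.1%})")
    score = new
```

The first jump reads as +15%, but by the fifth generation the identical absolute step reads as under +10%.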
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,521
19,677
M2 (next gen M1): same CPU design as M1 but with the new A15 Avalanche and Blizzard cores; 10 core GPU (up from 8 on the M1). Think they are somewhat limited with these chips if they're also still going to be used for the iPads going forward (they obviously can't go nuts and need them to run well with passive cooling). It will be interesting to see how much extra performance they have squeezed out of the new core designs when the iPhones and especially iPad mini start being tested.

I agree. It seems fairly obvious at this point that M-series will use the same "doubling" technology as the previous iPad chips. The A15 is also consistent with the rumors that M2 will have 10 GPU cores.

Is anyone else concerned that Apple didn't talk about the performance advantage A14 -> A15? They compared the A15 to "Competitor CPU/GPU" without actually specifying what competitor they were talking about.

I am not. As I mentioned before, I think the A15 focuses on energy efficiency. Its sustained performance is probably going to be very similar to the A14, but with better power consumption. Better battery life is more important for a phone than making the already-fastest CPU go even faster.

Do we think that the M series is going to be a completely different core design that isn't going to be used in the A series?

I still expect the M2 to be a scaled up A15 (just like M1 is a scaled up A14), and a prosumer chip to be a more radical departure in design (although obviously sharing the same foundation).

Anyway, we will know more in a couple of weeks.
 
  • Like
Reactions: ader42 and dustSafa

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
I agree. It seems fairly obvious at this point that M-series will use the same "doubling" technology as the previous iPad chips. The A15 is also consistent with the rumors that M2 will have 10 GPU cores.



I am not. As I mentioned before, I think the A15 focuses on energy efficiency. Its sustained performance is probably going to be very similar to the A14, but with better power consumption. Better battery life is more important for a phone than making the already-fastest CPU go even faster.



I still expect the M2 to be a scaled up A15 (just like M1 is a scaled up A14), and a prosumer chip to be a more radical departure in design (although obviously sharing the same foundation).

Anyway, we will know more in a couple of weeks.
Yeah, I hope you guys are right, but when Apple "has something exceptional" they tend to shout it from the rooftops. Not doing so in this case (comparing their new CPU/GPU to their old one) just feels weird.
 