What's interesting about the geekbench link that @deconstruct60 shared is that there are three 'dual' GPU 'cards' in that list (the third is more a single GPU with a very fast interconnect than a dual card like the first two):
- Vega II/ Vega II duo
- Radeon w6800x/ Radeon w6800x duo
- Apple M1 max/ Apple M1 Ultra.

The GB5 score of the first two varieties is almost identical between the solo and duo cards (i.e., the benchmark doesn't detect, or ignores, the 2nd GPU),
while in Apple's case there is roughly a 45-50% improvement from the 'solo' to the dual version.
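For concreteness, here is the arithmetic behind that 45-50% figure as a tiny Swift sketch; the two scores are ballpark numbers assumed for illustration, not exact entries from the linked Geekbench list:

```swift
import Foundation

// Ballpark GB5 Metal scores (assumed for illustration only).
let m1MaxScore   = 64_000.0   // 32-core GPU, the "solo"
let m1UltraScore = 94_000.0   // 64-core GPU, the "duo"

let speedup     = m1UltraScore / m1MaxScore   // ~1.47x
let improvement = (speedup - 1.0) * 100.0     // ~47%, i.e. the 45-50% range above
let efficiency  = speedup / 2.0 * 100.0       // ~73% of an ideal 2x doubling

print(String(format: "speedup %.2fx (+%.0f%%), %.0f%% of ideal scaling",
             speedup, improvement, efficiency))
```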

Nope. That's comparing apples and oranges.
 
Nope. That's comparing apples and oranges.
Maybe, but then why is there a difference in scaling between two different benchmark types for the same device? Apples or oranges, take your pick.

See updated reply.
 
AMD will launch Zen4 desktop processors Aug/Sep. Naturally RDNA3 GPUs have to follow after that; at the earliest it's likely to be around Oct/Nov. So I expect the launch schedule for the RDNA3/Radeon 7000 series to follow the same pace as the Radeon 6000 series. If Apple is going to release its own W7900x, it'll be a long wait of another 12 months from now. MacOS (x86_64) driver support may appear a couple of months earlier, perhaps in Ventura 13.3 or 13.4.

MacPro8,1 will launch Fall 2022 and start shipping early 2023. This gives Apple plenty of time to avoid direct comparison between M2 Extreme GPUs and a potential W7900x (or even an RX7900XT, due to lack of a MacOS driver) if that matters to Apple (which I think it doesn't, because Apple has a bigger problem).

The bigger problem for Apple's GPUs, clearly demonstrated by the M1 series, is that their design/software does not scale:

[Attachment: table of M1-series GPU configurations and GB5 Metal scores]

We don't know exactly what caused the sub-par performance at higher core counts. Perhaps it's a lack of software optimisation. Perhaps it's 'poorly' designed hardware that causes thrashing in the translation look-aside buffer.

Apple was obviously targeting the M1 Ultra GPU at the W6900x in their planning, and the unreleased M1 Extreme GPU was planned to be equivalent to two W6900x. Neither worked out as planned. I would speculate the M1 Extreme GPU would score the equivalent of ONE W6900x in GB5 Metal. That was perhaps one of the reasons it wasn't released: to avoid any obvious embarrassment.

Apple may conveniently tap into AMD's Radeon roadmap for business intelligence on where the industry is heading; working on a W7900x and W8900x would be an excellent convenience. On the other hand, MacPro8,1 appears not to be a direct replacement for the Intel Mac Pro. It's illogical to discontinue the 7,1 immediately. GPU refreshes for the 7,1 are minimal cost to Apple to hold everything together for a smooth transition rather than abruptly demolishing the existing Mac Pro ecosystem.

BTW, Metal 3 support for the Radeon 5000 and 6000 series seems to be another indication that AMD GPUs won't be abandoned on the Intel Mac Pro anytime soon.
I agree with this and it wasn't lost on me that it didn't scale the way I believe they intended it to. That was a HUGE problem and likely pushed a lot of buyers away from the top of the line Mac Studio.

And I agree, it would be of minimal cost for Apple to simply keep the 7.1 in its own lane, update drivers, and let AMD do its thing and keep that sucker screaming to kingdom come, because quite frankly, a 7.1 Mac Pro with dual w7800x Duos would be the only way they actually directly crush the dual- and triple-RTX 4090 machines that are going to be popping up around the same time.

To be honest, why a 7.1 and an 8.1 can't co-exist and branch into parallel paths that don't exactly replace or compete with one another is actually a fair question.
 
I am preparing my workflow for Windows/Linux just in case. If Apple delivers only a soldered CPU and dGPUs that are weak compared with RDNA 3 and RTX 4000, I personally will leave, and I believe many here will too.

I recently bought an Xbox Series S for my gaming needs/TV streaming in the living room and I am surprised at how powerful it is for $300. The Apple TV costs $199, comes with a measly 64GB of storage, and does not even come close to the Series S, which comes with a 512GB SSD as standard.

Similarly, I expect the new Mac Pro to be expensive beyond belief, and there will be MUCH cheaper alternatives that are faster and better; that's why I am looking at other platforms.
That's fair to say. However, I have an Xbox Series X and an Apple TV and neither crosses the other. Both are used for extremely different things and I would never get rid of either.

Similarly my PC and my Mac Pro 7.1 serve insanely different needs and are not in competition with one another.

But I understand that you basically are just ready for cheaper alternatives that are faster and better for you.

My Studio is plug and play for anyone walking in with a MacBook and that's 99% of my clients so no leaving Mac for me. But enjoy your new setup!
 
One way Apple might dig itself out of the GPU bottleneck is perhaps to go the dedicated-accelerator route, thereby circumventing certain advantages that traditional GPUs have enjoyed.
These accelerators could perhaps find their way into MBPs, Mac Studios, etc.
What would be the downside to that? Pros? Cons?
 
I agree with this and it wasn't lost on me that it didn't scale the way I believe they intended it to. That was a HUGE problem and likely pushed a lot of buyers away from the top of the line Mac Studio.

And the GB5 Metal scores are likely the result of Geekbench developers receiving guidance & best practices from Apple on how to program their GPUs.

And I agree, it would be of minimal cost for Apple to simply keep the 7.1 in its own lane, update drivers, and let AMD do its thing and keep that sucker screaming to kingdom come, because quite frankly, a 7.1 Mac Pro with dual w7800x Duos would be the only way they actually directly crush the dual- and triple-RTX 4090 machines that are going to be popping up around the same time.

Apple will have to port AMD's driver to MacOS, as I believe AMD won't trust small 3rd-party firms to uphold their crown jewels of proprietary code. Apple has done this many times, so the porting work will be standard operations.

RX7900XT vs RTX4090? Sure will be interesting to watch. Nvidia goes back to TSMC and will be using '4nm'. RDNA3 will be on '5nm'.

To be honest, why a 7.1 and an 8.1 can't co-exist and branch into parallel paths that don't exactly replace or compete with one another is actually a fair question.

If the 7,1 silently lives on for 2-3 more years, I guess this is only a tactical setback for Apple so that their GPUs can catch up in the next Mac Pro after the 8,1. Apple's ambition is clearly to get rid of Intel Macs asap. BTW, for professional use, I would be very surprised if in 10 years' time you can still buy PCIe dGPUs in the PC world.
 
However, I have an Xbox Series X and an Apple TV and neither crosses the other. Both are used for extremely different things and I would never get rid of either.
What I am trying to say is that Apple's products are overpriced and, for the majority of consumers, the Xbox will be a more appealing product. The Xbox offers everything the Apple TV does and more, like actual gaming, streaming apps, NAS playback and emulation.

Apple is offering "discounts" in the form of gift cards because the Apple TVs are not selling well, and why would they? The product is overpriced.
Similarly my PC and my Mac Pro 7.1 serve insanely different needs and are not in competition with one another.

But I understand that you basically are just ready for cheaper alternatives that are faster and better for you.

My Studio is plug and play for anyone walking in with a MacBook and that's 99% of my clients so no leaving Mac for me. But enjoy your new setup!
Yeah, for some, leaving is not an option and they are stuck. Enjoy the 7,1; it will likely be the last properly upgradeable Mac that Apple makes.

What would be the downside to that? Pros? Cons?
Those who need real GPU power and whose workflows don't depend on Apple's accelerators will not benefit. Apple needs proper GPUs as well as really good encoders and decoders.
a 7.1 Mac Pro with dual w7800x Duo's would be the only way they actually directly crush the RTX dual and triple RTX 4090 machines that are going to be popping up around the same time.
Are you sure? The only reason AMD can match Nvidia this gen is because RTX 3000 is on a much older Samsung node. Next gen will have RTX 4000 on TSMC 4nm (which is still in the 5nm family, but 3rd gen) and AMD's 7000 series on TSMC 5nm. It will be very close, but Nvidia will get better RT.
 
About $74,644.20 tax included. ;)

Oh wait... I forgot AppleCare. 🤣

[Attachment: MP 7.1.png (Mac Pro 7,1 configuration screenshot)]
 
Maybe Apple comes up with a blade system that can be used across multiple generations of ASi Mac Pro SoC blades; you go for a first-gen chassis with two M2 Extreme blades, then you add an M3 Extreme blade to see what's up with hardware ray tracing, and then add two M4 Extreme blades for the second-gen hardware ray tracing...

Then that new second gen chassis comes out, with a backplane three times the speed of the first gen chassis...
 
Kinda like the Radius Skylab (scroll down a bit), only different...? ;^p

You have a chassis with the latest & greatest Mn blade, that's your working compute; same chassis also has expansion slots (a/v i/o, network, storage, etc.; everything but discrete GPUs) and storage...

Same desktop chassis that is your daily driver is also your console to the blade farm, where all your previous latest & greatest Mn blades go to distributed compute themselves to death...?
 
Kinda like the Radius Skylab (scroll down a bit), only different...? ;^p

You have a chassis with the latest & greatest Mn blade, that's your working compute; same chassis also has expansion slots (a/v i/o, network, storage, etc.; everything but discrete GPUs) and storage...

Same desktop chassis that is your daily driver is also your console to the blade farm, where all your previous latest & greatest Mn blades go to distributed compute themselves to death...?
Yeah, or Intel's current NUCs, which put a whole (socketed processor & RAM) PC on a card that plugs into a common slot bus, which can also accept traditional GPUs.

None of these system topologies are new, or particularly novel. But if one thinks Apple aren't going to let us have user-upgradable GPUs, presumably so they can force us to buy a whole new computer every obsolescence cycle, why would one think they'll build a chassis topology that lets users keep old hardware (which Apple are motivated to want returned so they can scavenge the materials for their greenwashing campaigns) in service, or upgrade to new hardware without the full system purchase?
 
That is a very telling table. I put up a quick graph and extrapolated. Everything (GPU cores, EUs, ALUs) doubles at each interval, but the Metal score doesn't.
[Attachment: plot of GB5 Metal score vs M1-series GPU, with extrapolated points]
M1 Extreme (Dual Ultra) would have given ~122000.
M1 Unbelievable (Dual Extreme) would have given ~144000.

The M1 pretty likely shouldn't be plotted against the later M1 generation. The LPDDR RAM is different. The internal chip bus network is substantially different. If you wanted a low-end datapoint, the M2 is higher. That would partially adjust that green line pointing off into never-never land.

But to be consistent (same shared infrastructure; apples to apples), it should just be a graph of the measured Pro/Max/Ultra.

The 122000 and 144000 are a circular-logic introduction of a parabolic curve into the graph: the curve is there because you chose to put a parabolic curve into the graph. Sure, they are projections, but based on what? (Subtracting around 0.2 from the multiple each time?) So it looks like a nice curve?
And the 144000 is just plain physically ridiculous. There is no practical way to make that SoC, so why project it? It just introduces "noise" into the chart. If you're trying to hand-wave about a "discrete" SoC/GPU, you are once again looping in a completely different interconnect technology, and projecting it off different tech in the Pro/Max/Ultra isn't going to be accurate.

It's doubtful you would get yet another 0.20 "fall-off" going to a quad setup. Yes, there are more tiles, but there should also be 3x as many UltraFusion connectors coming off of each tile to compensate. (Without a known interconnect topology and bisection bandwidth it is mostly "shooting in the dark". Some leaks have shown a 3x increase, so if those hold up there is compensation being added that isn't in your chart.) A 1.35-1.40 multiple puts the quad up in the 127,000-132,000 range (ballpark with the 6800 XT; between the 6800 and 6900). Toss in some incrementally bigger cache (not hard if they go to TSMC N3) and a mid-double-digit increase in GPU cores (going to 10-core GPU clusters from 8), and that would have a decent chance of getting up near the 6900.
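To make that 1.35-1.40 multiple concrete, here is the arithmetic as a minimal Swift sketch; the Ultra baseline of roughly 94,000 is simply what the quoted 127,000-132,000 range implies, not a measured figure:

```swift
// Rough quad-tile ("Extreme") projection using the 1.35-1.40 multiple above.
// The Ultra baseline is the approximate value implied by the quoted range,
// not an exact benchmark result.
let ultraScore = 94_000.0
let quadLow  = ultraScore * 1.35   // ~126,900
let quadHigh = ultraScore * 1.40   // ~131,600

print("projected quad GB5 Metal: \(Int(quadLow))...\(Int(quadHigh))")
```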

That is enough to get a very wide range of work done quickly with code that is at least halfway decently optimized for Apple's GPU tech, but it is no top-end, ultra-expensive AMD/Nvidia 'killer' in the 7900/4090 generation. Even more so as the storage-to-GPU-VRAM APIs mature on the non-macOS platforms.
However, there will be some cases where they are competitive (not so much from "more horsepower" but from working smarter-not-harder algorithms on a subset of workloads, e.g., substantially less copying overhead, better synchronization with the CPU cores, etc.).
 
When the 8,1 launches, I guess its details will be as interesting as the fate of the 7,1: immediately discontinued or not, and for how long it'll quietly stay on sale...



M1 GPUs have the same GPU clock (I believe, haven't checked). I also think memory bandwidth is not an issue in the GB5 Metal benchmark. Have a look at @mikas' plot above, based on my data retrieved from the GB browser. The scaling tapers off logarithmically. While I don't expect exact linear scaling, the curve does indicate something has gone wrong in the programming guidelines or hardware design...

Take a look at Radeon 6000 GPUs natively supported in MacOS (x86_64):
[Attachment: table of Radeon 6000 GPUs natively supported in MacOS]

and the corresponding plot, GeekBench 5 Metal score vs number of GPU shaders:
[Attachment: plot of GeekBench 5 Metal score vs number of GPU shaders]

AMD RDNA2 GPUs scale really well, don't they? Go figure what's gone wrong with the M1 GPUs.

One quirk of that graph is that although there are 5 entries, it is really just two different dies. How is there going to be huge variability in a line between just two different-sized dies? There isn't.


Also, the memory speeds there are not really monotonic. What is 'wrong' with the M1 GPUs? Can the AMD GPUs fit in the same power envelope as the M1's? They are only 'wrong' if the objective is to create the largest, fire-breathing, highest-power-consuming GPU practical. In the laptop space, where the overwhelming majority of M1-generation GPUs are deployed, Apple is highly competitive with AMD solutions (and Intel/AMD iGPUs). Much of that 'flat' scaling comes from throwing Perf/Watt increasingly out the window (and the plot picks that up at the end, where it flattens further).

Given the driver maturity level of Apple's newest GPUs and the few years of expertise developers have writing application code for a new GPU, the fall-off in Apple's curve will likely flatten out slightly over time across a variety of workloads. (Intel's driver horror show, from more radical change more quickly, is a relatively worse possible outcome that Apple could have landed in, but didn't.) Make it work, then make it faster. (A good reason why a wise plan would have skipped the M1 for a serious Mac Pro.)
 
The M1 pretty likely shouldn't be plotted against the later M1 generation. The LPDDR RAM is different. The internal chip bus network is substantially different. If you wanted a low-end datapoint, the M2 is higher. That would partially adjust that green line pointing off into never-never land.

But to be consistent (same shared infrastructure; apples to apples), it should just be a graph of the measured Pro/Max/Ultra.

The 122000 and 144000 are a circular-logic introduction of a parabolic curve into the graph: the curve is there because you chose to put a parabolic curve into the graph. Sure, they are projections, but based on what? (Subtracting around 0.2 from the multiple each time?) So it looks like a nice curve?
And the 144000 is just plain physically ridiculous. There is no practical way to make that SoC, so why project it? It just introduces "noise" into the chart. If you're trying to hand-wave about a "discrete" SoC/GPU, you are once again looping in a completely different interconnect technology, and projecting it off different tech in the Pro/Max/Ultra isn't going to be accurate.

It's doubtful you would get yet another 0.20 "fall-off" going to a quad setup. Yes, there are more tiles, but there should also be 3x as many UltraFusion connectors coming off of each tile to compensate. (Without a known interconnect topology and bisection bandwidth it is mostly "shooting in the dark". Some leaks have shown a 3x increase, so if those hold up there is compensation being added that isn't in your chart.) A 1.35-1.40 multiple puts the quad up in the 127,000-132,000 range (ballpark with the 6800 XT; between the 6800 and 6900). Toss in some incrementally bigger cache (not hard if they go to TSMC N3) and a mid-double-digit increase in GPU cores (going to 10-core GPU clusters from 8), and that would have a decent chance of getting up near the 6900.

That is enough to get a very wide range of work done quickly with code that is at least halfway decently optimized for Apple's GPU tech, but it is no top-end, ultra-expensive AMD/Nvidia 'killer' in the 7900/4090 generation. Even more so as the storage-to-GPU-VRAM APIs mature on the non-macOS platforms.
However, there will be some cases where they are competitive (not so much from "more horsepower" but from working smarter-not-harder algorithms on a subset of workloads, e.g., substantially less copying overhead, better synchronization with the CPU cores, etc.).
I agree with you on almost every point you make.

I made a prognostication based on something that exists, towards something that doesn't. You can make your own. We'll see what's coming up.

Overall I like your posts. Always. You must be in the business, or a fan of all of this, or studying it. That's not to say I fully agree with your posts and information all the time, but I think you are better than me at delivering information and diving into reasons. You can almost always give some, or more, factual points to carry on with your interpretation of things at Apple, and AMD, and the others too.

And to the topic: it was just a quick curve for the M1, extrapolated. And now the M1 is out, the M2 is in. There is no M1 Mac Pro, as Ternus said. So I guess we are never gonna see what would have happened with M1 "double-ups", or "four-ups", or whatever there was up the sleeve.

The wait continues though..
 
What's interesting about the geekbench link that @deconstruct60 shared is that there are three 'dual' GPU 'cards' in that list (the third is more a single GPU with a much faster interconnect than a dual card like the first two):
- Vega II/ Vega II duo
- Radeon w6800x/ Radeon w6800x duo
- Apple M1 max/ Apple M1 Ultra.

The GB5 score of the first two varieties is almost

That's largely because each Duo card presents to macOS (and to user applications) as two separate GPUs. Each is a separate die with its own collection of VRAM chips. A "Duo" card is almost completely the same thing as two "solo" cards hooked together with their Infinity Fabric connectors. The main difference is that with the two-whole-MPX-modules approach there is better PCIe bandwidth to each of the GPUs in the configuration. That's it. The Duo card is more space efficient inside the system. So it is a trade-off: more space (and slot availability) in exchange for some bandwidth throughput.
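As a small illustration of that "presents as two GPUs" point, here is a minimal Swift/Metal sketch of what an application actually sees when it enumerates devices; on a 7,1 a Duo card contributes two entries, while an M1 Ultra shows up as exactly one (the printed details are whatever the OS reports, nothing is hard-coded):

```swift
import Metal

// List every GPU macOS exposes to applications. A W6800X Duo appears as two
// separate MTLDevice entries; an M1 Ultra appears as a single device.
let devices = MTLCopyAllDevices()
for device in devices {
    let workingSetGiB = device.recommendedMaxWorkingSetSize >> 30
    print("\(device.name): ~\(workingSetGiB) GiB working set, " +
          "unified memory: \(device.hasUnifiedMemory)")
}
```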



identical between the solo and dual cards (i.e., the benchmark doesn't detect, or ignores, the 2nd GPU),
while in Apple's case it offers some 45-50% improvement between the 'solo' and the dual version.

In Apple's case it is not a "2nd GPU". There is only one GPU present (macOS and applications only see one). So it is quite a misrepresentation to label it "duo" when applications, and pragmatically the OS, do not see two. There are two dies 'glued' together before they are even put onto the rest of the CPU package (and the package is 'glued' to the logic board). At that completed state of manufacturing, there is no "2" there. It is just one piece of silicon.

The memory bandwidth and latencies between the two internal 'halves' of the larger sets of GPU cores are vastly different from those across the separation on the "Duo" cards. The bandwidth is high enough that Apple and macOS present it as just one GPU.

That is a challenge. AMD's MI250 packages present the two dies in the package as two separate GPUs. They have a relatively very fast internal interconnect, but they don't 'merge' into one GPU. So applications have to chop up the work across two GPUs and 'merge' the results as necessary. (So you get lots of internal space back, as with the "Duo" module versus connecting two bigger modules, with the same trade-off of a shared PCIe link to the two GPUs, though on PCIe v4 vs v3, so a bit less of a handicap.)

The quad tile/chiplet situation would be a substantially larger challenge to keep bandwidth high enough and latencies low enough to present as a single GPU.


There are two classes of applications that each approach is trying to address. Some apps are 'clueless' about how to spread and collect work over multiple GPUs, and some apps can do it just fine for some substantive subset of the functions they cover. What Apple is doing caters solely to the first set; what the Duo cards do caters to the second. They are really two different groups. The Duo cards are not cost effective for the first group, and it is unlikely the Apple chiplet approach will be cost effective for the second group of applications (especially as you scale past 2; where two Duos were effective, Apple's path here just won't work economically or in time-critical performance terms). Apple's approach is more effective where a single, 'bigger' GPU is good enough (shift back from two GPUs + two sets of VRAM to just one GPU and shared VRAM and it can be more cost effective, but you lose that internal-to-the-box scale-out ability).
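A hedged sketch of what that second group of apps has to do on the Duo cards: discover every visible GPU, chop a batch of work into slices per device, and merge the results afterwards. The even frame split below is a made-up illustration, not how any particular renderer actually balances load:

```swift
import Metal

// Divide a batch of frames evenly across every GPU macOS exposes. Apps in the
// "second group" must do this bookkeeping (and the final merge) themselves;
// "first group" apps just grab a single device and stop there.
let devices = MTLCopyAllDevices()
precondition(!devices.isEmpty, "no Metal-capable GPU found")

let frameCount = 240
let chunk = (frameCount + devices.count - 1) / devices.count   // ceiling division

for (index, device) in devices.enumerated() {
    let start = index * chunk
    let end = min(start + chunk, frameCount)
    guard start < end else { break }
    print("\(device.name): frames \(start)..<\(end)")
    // ...create a command queue on `device`, encode this slice, submit it, and
    // later merge the rendered slices back together on the host.
}
```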


Are there other GPU benchmarks that can shed some light?
For example, the Octane benchmark suggests near-linear scaling from the w6800x to the w6800x duo.

Maybe Blender/Octane users can do some tests between the M1 Max and Ultra?

See this Redshift test: the Ultra is about 78% faster than the Max (370 seconds vs 668 seconds).

Perhaps not quite a logarithmic slowdown in all benchmarks…

The apps themselves are potentially as much the culprit here as the hardware. The catch-22 with 'low-level-only' APIs like Metal (and Vulkan and DirectX 12) is that a very substantial amount of the responsibility for optimizing for hardware specifics is tossed into the application (or, at best, into the game/render engine used across multiple apps). For something with a thicker API layer like OpenGL, more of that onus is put on the graphics driver writers. If the driver writers are 'off' and/or 'slow' to adapt, then all the apps suffer. But when they do a very good job, that good work rolls out to all the applications quickly. Tossing a large chunk of that responsibility out to the individual apps means you will tend to get a much more herky-jerky, uneven rollout of adaptations to new hardware.
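As one small, hedged example of the kind of hardware-specific decision those low-level APIs push into the app: sizing compute threadgroups from what the pipeline itself reports rather than hard-coding numbers tuned for last year's GPU. ("scaleKernel" is a placeholder function name; this is a sketch, not a recipe from Apple's documentation.)

```swift
import Metal

// With Metal, the application (not the driver) decides how work is carved into
// threadgroups, so it should query the hardware instead of hard-coding sizes.
// "scaleKernel" is assumed to exist in the app's default Metal library.
func threadgroupSize(for device: MTLDevice) throws -> MTLSize {
    let library  = device.makeDefaultLibrary()!
    let function = library.makeFunction(name: "scaleKernel")!
    let pipeline = try device.makeComputePipelineState(function: function)

    // Hardware-specific numbers the app is expected to respect.
    let width  = pipeline.threadExecutionWidth                    // SIMD width on this GPU
    let height = pipeline.maxTotalThreadsPerThreadgroup / width   // rows that still fit in one group

    let size = MTLSize(width: width, height: height, depth: 1)
    print("\(device.name): threadgroup \(size.width) x \(size.height)")
    // ...pass `size` to dispatchThreads(_:threadsPerThreadgroup:) on a compute encoder.
    return size
}
```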

That is why it is a bit ridiculous when the Ultra comes out and, a week or so after it arrives, some benchmark-porn purveyor declares the hardware 'broken'. Really? Using software that was compiled and built two to ten months before the Ultra even hit the market? Doing real science means knowing what your instruments can and cannot measure. If the benchmarking app uses Apple's foundational libraries, those may have some 'release day 1' optimizations in them, but the more platform-neutral the benchmark is, the more likely it is to have some hardware/software impedance mismatches on 'day 1' (the required changes have to be pushed into a much broader set of code).

Early on, we get more a measure of how well the new hardware can run 'older' software than of the complete potential of the new hardware with new software. So much of the longer-term success of the M-series SoCs is going to depend upon how much time, effort, and resources app developers pour into matching their algorithms/implementations to Apple's GPU hardware strengths (e.g., locally caching in tile memory when effective) and avoiding its weak points (assigning too much work to too few cores).
 
The M1 pretty likely shouldn't be plotted against the later M1 generation. The LPDDR RAM is different. The internal chip bus network is substantially different. If you wanted a low-end datapoint, the M2 is higher. That would partially adjust that green line pointing off into never-never land.

But to be consistent (same shared infrastructure; apples to apples), it should just be a graph of the measured Pro/Max/Ultra.

Forget about Mikas' plot. You distracted yourself and the audience of this thread from the central point I raised: M1 GPUs do not scale up well, as demonstrated by the GB5 Metal bench, particularly in GPUs with higher numbers of cores/EUs/ALUs. I saw you were calculating slopes/ratios to 'visualise' and compare, and wrongly claiming Radeon GPUs exhibit a similar scaling issue. Hence I offered the suggestion of visualising it pictorially. Not unexpectedly, you made a bigger fuss about his plot, by which you conveniently avoided my central point.

M1/M1 Pro/M1 Max/M1 Ultra GPUs can of course be used to study how Apple GPUs perform when scaling up. After all, they are the same generation/microarchitecture and share far more similarities than differences. You argued that (i) the M1 (because it uses LPDDR4X) is not suitable for comparing against the M1 Pro/Max/Ultra (which use LPDDR5), (ii) the latter group has a different so-called 'internal bus' from the former, and (iii) the M1 is more suitably compared against the M2. All three arguments are ludicrous at multiple levels.

Argument (iii) is beyond words. First, it's a different study: different generation/microarchitecture/process node. Second, you shot yourself down (per the argument you made in (i)) because the M1 uses LPDDR4 but the M2 uses LPDDR5.

About argument (ii): I'm sure you don't know what 'internal bus' and topology Apple uses in the M1 GPUs, let alone whether the base M1 GPU uses a different 'internal bus' from the other M1 GPUs. It's funny you claimed a different 'internal bus' makes them unsuitable for comparison. I don't know if the so-called 'internal bus' is different; assume it is. Then there is no reason to believe Apple wouldn't have picked the optimal 'internal bus' for each of the M1 GPUs for optimal performance for end users. If performance can't be compared because of an 'internal bus' difference, then by your funny logic people should not compare Intel processors against AMD processors because they have different 'internal buses', and people should not compare GeForce GPUs against Radeon GPUs because they certainly have different 'internal buses'.

Re: argument (i). The M1 GPU uses LPDDR4X, which is slower. If normal reasoning follows, then GPU performance should be lower than with LPDDR5. However, if you calculate GB5 Metal score per GPU core, the M1 GPU achieves the highest performance. The facts don't follow your argument.
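A quick Swift sketch of that per-core comparison; the scores below are approximate figures of the kind seen in the Geekbench browser, used only to illustrate the calculation, not exact entries from the attached table:

```swift
// GB5 Metal score per GPU core, using approximate figures for illustration.
let chips: [(name: String, cores: Int, score: Double)] = [
    ("M1 (LPDDR4X)",  8, 20_000),
    ("M1 Pro",       16, 38_000),
    ("M1 Max",       32, 64_000),
    ("M1 Ultra",     64, 94_000),
]

for chip in chips {
    let perCore = chip.score / Double(chip.cores)
    print("\(chip.name): \(Int(perCore)) points per GPU core")
}
// With these ballpark numbers the 8-core M1 comes out highest per core (~2500)
// despite the slower memory, which is the point being made above.
```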

One quirk of that graph is that although there are 5 entries, it is really just two different dies. How is there going to be huge variability in a line between just two different-sized dies? There isn't. Also, the memory speeds there are not really monotonic.

Let me borrow one of your arguments. Okay, two different-sized dies. But you know what? AMD uses a different 'internal bus', yet AMD manages to scale up well across two different-sized dies of the same generation/microarchitecture (and across different memory bandwidths too), while Apple can't.

What is 'wrong' with the M1 GPUs? Can the AMD GPUs fit in the same power envelope as the M1's? They are only 'wrong' if the objective is to create the largest, fire-breathing, highest-power-consuming GPU practical. In the laptop space, where the overwhelming majority of M1-generation GPUs are deployed, Apple is highly competitive with AMD solutions (and Intel/AMD iGPUs). Much of that 'flat' scaling comes from throwing Perf/Watt increasingly out the window (and the plot picks that up at the end, where it flattens further).
I didn't argue for or against performance per watt. It's irrelevant to my central point: M1 GPUs do not scale up well, as demonstrated by the GB5 Metal bench, particularly in GPUs with higher numbers of cores/EUs/ALUs. You distracted yourself and the audience again.

Given the driver maturity level of Apple's newest GPUs and the few years of expertise developers have writing application code for a new GPU, the fall-off in Apple's curve will likely flatten out slightly over time across a variety of workloads. (Intel's driver horror show, from more radical change more quickly, is a relatively worse possible outcome that Apple could have landed in, but didn't.) Make it work, then make it faster. (A good reason why a wise plan would have skipped the M1 for a serious Mac Pro.)
I said in previous posts that we don't know what causes the M1 GPUs' failure to scale up in performance; it's perhaps due to a lack of software optimisation and/or an issue in hardware design. Software optimisation surely covers both drivers and applications (using the Metal APIs). Nevertheless, you touched on one thing I can make sense of and agree with.
 
And the GB5 Metal scores are likely the result of Geekbench developers receiving guidance & best practices from Apple on how to program their GPUs.



Apple will have to port AMD's driver to MacOS, as I believe AMD won't trust small 3rd-party firms to uphold their crown jewels of proprietary code. Apple has done this many times, so the porting work will be standard operations.

RX7900XT vs RTX4090? Sure will be interesting to watch. Nvidia goes back to TSMC and will be using '4nm'. RDNA3 will be on '5nm'.



If the 7,1 silently lives on for 2-3 more years, I guess this is only a tactical setback for Apple so that their GPUs can catch up in the next Mac Pro after the 8,1. Apple's ambition is clearly to get rid of Intel Macs asap. BTW, for professional use, I would be very surprised if in 10 years' time you can still buy PCIe dGPUs in the PC world.
Hmmm, that's a hard sell for me to buy... reason being, I see the Mac Pro 7.1 living on and supported by Apple for at least the next 5 years. I would say 2027 would be a good time for them to stop support. BUT EVEN THEN, I believe the Mac Pro 7.1 will go on for another 3 years beyond that as a workhorse for many, because whatever the last OS update is for the Mac Pro, it will still run fantastically for years to come, and as far as using this thing for work... for the first time, the machine is at a place where it can do pretty much anything I would need it to from here on out.

I can, and I'm not kidding, render a full-on episode of a fully 3D-animated Saturday morning cartoon over a period of several days on this machine, no sweat. I could honestly animate and complete full TV-quality renders of an entire 22-minute episode, including the test rendering and rough renders, within a week's time, on this system alone.

It runs high-resolution and very complex scenes in damn near real time in Octane, and even Redshift is catching up to that speed at this point, as it currently is on 28 cores with 2 w6800x duos... so it can be a workhorse for everything from that to commercials to high-end music videos and high-end VFX for the next decade. Things will improve over that time, but we are reaching a point CURRENTLY where every breakthrough from here will be MOSTLY INCREMENTAL and not REVOLUTIONARY.

HOWEVER... and this is where I could be dead wrong: if the M4X Ultra is double the speed of both the current 7.1 MAXED OUT CPU AND GPU, AND CAN HANDLE that minimum of 1.5 TB of RAM... whenever they reach that point... that's truly when folks like me will be like "okay, it's time to get the 10.1 or whichever system ends up being the one to pull that off."

Also, as a note of support...people keep forgetting one strange thing that Apple said in that very same keynote where they mentioned the 2 year transition. THEY ACTUALLY SAID THEY HAVE SEVERAL MORE INTEL MACS IN THE PIPELINE.

I have no idea why people keep ignoring this very important fact, but I think that's pretty big. Thoughts?
 
What I am trying to say is that Apple's products are overpriced and, for the majority of consumers, the Xbox will be a more appealing product. The Xbox offers everything the Apple TV does and more, like actual gaming, streaming apps, NAS playback and emulation.

Apple is offering "discounts" in the form of gift cards because the Apple TVs are not selling well, and why would they? The product is overpriced.

Yeah, for some, leaving is not an option and they are stuck. Enjoy the 7,1; it will likely be the last properly upgradeable Mac that Apple makes.


Those who need real GPU power and whose workflows don't depend on Apple's accelerators will not benefit. Apple needs proper GPUs as well as really good encoders and decoders.

Are you sure? The only reason AMD can match Nvidia this gen is because RTX 3000 is on a much older Samsung node. Next gen will have RTX 4000 on TSMC 4nm (which is still in the 5nm family, but 3rd gen) and AMD's 7000 series on TSMC 5nm. It will be very close, but Nvidia will get better RT.
I don't disagree. Nvidia will have the slight edge, but it won't be a drastic difference to be honest. Tests are already coming in and showing only about a 33% improvement. It's not scaling like people assumed it would.

Definitely not STUCK with the 7.1; I genuinely adore that machine. And I do agree with you, sadly, that this machine may very well be the last properly upgradable Mac... unless Apple has some serious card wizardry up their sleeves.

The Apple TV is used far more than my Xbox. I play Fortnite and Forza Horizon 5 in downtime, but the speed, quality, ease of use and ecosystem just make the Apple TV the easy and immediate choice for viewing media. They serve very different purposes.

In any case, this will be an interesting year ahead of us.
 
Also, as a note of support...people keep forgetting one strange thing that Apple said in that very same keynote where they mentioned the 2 year transition. THEY ACTUALLY SAID THEY HAVE SEVERAL MORE INTEL MACS IN THE PIPELINE.

I have no idea why people keep ignoring this very important fact, but I think that's pretty big. Thoughts?
I think most people by now have forgotten Tim Cook said that. Cook's speech was carefully scripted, so Apple indeed had plans to release one or two Intel refreshes. That didn't happen though. @Amethyst or his friend tested an Intel Mac Pro last year (?) that was believed to be the Intel Mac Pro refresh. I think that, just like the M1 Extreme Mac Pro they tested, Apple dropped both before launch.

I believe Apple's plan was for the M1 Max GPU to near W6900x performance. They reasoned the M1 Extreme, i.e. 4x M1 Max, would surpass an Intel Mac Pro with 2x W6800x Duo. It seems things didn't quite pan out as planned, due to software optimisation issues and/or chip design issues.

Given Apple's access to a sheer amount of resources, it's only a matter of time before they sort out the issues on the critical path. So I would expect that by the time of an M3/M4 Extreme Mac Pro, a stagnant Intel Mac Pro with 2x W6800x Duo will surely be surpassed in performance at a fraction of the energy consumption.
 
people keep forgetting one strange thing that Apple said in that very same keynote where they mentioned the 2 year transition. THEY ACTUALLY SAID THEY HAVE SEVERAL MORE INTEL MACS IN THE PIPELINE.

I have no idea why people keep ignoring this very important fact, but I think that's pretty big. Thoughts?
What category of users would want an Intel MacBook (Air to 16" range), iMac (standard/Pro) or Mac mini? In other words, why aren't AS devices in the above classes sufficient for their needs? Are all new Macs sold by Apple purely AS-based?

How many of these users bought the latest (but last) Intel Macs with the expectation that, for a few years until they need to upgrade, their current 'must needs' workflow gains some extra transition window beyond what Apple wanted (2 years)?

If such a category of users exists in any significant numbers, how will Apple retain them in their ecosystem? Or are the new users numerous enough that Apple is happy to let them go?

So what category of devices would need a new Intel system in a post-AS Mac world (because current and near-future AS systems are not sufficient for them)?

I can only think of the Mac Pro, but this forum subcategory is something of an echo chamber, so maybe users of other device classes can chip in.

Also, why did Apple use the plural, 'several more intel macs'? Two or more iterations for one device class, or a single new (but last) iteration of different classes?

———

I have mentioned it else where but reiterating it once more :

Intel Mac Pro users belong to the following categories, or combinations thereof. Some may have to let go of the Mac Pro, while some may bite the bullet given sufficient benefits (or a lesser loss):

  • Users who like to upgrade/expand
    • Mainly two classes:
      • Complete overhaul of the processor + expansions
        • Buy a lower class and upgrade to a higher class of the same gen at much cheaper rates than what Apple would charge if the similar higher class were bought from Apple.
      • Standard internal upgrades/extensions (but otherwise keep the main system untouched)
        • RAM, GPU, PCIe devices (storage, capture cards, audio cards, etc.)
  • Users who have to use Windows:
    • Dual booting was a great value proposition for the Mac (esp. Mac Pros)
    • As of today, dual booting on AS Macs is not possible.
      • Which itself brings the caveat that Windows on ARM isn't a power player yet, and the benefit of the native x86 ecosystem will be lost even if it were a reality (long history of supported apps, hardware, etc.)
    • It's safe to say such a use category may not find solid traction for a long while, mainly due to Windows on ARM itself being a poor ecosystem at present.
      • 4-5 years in the future? Maybe it will be significant enough that such users can be brought back (or they move back to pure Mac hardware once again)
  • Users who worry - legitimately - that one hardware error in the system chain may necessitate having the entire unit replaced or repaired at significant time/cost, plus a complete shutdown of work in the interim (provided alternate Mac systems don't exist; but pros usually have backup systems at hand, so maybe not a show stopper)
    • This one bugs me the most. GPU/expansion card went bust? Insert a new one and continue working. Closed system like an AS Mac Pro? Pray your Apple protection plan is still valid and you invested in a backup system.
For the above use cases, a new iteration of the 7th-gen Intel Mac Pro may be very desirable. It addresses all use cases without downsides.
 
Wouldn't want to spoil the hope..

What he said exactly at the WWDC Special Event Keynote (22.6.2020) was:
"in fact, we have some new Intel-based Macs in the pipeline"

I guess he tried to make it sound at least a little bit exciting, but failed miserably, I think.
[Attachment: 1659855394221.png (keynote screenshot)]

And then a row of Intel-iMacs happened.
[Attachment: 1659855531848.png (the row of Intel iMacs)]

In Apple parlance, those might very well be these "some" Intel-based Macs from their pipeline. They are "many", and that could be interpreted as "some". He didn't talk about categories or product lines; he just said "Macs". We all know it was just an update, but maybe not to Apple.

So, as someone already mentioned above, it was a carefully crafted and orchestrated presentation with very careful wording.
 