
theorist9

macrumors 68040
May 28, 2015
3,880
3,060
They talk about gaming, of course, but use it as a general statement everywhere. I corrected the same comment in another discussion and showed that the M2 Ultra is even faster than the RTX 4090 in the After Effects PugetBench, but of course that's not interesting in such discussions.

[Screenshot: PugetBench for After Effects result, 2023-10-01]
That result actually isn't showing the M2 Ultra is faster than the 4090 (if it were, it would be surprising, since the 4090 is generally much faster than the M2 Ultra's GPU).

Instead, that's the M2 Ultra's CPU beating the i9-13900K, rather than its GPU beating the 4090:

"After Effects relies heavily on the memory and the central processing unit of your computer rather than the graphics card or GPU within it."

 
  • Like
Reactions: MRMSFC

MayaUser

macrumors 68040
Nov 22, 2021
3,177
7,196
I'm still convinced that if you need CUDA, the 4090 is the far better GPU.
But the M3 Ultra, if it's done properly, can shorten that gap even there.
Maybe I'm mistaken; I'll wait to judge it within my own work when it comes out.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
Update: I found some results for the M2 Ultra and indeed the GPU score is about 18% higher than the 4090's, despite the CPU score being lower. Even the RAW Preview and Tracking scores are higher. So despite the heavier CPU load, the PC system with the 4090 gets a lower score. In other words, in this test the M2 Ultra's GPU is not as slow as a 3060 Ti; it's faster than a 4090. Interesting, given that the 4090 is even using CUDA.
It would suggest that system RAM and PCIe are the bottleneck at the Intel end. Especially telling when PC4200 vs PC6400 SDRAM are used.
 

bcortens

macrumors 65816
Aug 16, 2007
1,324
1,796
Canada
It would suggest that system RAM and PCIe are the bottleneck at the Intel end. Especially telling when PC4200 vs PC6400 SDRAM are used.

Which helps demonstrate something I've said here and elsewhere: the unified memory SoC approach is a good one and will only get better with time. I believe it is a mistake to wish for a discrete GPU in Apple Silicon Macs, and I believe the reason we don't have more powerful GPUs is not the SoC approach but that Apple doesn't see the value in building a super-large chip (4 or more chips fused) that goes only into their least important Mac.
 

MRMSFC

macrumors 6502
Jul 6, 2023
371
381
Which helps demonstrate something I've said here and elsewhere: the unified memory SoC approach is a good one and will only get better with time. I believe it is a mistake to wish for a discrete GPU in Apple Silicon Macs, and I believe the reason we don't have more powerful GPUs is not the SoC approach but that Apple doesn't see the value in building a super-large chip (4 or more chips fused) that goes only into their least important Mac.
If the rumors of compute modules are accurate, then we may see a novel implementation of something like a "GPU".

I like the SoC approach as well, but the drawback of having to use a super-large chip is something Apple will have to get around to be competitive, I feel.
 

bcortens

macrumors 65816
Aug 16, 2007
1,324
1,796
Canada
If the rumors of compute modules are accurate, then we may see a novel implementation of something like a "GPU".

I like the SoC approach as well, but the drawback of having to use a super-large chip is something Apple will have to get around to be competitive, I feel.
Whether or not they have to overcome that drawback depends, I think, on how important it is to Apple to compete in the space the Mac Pro occupies.

I said at the time of the M2 Ultra Mac Pro launch that I wished they had created a PCI board with (essentially) a whole Mac on it as a compute module. Kind of treat it like xGrid and let you use distributed computing, but all within a single Mac.
 

name99

macrumors 68020
Jun 21, 2004
2,410
2,321
I'm still convinced that if you need CUDA, the 4090 is the far better GPU.
But the M3 Ultra, if it's done properly, can shorten that gap even there.
Maybe I'm mistaken; I'll wait to judge it within my own work when it comes out.
Well duh. That's like saying if you need Metal, the M2 Ultra is a far better GPU.

I assume what you mean to say is something like "if you want to program GPGPU code" or "if you want to use 3rd party professional visualization apps" – but that's precisely the point in dispute!
It's very definitely the case that nV provides functionality that Apple does not (yet...), from FP64 to much larger scratchpad storage to much more powerful synchronization primitives. If you need those, you need nV (certainly for the last two; for FP64 you may find an AMD solution).

But if the only reason you "need" nV is performance, then that's what this thread is about. How does performance line up between the two, and for what use cases? The obvious arguments are that Apple probably wins
- if you need a larger memory footprint for your GPU code than the on-card memory provides
- if you need tight synchronization between the GPU and the CPU

This is not about rah-rah nVidia (or Apple), at least not for all of us. It's about trying to understand where the theoretical Apple advantages (like the two I described above) already manifest in code, because that gives us some idea of where the tech is headed. Not just for Apple: nV is going down the same path (Grace Hopper), and presumably AMD [and maybe one day Intel] will put together their own versions trying to resolve the same issues.
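To make the unified-memory point above concrete, here is a minimal sketch (my own illustration, assuming macOS with a Metal-capable Apple-silicon GPU; it is not code from any benchmark discussed in this thread) of what "no PCIe copy" looks like in practice: the CPU and GPU touch the same .storageModeShared allocation.

Code:
import Metal

// Minimal unified-memory illustration (not from any benchmark discussed in this thread).
// A trivial kernel that doubles each float in place.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;
kernel void double_values(device float *data [[buffer(0)]],
                          uint id [[thread_position_in_grid]]) {
    data[id] = data[id] * 2.0f;
}
"""

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let library = try! device.makeLibrary(source: kernelSource, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "double_values")!)

// One shared allocation: the CPU writes into the exact memory the GPU will read.
// There is no staging buffer and no PCIe transfer anywhere in this program.
let count = 1_000_000
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!
let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { values[i] = Float(i) }

let commands = queue.makeCommandBuffer()!
let encoder = commands.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreads(MTLSize(width: count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commands.commit()
commands.waitUntilCompleted()

// The CPU reads the result straight out of the same allocation, again with no copy back.
print(values[0], values[1], values[count - 1])   // 0.0 2.0 1999998.0

On a discrete card the same round trip would bracket the dispatch with an upload of the input over PCIe and a read-back of the result; that traffic is exactly what the unified design avoids, and it is why tight CPU/GPU hand-offs favour the SoC.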
 

Homy

macrumors 68030
Jan 14, 2006
2,507
2,459
Sweden
It would suggest that system RAM and PCIe are the bottleneck at the Intel end. Especially telling when PC4200 vs PC6400 SDRAM are used.

Maybe, but I'm not sure. Puget Systems found almost no improvement in AE from faster RAM, just about 3% in some cases. The article is from 2019.

[Chart: Puget Systems After Effects RAM-speed scaling, 2019]


However, I found some results with faster memory. They still don't beat the M2 Ultra, but the GPU score is better at 6800 MHz and 7200 MHz. The 3080 Ti with the fastest memory (7600 MHz) isn't the fastest overall.

[Screenshots: PugetBench GPU scores at different memory speeds, 2023-10-12]
 
  • Like
Reactions: souko

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
Maybe, but I'm not sure. Puget Systems found almost no improvement in AE from faster RAM, just about 3% in some cases. The article is from 2019.


However, I found some results with faster memory. They still don't beat the M2 Ultra, but the GPU score is better at 6800 MHz and 7200 MHz. The 3080 Ti with the fastest memory (7600 MHz) isn't the fastest overall.

It appears that even for the GPU test, the memory speeds of both the card and the system RAM matter. The 3060 Ti has about 60% of the bandwidth of the 4090, which matches the spread in the scores you posted.
And if the dataset has to hit main RAM because 24 GB isn't enough, well, the M series ends up faster even though the 4090 technically has more GPU bandwidth (crossing PCIe really sucks, lol).
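A back-of-the-envelope way to see why that crossing hurts (the bandwidth figures below are approximate spec-sheet numbers I'm assuming, not measurements from the benchmarks above):

Code:
// Approximate, assumed spec-sheet bandwidths in GB/s; treat as rough orders of magnitude.
let rtx4090OnCard  = 1008.0  // RTX 4090 GDDR6X
let pcie4x16Link   = 32.0    // theoretical PCIe 4.0 x16 link
let m2UltraUnified = 800.0   // M2 Ultra unified memory

// Anything that spills out of the 24 GB of VRAM is gated by the PCIe link,
// while on Apple silicon the "spilled" data still sits in the same unified pool.
print("On-card vs. PCIe: \(rtx4090OnCard / pcie4x16Link)x")    // ~31x
print("Unified vs. PCIe: \(m2UltraUnified / pcie4x16Link)x")   // ~25x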
 

Homy

macrumors 68030
Jan 14, 2006
2,507
2,459
Sweden
It appears that even for the GPU test, the memory speeds of both the card and the system RAM matter. The 3060 Ti has about 60% of the bandwidth of the 4090, which matches the spread in the scores you posted.

For the card, sure; for the system, I'm not so sure.
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
Back on track, please. The perceived performance of an app has always been a product of the hardware and the efficacy of the app code in using that hardware. Therefore we see all types of spreads in benchmarks. Remember that historically, optimising code has never paid off. So we are left with two unknowns in a very simplified equation:

Hardware compute capacity × efficacy of the code = benchmark score (a proxy for perceived performance)

Just remember which part of the equation you are discussing, to avoid any conflict.
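A toy illustration of that equation (all numbers invented purely for the example): the nominally faster hardware can lose a benchmark simply because its code path is less efficient, which is exactly the ambiguity being pointed out.

Code:
// Invented numbers only; this just shows how the two unknowns interact.
struct Platform {
    let name: String
    let computeCapacity: Double   // hardware compute capacity
    let codeEfficacy: Double      // efficacy of the code on that hardware
}

let platforms = [
    Platform(name: "Discrete-GPU PC",    computeCapacity: 100, codeEfficacy: 0.35),
    Platform(name: "Unified-memory SoC", computeCapacity: 60,  codeEfficacy: 0.70),
]

for p in platforms {
    // benchmark score ≈ hardware compute capacity × efficacy of the code
    print(p.name, p.computeCapacity * p.codeEfficacy)   // 35.0 vs. 42.0
}

Nothing more than the equation restated, but it shows why a benchmark ranking alone can't tell you which of the two unknowns you are looking at.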
 
  • Like
Reactions: name99

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
It appears that even for the GPU test, the memory speeds of both the card and the system RAM matter. The 3060 Ti has about 60% of the bandwidth of the 4090, which matches the spread in the scores you posted.
And if the dataset has to hit main RAM because 24 GB isn't enough, well, the M series ends up faster even though the 4090 technically has more GPU bandwidth (crossing PCIe really sucks, lol).
I would think AS has an inherent advantage when both the CPU and GPU are processing the same set of data, since they share the same SLC. And yeah, PCIe will be a massive bottleneck when you have tons of assets to render, and it's worse when scenes change rapidly.

I guess this is the main reason why most render farms use CPUs instead of GPUs?
 

Confused-User

macrumors 6502a
Oct 14, 2014
852
988
Back on track, please. The perceived performance of an app has always been a product of the hardware and the efficacy of the app code in using that hardware. Therefore we see all types of spreads in benchmarks. Remember that historically, optimising code has never paid off.
I think that's extremely wrong. Even leaving out algorithm optimization (which I assume you were), that's still way too broad.

There's probably a (much) more tightly worded version of that claim that's true. You could start by being a lot more explicit than "optimizing code".
 
  • Like
Reactions: altaic

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
I think that that's extremely wrong. Even leaving out algorithm optimization (which I assume you were), that's still way too broad.

There's probably a (much) more tightly worded version of that that's true. You could start by being a lot more explicit than "optimizing code".
This thread is often hardware-centric, and to compare hardware it uses benchmarks, even though we have little clue how well the software uses the hardware. Sorry for simplifying too much, but basically much of the argument in this thread is benchmark = hardware performance. It is not; just look at 3D rendering engines and the improvements through software optimisation alone on the same hardware. I use "software optimisation" very broadly, I agree, but that is for simplicity.

Feel free to make a better equation that also takes the software side into account. It will likely result in better discussion with fewer misunderstandings, so we know which parameter is really being discussed and which parameters we have full knowledge of and are constant between hardware platforms.
 
  • Like
Reactions: Chuckeee

Confused-User

macrumors 6502a
Oct 14, 2014
852
988
This thread is often hardware-centric, and to compare hardware it uses benchmarks, even though we have little clue how well the software uses the hardware. Sorry for simplifying too much, but basically much of the argument in this thread is benchmark = hardware performance. It is not; just look at 3D rendering engines and the improvements through software optimisation alone on the same hardware. I use "software optimisation" very broadly, I agree, but that is for simplicity.

Feel free to make a better equation that also takes the software side into account. It will likely result in better discussion with fewer misunderstandings, so we know which parameter is really being discussed and which parameters we have full knowledge of and are constant between hardware platforms.
I completely agree that benchmarks are often deceptive. Though I do think that there are benchmarks that have proven to be quite useful and reasonably accurate, within certain limited (but sometimes quite broad) domains. SPEC is a good example.

I wasn't talking about that - I was disagreeing with your assertion that "historically, optimising code has never paid off". In fact, your follow-up quoted above ("just look at 3D rendering engines and the improvements through software optimisation alone") perfectly illustrates the counterpoint.
 

Populus

macrumors 603
Aug 24, 2012
5,942
8,412
Spain, Europe
Hi! I find this thread really interesting. Please excuse me if I’m not speaking with as much knowledge as some of you.

Given that the latest rumors place the first M3 devices well into next year, when chips are manufactured on the new N3E process… do you think the M3 will be based on the A17 architecture and use the N3B node? Or will they instead use the N3E node and the new architecture that we will see later in the year with the A18?
 
  • Like
Reactions: Chuckeee

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
Thanks for raising people's awareness on this. I went back and added credits to my previous comment and will be careful to continue to do so in the future.
Thanks. I often feel like a "vox clamantis in deserto" when I raise this issue, so I appreciate the support.
 

picpicmac

macrumors 65816
Aug 10, 2023
1,239
1,833
do you think the M3 will be based on the A17 architecture and use the N3B node?
A more fundamental question is not which "node" at TSMC will make which Apple SoC, but what Apple intends for the future of the SoC.

The M2 paradigm (base version, "Pro" version, "Max" version, "Ultra" version) may or may not be what is best for Apple going forward.

It's not clear to me how Apple will have a market for the Mac Pro going forward, if Nvidia becomes the de facto standard for the "AI" industry's small machines, as it has for a big chunk of the graphics-intensive market.

The "Ultra" chip and designation are for bragging rights. Apple will likely sell 100x as many computers with the base M2 as with the M2 Ultra.

Shrinking device sizes allows the designer many options. As a corporation, Apple has prioritized performance per watt, not raw performance. It's a reasonable strategy, but it also means Apple will never have the highest-performing computer, just the most energy-efficient ones.

The other avenue that smaller devices open up is to shrink the System on a Chip (SoC) so small that it can be put in places where computing devices otherwise may not be found, such as implanted into other objects, even living ones.
 

Chuckeee

macrumors 68040
Aug 18, 2023
3,065
8,729
Southern California
As a corporation, Apple has prioritized performance per watt, not raw performance. It's a reasonable strategy, but it also means Apple will never have the highest-performing computer, just the most energy-efficient ones.
There is a really big TBD associated with this.

A general axiom of electronics design is that heat kills: the hotter the electronics run, the lower the reliability and the shorter the lifetime.

Does this mean the SoC approach will increase computer reliability and lifetimes? No one knows YET. But the potential is there.
 

streetfunk

macrumors member
Feb 9, 2023
84
41
It's a reasonable strategy, but it also means Apple will never have the highest-performing computer, just the most energy-efficient ones.
The point is: what really counts in real life?

I don't game, so I can't judge that one.
But overall it's often about getting above a certain threshold, not necessarily about having the highest performance. I would guess that's even true for "hobby gamers", those who don't push their ambitions to toxic levels and are just enjoying themselves. The forces that drive people to buy, or how people talk themselves into buying, are often framed completely wrong.
I hope Apple is clever enough to serve reality and not silly dreams.

Well, since computers have become so powerful that they already serve so many purposes well, I would guess "they" HAVE to work on making us dream. You do that by cleverly developing the apps of tomorrow.
I'm absolutely convinced that Apple plans in "whole packages".
They now hold all the parts in their own hands. They can steer what we "dream of", when it's done cleverly.
Nevertheless, "highest performance" is, in my view, not necessarily the main determining factor for tomorrow's sales.
- kind of off topic, sorry for that
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
There is a really big TBD associated with this.

A general axiom of electronics design is that heat kills: the hotter the electronics run, the lower the reliability and the shorter the lifetime.

This is a massive oversimplification. Empirical data shows that fast degradation of modern silicon chips only starts at temperatures over 120C. It's no coincidence that pretty much every manufacturer sets 100-105C as the highest safe operating temperature.


Does this mean the SoC approach will increase computer reliability and lifetimes? No one knows YET. But the potential is there.

Apple has been routinely running their chips (be it Intel or Apple Silicon) at 100C for at least a decade. They are still regarded as one of the most reliable brands out there.

There is no downside to it, really. Quite the opposite: trying to keep the chip at low temperatures will cost you either performance or size/weight (or both).
 

Confused-User

macrumors 6502a
Oct 14, 2014
852
988
About "heat kills" chips:
There is no downside to it, really. Quite the opposite: trying to keep the chip at low temperatures will cost you either performance or size/weight (or both).
I believe that's true within certain broad parameters. Do you know what they are? Resistance is inversely related to temperature in semiconductors (unlike metals), AFAIK, while leakage current goes up. Are there other effects as well?
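For the leakage part of that question, the standard textbook approximation (a generic device-physics relation, nothing specific to the chips discussed here; $n$ and $\kappa$ are process-dependent constants) is:

$$
I_{\text{sub}} \propto \exp\!\left(\frac{V_{GS}-V_{th}(T)}{n\,kT/q}\right),
\qquad V_{th}(T) \approx V_{th}(T_0) - \kappa\,(T-T_0)
$$

Since $V_{th}$ falls and the thermal voltage $kT/q$ rises as the die heats up, subthreshold leakage grows roughly exponentially with temperature, which is the effect mentioned above.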
 

MayaUser

macrumors 68040
Nov 22, 2021
3,177
7,196
My Intel Mac ran at around 105C almost non-stop for two years, and the system was automatically put into standby when it hit 110C.
No issues.
 