
Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
Apple focused on perf/watt and that’s what they achieved.
The main issue with the M1 Pro/Max is that, for whatever reason, its performance currently depends a lot on the task. For some workloads, the M1 Pro/Max runs as well as top Nvidia GPUs, but for others, only as well as low-tier Nvidia GPUs.
 

Kronekerdelta

macrumors newbie
Nov 20, 2021
11
13
Uh yeah, no. Sorry, not true. Deny all you like, Apple Silicon is highly performant with low power and heat. It's the largest gain in performance versus anything on the PC side. Again, if anyone is hell-bent on Intel and gaming, or whatever nonsense these threads feel like comparing for the sole sake of bashing, then by all means buy a PC. In the real world the gains are obvious.
Am I the only one who watched the video?
 
  • Like
Reactions: Lihp8270

Kronekerdelta

macrumors newbie
Nov 20, 2021
11
13
particularly not optimized ones. Let's just face it, the M1 simply doesn't run x86 software as well as x86 devices do. I would love to see an x86 machine run a program optimized for the M1 and see how it does. That would be a real test
It's the same excuse Linux users make. The fact is, 99% of people don't care why their programs don't perform well.
 

jeanlain

macrumors 68020
Mar 14, 2009
2,462
955
You can run all sorts of tests until you find one or more where the processor underperforms. Frankly we were bound to find a few.
But in the case of the Apple GPU, it's the other way around. There are only a few tasks in which it performs as advertised by Apple: GFXBench, FCP/DaVinci (which may not even be GPU-bound) and... what else?
 

thunng8

macrumors 65816
Feb 8, 2006
1,032
417
But in the case of the Apple GPU, it's the other way around. There are only a few tasks in which it performs as advertised by Apple: GFXBench, FCP/DaVinci (which may not even be GPU-bound) and... what else?
Video editing exceeds what was advertised. Photo editing also beats any other laptop. That is a huge segment for creative pros.


 

thunng8

macrumors 65816
Feb 8, 2006
1,032
417
The main issue with the M1 Pro/Max is that, for whatever reason, its performance currently depends a lot on the task. For some workloads, the M1 Pro/Max runs as well as top Nvidia GPUs, but for others, only as well as low-tier Nvidia GPUs.
For some workloads it runs a lot better than a top-end Nvidia GPU. See my links for photo editing. We also know how well it does with video editing. Those combined markets represent a majority of what creative pros do and are a much larger market than 3D rendering.
 

sunny5

macrumors 68000
Jun 11, 2021
1,838
1,706
I just saw this video on YouTube from Linus Tech Tips and they tested the M1 Max:
Not so good after all compared to a lot of x86 systems out there, at least graphics-wise.
What do you think?
Still waiting for my MBP M1 PRO 16…
Benchmarks don't tell everything.
 

827538

Cancelled
Jul 3, 2013
2,322
2,833
Until applications are natively written for Apple Silicon we won't see what it is capable of in many scenarios (especially gaming).
 

hagjohn

macrumors 68000
Aug 27, 2006
1,866
3,708
Pennsylvania
I just saw this video on YouTube from Linus Tech Tips and they tested the M1 Max:
Not so good after all compared to a lot of x86 systems out there, at least graphics-wise.
What do you think?
Still waiting for my MBP M1 PRO 16…
Power down all that **** to match the M1 CPU/GPU and see what you get.
 
  • Angry
Reactions: Shirasaki

ahurst

macrumors 6502
Oct 12, 2021
410
815
Oh, that's simple - Apple GPUs do not have any hardware support for FP64. Which makes perfect sense to me: given how crippled GPU FP64 usually is, one is better off implementing extended precision in software. Predictable performance and behavior.
What makes FP64 crippled in most GPUs? I'd imagine it's not too important for games, but for GPU compute tasks wouldn't properly-implemented high precision math be a key feature? Asking because a neuroimaging guy mentioned the lack of FP64 as a weakness of the M1 architecture for fMRI work (balanced out by many strengths, especially in single-threaded parts of the pipeline).
 

Ethosik

Contributor
Oct 21, 2009
8,142
7,120
I don’t think it’s fair to compare an SoC to integrated graphics anymore. The architecture is fundamentally different from a standard cpu+gpu setup and that’s where the M series found the efficiency and performance gains. Essentially they “cut out the middleman”.
Agreed with this. Consoles are SoCs too.
 
  • Like
Reactions: Stratus Fear

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
I'd imagine it's not too important for games, but for GPU compute tasks wouldn't properly-implemented high precision math be a key feature?
The requirements for GPUs in gaming and in scientific computing are different. Games need fast calculations; scientific computing needs accurate ones. Indeed, AMD makes GPUs with two different architectures: CDNA for scientific computing and RDNA for gaming.

For reference, 32-bit floating-point numbers carry only about seven decimal digits of precision, which is not enough for some applications.
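To make that concrete, here is a minimal sketch (plain C++, nothing GPU- or Apple-specific) showing a nine-digit value that a 32-bit float cannot hold but a 64-bit double can:

```cpp
#include <cstdio>

int main() {
    // Nine significant decimal digits: more than a 32-bit float can represent exactly.
    float  f = 123456789.0f;   // rounds to the nearest representable float, 123456792
    double d = 123456789.0;    // a 64-bit double keeps every digit

    printf("float : %.1f\n", f);   // prints 123456792.0
    printf("double: %.1f\n", d);   // prints 123456789.0
    return 0;
}
```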
 

Fomalhaut

macrumors 68000
Oct 6, 2020
1,993
1,724
I'm sorry, but people who thought that integrated graphics can beat a top-of-the-line graphics card are very naive.
For sure, in certain tasks having hardware acceleration will give a huge advantage, but in general computing it was predictable.

The key question is "for what workloads?". For GPGPU tasks, particularly those that are using well-established programming frameworks, Apple's Metal may not compete very well. I don't know.

Apple is following a path to more task-specific silicon rather than general-purpose. It will inevitably fall short in some areas.

Whether this matters to you is entirely dependent on what you are trying to do. Just choose the best tool for the job at hand. There is really no "right answer". I have a toolbox at home with lots of different tools for different things. Same with computers. I've got Linux desktops with PCIe cards, super-compact NUCs for controlling telescopes, compact desktops, big laptops, small laptops, tablets..... they all do different things.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
What makes FP64 crippled in most GPUs? I'd imagine it's not too important for games, but for GPU compute tasks wouldn't properly-implemented high precision math be a key feature?

GPU die area is limited and FP64 ALUs are not cheap. Given that their utility is marginal it’s difficult to justify budgeting for them.
Asking because a neuroimaging guy mentioned the lack of FP64 as a weakness of the M1 architecture for fMRI work (balanced out by many strengths, especially in single-threaded parts of the pipeline).

I think that statement fails to acknowledge the full context. FP64 is dead slow on virtually any modern GPU. It's basically life support mode - they give you some FP64 hardware so that your software runs, but that's about it. What good is that fancy big GeForce to you if the FP64 throughput is limited to 1/64 of FP32? Use doubles - and you fall into the "it's supported" trap. Sure, it's supported - but probably not how you imagined it.

Apple's approach is more principled IMO. They won't do life support for you. Need extended precision? Implement your own extended-precision type using the provided floats or ints. Metal is C++, you can build your own types easily. You'll get better - and more predictable - performance than using native FP64 on hardware that supports it, and you can fine-tune the implementation to suit your needs.
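For the curious, here is roughly what such a type can look like. This is only a minimal double-float ("two-sum") sketch in plain C++ rather than Metal, with made-up names, and a real implementation would need more operations and some care with compiler flags:

```cpp
#include <cstdio>

// A "double-float" value: hi holds the leading part, lo the rounding error,
// so hi + lo together carry roughly twice the precision of a single float.
// (Build without -ffast-math, which would optimize the error terms away.)
struct ff { float hi, lo; };

// Knuth's two-sum: the sum of a and b plus the exact rounding error of that sum.
static ff two_sum(float a, float b) {
    float s   = a + b;
    float bb  = s - a;
    float err = (a - (s - bb)) + (b - bb);
    return { s, err };
}

// Extended-precision addition built from the pieces above.
static ff ff_add(ff a, ff b) {
    ff s = two_sum(a.hi, b.hi);
    float lo = s.lo + a.lo + b.lo;
    return two_sum(s.hi, lo);   // renormalize so hi again carries the leading bits
}

int main() {
    ff x    = { 1.0f, 0.0f };
    ff tiny = { 1e-10f, 0.0f };  // far below a float's ~7-digit resolution
    ff y    = ff_add(x, tiny);
    printf("hi = %.9g, lo = %.9g\n", y.hi, y.lo);  // the 1e-10 survives in lo
    return 0;
}
```

The main() here is just host-side demo code; the struct and the two helper functions are the kind of thing that could live in a shader.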
 
  • Like
Reactions: Xiao_Xi and ahurst

arvinsim

macrumors 6502a
May 17, 2018
823
1,143
Let's be honest, the "Pros" that these machines are targeted at are video editors/youtubers. Which is a brilliant move since they are the ones that do Apple's marketing for free.

The only problem is the weaselly marketing language used to imply that the M1 Pro/Max can compete against RTX GPUs on absolute performance.

In any case, as a non-creative pro, I hope the rumors are true about a new batch of Pros next year being cheaper by dropping some features that are not really needed by non-creatives, e.g. mini-LED is not a necessity for some pros.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Let's be honest, the "Pros" that these machines are targeted at are video editors/youtubers.

.. and developers, and statisticians, and data scientists, and...

In any case, as a non-creative pro, I hope the rumors are true for a new batch of pros next year to be cheaper by dropping some features that are not really needed by non-creatives i.e. Miniled is not a necessity for some pros.

That will never happen. Cheaper, maybe, but they won't drop miniLED or similar features. Apple is moving towards HDR across all their devices and they will not sell you a pro laptop with lower dynamic range than their cheapest tablet.
 

Lihp8270

macrumors 65816
Dec 31, 2016
1,143
1,608
It's called an SoC for a reason (that is, System on Chip). Who cares what piece performs what function, it's about throughput. Apple does not sell GPUs separately. Other than video editing? Good question, did Apple target any markets? I saw one chess engine written in assembler and optimized for x86, and definitely not optimized for M1. So that doesn't count (and assembler, seriously, that is so 1980). But seeing as Apple is not selling GPUs, the only tests that count are optimized and workflow related - it literally doesn't matter what part of the SoC gets you there
I don't think anybody would argue this point. However, Apple claimed it could match the raw performance of an RTX 3080. That was a claim Apple made, not about SoC throughput: explicitly that the GPU can nearly match a mobile 3080. So far, outside of one or two programs, it doesn't come close.

Even Redshift, which Apple themselves included, doesn't come as close as Apple presented.

The arguments aren't about the SoC, which is fantastic; they're about the GPU matching Apple's claims. Nobody would be comparing raw performance if Apple didn't make the claim.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
I'm sorry, but people who thought that integrated graphics can beat a top-of-the-line graphics card are very naive.

Let us finally put this "integrated graphics must be bad" thing to rest. A GPU is a GPU, it all depends on how big and powerful you make it. The M1 Max has 4096 GPU "shader cores" running at approx. 1.2 GHz, so that's the baseline GPU throughput you get. The closest equivalents in this department are the Nvidia RTX 3060 or Radeon RX 6800. The M1 Max is the first "big" GPU Apple has made, and it makes perfect sense given which products it is intended for (laptops with a maximum power consumption of 80-90 watts). This is not the last "large" GPU Apple will build, and certainly not the largest they will build. The only things preventing them from putting 10K+ shader cores in an integrated solution are a) power consumption and b) size. But there will be other products for which Apple will undoubtedly use bigger GPUs.

So no, the M1 Max is not slower than a desktop RTX 3080 because it's "integrated". It is slower because it has literally half the number of shader cores and runs at a lower frequency. Different products for different use cases.
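To put rough numbers on that, here is a back-of-the-envelope calculation (a C++ sketch; the 4096-core and ~1.2 GHz figures are just the approximate ones above, not official specs):

```cpp
#include <cstdio>

int main() {
    // Approximate M1 Max figures from the discussion above.
    const double shader_cores = 4096;
    const double clock_hz     = 1.2e9;

    // One FP32 operation per core per cycle, not counting an FMA as two ops.
    double tflops = shader_cores * clock_hz / 1e12;
    printf("Nominal FP32 throughput: ~%.1f TFLOPS\n", tflops);        // ~4.9
    printf("Counting FMA as two ops: ~%.1f TFLOPS\n", tflops * 2.0);  // ~9.8
    return 0;
}
```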

The main issue with the M1 Pro/Max is that, for whatever reason, its performance currently depends a lot on the task. For some workloads, the M1 Pro/Max runs as well as top Nvidia GPUs, but for others, only as well as low-tier Nvidia GPUs.

I don't think it's such a big mystery if you look at the specs. First, the obvious ones: ALU throughput and bandwidth. For example, in straightforward throughput workloads you can't really expect the M1 Max with its 5 TFLOPS to outperform a 3080 with its 14 TFLOPS (I am not counting FMA double here). But there are also less obvious ones, if we look at the unique strengths and weaknesses of these GPUs:

- Apple G13 has access to much more cache
- Apple G13 has access to much more RAM and can share data with the CPU without incurring latency penalties
- Apple G13 has TBDR which allows it to use shading and memory resources more efficiently when rasterizing
- Nvidia/AMD have hardware RT
- Nvidia/AMD generally have more memory bandwidth

When you consider these, things should become clearer. For example, the M1 is able to punch well above its weight in rasterization tasks (e.g. gaming) because TBDR allows it to use its resources more intelligently, so it can perform close to a GPU that nominally has higher performance. Similarly, the M1 will excel in tasks where you have to synchronize data between GPU memory and system memory a lot, or where you need a lot of GPU memory to begin with. Content creation is a prime example, where data is constantly streamed in and out of GPU RAM. No matter how fast your Nvidia GPU is, all those data syncs will introduce latency and incur overhead that reduces the effective performance; no such problem with the M1, as it simply doesn't care. And finally, the M1 might do very well on complex workloads with a lot of data dependencies, where it can play out its cache advantage, but that's not really something GPUs are used for, so this one stays purely academic.
 

arvinsim

macrumors 6502a
May 17, 2018
823
1,143
.. and developers, and statisticians, and data scientists, and...
Developers only need the extra RAM configuration. ML is usually done on CUDA, so these MacBooks are not really targeted at Machine Learning and Data Science people.

I mean they added hardware encoders for video. They didn't add anything specialized for software development.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Developers only need the extra RAM configuration.

Developers need short build times and a good balance of burst/sustained performance, and M1 definitely delivers.

I mean they added hardware encoders for video. They didn't add anything specialized for software development.

Specialized hardware for software devs? What would that be? M1 is great for software devs because it’s a very strong general purpose CPU that excels at running complex logic and doing data manipulation.

ML is usually done on CUDA so these Macbooks are not really targeted to Machine Learning and Data Science people.

Not all data science equals ML. We use a lot of R and Stan for example, and M1 is insanely good here.

And M1 Macs are not too shabby for ML either - AMX delivers decent matmul performance; the problem right now is more on the software side. Obviously, for large-scale training you would use a cluster or a big workstation, but the new MBP is great for prototyping and testing.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
Well, I think that if running Windows apps is essential then it makes far more sense to buy a Windows PC. That’s just a given.
Not if you don't want to carry a second machine with you and you want the Mac. Your way, the Mac loses out and I carry only a Windows machine.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
The main issue with the M1 Pro/Max is that, for whatever reason, its performance currently depends a lot on the task. For some workloads, the M1 Pro/Max runs as well as top Nvidia GPUs, but for others, only as well as low-tier Nvidia GPUs.
I don't think anybody would argue this point. However, Apple claimed it could match the raw performance of an RTX 3080. That was a claim Apple made, not about SoC throughput: explicitly that the GPU can nearly match a mobile 3080. So far, outside of one or two programs, it doesn't come close.

Even Redshift, which Apple themselves included, doesn't come as close as Apple presented.

The arguments aren't about the SoC, which is fantastic; they're about the GPU matching Apple's claims. Nobody would be comparing raw performance if Apple didn't make the claim.
Let's be honest, the "Pros" that these machines are targeted at are video editors/youtubers. Which is a brilliant move since they are the ones that do Apple's marketing for free.

The only problem is the weaselly marketing language used to imply that the M1 Pro/Max can compete against RTX GPUs on absolute performance.

In any case, as a non-creative pro, I hope the rumors are true about a new batch of Pros next year being cheaper by dropping some features that are not really needed by non-creatives, e.g. mini-LED is not a necessity for some pros.
Again, anyone who applied critical thinking could have predicted that the M1 Pro and Max GPUs would falter in some way against the competition. Unless the competition is clowns like Intel, it wasn't going to match top-end GPUs on every metric while using a fraction of the power.

And I, too, am disappointed by the marketing. However again, you’ll see that their definition of performance (as narrow as it may be) is accurate and intentionally plays to the strengths of the architecture.

Programs that don't take advantage of the specific strengths of the M series will perform worse; conversely, programs that leverage the strengths of other GPUs will perform better. We've seen this in the Stockfish thread.

People who are disappointed by this are people who had their expectations disconnected from reality, like people who think gaming will come to Mac.

And, knowingly dipping into “whataboutism”, I’d like to say this type of marketing is endemic to the industry. If I cared, I’d bring up performance charts from other product launches that have the same ********.
 