What's the power consumption of the Nvidia part?
> I would not compare the m3 max with 4090 but the upcoming m3 ultra

Why not? The OP wants to compare the most powerful GPU in a notebook in the PC and Apple world. Whereas your comparison is between the most powerful GPU in a desktop in the PC and Apple world.
> I would not compare the m3 max with 4090 but the upcoming m3 ultra

But they are talking about laptops, there is no M3 ultra laptop coming afaik.
> I wonder if the 40-core M3 Max GPU will be as fast as the 4090 Laptop.

"As fast" for what?
> Why not? The OP wants to compare the most powerful GPU in a notebook in the PC and Apple world. Whereas your comparison is between the most powerful GPU in a desktop in the PC and Apple world.

OP asked IF the M3 Max will be as fast as the 4090 Laptop... he was not asking what the difference between those two will be.
> Why not? The OP wants to compare the most powerful GPU in a notebook in the PC and Apple world. Whereas your comparison is between the most powerful GPU in a desktop in the PC and Apple world.

To be fair, given the power consumption of the Nvidia GPU, Apple could as well put an M3 ultra in a laptop. And the power consumption may end up being lower.
It will probably come close in some rasterization/gaming benchmarks. But for many tasks the 4090 will be massively ahead.
I would not compare the m3 max with 4090 but the upcoming m3 ultra
The answer to OP's question is no: the M3 Max will not be as fast as the 4090 Laptop.
On CUDA applications, Nvidia will probably have an even larger edge.
The GeekBench OpenCL benchmarks are currently showing the M2 Max at around 80,000, the 4090 Laptop just under 182,000. There's surely no way that the M3 Max will even come close to bridging that gap.
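For a rough sense of what those scores imply, here is a back-of-envelope sketch (the numbers are the approximate figures quoted above; this is score arithmetic, not a performance prediction):

```python
# Back-of-envelope comparison of the GeekBench OpenCL scores quoted above.
m2_max_opencl = 80_000            # approximate M2 Max score
rtx4090_laptop_opencl = 182_000   # approximate 4090 Laptop score

gap = rtx4090_laptop_opencl / m2_max_opencl
print(f"4090 Laptop scores about {gap:.2f}x the M2 Max on this test")

# Generational uplift the M3 Max would need over the M2 Max just to match it:
needed = (gap - 1) * 100
print(f"The M3 Max would need roughly a {needed:.1f}% jump to close the gap")
```

That works out to about a 2.3x gap, i.e. the M3 Max would need well over double the M2 Max's OpenCL score just to draw level on this one benchmark.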
> I wouldn't look at OpenCL results for Apple Silicon. They are known to run much slower than Metal (for reasons I don't know).

Apple deprecated OpenCL way back in macOS Mojave, years before the first AS Mac was released. It's just not optimized for the new hardware.
I wouldn't look at OpenCL results for Apple Silicon. They are known to run much slower than Metal (for reasons I don't know).
With apologies to the OP for getting into what is probably too much of a digression ....

The best thing to do is compare the highest numbers, which in this case would be Metal vs. Vulkan. All GB6 GPU benchmarks perform the same work and use the same algorithms, so it boils down to the maturity of the respective code and the frameworks. Both Metal and Vulkan codebases should be reasonably mature (in fact, I wouldn't be surprised if they use the same code under the hood). As OpenCL is neglected by both Nvidia and Apple I wouldn't use it for anything.
These are perfectly valid points, but the GB OpenCL numbers are the only ones I know of that give a direct comparison between Nvidia and Apple GPUs, albeit one that certainly isn't optimised for the Apple GPUs (it may well be that running OpenCL on Nvidia GPUs isn't ideal either).
The bottom line is, I think, that there isn't a good, simple, apples-to-apples comparison (and it may be unrealistic to expect that there should be one, given the huge range of workloads that GPUs can be used for).
> With apologies to the OP for getting into what is probably too much of a digression ....

A good rule of thumb is about a 20-40% performance gain just from good CUDA code vs. OpenCL, the last time I wrote something there. And even 50%+ using cuBLAS/cuDNN and the other Nvidia libraries baked into the NV ecosystem for more complex workloads.
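Purely as an illustration of that rule of thumb, applied to the 4090 Laptop's GeekBench OpenCL figure quoted earlier (the percentages are the poster's estimates, not measurements, and treating a GeekBench score as a linear proxy is itself a rough assumption):

```python
# Illustrative only: scale the 4090 Laptop's GeekBench OpenCL score by the
# rule-of-thumb CUDA gains mentioned above.
opencl_score = 182_000

rules_of_thumb = [
    ("hand-written CUDA, low end", 0.20),
    ("hand-written CUDA, high end", 0.40),
    ("cuBLAS/cuDNN-heavy workloads", 0.50),
]

for label, gain in rules_of_thumb:
    print(f"{label}: ~{opencl_score * (1 + gain):,.0f}")
```

On those assumptions, a hypothetical CUDA score would land somewhere in the 218k-273k range, i.e. even further from the M2 Max than the raw OpenCL gap suggests.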
Comparing Metal and Vulkan is certainly an option but, in the case of the 4090, the OpenCL numbers are quite a bit higher (for the Laptop version: approx. 182,000 vs. 156,000). I don't have an explanation of that, although I would expect that for at least some workloads on Nvidia (compute rather than graphical, for instance), both OpenCL and CUDA would be more efficient. And given the CUDA-oriented optimisations in the nvcc compiler, I would also expect CUDA to be (far?) superior to OpenCL for Nvidia GPUs. So, while Metal is obviously the best option for Apple GPUs, I wouldn't feel comfortable making that claim for Nvidia.
If one wishes to make use of GeekBench, I agree that the best -- or, at least, the least bad -- option may be to compare the best numbers for each GPU that is of interest, which in this instance (M2 Max and 4090 Laptop) appear to be Metal and OpenCL. It's just a shame that GB no longer seems to have CUDA numbers.
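A sketch of that "best available number per GPU" approach, using only the scores quoted in this thread (no Metal figure for the M2 Max is given above, so its OpenCL score stands in; as discussed, cross-API comparisons like this are the least bad option, not a clean one):

```python
# Pick the highest GeekBench score reported for each GPU, whichever API it
# comes from. Scores are the approximate figures quoted in this thread.
scores = {
    "M2 Max": {"OpenCL": 80_000},
    "RTX 4090 Laptop": {"OpenCL": 182_000, "Vulkan": 156_000},
}

for gpu, by_api in scores.items():
    api, best = max(by_api.items(), key=lambda kv: kv[1])
    print(f"{gpu}: {best:,} ({api})")
```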
> To be fair, given the power consumption of the Nvidia GPU, Apple could as well put an M3 ultra in a laptop. And the power consumption may end up being lower.

Don't be so smug about Apple's GPU power usage when gaming on laptops. My M1 Max MBP 16 gets about the same battery life as my Lenovo Legion 7 3080 playing the same game, BG3. They're both dead in about an hour. Now, browsing and video editing, the MBP wins every time on battery life.
> My M1Max MBP 16 gets about the same battery life as my Legion Lenovo 7 3080 playing the same game BG3.

I suppose the Lenovo reduces its GPU performance by a lot when unplugged.