
jujoje

macrumors regular
May 17, 2009
247
288
The RTX 4060 will perform like an RTX 3080. Sure, the RTX 4060 will use 180-350 watts, but it will be like $450.

and Apple charges $1000 to get the full 64 cores.

To get the same amount of GPU memory, you would need 5 or 6 RTX 4060s.

Or you could go with the Quadro equivalent, which, going off the RTX A6000, will be around $5,000 for 48GB. The A6000 is probably the best comparison in that it is designed for the same professional workflows with large data sets as the M1 Ultra.

For gaming, sure, the Nvidia 40xx cards will probably be faster. For workflows that actually take advantage of the card, it is less clear cut. The M1 Max comes within spitting distance of the desktop 3090 when rendering the Moana data set, and beats two 2080 Tis. The M1 Ultra should roundly beat the 3090, although it looks like Redshift doesn't scale too well...
 

LuisN

macrumors 6502a
Mar 30, 2013
737
688
Torres Vedras, Portugal
Nvidia's current GPUs are based on Samsung's 8nm node; they will move to TSMC's 5nm this year.
AMD will be doing the same.

Both Lovelace and RDNA3 will be really powerful.

Both will destroy the M1 Ultra GPU. The Achilles' heel is that the Mac Studio does not have PCIe slots allowing for AMD dGPU support.

Apple will likely update the Mac Studio with the M2 Ultra in 2 years; by then AMD and Nvidia will have another major GPU arch update.
Who cares?!
 

MayaUser

macrumors 68040
Nov 22, 2021
3,178
7,200
The RTX 4060 will perform like an RTX 3080. Sure, the RTX 4060 will use 180-350 watts, but it will be like $450.

and Apple charges $1000 to get the full 64 cores.
At 350W, besides the $450, you will have to count the energy bill difference between the M1 Ultra and this :)))
This chase for performance through ever-higher wattage has to stop... Nvidia and AMD must find a different way... in 10 years you will have a mid-to-high-end GPU that draws 800W alone.
 

iDron

macrumors regular
Apr 6, 2010
219
252
Nvidia already beats Apple based on raw computational power.
An RTX 3090 already has about 36 TFLOPS of computational power. This is 80% more than the M1 Ultra at ~20 TFLOPS.
This is comparable to the ratio between the M1 Max and the RTX 3080 for laptops.
The mobile RTX 3080 is at about 18 TFLOPS, also about 80% more than the M1 Max at ~10 TFLOPS.
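
As a rough check, those paper numbers fall out of a simple formula: ALU count × clock × 2 (a fused multiply-add counts as two floating-point ops). A quick sketch, using approximate publicly reported shader counts and clocks rather than official spec-sheet figures:

```swift
// Theoretical FP32 throughput in TFLOPS: ALUs × clock (GHz) × 2 FLOPs per FMA.
func theoreticalTflops(alus: Double, clockGHz: Double) -> Double {
    alus * clockGHz * 2.0 / 1_000.0
}

// Approximate public figures; sustained clocks vary by card, TGP and workload.
print(theoreticalTflops(alus: 10_496, clockGHz: 1.70))    // RTX 3090        ≈ 35.7
print(theoreticalTflops(alus: 64 * 128, clockGHz: 1.30))  // M1 Ultra (64c)  ≈ 21.3
print(theoreticalTflops(alus: 6_144, clockGHz: 1.45))     // mobile RTX 3080 ≈ 17.8
print(theoreticalTflops(alus: 32 * 128, clockGHz: 1.30))  // M1 Max (32c)    ≈ 10.6
```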

The M1 Ultra Duo, or whatever it will be called in the upcoming Mac Pro, will likely have 40 TFLOPS, which is faster than all current Nvidia/AMD cards. However, people who use those probably have 2-3 in their PC. It will be interesting to see if Apple will try to compete with that by making the chips modular, i.e. allowing the addition of a second M1 Ultra Duo, or just offering the GPU part, so that one could get to 80 or 120 TFLOPS.

I know that this is by far not the only metric, and Apple's benchmarks have some truth to them. Nonetheless, in raw computational power, the picture is clear.
 

macacam

macrumors member
Feb 10, 2022
49
108
Wait wait wait. Back the horse up. So you're telling me products to be released later in time will be superior to products currently being released? HOW DOES THIS HAPPEN?!?! Should I just keep waiting for them to stop making stuff so I'm sure the stuff I own is the best stuff? I'll never be able to talk about my stuff at this rate... :eek:
:rolleyes:
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
An RTX 3090 already has about 36 TFLOPS of computational power.
I read somewhere that the RTX 3090's TFLOPS number is computed from the number of ALUs and their clock rates to arrive at 36 TFLOPS. But due to the design of the architecture, some of the ALUs are dual-function: they can do either integer or floating-point math, but not both at the same time. So the theoretical 36 TFLOPS cannot be achieved in practice.

I read that this is why the AMD RX 6900 XT, with only 24 TFLOPS, can beat the RTX 3090 in some raster tasks.

So it seems that comparing TFLOPS across architectures may not tell the entire picture.
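
To put rough numbers on that, here's a toy model (my own assumption about the split datapath, not Nvidia's documented scheduler behaviour): half the FP32 ALUs in an Ampere SM are shared with INT32, so integer instructions steal FP32 issue slots up to that half's capacity.

```swift
// Toy model: in Ampere, one of the two datapaths per SM partition does FP32 only,
// the other does FP32 or INT32. Integer work displaces FP32 on the shared half.
// Illustrative assumption, not an official NVIDIA scheduling model.
func effectiveFp32Tflops(peakTflops: Double, intFraction: Double) -> Double {
    let intLoad = min(intFraction, 0.5)  // shared half saturates at 50% of slots
    return peakTflops * (1.0 - intLoad)
}

// Nvidia's Turing material cited roughly 3 integer instructions per 10 FP ones
// in games, so with a ~30% integer mix the 3090's 36 "paper" TFLOPS lands in
// the mid-20s:
print(effectiveFp32Tflops(peakTflops: 36, intFraction: 0.3))   // ≈ 25.2
```

That would put it right around the 6900 XT's 24 TFLOPS, which fits those raster benchmarks.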

Add in the fact that Apple's SoCs use TBDR compared to IMR for the RTX 3090, and I won't be surprised if the M1 Ultra comes out ahead in raster workloads.
 

iDron

macrumors regular
Apr 6, 2010
219
252
I read somewhere that the RTX 3090's TFLOPS number is computed from the number of ALUs and their clock rates to arrive at 36 TFLOPS. But due to the design of the architecture, some of the ALUs are dual-function: they can do either integer or floating-point math, but not both at the same time. So the theoretical 36 TFLOPS cannot be achieved in practice.

I read that this is why the AMD RX 6900 XT, with only 24 TFLOPS, can beat the RTX 3090 in some raster tasks.

So it seems that comparing TFLOPS across architectures may not tell the entire picture.

Add in the fact that Apple's SoCs use TBDR compared to IMR for the RTX 3090, and I won't be surprised if the M1 Ultra comes out ahead in raster workloads.
On top of that, of course, the unified memory of Apple's M1 is an advantage. 64GB or 128GB of GPU memory is quite a statement, especially at this price.

But looking at machine learning, for example, raw computational power together with memory capacity are important factors. That's quite different from rendering an image.
 

Bodhitree

macrumors 68020
Apr 5, 2021
2,085
2,216
Netherlands
What I find amazing is that the M1 architecture is basically Apple’s first shot at creating a desktop-class GPU and they’ve done a pretty awesome job at supplying performant hardware, competing with the best discrete GPUs out there. If your use case requires this kind of power, you can buy it.

Apple’s GPUs are still gaining in performance significantly with each generation, and they’ve done a new deal with Imagination for a very nice set of raytracing technology, which I am sure we will see incorporated soon.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
Apple’s GPUs are still gaining in performance significantly with each generation
I think more important is that software is being optimised for the current generation of AS Macs to make it more performant. I think there is a lot of hidden potential even in the M1 SoC (e.g. efficiently using the TBDR architecture of the GPU) that has yet to be unlocked by developers.
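
One concrete example of that kind of optimisation: on a TBDR GPU, transient render targets (depth buffers, MSAA intermediates, G-buffer passes) can be declared memoryless in Metal, so they live entirely in on-chip tile memory and never consume DRAM bandwidth. A minimal sketch (the helper names are mine, not Apple sample code):

```swift
import Metal

// On TBDR hardware a transient depth buffer never has to exist in main memory:
// declare it memoryless so it is backed only by on-chip tile storage.
func makeTransientDepth(device: MTLDevice, width: Int, height: Int) -> MTLTexture? {
    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .depth32Float, width: width, height: height, mipmapped: false)
    desc.usage = .renderTarget
    desc.storageMode = .memoryless       // tile memory only; no DRAM allocation
    return device.makeTexture(descriptor: desc)
}

// Pair it with load/store actions that never round-trip through memory.
func attach(depth: MTLTexture, to pass: MTLRenderPassDescriptor) {
    pass.depthAttachment.texture = depth
    pass.depthAttachment.loadAction = .clear
    pass.depthAttachment.storeAction = .dontCare   // nothing gets flushed out
}
```

Ports from immediate-mode renderers typically skip exactly this sort of thing, which is where a lot of that untapped headroom sits.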
 

Gnattu

macrumors 65816
Sep 18, 2020
1,107
1,671
raw computational power.

It basically means nothing nowadays (and not much in the old days either). Below are the INT8 TOPS figures from the Xilinx (now AMD) VCK5000 press release. Real-world performance differs greatly from that theoretical number.

[Image: AMD Xilinx VCK5000 estimated TOPS comparison chart]
 

MayaUser

macrumors 68040
Nov 22, 2021
3,178
7,200
What I find amazing is that the M1 architecture is basically Apple’s first shot at creating a desktop-class GPU and they’ve done a pretty awesome job at supplying performant hardware, competing with the best discrete GPUs out there. If your use case requires this kind of power, you can buy it.

Apple’s GPUs are still gaining in performance significantly with each generation, and they’ve done a new deal with Imagination for a very nice set of raytracing technology, which I am sure we will see incorporated soon.
Yes, and think about what happens if the M2 Pro/Max/Ultra gain the GPU core counts and the per-core improvement from the A15.
Just from that we could get over 40% improvement at around the same power consumption.
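
The arithmetic there is just compounding, e.g. (purely hypothetical figures, neither is a confirmed M2 spec):

```swift
// Hypothetical compounding: an assumed per-core uplift times an assumed
// core-count increase. Both numbers are illustrative assumptions.
let perCoreUplift = 1.15    // assume ~15% faster per GPU core (A15-class gain)
let coreCountGain = 1.25    // assume 25% more GPU cores
print((perCoreUplift * coreCountGain - 1) * 100)   // ≈ 43.75 → ~44% total
```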
 

Spindel

macrumors 6502a
Oct 5, 2020
521
655
Doesn't matter.

Apple is still on generation 1 of their "desktop" GPUs. All M1 GPUs are basically the same but with different core counts.

The interesting part will be seeing how Apple follows up this performance with generations 2, 3, etc., and whether they will be able to leapfrog with those the same way they did on the iPhone side.

This might lead to Apple either just keeping up with Nvidia and AMD (at a much lower power level, since they already beat both in perf/W) or utterly crushing them. Only time will tell.

The biggest hurdle right now for Apple is poor implementation of their graphics/compute APIs by a lot of software houses. If the software houses can sort this out, Nvidia and AMD are going to get a run for their money.
 

TracerAnalog

macrumors 6502a
Nov 7, 2012
796
1,462
Nvidia's current GPUs are based on Samsung's 8nm node; they will move to TSMC's 5nm this year.
AMD will be doing the same.

Both Lovelace and RDNA3 will be really powerful.

Both will destroy the M1 Ultra GPU. The Achilles' heel is that the Mac Studio does not have PCIe slots allowing for AMD dGPU support.

Apple will likely update the Mac Studio with the M2 Ultra in 2 years; by then AMD and Nvidia will have another major GPU arch update.
Newsflash! In 7 years computers will be way faster!
 

vladi

macrumors 65816
Jan 30, 2010
1,008
617
I'm not sure what your GPU workflow is, but when it comes to rendering, the 2x GPU found in the Ultra will lag behind the 3080 Ti by double the render time. That's my estimate, considering the M1 Max GPU takes four times as long as the 3080 Ti.
 

MajorFubar

macrumors 68020
Oct 27, 2021
2,174
3,826
Lancashire UK
Of course they will... it's a tit-for-tat race where the net result is that the customer benefits. Competition raises standards, at least in this field. Technology moves so fast that anything state of the art this year isn't by next year, at the absolute latest. This isn't new.
 

MajorFubar

macrumors 68020
Oct 27, 2021
2,174
3,826
Lancashire UK
The Studio will be a paperweight in 3-4 years.
It won't. It will be within 5-7 years of initial launch, like all other non-upgradable Macs, in line with when the latest macOS can no longer legitimately be installed on it without some DosDude patch. But in any case, the Ultra version especially is aimed at professional customers who can justify replacing it with whatever the latest thing is every three to four years.
 

ian87w

macrumors G3
Feb 22, 2020
8,704
12,638
Indonesia
Nvidia's current GPUs are based on Samsung's 8nm node; they will move to TSMC's 5nm this year.
AMD will be doing the same.

Both Lovelace and RDNA3 will be really powerful.

Both will destroy the M1 Ultra GPU. The Achilles' heel is that the Mac Studio does not have PCIe slots allowing for AMD dGPU support.

Apple will likely update the Mac Studio with the M2 Ultra in 2 years; by then AMD and Nvidia will have another major GPU arch update.
Powerful at what power consumption? I believe Apple is still banking on having more performance per watt.
 

oz_rkie

macrumors regular
Apr 16, 2021
177
165
Powerful at what power consumption? I believe Apple is still banking on having more performance per watt.

Power consumption does not matter nearly as much on the desktop side of things. Users who want/need more performance won't mind paying for it with more watts.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
Power consumption does not matter nearly as much on the desktop side of things. Users who want/need more performance won't mind paying for it with more watts.
I’ll agree that perf/watt matters least on desktops/workstations compared to laptops and servers, buuut it still affects things like chassis size, cooling, noise, (electricity costs), etc...

The M1 Ultra is a faster i9/mid-level Threadripper CPU paired with a 3060-3090-class GPU depending on circumstances (which graphics option, what workload). If it were actually those parts, it would have to be liquid cooled to fit in a chassis that size, if it could fit at all. Instead it’s air cooled and, I’m going to guess, pretty quiet.

Further, perf/W means you can keep scaling a chip design and still get predictable performance (i.e. not dropping base clocks) without blowing past thermal/power budgets.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
Thing is, you can't upgrade the Mac Studio; a PC or Mac Pro you can. The Studio will be a paperweight in 3-4 years.

The DIY/user-upgrade market isn’t tiny, but it’s still much smaller than OEM. And even DIY builders often do nearly full replacements in that time frame, as new memory, new sockets, and new PCIe generations are required if you’re trying to keep up with the latest and greatest. It may be amortized a little over time, but if anything that’s not the latest and greatest is a paperweight to you, that’s still going to cost. And OEM is always more expensive than DIY.
 

Pressure

macrumors 603
May 30, 2006
5,182
1,544
Denmark
Now that’s a hot take. Newer products released in the future will be faster than currently available products.

Shocker, who could have seen it coming, pesky future and its faster products.
 