But Apple can go higher than this for the 30" iMac or the Mac Pro. They can go to 64 and 128 GPU cores and match or exceed a 3090 Ti. It seems easily doable at this point.
Isn't GDDR6 the same as DDR5, just with more latency?

[attachment 1870258: GPU spec screenshot]
This is the one inside the Raider. Its texture and pixel fill rates basically match the M1 Max's.
Did you even watch the keynote? The M1 Max has 400 GB/s of memory bandwidth on a 512-bit memory bus, and yes, it is based on LPDDR5.
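That 400 GB/s figure falls straight out of the stated bus width and the per-pin data rate. A quick sanity check, assuming LPDDR5 running at 6400 MT/s (the rate is an assumption; Apple only confirmed LPDDR5 and the bus width):

```python
# Peak DRAM bandwidth = (bus width in bytes) * (per-pin data rate).
# Assumes LPDDR5-6400, i.e. 6400 MT/s per pin, on Apple's stated 512-bit bus.
bus_bits = 512
transfers_per_sec = 6400e6            # 6400 MT/s per pin (assumed)
bandwidth_gbs = (bus_bits / 8) * transfers_per_sec / 1e9
print(bandwidth_gbs)                  # 409.6, i.e. the ~400 GB/s Apple quotes
```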
TGP is configurable on mobile Nvidia GPUs, and even on desktop via the nvidia-smi power-limit tool. My mobile 3060 is configurable via an Fn+Q toggle between 40 W, 70/80 W, and 100 W, which is a plus, since I can balance fps against battery life. I don't need >90 fps mobile gaming, which it already achieves at 70 W, so I wish they had added 50 W and 60 W toggles.
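For reference, on desktop cards (and some mobile ones, driver permitting) the board power limit can be adjusted from the command line. A sketch, assuming a recent driver, root access, and a GPU that exposes an adjustable limit; the 70 W value is just an example:

```shell
# Show the current, default, and min/max enforceable power limits
nvidia-smi -q -d POWER

# Set the board power limit to 70 W (must fall within the reported
# min/max range; requires root, and resets on reboot unless
# persistence mode is enabled)
sudo nvidia-smi -pl 70
```

Note that laptop OEM toggles like Fn+Q usually switch between vendor-defined profiles rather than setting an arbitrary wattage.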
> I'll wait for independent unbiased reviews this time. A lot of the same M1 comparison claims against dGPUs turned out to be BS.

Did it? I thought the M1's real-world reviews were even bolder than what Apple claimed during the keynote. Apple was, if anything, conservative.
> But Apple can go higher than this for the 30" iMac or the Mac Pro. They can go to 64 and 128 GPU cores and match or exceed a 3090 Ti. It seems easily doable at this point.

Rumor has it that the Mac Pro lineup will have multiple top-end M1 Max SoCs in them...
> I'll wait for independent unbiased reviews this time. A lot of the same M1 comparison claims against dGPUs turned out to be BS.

But the bandwidth claim cannot be BS.
> I was just looking at this, paused and screen captured:
> [attachment 1870282: Apple keynote GPU comparison chart]
> Searched the corner and found it was for a laptop with a 3050 Ti, not the best discrete laptop GPU. Went off googling 3050 Ti vs 3080, so figured it was roughly half a 3080. Sneaky, sneaky marketing graph. Then realised that's the Pro, not the Max! So yeah, with the 32-core Max rather than the 16-core Pro, it's going to be ballpark. Nice!
> (All with a good pinch of salt, at least for now.)
> Maybe someone will finally make us some decent AAA games, now that it can match the graphics performance of a laptop that sounds like a vacuum cleaner!

Well, the problem is not the performance; it's the macOS platform itself. There aren't many gamers on the Mac, and the niche market size is the major problem with Mac gaming. Steam's stats show that fewer than 3% of players are on Macs, and those are Intel Macs.
Both the mobile 3080 and the mobile 3070 have 448 GB/s of bandwidth; the mobile 3060 has 336 GB/s. You clearly have no idea what you are talking about.

NVIDIA GeForce RTX 3070 Mobile Specs (www.techpowerup.com): NVIDIA GA104, 1560 MHz, 5120 cores, 160 TMUs, 80 ROPs, 8192 MB GDDR6 @ 1750 MHz, 256-bit.
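Those bandwidth figures follow from the quoted memory specs: GDDR6 transfers 8 bits per pin per memory-clock cycle, so the 1750 MHz / 256-bit configuration above works out as:

```python
# Peak GDDR6 bandwidth = (bus width in bytes) * effective per-pin data rate.
# GDDR6 moves 8 bits per pin per memory-clock cycle: 1750 MHz -> 14 Gbps/pin.
mem_clock_hz = 1750e6
bits_per_pin_per_sec = mem_clock_hz * 8   # 14e9 bits/s per pin
bus_bits = 256
bandwidth_gbs = (bus_bits / 8) * bits_per_pin_per_sec / 1e9
print(bandwidth_gbs)                      # 448.0, matching the mobile 3070/3080
```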
> 50 extra GB/s is not "significantly higher". Especially given that a 3080 has merely 5 MB of cache, while the M1 Pro probably has over 32 MB, so its aggregate bandwidth on complex data is going to be higher. And frankly, I am not sure that you are in a position to tell me that I don't know what I am talking about.

You said, "are there any GPUs with higher bandwidth?", and there you go.
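The "aggregate bandwidth" argument can be made concrete with a simple model: a larger last-level cache turns some DRAM requests into much faster on-chip hits. A sketch with purely illustrative numbers (the 2 TB/s cache bandwidth and the hit rates are assumptions, not measured figures):

```python
def effective_bw(dram_gbs, cache_gbs, hit_rate):
    """Harmonic-mean model: hit fraction served at cache speed, rest at DRAM speed."""
    return 1 / (hit_rate / cache_gbs + (1 - hit_rate) / dram_gbs)

# Illustrative only: 400 GB/s DRAM paired with 2 TB/s of on-chip SRAM.
for hit in (0.0, 0.25, 0.5, 0.75):
    print(hit, round(effective_bw(400, 2000, hit), 1))
```

The point is only the shape of the curve: as more of the working set fits in cache, effective bandwidth climbs well past the raw DRAM number, which is why cache size matters alongside the headline GB/s figure.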
> I was just looking at this, paused and screen captured:
> [attachment 1870282: Apple keynote GPU comparison chart]
> Searched the corner and found it was for a laptop with a 3050 Ti, not the best discrete laptop GPU. Went off googling 3050 Ti vs 3080, so figured it was roughly half a 3080. Sneaky, sneaky marketing graph. Then realised that's the Pro, not the Max! So yeah, with the 32-core Max rather than the 16-core Pro, it's going to be ballpark. Nice!
> (All with a good pinch of salt, at least for now.)
> Maybe someone will finally make us some decent AAA games, now that it can match the graphics performance of a laptop that sounds like a vacuum cleaner!

This is the M1 Pro, not the M1 Max.
> I wrote "significantly higher". You said 400 GB/s is "way slower" than GDDR6 in comparable machines. So far you quote 10% higher, not "way higher".

Ignoring HBM2, huh?
> I don't think there's anything left to say. The RTX 3080 Mobile has the following specs:
> [attachment 1870019: RTX 3080 Mobile spec sheet]
> The M1 Max 32-core GPU offers 10.4 TFLOPS of compute, a 327 GTexels/s texture rate, and a 165 GPixels/s pixel rate.
> Its texture and pixel fill rates exceed the RTX 3080 Mobile's, but its compute performance is half as much. (Wonder why that is.)

Because Ampere can do FP32 + (FP32 or INT). It basically has double the FP32 units of Turing (IIRC Turing could do FP32 + INT). So if your workload has INT in it, you won't get the full 19 TFLOPS.
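That headline number comes from counting both datapaths as FP32: peak TFLOPS = shaders × 2 FLOPs (FMA) × clock. Since half the issue ports also have to serve INT32, any integer work eats into the FP32 peak. A toy model, with the clock chosen purely to reproduce the ~19 TFLOPS headline:

```python
# Ampere peak: shaders * 2 FLOPs (FMA) * clock. The mobile 3080 has 6144
# shaders; a ~1.545 GHz boost clock (illustrative) yields the ~19 TFLOPS figure.
shaders, clock_hz = 6144, 1.545e9
peak_tflops = shaders * 2 * clock_hz / 1e12
print(round(peak_tflops, 1))              # ~19.0

def effective_fp32(peak, int_share):
    # Half the throughput comes from FP32-only ports; the other half comes
    # from shared FP32/INT32 ports, of which int_share goes to integer work.
    return peak * (0.5 + 0.5 * (1 - int_share))

print(round(effective_fp32(peak_tflops, 0.3), 1))  # ~16.1 with a 30% INT mix
```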
> Rumor has it that the Mac Pro lineup will have multiple top-end M1 Max SoCs in them...
>
> M1 Max (Jade C-Die)
> - 10-core CPU (8P/2E)
> - 32-core GPU
> - 16-core Neural Engine
> - 64GB RAM
>
> Jade 2C
> - 20-core CPU (16P/4E)
> - 64-core GPU
> - 32-core Neural Engine
> - 128GB RAM
>
> Jade 4C
> - 40-core CPU (32P/8E)
> - 128-core GPU
> - 64-core Neural Engine
> - 256GB RAM
>
> So yeah, Jade 2C = 3090 Ti & Jade 4C = dual 3090 Ti...?
> A Jade 2C in a new Cube seems the sweet spot for it all...

I'm guessing that the 30" iMac won't be called iMac, but iMac Pro.
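The 3090 Ti comparison above is just a linear extrapolation from Apple's own 10.4 TFLOPS figure for the 32-core M1 Max. Perfect scaling is an optimistic assumption (multi-die parts lose some performance to interconnect and bandwidth limits), but as a sketch:

```python
# Linear extrapolation from the 32-core M1 Max (10.4 TFLOPS, Apple's figure).
# Perfect scaling across dies is assumed; real parts would land lower.
base_cores, base_tflops = 32, 10.4
for cores in (64, 128):
    print(cores, base_tflops * cores / base_cores)
# 64 cores -> 20.8 TFLOPS, 128 cores -> 41.6 TFLOPS
```

For reference, a desktop 3090 Ti is roughly 40 TFLOPS of peak FP32, which is where the "Jade 4C = 3090 Ti class" framing comes from.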
> Because Ampere can do FP32 + (FP32 or INT). It basically has double the FP32 units of Turing (IIRC Turing could do FP32 + INT). So if your workload has INT in it, you won't get the full 19 TFLOPS.

Ah, thanks. Now it makes sense.
> I'm guessing that the 30" iMac won't be called iMac, but iMac Pro.
> And I'm also guessing that the 30" iMac will have the same top-end SoC options as the Mac Pro.
> Because I don't see why Apple wouldn't be able to fit these SoCs into a 30" iMac Pro.
> So if you want a desktop with top-end Apple Silicon, you will have two options: screen included, or not.

I'm still waiting for a high-end iMac, because many monitors with hardware calibration still struggle with Apple Silicon Macs. Still buggy and useless.
> I'm still waiting for a high-end iMac, because many monitors with hardware calibration still struggle with Apple Silicon Macs. Still buggy and useless.

I want to switch to a MacBook Pro + external screen instead of an iMac. I'm set on buying the M1 Max 16", but I need a decent screen to go with it. I cannot afford the Pro Display XDR.
> I want to switch to a MacBook Pro + external screen instead of an iMac. I'm set on buying the M1 Max 16", but I need a decent screen to go with it. I cannot afford the Pro Display XDR.

I have a nice hardware-calibration monitor, but it doesn't work well; it's buggy. It's not reliable, and I can't use all of its features.
The bandwidth bottleneck is not a problem, because it's Apple Silicon. Are you even aware that a TBDR GPU does NOT require as much bandwidth as other GPUs? Also, unified memory avoids the slow copies over PCIe that a dGPU needs. You are comparing different products, after all. Apple Silicon is NOT a PC anymore; Apple makes its own chips, and they're totally different from PC parts.
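The TBDR point can be quantified with a back-of-envelope model: an immediate-mode GPU touches DRAM once per overdrawn fragment, while a tile-based deferred renderer shades in on-chip tile memory and writes each pixel out once. A sketch with illustrative numbers (the 4x overdraw factor is an assumption, and texture/geometry traffic is ignored):

```python
# Framebuffer traffic per frame, ignoring textures and geometry (illustrative).
width, height, bytes_per_pixel, overdraw = 3840, 2160, 4, 4.0

# Immediate-mode rendering: every overdrawn fragment touches DRAM.
imr_bytes = width * height * bytes_per_pixel * overdraw

# TBDR: shading happens in on-chip tile memory; each pixel is written once.
tbdr_bytes = width * height * bytes_per_pixel

print(imr_bytes / tbdr_bytes)   # 4.0x less framebuffer traffic in this model
```

In this simplified model the savings equal the overdraw factor, which is why tile-based GPUs can get away with narrower DRAM interfaces for the same rendering workload.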
> I want to switch to a MacBook Pro + external screen instead of an iMac. I'm set on buying the M1 Max 16", but I need a decent screen to go with it. I cannot afford the Pro Display XDR.

Well, there is the 5K LG UltraFine TB3 monitor. That gives you a panel similar to the 5K iMac's.