
pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
The information revealed by Bloomberg today puts your whole post in the past.

- The next generation chip for the MacBook Pro is a 10-core chip with 8 performance cores and 2 efficiency cores. This means an entirely newly designed chip where 2 efficiency cores now deliver the same or better performance as the 4 efficiency cores in the M1. Plus 8 performance cores, putting this thing easily in the 14k multicore range, or better if the performance cores get as much of a boost as the efficiency cores.

- The graphics options are said to be 16 and 32 core graphics. The memory options are said to go up to 64 GB. And if this is still unified memory, 32-core graphics with access to 64 GB of RAM will blow past your "mid-range desktop GPUs".

I just watched a video on this and I’m impressed. I suspect that a configuration of 8 performance cores, 32 GB, and 16 GPU cores is enough for me in a laptop.

I did not see details on a larger iMac though.
 

el-John-o

macrumors 68000
Nov 29, 2010
1,590
768
Missouri
I just watched a video on this and I’m impressed. I suspect that a configuration of 8 performance cores, 32 GB, and 16 GPU cores is enough for me in a laptop.

I did not see details on a larger iMac though.
The existing M1 is already a little bit faster than the dedicated GPU in my 2016 MacBook Pro. And the CPU blows it out of the water. It's impressive stuff! Exciting to see where it goes.
 
  • Like
Reactions: Rashy and JMacHack

ader42

macrumors 6502
Jun 30, 2012
436
390
The M1 GPU scores around 20,000 on the Geekbench Metal test; the current iMac GPU option tops out at the Radeon Pro 5700 XT 16GB, which scores up to 75,000.

Basically Apple needs to at least quadruple their GPU prowess. A 32-core Apple Silicon GPU would be in the ballpark of what we currently have from AMD for mid-range desktop Macs (not low-level consumer and not the Mac Pro).
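As a quick sanity check on that "quadruple, at least" figure, here is the arithmetic as a small sketch (the scores are the approximate Geekbench Metal numbers quoted above, and linear scaling with core count is an optimistic assumption):

```python
# Rough check of the "quadruple their GPU prowess" estimate, using the
# approximate Geekbench 5 Metal scores quoted in the post above.
m1_metal_score = 20_000           # M1 8-core GPU (approx.)
radeon_pro_5700xt_score = 75_000  # top iMac GPU option (approx.)

factor = radeon_pro_5700xt_score / m1_metal_score
print(f"Needed scaling: ~{factor:.2f}x")   # ~3.75x
# If Metal scores scaled linearly with GPU cores (a big assumption),
# that would indeed put a ~32-core Apple GPU in the same ballpark.
```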
 
  • Like
Reactions: el-John-o

el-John-o

macrumors 68000
Nov 29, 2010
1,590
768
Missouri
The M1 GPU scores around 20,000 on the Geekbench Metal test; the current iMac GPU option tops out at the Radeon Pro 5700 XT 16GB, which scores up to 75,000.

Basically Apple needs to at least quadruple their GPU prowess. A 32-core Apple Silicon GPU would be in the ballpark of what we currently have from AMD for mid-range desktop Macs (not low-level consumer and not the Mac Pro).
Yeah. And that may be hard to do with an integrated GPU.

I would imagine what we may see on the 27” iMac and on the 15” MacBook Pro will be a dedicated GPU that works in tandem with the M1 GPU.
 

Jorbanead

macrumors 65816
Aug 31, 2018
1,209
1,438
This means an entirely newly designed chip where 2 efficiency cores now deliver the same or better performance as the 4 efficiency cores in the M1.
I don’t think the article says this. It only says it’ll have 2 efficiency cores. It doesn’t say they will be just as powerful as 4 M1 efficiency cores. If I had to guess, it’s just 2 icestorm cores instead of 4. With 8 firestorm cores. So it’ll still be extremely powerful.

Now they could theoretically clock the icestorm cores higher than the M1’s icestorm cores which would make them faster. So you could be right, I just don’t think they explicitly stated that.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
I'm not really sure how a 16/32 core GPU that shares its memory with the system is going to "blow past" something like a mid-range Nvidia card with 1,500 cores and memory that is about 4x as fast as the memory in the M1 (14,000 MT/s vs. 4,000 MT/s for the M1's low-power RAM).

For the M1 and on-board graphics, the biggest bottleneck is absolutely going to be the "unified memory architecture", which is great marketing but is really just "shared memory", much slower than the GDDR6 in a modern GPU. That's a BIG bottleneck for just about every GPU application.

You are focusing too much on the fact that the memory is shared when you should be focusing on the memory implementation instead. Quad-channel LPDDR5, for example, would yield in excess of 200 GB/s. That is still half the bandwidth of faster desktop GPUs, but then again Apple GPUs use a different approach to rendering that requires less bandwidth.
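A rough sketch of the bandwidth arithmetic behind those numbers; the 256-bit LPDDR5-6400 configuration is my assumption for what "quad-channel LPDDR5" would look like here, while the M1 row uses its known 128-bit LPDDR4X-4266 setup:

```python
# Peak theoretical memory bandwidth: transfer rate (MT/s) x bus width (bytes).
def peak_bandwidth_gbs(transfer_rate_mts: float, bus_width_bits: int) -> float:
    return transfer_rate_mts * (bus_width_bits / 8) / 1000  # GB/s (decimal)

print(f"M1, LPDDR4X-4266 @ 128-bit:    {peak_bandwidth_gbs(4266, 128):.1f} GB/s")  # ~68
print(f"Assumed LPDDR5-6400 @ 256-bit: {peak_bandwidth_gbs(6400, 256):.1f} GB/s")  # ~205
```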

For example, the M1 GPU, with its 10 W TDP and 68 GB/s, scores around 5000 in 3DMark Wild Life Extreme. A desktop 1650 (75+ W TDP, 130+ GB/s) scores 7500, and a mobile 1650 variant (50 W TDP) scores around 6900. That's around a 30% difference with 80% lower power consumption and half the bandwidth. And you can also see it in real-world games (not that there are many right now, but the M1 manages a quite steady ~40 fps at 1080p high in Metro Exodus).
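Putting those quoted Wild Life Extreme figures next to the TDPs makes the efficiency point explicit (a sketch; all numbers are the approximate ones from the paragraph above):

```python
# Performance per watt from the 3DMark Wild Life Extreme figures above.
parts = {
    "M1 GPU (10 W, 68 GB/s)":  (5000, 10),
    "GTX 1650 desktop (75 W)": (7500, 75),
    "GTX 1650 mobile (50 W)":  (6900, 50),
}
for name, (score, watts) in parts.items():
    print(f"{name}: {score / watts:.0f} points/W")
# M1 lands around 500 points/W versus roughly 100-140 points/W for the 1650s.
```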

(Nvidia's high-end GPUs have over 10,000 cores)

Because everyone counts cores in a different way. Nvidia counts individual ALUs, Apple counts actual cores (as in, the minimal organisational unit in a GPU cluster). Counting things Nvidia's way, the Apple M1 has 1024 cores (each Apple GPU core contains 4x32 FP32 ALUs); counting things Apple's way, something like the Nvidia RTX 3060 (GA106) is either a three-core GPU (if you count GPCs, with 40x32 FP ALUs each), a 15-core GPU (if you count TPCs, with 4x32 FP32 ALUs each) or a 30-core GPU (if you count SMs with 2x32 FP32 ALUs each, or maybe it's 4x16 FP32 ALUs, I was never quite certain).
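To make that unit conversion concrete, a small sketch using the figures from the post (treating 128 FP32 ALUs as one "Apple-style" core is my simplification):

```python
# "Same silicon, different counting units."
FP32_ALUS_PER_APPLE_CORE = 4 * 32   # 128, per the description above

m1_cores = 8
print(f"M1 counted Nvidia-style:   {m1_cores * FP32_ALUS_PER_APPLE_CORE} 'cores'")  # 1024

ga106_sms = 30                      # full GA106 die
ga106_alus = ga106_sms * 128        # 3840 FP32 ALUs
print(f"GA106 counted Apple-style: {ga106_alus // FP32_ALUS_PER_APPLE_CORE} cores")  # 30
```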


But those never came close to what was available on the desktop. If the next version of the M1 GPU is twice as fast as the current one, then it's still slower than a current-generation mid-range desktop GPU.

As I have shown above, the M1 is already around 70-80% as fast as current entry-level desktop GPUs. Quadruple the number of processing units and give them more memory bandwidth and you easily get RTX 3060 performance, with the power consumption of a 1650 Ti Max-Q.
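Spelled out, the extrapolation looks like this (a sketch that optimistically assumes the Wild Life Extreme score scales linearly with core count, provided bandwidth scales with it):

```python
m1_wle_score = 5000     # 3DMark Wild Life Extreme, 8-core M1 (from above)
for cores in (16, 32):
    projected = m1_wle_score * cores // 8
    print(f"{cores}-core Apple GPU, linear scaling: ~{projected}")
# Real scaling is never perfectly linear (drivers, thermals, bandwidth),
# so treat these as best-case projections, not predictions.
```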
 
  • Like
Reactions: Rashy

Lemon Olive

Suspended
Nov 30, 2020
1,208
1,324
I don’t think the article says this. It only says it’ll have 2 efficiency cores. It doesn’t say they will be just as powerful as 4 M1 efficiency cores. If I had to guess, it’s just 2 icestorm cores instead of 4. With 8 firestorm cores. So it’ll still be extremely powerful.

Now they could theoretically clock the icestorm cores higher than the M1’s icestorm cores which would make them faster. So you could be right, I just don’t think they explicitly stated that.
I think it is far, far more likely that this is a redesigned chip rather than just overclocked efficiency cores; "overclocked efficiency cores" is practically an oxymoron.
 

Lemon Olive

Suspended
Nov 30, 2020
1,208
1,324
I just watched a video on this and I’m impressed. I suspect that a configuration of 8 performance cores, 32 GB, and 16 GPU cores is enough for me in a laptop.

I did not see details on a larger iMac though.
The only detail on the larger iMac was that they stopped working on it. So, yeah. I don't expect to see that any time soon. More is known about the upcoming Mac Pro, which doesn't bode well for the iMac.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
The only detail on the larger iMac was that they stopped working on it. So, yeah. I don't expect to see that any time soon. More is known about the upcoming Mac Pro, which doesn't bode well for the iMac.

That's a disappointment but I suppose that Apple may be eager to put a monster out there for the pro market. Or they could make a cheap Mac Pro to be used like an iMac. Or the Mini could be so powerful that that would be another option. Or it could be hard to get 5k or 6k panels. I would like a large iMac to minimize cable clutter and for a small, clean look.

I'm eager for the MacBook Pros though. I could use a laptop and a desktop, but just the laptop would be good enough for now, as I can manage with the system I built last year.
 

Homy

macrumors 68030
Jan 14, 2006
2,510
2,461
Sweden
GPU performance. I'd like it to be on the level of the 1650 or other modern lower-end cards.

The M1 GPU scores around 20,000 on the Geekbench Metal test; the current iMac GPU option tops out at the Radeon Pro 5700 XT 16GB, which scores up to 75,000.

Basically Apple needs to at least quadruple their GPU prowess. A 32-core Apple Silicon GPU would be in the ballpark of what we currently have from AMD for mid-range desktop Macs (not low-level consumer and not the Mac Pro).

An M2 GPU with 32 cores would be on par with a 5700 XT, 2070 Super, 2080 or 1080 Ti in games like Borderlands 3 at 1440p Ultra. :)

I extrapolated some benchmarks and it will be impressive (1260p is for iMac 24"):

- M1 GPU, 8 cores: Borderlands 3 1080p Ultra 22 fps, Medium 30 fps (1260p 19-26 fps, 1440p 15-23 fps)
- M2 GPU, 16 cores: 1440p 30-46 fps; 32 cores: 1440p 60-92 fps
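For transparency, the M2 rows fall straight out of linear scaling from the listed M1 1440p numbers (a sketch; real games rarely scale this cleanly with core count):

```python
# Linear extrapolation from the 8-core M1 figure listed above
# (Borderlands 3, 1440p Ultra, 15-23 fps).
m1_low, m1_high = 15, 23
for cores in (16, 32):
    scale = cores / 8
    print(f"{cores} cores: ~{round(m1_low * scale)}-{round(m1_high * scale)} fps")
# -> 16 cores: ~30-46 fps, 32 cores: ~60-92 fps
```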
 

adcx64

macrumors 65816
Nov 17, 2008
1,270
124
Philadelphia
100% GPU performance. We're already at the point where we have a nice CPU with a REALLY nice IPC compared to Intel's chips melting a hole through your desk.

CPU perf can likely scale with clock speeds as the lithography shrinks. We need more GPU cores.

F*ck it, Metal ray tracing would be a cool addition but not a game changer for me.
 

adcx64

macrumors 65816
Nov 17, 2008
1,270
124
Philadelphia
Also laughing at those here who think GPU cores scale equally across different brands. Yes, nVidia has more "cores" since they're counting their CUDA "cores" as if they were capable of completing an instruction set by themselves without help from the actual GPU CORE.

Same sh*t AMD does counting "Stream Processors" on their chips.

Still mourning the old MR forums where we all had at least some idea of what we're talking about.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
I’d like to see the graphics performance beat the last MBP, and at least a max of 32 GB of RAM, preferably 64. I don’t think Apple will go balls-out on this chip; they’re likely still considering battery life because it’s going in the MBP.

Hearing the rumor that the M1X/M2 is going to have more performance cores and fewer efficiency cores leads me to believe that they will likely make a discrete SoC for the Mac Pro.
 

Lemon Olive

Suspended
Nov 30, 2020
1,208
1,324
That's a disappointment but I suppose that Apple may be eager to put a monster out there for the pro market. Or they could make a cheap Mac Pro to be used like an iMac. Or the Mini could be so powerful that that would be another option. Or it could be hard to get 5k or 6k panels. I would like a large iMac to minimize cable clutter and for a small, clean look.

I'm eager for the MacBook Pros though. I could use a laptop and a desktop, but just the laptop would be good enough for now, as I can manage with the system I built last year.
I like having a desktop which is both a permanent server and a work machine. I have a MacBook Pro too, but it can't be a replacement for a desktop. I'd be fine with a Mac mini that has the upcoming MBP's chip and options... if there were a respectable display available from Apple that wasn't $6000.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
Also laughing at those here who think GPU cores scale equally across different brands. Yes, nVidia has more "cores" since they're counting their CUDA "cores" as if they were capable of completing an instruction set by themselves without help from the actual GPU CORE.

Same sh*t AMD does counting "Stream Processors" on their chips.

Still mourning the old MR forums where we all had at least some idea of what we're talking about.
Would it be better to count ROPs/TMUs? AMD is upfront in saying the smallest addressable unit is a CU; they really want folks to reference the newer cards by WGP (2 CUs) though, if you look at the RDNA whitepaper.
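For concreteness, here is how those units map onto a 5700 XT-class part like the iMac option mentioned earlier (a sketch; 64 stream processors per CU and 2 CUs per WGP follow the usual RDNA description):

```python
# Counting the same RDNA GPU three different ways.
STREAM_PROCESSORS_PER_CU = 64   # 2 x SIMD32 per CU
CUS_PER_WGP = 2

cus = 40                        # Radeon (Pro) 5700 XT
print(f"CUs:               {cus}")
print(f"WGPs:              {cus // CUS_PER_WGP}")                # 20
print(f"Stream processors: {cus * STREAM_PROCESSORS_PER_CU}")    # 2560
```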
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Also laughing at those here who think GPU cores scale equally across different brands. Yes, nVidia has more "cores" since they're counting their CUDA "cores" as if they were capable of completing an instruction set by themselves without help from the actual GPU CORE.

Same sh*t AMD does counting "Stream Processors" on their chips.

Would it be better to count ROPs/TMUs? AMD is upfront in saying the smallest addressable unit is a CU; they really want folks to reference the newer cards by WGP (2 CUs) though, if you look at the RDNA whitepaper.

Both Nvidia and AMD count FP32 ALUs; the X "CUDA cores" or "stream processors" simply means that the GPU is theoretically capable of doing X FP32 operations per cycle. It doesn't give you the full picture, but neither does Apple's "2.6 TFLOPS". Not quite sure what the rest of @adcx64's comment means.
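For reference, the headline FP32 figures tie back to those ALU counts via clock speed (a sketch; the ~1.28 GHz M1 GPU clock and ~1.78 GHz RTX 3060 boost clock are approximations I am assuming, not official figures):

```python
# FP32 throughput = ALUs x 2 ops/cycle (FMA) x clock.
def fp32_tflops(alus: int, clock_ghz: float) -> float:
    return alus * 2 * clock_ghz / 1000

print(f"M1 (1024 ALUs @ ~1.28 GHz):       ~{fp32_tflops(1024, 1.28):.1f} TFLOPS")  # ~2.6
print(f"RTX 3060 (3584 ALUs @ ~1.78 GHz): ~{fp32_tflops(3584, 1.78):.1f} TFLOPS")  # ~12.8
```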
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
Both Nvidia and AMD count FP32 ALUs; the X "CUDA cores" or "stream processors" simply means that the GPU is theoretically capable of doing X FP32 operations per cycle. It doesn't give you the full picture, but neither does Apple's "2.6 TFLOPS". Not quite sure what the rest of @adcx64's comment means.
You think Apple is going to do 1 RT unit per core (similar to AMD's 1 RT Accelerator per CU)? Would it be better to do a unit that is separate from the core that does RT? Will Apple dip their toe in hardware resolution reconstruction tech?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
You think Apple is going to do 1 RT unit per core (similar to AMD's 1 RT Accelerator per CU)? Would it be better to do a unit that is separate from the core that does RT?

I am afraid I am not qualified to answer any of these questions :) The only thing I can tell you is that ray tracing is essentially built on top of compute shaders; what RT hardware accelerates is the object graph traversal (ray/object intersection). I am not quite sure how this is done in practice.

Will Apple dip their toe in hardware resolution reconstruction tech?

I don't see why not, they do have some untapped ML capacity that can work in parallel to the GPU. It could even be a third-party solution to be honest.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
I am afraid I am not qualified to answer any of these questions :) The only thing I can tell you is that ray tracing is essentially built on top of compute shaders; what RT hardware accelerates is the object graph traversal (ray/object intersection). I am not quite sure how this is done in practice.



I don't see why not, they do have some untapped ML capacity that can work in parallel to the GPU. It could even be a third-party solution to be honest.
Hey @cmaier, do you have any insight on that side of the house?
 

el-John-o

macrumors 68000
Nov 29, 2010
1,590
768
Missouri
100% GPU performance. We're already at the point where we have a nice CPU with a REALLY nice IPC compared to Intel's chips melting a hole through your desk.

CPU perf can likely scale with clock speeds as the lithography shrinks. We need more GPU cores.

F*ck it, Metal ray tracing would be a cool addition but not a game changer for me.
Or the idea that doubling the cores on a mobile GPU is going to make it "blow past" mid-range desktop GPUs that it can't catch up with now...
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Or the idea that doubling the cores on a mobile GPU is going to make it "blow past" mid-range desktop GPUs that it can't catch up with now...

Not doubling but quadrupling. And it’s not a random guess but an extrapolation based on existing specs and benchmarks.

Not so long ago the idea of an iPhone CPU outperforming a desktop one was laughable to many. And look where we are now.
 