I put some links (one from ARM) showing 3.5GHz design frequency for their more recent architectures (Cortex-X2, Neoverse), so 3.2GHz is clearly not a limit ;)

The question is why the "architecture" would be limited to any frequency. The fabrication may very well be... but the architecture is all about how the instructions are processed. There might be some race conditions that create limits, but in theory you could just crank up the clock and it would run faster - and hotter, of course. ARM no doubt gives design guidelines to implementers, and I expect there is a sweet spot beyond which ARM compares less favorably to Intel/AMD, which is why most chips are in the 2-3.2GHz range.
Even if ARM gets even higher in 2-3 years, he will just tell you something else... so don't bother, it's not worth it :)
 
Keep in mind this was an OpenCL test, not a Metal one. Should be interesting to see Metal.

So it's around a GTX 1660 Ti in OpenCL.

I wonder what it is in Metal.
So this is with the 16 GPU cores? Isn't that too low - around a 1660 Ti rather than around a dGPU like the 3070/3080?
 
Again you deflect... you were proven wrong, and now you tell us something else. Based on that, your opinion loses our trust.
Does 0.2GHz more prove me wrong? Not at all. Since you are not able to provide any proof that ARM can go well beyond 3.3GHz, like 5.0GHz, I will not respond.
 
OMG, new results for GFXBench. This thing is a monster that scores 275.9 fps on Aztec Ruins high tier!!! That's 3.45 times faster than the M1. It compares really well to the laptop RTX 3070, which has a median score of 270 fps, and it is about 10-20% slower than a laptop RTX 3080 or RX 6800M. Apple went all out with the M1 Max.

GFXBench Entry

Screen Shot 2021-10-19 at 11.43.54 PM.png
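A quick back-of-the-envelope check of the quoted figures (275.9 fps, "3.45x the M1", versus a 270 fps median for the laptop RTX 3070) - the only numbers used here are the ones from the post itself:

```python
# Sanity-check the quoted GFXBench Aztec Ruins (high tier) figures.
m1_max_fps = 275.9           # quoted M1 Max score
rtx3070_laptop_fps = 270.0   # quoted median laptop RTX 3070 score

implied_m1_fps = m1_max_fps / 3.45   # from the "3.45x faster" claim
vs_3070 = m1_max_fps / rtx3070_laptop_fps

print(f"Implied M1 score: ~{implied_m1_fps:.0f} fps")   # ~80 fps
print(f"M1 Max vs laptop 3070: {vs_3070:.2f}x")         # ~1.02x
```

So the claim implies an M1 baseline of roughly 80 fps, and the M1 Max edging out the laptop 3070 by a couple of percent - consistent with the "compares really well" wording.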
 
I say this as someone who is very critical of many of his posts: everything @mi7chy wrote is entirely correct in your quote. Frequency is limited by power/heat. That's it. There's nothing inherent even in a uarch design, never mind an architecture, that limits a chip's frequency. Remember, an "architecture" is literally just the instruction set; the uarch is the actual design of the core, and while that obviously affects your performance-per-watt curve, it is a curve and it keeps going potentially forever. Fabrication nodes and physics are the more relevant concerns.

Put another way: lower clock speeds in ARM chips, like Apple's 3.2GHz, are simply where Apple/ARM sees the best performance-to-power ratio for their designs and use cases (i.e. mobile). They could set the clocks higher, but every gain in clocks requires pumping more power. There's no hard limit though - until you start melting the chip, that is, but neither Apple nor ARM is anywhere close to that. Most desktop x86 chips draw 4x the power for a single core when running at full boost. There's nothing stopping Apple or ARM from doing that; it just doesn't fit their use cases.
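The "more power for every gain in clocks" point can be sketched with the textbook dynamic-power model, P ≈ C·V²·f, where supply voltage has to rise roughly in step with frequency. The baseline wattage and the linear V-to-f scaling below are illustrative placeholders, not measured M1 figures:

```python
def relative_core_power(freq_ghz, base_freq=3.2, base_power_w=5.0):
    """Dynamic power P ~ C * V^2 * f; if V scales ~linearly with f,
    then P ~ f^3. base_power_w is a made-up illustrative baseline,
    not a real measured per-core figure."""
    return base_power_w * (freq_ghz / base_freq) ** 3

for f in (2.0, 3.2, 4.0, 5.0):
    print(f"{f:.1f} GHz -> ~{relative_core_power(f):.1f} W per core")
```

Under this crude model, pushing 3.2GHz up to 5.0GHz costs roughly 3.8x the per-core power, which matches the flavor of the "desktop x86 draws 4x the power at full boost" observation.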
 
That's still lower than what x86 can do.
There is a difference between "can do" and "chooses to do".

I expect the advantageous power/performance ratio of ARM starts to decline if you crank it up too much, which is why implementers of ARM architecture choose not to do this. No doubt ARM provides guidelines as well, but I have not found any reference to limits of the architecture.
 
OMG, new results for GFXBench. This thing is a monster that scores 275.9 fps on Aztec Ruins high tier!!! That's 3.45 times faster than the M1. It compares really well to the laptop RTX 3070, which has a median score of 270 fps, and it is about 10-20% slower than a laptop RTX 3080 or RX 6800M. Apple went all out with the M1 Max.

GFXBench Entry

View attachment 1872103
Why are some on-screen tests less than 120Hz, when the off-screen score is clearly higher?
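One plausible explanation (my assumption, not from the benchmark documentation): off-screen runs render at a fixed resolution with no display cap, while on-screen runs render at the panel's native resolution and are v-sync limited, so a roughly fill-rate-bound test can land below 120 fps on-screen even when its off-screen score is far higher. A toy model with hypothetical numbers:

```python
def onscreen_estimate(offscreen_fps, offscreen_px, native_px, refresh_hz=120):
    """Toy model: fps scales inversely with pixel count (fill-rate bound),
    then the display refresh cap applies. All inputs are hypothetical."""
    return min(offscreen_fps * offscreen_px / native_px, refresh_hz)

offscreen = 2560 * 1440      # hypothetical fixed off-screen resolution
native = 3456 * 2234         # 16" MacBook Pro panel resolution

print(onscreen_estimate(276.0, offscreen, native))  # hits the 120 cap
print(onscreen_estimate(200.0, offscreen, native))  # lands below 120
```

In this sketch, the native panel has roughly 2x the pixels of the off-screen target, so anything under ~250 fps off-screen drops below the 120 cap on-screen.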
 
There’s nothing inherent even in a uarch design, never mind an architecture, that limits a chip’s frequency.

Just a quick comment on this: microarchitecture definitely has an effect on clocks. The way you set up your transistors limits their synchronization capability (I have no idea exactly how it works since I am not a semiconductor person; hopefully someone can explain it better). Apple chose to implement more processing units and arrange them in a more complex way, so they can't go as fast, but they can do much more work per clock. Current x86 CPUs choose to do less work per clock but go very fast instead. The latter is arguably "easier" (in terms of chip design), but the power consumption suffers.
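The tradeoff described above - wide and slower versus narrow and faster - can be put into rough numbers, since sustained throughput is approximately IPC × clock. The IPC figures below are illustrative placeholders, not measured values for any real chip:

```python
# Throughput ~ IPC (instructions per clock) x frequency.
# IPC numbers are illustrative placeholders, not measurements.
designs = {
    "wide-and-slow (Apple-style)": {"ipc": 8.0, "ghz": 3.2},
    "narrow-and-fast (x86-style)": {"ipc": 5.0, "ghz": 5.0},
}

for name, d in designs.items():
    throughput = d["ipc"] * d["ghz"]  # billions of instructions/sec
    print(f"{name}: ~{throughput:.1f} Ginstr/s")
```

With these made-up numbers the two designs land within a few percent of each other, which is the point: a wider core at lower clocks can match a faster narrow one while spending far less power per instruction.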
 
OMG, new results for GFXBench. This thing is a monster that scores 275.9 fps on Aztec Ruins high tier!!! That's 3.45 times faster than the M1. It compares really well to the laptop RTX 3070, which has a median score of 270 fps, and it is about 10-20% slower than a laptop RTX 3080 or RX 6800M. Apple went all out with the M1 Max.

GFXBench Entry

View attachment 1872103

I hope it is at least as fast as an RTX 3070. Otherwise I will cancel my order. Apple sold me on this thing being as fast as an RTX 3080.
 
Does 0.2GHz more prove me wrong? Not at all. Since you are not able to provide any proof that ARM can go well beyond 3.3GHz, like 5.0GHz, I will not respond.
I'll play devil's advocate here, and say that could be partially correct.

ARM may very well create their designs with a maximum design frequency in mind, much like TDP (thermal design power), and I would be surprised if they did not provide guidelines to licensees. This could set limits for the maximum frequency of an implementation based on the physical limits of the fabrication - i.e. you can't run a small chip designed for 10W at 100W, slap a large heat-sink on it, and hope you will be OK. Much in the same way, you can't put a Formula 1 engine in a compact car and expect the rest of the car to handle the power.

The argument is that ARM *could* theoretically decide to design a chip that runs at 5GHz, running the same Instruction Set Architecture. There are important differences between Instruction Set Architecture, physical architecture, and implementation/fabrication. They haven't (yet) chosen to do this because their value proposition is running well at very low power compared to the competition. The vast majority of their designs end up in low power devices like phones, and this is where they make their money.

ARM is increasingly penetrating the server space, where again, high frequency does not equate to optimal throughput. Lots of mid-speed cores handle most workloads better than a smaller number of high-speed ones, and they pose fewer challenges for cooling.

What we are saying is that the fundamental architecture (the instruction set architecture) is not limited per se by frequency, any more than natural language is - you can speed it up or slow it down and still be intelligible.

Very high-frequency single-core execution might be useful in some cases, but I suspect only a few, which is why overall CPU design is moving away from this as a goal.
 
Do you own either device? When cold, the MBA M1 performs lower than a cold MBP M1, and it loses ~34% performance due to throttling under sustained load.
I own the M1 13" Pro. The fans generally don't turn on; under sustained high load they hover at 1100RPM, and under sustained full load (i.e. bouncing a longer project in Logic or denoising a long file in RX) they spin up to 6500RPM.
I think the Pro vs Air numbers are impressive, since the latter doesn't even have a fan. I doubt the difference between the 14" and 16" will be as drastic.
But it's definitely something to wait for - it's a ~$200 difference for the same spec, if size doesn't matter.

I still think I'll go with the 14".
 
I hope it is at least as fast as an RTX 3070. Otherwise I will cancel my order. Apple sold me on this thing being as fast as an RTX 3080.

It appears to be much better than the 3080 in some benchmarks.

The M1 Max is a viable gaming rig - if only vendors would port their games to it.
 
Just a quick comment on this: microarchitecture definitely has an effect on clocks. The way you set up your transistors limits their synchronization capability (I have no idea exactly how it works since I am not a semiconductor person; hopefully someone can explain it better). Apple chose to implement more processing units and arrange them in a more complex way, so they can't go as fast, but they can do much more work per clock. Current x86 CPUs choose to do less work per clock but go very fast instead. The latter is arguably "easier" (in terms of chip design), but the power consumption suffers.

That’s true … I did try to qualify that uarch affects performance-per-watt curves, but you’re right that faster clocks mean tighter syncing across the core, and more things than just heat can get you into trouble as you up the clocks. I should’ve said that a uarch defines a little about the upper limits for clock speed, and an arch nothing at all. Having said that, I don’t know of any indication that Apple is anywhere close to breaking those syncing limits, or even what those might be for their chips, except that they’re likely tighter than x86’s. From a purely thermal-headroom perspective they could easily pump more power for higher clocks. Truthfully, since we don’t have overclockable M1s, it’s difficult to know. :) But I doubt 3.2 GHz is the limit of what’s possible.

Edit: Heck, who knows what Apple’s upcoming “high power mode” means. For the record, I think it is unlikely to mean up-clocking the CPU, though apparently some DRAM power states can be set higher than they are currently without increasing clocks:

 
That’s true … I did try to qualify that uarch affects performance-per-watt curves, but you’re right that faster clocks mean tighter syncing across the core, and more things than just heat can get you into trouble as you up the clocks. I should’ve said that a uarch defines a little about the upper limits for clock speed, and an arch nothing at all. Having said that, I don’t know of any indication that Apple is anywhere close to breaking those syncing limits, or even what those might be for their chips, except that they’re likely tighter than x86’s. From a purely thermal-headroom perspective they could easily pump more power for higher clocks. Truthfully, since we don’t have overclockable M1s, it’s difficult to know. :) But I doubt 3.2 GHz is the limit of what’s possible.

Edit: Heck, who knows what Apple’s upcoming “high power mode” means. For the record, I think it is unlikely to mean up-clocking the CPU, though apparently some DRAM power states can be set higher than they are currently without increasing clocks:

Yep, yep - definitely related to the high power mode. This is getting really interesting.
 
Re: clock speed. Let’s wait and see for the M1 Max Quad Mac Pro. There should be much less power and temperature constraint there, so clocks may be set higher.
 
Does 0.2GHz more prove me wrong? Not at all. Since you are not able to provide any proof that ARM can go well beyond 3.3GHz, like 5.0GHz, I will not respond.
You said limited at 3.2... now we talk about 3.3? Do you know the definition of "limited"?
Even if something out there runs even 0.001 more than what you said, by basic math that is not limited.
It's clear you are just guessing, and when you are not right you move the goalposts... you've lost credibility with us, so it's better for you not to respond anymore.
 
OMG, new results for GFXBench. This thing is a monster that scores 275.9 fps on Aztec Ruins high tier!!! That's 3.45 times faster than the M1. It compares really well to the laptop RTX 3070, which has a median score of 270 fps, and it is about 10-20% slower than a laptop RTX 3080 or RX 6800M. Apple went all out with the M1 Max.

GFXBench Entry

View attachment 1872103
Even with the embargo and under NDA... the YouTube reviewers found a way to talk to us :))
 
So this is with the 16 GPU cores? Isn't that too low - around a 1660 Ti rather than around a dGPU like the 3070/3080?
Who still cares about OpenCL anyway? Even Blender under macOS doesn't support it and will move to Metal.

Would like to see the Metal scores.
 
Why is the Geekbench run clocked at 2.4GHz? Does it have to do with the High Performance mode in Monterey?
 
I'm not jumping at a 70% performance increase over the M1; I hope the leaked benchmark is flawed.
 