
hefeglass

macrumors 6502a
Apr 21, 2009
760
423
Our newer systems use Epyc chips for that reason. I'm far more interested in performance than performance/watt.
Of course you are... you'd be insane if you were somehow interested in efficiency but spent all your time in here talking about how x86 isn't dead and they are going to catch up any day now.
Why would you ever admit that power usage matters? Claiming it doesn't is the only way to make x86 seem even remotely comparable...
 

Juuro

macrumors 6502
Feb 13, 2006
408
411
Germany
MacBook Airs will get some M1 Pros, Mac minis too, but probably not the M1 Max, although it can fit in there. Early/mid next year. iMac Pros can come back with the M1 Pro and M1 Max in late 2022.

The new Mac Pro, in November 2022, will get the M1 Pro Max chip with 20/40/60/80-core options, memory at 128/256/384/512 GB respectively, and GPU cores scaling the same way.

I don't think there will be an M2 next year (2022), for example; Apple will max out the M1 and then maybe do the M2 in late 2023.
I think you got the timing and the specs wrong.

First, there are several rumours pointing to an iMac release in the first quarter of 2022.
Then, the MacBook Air can't handle an M1 Pro chip. It's a device without active cooling, and yes, the M1 Pro is very efficient, but it still needs proper cooling to take advantage of its performance. If you put an M1 Pro in a passively cooled MacBook Air you could just as well use the M1, because the M1 Pro would get throttled heavily the moment it tried to do some hard work.
The new Mac Pro would make sense to release at WWDC because it is a pro machine and Apple has a bit of a history of at least previewing pro devices at WWDC. Also, there might be some hardware specialties developers could take advantage of with the new Mac Pro.
Then, an 80-core chip would mean 8 M1 Max dies in one chip. I think that's not very realistic for now. We have already heard of a 40-core chip (the Jade 4C die). That one would already be extremely powerful. But it is probably not an easy task to put those dies together; I don't think they can simply scale that to infinity.
I think we will see a new M2 MacBook Air in late 2022. By then the M1 is two years old and the MacBook Air is too. So it would make perfect sense to start with the new generation of chips then in the oldest M1 computer.
 

Taz Mangus

macrumors 604
Mar 10, 2011
7,815
3,504
I think we will see a new M2 MacBook Air in late 2022. By then the M1 is two years old and the MacBook Air is too. So it would make perfect sense to start with the new generation of chips then in the oldest M1 computer.
That would make a lot of sense, spacing the introduction of the M2 until after the Mac Pro is introduced with the M1 Extreme. I wonder if Apple will switch to being in cadence with the A16 and use the A16 cores instead of the A15 cores in the M2.
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
Fall 2020 - M1 SoC
13" MacBook Air (existing design)
13" MacBook Pro (existing design)
Mac mini (existing design)

Spring 2021 - M1 SoC
24" iMac (new design)

Fall 2021 - M1 Pro/Max SoCs
14" MacBook Pro (new design)
16" MacBook Pro (new design)

Spring 2022 - M1 Pro/Max SoCs
27" iMac (new design)
Mac mini (new design / taller)

Summer 2022 - M1 Pro/Max Dual/Quad SoCs
30" iMac Pro (new design)
Mac Pro Cube (new design)
Mac Pro (existing design)

Fall 2022 - M2 SoC
14" MacBook (new design)
16" MacBook (new design)
24" iMac (existing design)
Mac mini (new design / shorter)

Spring 2023 - M2 Pro/Max SoCs
14" MacBook Pro (existing design)
16" MacBook Pro (existing design)
27" iMac (existing design)
Mac mini (existing taller design)


TL;DR:

Spring - Mid/High-end
Summer - Extreme High-end
Fall - Low/Mid-end
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Of course you are... you'd be insane if you were somehow interested in efficiency but spent all your time in here talking about how x86 isn't dead and they are going to catch up any day now.
Why would you ever admit that power usage matters? Claiming it doesn't is the only way to make x86 seem even remotely comparable...
Let’s face it though, Epyc is a server chip. Apple doesn’t make servers, so comparing the two is dumb.
 

MacZoltan

macrumors member
May 18, 2016
94
9
This is hardly surprising. The system has a finite amount of resources (thermal, memory bandwidth, etc.), so if you push it you will see diminishing returns. It makes little sense to optimize the system for such a case, since these kinds of workloads simply do not exist in the real world (aside from stress testing). By the way, other systems have the same behavior (almost every high-end laptop will throttle if you push the CPU and the GPU simultaneously).

That said, I am sure that Apple's professional systems will do much better in this kind of stress test, especially the M1 Max, which has many more resources than the base consumer M1.
Well, I don't have the resources to buy those and test them :) but, for example, the 2013 Mac Pro, which has thermal issues, loses only 10% in the same scenario.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
the 2013 Mac Pro, which has thermal issues, loses only 10% in the same scenario

The 2013 Mac Pro does not have any thermal issues; its problem is that the cooling system is not capable enough to work with the hardware that came afterwards. As Apple themselves admitted, they had "designed themselves into a thermal corner", because they underestimated the increase in power consumption of future CPUs and GPUs.

And yes, in general larger desktop computers will see less of a performance loss here since they are usually less thermally constrained. Try a stress workload where CPU and GPU actively communicate though — any system with a dGPU will slow down to a crawl.
 

thenewperson

macrumors 6502a
Mar 27, 2011
992
912
That would make a lot of sense, spacing the introduction of the M2 until after the Mac Pro is introduced with the M1 Extreme. I wonder if Apple will switch to being in cadence with the A16 and use the A16 cores instead of the A15 cores in the M2.
Hoping for this over here too, similar to the AX chips.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
So is claiming that they will. They are years behind right now, and nothing they do can overcome the penalty caused by variable length instructions and convoluted addressing modes. The only way an x86-64 design could compete with an Apple design in performance/watt is to be fabbed on a smaller node. (Or for Apple to stumble and release a bad design for some reason)

It isn't the "only way". Part of Intel's and AMD's problem is that they have a "constipated" instruction set far more so than a "complex" one. It isn't so much that ARM "is cleaner" as that ARM licensing allows Apple to throw away older stuff as much as add new stuff (and Apple-specific accelerators).

What significant software vendor has written any new MMX-specific code in the last 5-6 years while competing in the Windows 11, macOS, and Linux space?

Meanwhile, fire up a 5-8 year old ARM app and it is dead as a doornail on the M1 series.

Running circa-1989 DOS programs faster is limited to fab node shrinkage. But cutting loose 32-bit x86, pragmatically failed SIMD extensions, modes to mimic Multics, BIOS boot, etc. is also an option. x86-64 is way overdue for a 'garbage collection' on at least a subset of the product line-up.
[ Keep doing process shrinks for the folks who want industrial-control embedded processors for stuck-in-time software that probably doesn't need much performance uplift at all. A smaller, more affordable chip with the same clocks and a stuck-in-time instruction set would probably work better. ]

x86-64 will still "suck" in some respects, but it will suck less. And that can be a substantive contribution (e.g., shift some time/energy/effort/resources from 6-7 highly redundant and overlapping SIMD decoders into optimizing a smaller set of stuff). A contributing factor to Apple's lead over other smartphone SoC implementers is also just saying "no" to some stuff.
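To make that concrete: this is roughly how software targets the newer SIMD sets today and never touches MMX at all. A minimal sketch using the GCC/Clang built-ins and AVX2 intrinsics; the function names and the summing loop are just illustrative, not anyone's actual product code:

#include <stddef.h>
#include <immintrin.h>

/* Scalar fallback: runs on any x86-64 CPU. */
static float sum_scalar(const float *v, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += v[i];
    return s;
}

/* AVX2 variant of the same loop; the target attribute lets GCC/Clang emit
   AVX2 instructions for just this function. */
__attribute__((target("avx2")))
static float sum_avx2(const float *v, size_t n) {
    __m256 acc = _mm256_setzero_ps();
    size_t i = 0;
    for (; i + 8 <= n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(v + i));
    float lanes[8];
    _mm256_storeu_ps(lanes, acc);
    float s = lanes[0] + lanes[1] + lanes[2] + lanes[3]
            + lanes[4] + lanes[5] + lanes[6] + lanes[7];
    for (; i < n; i++)           /* scalar tail */
        s += v[i];
    return s;
}

/* Runtime dispatch: probe CPUID once via the compiler built-in and pick a
   path. MMX never enters the picture. */
float sum_dispatch(const float *v, size_t n) {
    return __builtin_cpu_supports("avx2") ? sum_avx2(v, n) : sum_scalar(v, n);
}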
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
We don’t really know what Apple’s update cycle will be once they have completed their transition and post all the supply chain disruptions. You can’t predict much based on the recent update intervals as this is an unstable period by many measures. You also can’t use the Intel Mac period as a pattern since Apple didn’t control the timeline and was not happy with the slow progress.

For the last several cycles before the pandemic, Apple kept the X series closer to an 18-month cycle (moving from major process node shrink to process node shrink): A10X -> (no A11X) -> A12X -> (no A13X) -> M1 (pragmatically an A14X).

You can hand-wave and say that Apple had no competition in the iPad space so they just didn't bother. But Apple really has little to no track record of doing yearly upgrades on 300-600 mm^2 scale dies (pragmatically, highly pipelined upgrades, because these are not 12-month work assignments).

Apple punted on yearly updates at the 120 mm^2 scale. If the qualification costs climb even higher for larger dies, are they really going to put the money down for that when they did not before?

The gap between the plain A-series at 80-110 mm^2 and the 'A X' series derivative at 110-140 mm^2 wasn't really all that large: in the ballpark of 25% bigger. That is different from 200-400% bigger.

The plain M1, M2, M3, etc. sequence staying on yearly updates would actually be an uplift from what Apple has done previously. Collectively across Macs and iPad Pros it is a bigger volume, so it wouldn't be too surprising. As the dies get substantially larger and the volume goes substantially down, things will probably get back into at least the same zone the "A X" chips were in when they only went into the iPad Pro (and some Apple TVs). A full set of suffixes trailing 'Mn' on each iteration of 'n' doesn't seem likely.
 

jjcs

Cancelled
Oct 18, 2021
317
153
Of course you are... you'd be insane if you were somehow interested in efficiency but spent all your time in here talking about how x86 isn't dead and they are going to catch up any day now.
Why would you ever admit that power usage matters? Claiming it doesn't is the only way to make x86 seem even remotely comparable...
"all your time here"...

I'm considering upgrading to an M1 Max notebook for the multi-core performance, the generally superior build quality (every system I own is a Mac of some vintage, or an SGI for nostalgia for real UNIX workstations, going back to an iMac G4 in the kitchen, where the form factor is perfect for that role), and the no-worse-than-any-other-workstation-laptop memory cost.

Battery life, while impressive, isn't even in the top 20 reasons I'd consider it.

Pointing out that Apple has competition STILL really does get your feathers in a bunch, doesn't it? Reminds me too much of the PPC era.
 

jjcs

Cancelled
Oct 18, 2021
317
153
A64FX is hardly a "high performance ARM chip"... we don't have exact benchmarks, but M1 Firestorm is probably at least 2.5x faster than A64FX cores, and probably more than that. Fugaku's performance comes from the fact that it has a gazillion of those cores. And of course, A64FX is a highly specialized chip, designed for parallel throughput on certain scientific workloads. Not something you would put in a general-purpose computer at any rate.

It is a "high performance ARM chip" by any measure, and I presented it as a non-low-end ARM chip example. My use cases are algorithm development for massively parallel, but not GPU, codes. So an A64FX workstation would be a good fit for me. Since those aren't generally available, it's AMD for work at the moment. The M1 Max is very interesting, though. Couldn't get one for work, but for personal research it's very tempting.

The only thing holding me back is Apple's stated intent to drop OpenGL completely at some point, and there's no way in hell I'm going to spend time porting tools written 20 or more years ago to Metal. (Note: if they stated it would continue to ship with the system but wouldn't be updated, this would be no issue.) I'm definitely interested in seeing how much of a performance penalty Parallels imposes when virtualizing ARM Linux. If it's modest, I'll pull the trigger. x86-64 will continue on, but some variant of ARM is clearly the near-term future. At the moment, OS and base instruction set commonality from workstation to (most) supercomputers is attractive. After Bulldozer (which was massively disappointing), I wasn't expecting AMD to make a major comeback and dethrone Intel so handily. They did. So I wouldn't claim (and I don't think Apple does) that Apple has an insurmountable lead. It's leading, in this segment (laptops) at least, by a good margin NOW. Competition is good.

I still miss IRIX and MIPS.... Best desktop and a good instruction set. Bad management. Even bought an Altix back in the day, but Intel pretty much dropped the ball compiler-wise. Xeon pushed past it....
 

Taz Mangus

macrumors 604
Mar 10, 2011
7,815
3,504
A64FX is hardly a "high performance ARM chip"... we don't have exact benchmarks, but M1 Firestorm is probably at least 2.5x faster than A64FX cores, and probably more than that. Fugaku's performance comes from the fact that it has a gazillion of those cores. And of course, A64FX is a highly specialized chip, designed for parallel throughput on certain scientific workloads. Not something you would put in a general-purpose computer at any rate.
158,976 A64FX chips, to be exact. Each A64FX costs on the order of $20,000.

Benchmark for the A64FX:

 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
It is a "high performance ARM chip" by any measure, and I presented it as a non-low-end ARM chip example. My use cases are algorithm development for massively parallel, but not GPU, codes. So an A64FX workstation would be a good fit for me. Since those aren't generally available, it's AMD for work at the moment. The M1 Max is very interesting, though. Couldn't get one for work, but for personal research it's very tempting.

Which measures are those? If you are not running specialized high-throughput SVE code, A64FX will be slower than an old Android phone. The development experience on a machine like that, with the per-core performance of a wet noodle, will be extremely frustrating. Anyway, @Taz Mangus posted some performance measurements, check them out.
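For anyone wondering what "specialized high-throughput SVE code" looks like in practice, here is a minimal sketch of a DAXPY-style loop written with the Arm SVE C intrinsics from arm_sve.h; the function name and the -march=armv8-a+sve build flag are just illustrative assumptions, not anything A64FX-specific:

#include <arm_sve.h>
#include <stdint.h>

/* y[i] += a * x[i] -- the kind of wide, predicated SIMD loop A64FX is built for. */
void daxpy_sve(double a, const double *x, double *y, int64_t n) {
    for (int64_t i = 0; i < n; i += svcntd()) {
        svbool_t    pg = svwhilelt_b64(i, n);   /* predicate also covers the tail */
        svfloat64_t vx = svld1_f64(pg, &x[i]);
        svfloat64_t vy = svld1_f64(pg, &y[i]);
        vy = svmla_n_f64_x(pg, vy, vx, a);      /* vy += vx * a */
        svst1_f64(pg, &y[i], vy);
    }
}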

Regarding OpenGL, I'm sure the compatibility is here to stay. Apple has a fairly robust OpenGL layer built on top of Metal, it doesn't cost them anything to maintain it, and they have no reason to remove it. And even if they do remove it, it will take the community a week to whip up a new one.
 

Melbourne Park

macrumors 65816
....

Pointing out that Apple has competition STILL really does get your feathers in a bunch, doesn't it? Reminds me too much of the PPC era.

I think now is close to the inverse of the PPC era. The G4 PPC blew out so much heat it could heat the room. Apple never could get a G5 into a notebook: too hot and too power hungry. And the tests Apple did were meant to convince buyers that the PPC's slow clock rate was not its performance bottleneck (while Intel convinced buyers that clock rate was the definition of speed). So Apple would do Photoshop speed comparisons. Then Microsoft assisted Windows by slowing down Excel on Mac OS.

It's quite different now: the fastest chip from Apple was introduced in a notebook. When did that happen before? And I suspect the Pro and Max chips can be upgraded annually for several years, via cache, clock speed, and other fine tuning. Apple can already switch out bad sectors and downgrade a Max to a Pro CPU. With greater production efficiency and lower failure rates they may use those spare areas of silicon and increase speeds, at no extra cost at all.
Hm, our bioinformatics developers have relatively old MBPs and let a mainframe/supercomputer/cloud do the heavy lifting. That is quite common. That means that you do not need 32 cores and 128 GB on your desk.

That's true for many, IMO. My daughter-in-law is a scientist and just changed research jobs. She got an HP notebook and a separate big screen (she's based at a medical university), and all the work she is doing (lots of stats-type work related to cancer research) is done via her screen but using a remote program, i.e. it's cloud based.

The point is, though: why did she get a costly notebook? She did not need it. She really only needs something close to a dumb terminal.

Secondly, if work is cloud based there is no need for fast individual computers.

Thirdly, if workstations are not in our future, then notebooks are. And Apple has a more effective portable solution due to its power characteristics, i.e. long battery life, which really is a key success factor for a notebook.
 

Pro Apple Silicon

Suspended
Oct 1, 2021
361
426
Secondly, if work is cloud based there is no need for fast individual computers.

Thirdly, if workstations are not in our future, then notebooks are. And Apple has a more effective portable solution due to its power characteristics, i.e. long battery life, which really is a key success factor for a notebook.
I think we're already seeing the limitations of workstations in the cloud. Gaming in the cloud is already a commercial product...and it sucks. Sure it is passable for some people who have no other option, but it isn't a replacement for local and it never can be.

Latency and streaming quality will always be factors and will always impose limits. That's not to say you can't offload heavy-duty compiling or encoding tasks, but if they are truly heavy duty, you are probably going to trade a CPU bottleneck for a bandwidth bottleneck and not save any time at all (e.g., just pushing 50 GB of source material over a 100 Mbit/s uplink takes over an hour before any remote work even starts).
 

Tagbert

macrumors 603
Jun 22, 2011
6,259
7,285
Seattle
That would make a lot of sense, spacing the introduction of the M2 until after the Mac Pro is introduced with the M1 Extreme. I wonder if Apple will switch to being in cadence with the A16 and use the A16 cores instead of the A15 cores in the M2.
You'll have to ask Johny Srouji
 

jjcs

Cancelled
Oct 18, 2021
317
153
Which measures are those? If you are not running specialized high-throughput SVE code, A64FX will be slower than an old Android phone. The development experience on a machine like that, with the per-core performance of a wet noodle, will be extremely frustrating. Anyway, @Taz Mangus posted some performance measurements, check them out.

Regarding OpenGL, I'm sure the compatibility is here to stay. Apple has a fairly robust OpenGL layer built on top of Metal, it doesn't cost them anything to maintain it, and they have no reason to remove it. And even if they do remove it, it will take the community a week to whip up a new one.
Most of the codes I use are very data parallel and also respond well to local vectorization, so many-core with careful vectorization performs well. I brought it up in the first place merely to point out that there are other developments in the ARM space that may end up competing with Apple too.

Regarding OpenGL, if the compatibility were here to stay, they wouldn't mark it "deprecated", complete with warnings every time you compile against it, and it does cost them to maintain it. They've even stated that bug fixes shouldn't be expected going forward.
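For anyone else bitten by those warnings: they can at least be silenced per file with the GL_SILENCE_DEPRECATION guard Apple put in the macOS OpenGL headers. A minimal sketch; the clear_frame function is just a trivial stand-in for real legacy GL code:

/* Define this before including the headers, otherwise every GL call compiled
   on macOS 10.14+ triggers an "OpenGL API deprecated" warning. */
#define GL_SILENCE_DEPRECATION
#include <OpenGL/gl3.h>

/* Legacy GL code like this still compiles and runs; it just no longer gets
   bug fixes or new features from Apple. */
void clear_frame(void) {
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
}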
 

jjcs

Cancelled
Oct 18, 2021
317
153
I think now is close to the inverse of the PPC era. The G4 PPC blew out so much heat it could heat the room. Apple never could get a G5 into a notebook: too hot and too power hungry. And the tests Apple did were meant to convince buyers that the PPC's slow clock rate was not its performance bottleneck (while Intel convinced buyers that clock rate was the definition of speed). So Apple would do Photoshop speed comparisons. Then Microsoft assisted Windows by slowing down Excel on Mac OS.

It's quite different now: the fastest chip from Apple was introduced in a notebook. When did that happen before? And I suspect the Pro and Max chips can be upgraded annually for several years, via cache, clock speed, and other fine tuning. Apple can already switch out bad sectors and downgrade a Max to a Pro CPU. With greater production efficiency and lower failure rates they may use those spare areas of silicon and increase speeds, at no extra cost at all.

You're correct regarding the inversion of the thermal issues from that era to now. At one time the G4 and G5 were the fastest available, but Apple marketing still claimed that when it was no longer true. I wouldn't expect it otherwise.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Most of the codes I use are very data parallel and also respond well to local vectorization, so many-core with careful vectorization performs well.

Well, A64FX might be ok for running it (although I would be surprised if M1 Pro outperforms it, even in SIMD code), but it would still make a terrible workstation. Building software would take ages.

I brought it up in the first place merely to point out that there are other developments in the ARM space that may end up competing with Apple too.

Maybe from Qualcomm. Fujitsu is as much a potential competitor of Apple as John Deere is a competitor of Tesla. Entirely different product. And frankly, A64FX is nothing remarkable. It's a dead slow CPU with nothing particularly sophisticated about it: it just has wide vector units and high-bandwidth RAM for SIMD throughput. The technology is hardly interesting or novel. Just specialized. A64FX is not a general-purpose product and will never be one, nor does it contain any tech that can be used to build a viable general-purpose product.

Another way of looking at this is to ask yourself a question: how difficult is it to design a CPU with the performance characteristics of A64FX? The answer is: probably not too difficult. Pretty much any current CPU design company could do it, provided there is funding and a generous budget. But how difficult is it to design something like current Apple Silicon? Exceedingly difficult, as nobody except Apple is even close.


Regarding OpenGL, if the compatibility were here to stay, they wouldn't mark it "deprecated", complete with warnings every time you compile against it, and it does cost them to maintain it. They've even stated that bug fixes shouldn't be expected going forward.

Well, you are not supposed to use it in new software. But it works fine for legacy stuff.
 

Christian Schumacher

macrumors member
Oct 3, 2015
59
25
 