But we are already past the one-year mark for consumer updates.
Right now we are in the middle of a global pandemic, global chip shortages, and global supply chain issues. Apple's intended release cycle has surely been affected by this.

That’s why I said I think Apple will “eventually” do that. Right now the industry is still recovering.
 
Over 4 months!? What happened to your order? I thought I had a long wait for my BTO M1 Max at 6-7 weeks (ordered mid-December - it was closer to 6 weeks in the end).

I agree that M2 Pro/Max are unlikely before mid-2023. However, Apple may move to a shorter 15-18 month cadence in the future.
My original date was around now, mid-Feb. Ordered from KRCS, a UK authorised reseller - I wonder whether they messed it up or not.

But all Apple delivery dates got pushed out: if I order the same machine today from Apple, it gives me a date range of March 31st to April 13th - so I am a bit peeved. I may as well wait - and if a 27” iMac gets released soon, I might cancel my order.
 
I did the exact same thing, expecting far more teething problems with the M-series chips.

The last Intel Air was hot, loud and slow. I’m normally a 13” Pro user and expected an improvement over my 2015 13” Pro. I jumped on the 14” ASAP.

Ah - good to hear that someone else went this route. From your language, I can assume that the 14" was a good update over your 13" Intel?

This sounds like a sensible plan.

I think buyers are generally more satisfied when they buy a new model near the beginning of its refresh cycle, because you avoid the feeling of missing out if a new model is released shortly afterwards, or of having waited unnecessarily long to "get onboard".

If a new M2 Pro/Max surprised us with a release in late 2022 (i.e. a one-year cycle), then you would still be less than half way into that cycle if you bought an M1 Pro/Max in March, which most people are OK with. This assumes you don't have an immediate need for a new computer and have the choice to either buy or wait.

If there is an M2 release in March/April (which rumours indicate is unlikely...), then this may well be good enough for you, and you would save a lot by not buying an M1 Pro/Max.

If there isn't, then you either wait to see what happens in Oct/Nov 22 in terms of M2 MacBook or M2 Pro/Max releases, or just buy an M1 Pro/Max now.

It's frustrating not to have a clear picture of when the best time to buy is, but I've just been through the same process, and am happy that I recently bought an M1 Max, whatever happens with future releases. I was getting frustrated with my Intel MBP16 and wanted to change anyway, and had to consider that I might be waiting almost 2 years for the next M2 Pro/Max (potentially at the end of 2023), and that just seemed too long to put up with an unsatisfactory computer.

Thanks. Unfortunately, a "regular" M (M1/M2) is not for me - I want that 16" screen. I tried to compromise with the 13" screen this last time and I regret it. My philosophy was that I could plug into my monitor whenever I wanted something bigger - but the reality is that I rarely want to go sit in my office while working on photography.

"seemed too long to put up with an unsatisfactory computer." I think this is where I'm at too. If there wasn't a rumored March 8th event then I think I've convinced myself that I would buy now. The only thing that would get me to wait at this point would be if the M2 comes out and has some new dedicated hardware for photography...

I think some get too caught up in the idea of what they could buy today vs. what they could buy next year.

top tip: next year will likely always be better.

Compare what you're buying today to what else is available TODAY.

If you don't need to buy today, or today does not offer an appreciable benefit, then don't.

I've been an Apple user forever and this has always been my philosophy before (Buy what you need right now)... but I got a bit burned on it last time around and it's made me a bit gun-shy. I _really_ want to keep this next computer for a long time.

I've also always enjoyed owning the first generation of Apple things (despite others saying they like to wait for the second). I got an original iPad on launch day and an original iPad Pro on launch day as well. I just missed the original iPhone - getting a 3G as my first... but I used it for a helluva long time. I also got an iPhone X that I loved and used for a long time. I also bought the first cylinder Mac Pro AND the first Intel Mac Pro (those were work purchases though) - both are still working well.

I appreciate all of the opinions and stories here... it is helping cement what I should do...
 
I think they should take full advantage of their Ax cadence and beat Intel like a drum by releasing an Mx every spring and an Mx Pro/Max every fall.
...but Apple aren't really in competition with Intel. Intel sell processors to computer makers. Apple sell computers and have shown no interest in selling their processors in competition with Intel. Intel have huge resources, and x86 has a vast user base in commercial PCs and servers who wouldn't switch to Mac if Tim Cook turned up on their doorstep with a bag of cash and a crate of beer, plus other user bases in cheap home PCs and gaming, which Apple has never shown any interest in trying to conquer. It would be a stupid, Quixotic move on Apple's part if they made it their mission to "defeat" Intel, which would just generate more of the sort of marketing FUD from Intel that we've seen already.

The main way Apple threaten Intel is that they give credibility to the idea that ARM and other non-x86 ISAs are good for more than just mobile phones. The next move in that war is for Microsoft, Dell, Lenovo, HP et al. to think "hey, we sell more PCs than Apple, perhaps we could make our own processors, or work with Qualcomm or someone..." - which Intel can counter with a well-funded FUD campaign, since the target customers are unlikely to actually try ARM for themselves and will be happy with an excuse to stay in their comfort zone.

However, Apple Silicon/ARM does have a built-in advantage over x86 that should keep it ahead of the game even as x86 moves to newer processes: An ARM core is a RISC processor core. An x86 core is (roughly speaking) a RISC-like processor core plus a hardware x86-to-RISC instruction translator - so it is always going to be fundamentally bigger and more complex, which will allow ARM to use less power, cram in more cores, more specialist accelerators etc. on the same die. This works for Apple because they have a history of breaking backward compatibility every 10 years or so and their remaining user base is used to that. Rosetta is great, but it only has to work as a transitional solution for a few years - in 5 years time any Mac software still reliant on x86 will be dead. A big part of the PC world's huge, profitable customer base won't accept that - and wouldn't accept the level of "breakage" that happens with every major MacOS release, let alone a processor switch.

If Apple were like Wintel, they'd have kept Apple II compatibility until around Snow Leopard, people on MR would still be moaning about the removal of Classic from Mavericks, and Carbon would be available as an optional download for Monterey.

Intel could go for a Rosetta-type solution - drop the x86 instruction decode and rely on everything being pre-translated in software - but then (a) that would have to be the solution for the foreseeable future, because some of that x86 code isn't going away, and (b) they'd kiss goodbye to their monopoly and be in direct competition with every ARM chip maker, given that both Apple and Microsoft already have pretty good x86-to-ARM translation tech.

Long-term, x86 is dying anyway - its only real selling point is the popularity and legacy of its ISA (which is reflected in the RISC-core + x86 decoder architecture) and that is becoming less relevant as more and more code is written entirely in 64-bit clean high-level languages and/or runs as bytecode on a virtual machine (Java, Android, Microsoft CLR, etc) or via a JIT-compiled scripting language. x86 failed dismally in the mobile market - which grew to be a significant part of the personal computing market - and some people kinda failed to notice when their phones started doing image processing and AI tricks that put full-size PCs to shame. The only reason it is still around is because there are enough legacy applications and highly conservative customers to keep it in slow decay - and therefore a lot of profit still to be extracted - for another 10 years or so.
 

This is why Intel is investing in RISC-V ( https://www.techradar.com/news/inte...-risc-v-processors-and-other-chip-innovations ) - they see the writing on the wall for x86. Everyone does.
 
An x86 core is (roughly speaking) a RISC-like processor core plus a …
Please retire this nonsense. x86 does not have a "RISC-like core". It has a CISC architecture that replaces predominantly-μcode with predominantly-μop dispatch. It is not "RISC-like"; it is a CISC implementation that is more efficient than classic μcode CISC but has none of the real-world advantages of common RISC.
 
Ah - good to hear that someone else went this route. From your language, I can assume that the 14" was a good update over your 13" Intel?
My 14” Pro is the best machine I’ve ever owned. It finally performs as I would expect.

I’ve owned Mac laptops since 2011 and used laptops for work (PCs) in various form factors from 10” to 15” since the late 90s. They’ve all been either hot, loud or inadequate in performance. Or all of the above.

This new 14” just rips through everything and the fan almost never even activates, let alone becomes annoying.

They aren’t cheap but… best computer purchase I’ve made I think.
 
Not sure what conclusion this article is drawing, but I worked at Nexgen (well, after AMD bought them but before they were integrated into AMD) and on K6, and it wasn’t too risc-y :).

I took over the integer execution units and some of the scheduler stuff, and it was very convoluted, quite unlike the RISC designs I had worked on (PowerPC and SPARC). Can’t speak to the 5-series chip, which gets a bit more of the ink in that article.
 
Not sure what conclusion this article is drawing, but I worked at Nexgen (well, after AMD bought them but before they were integrated into AMD) and on K6, and it wasn’t too risc-y :).

I took over the integer execution units and some of the scheduler stuff, and it was very convoluted, quite unlike the RISC designs I had worked on (PowerPC and SPARC). Can’t speak to the 5-series chip, which gets a bit more of the ink in that article.
Pretty sure there is no conclusion. Just going over the history and what's changed about the arguments. I liked his "ISA centric" vs. "implementation centric" description. It is more accurate than the outdated CISC vs. RISC.

I also think reading the quotes is enlightening.
 
I also think reading the quotes is enlightening.
Here is one:
Agner Fog said:
The x86 ISA is quite successful … because it can do more work per instruction. For example, A RISC ISA with 32-bit instructions cannot load a memory operand in one instruction if it needs 32 bits just for the memory address.
This is one of those ancient tropes that aggravates me no end.
 
"seemed too long to put up with an unsatisfactory computer." I think this is where I'm at too. If there wasn't a rumored March 8th event then I think I've convinced myself that I would buy now. The only thing that would get me to wait at this point would be if the M2 comes out and has some new dedicated hardware for photography...


I appreciate all of the opinions and stories here... it is helping cement what I should do...
Given the lead times for M1 Pro/Max orders, you would be better off ordering almost immediately, and cancelling if something happens in March. Have a look at what estimated delivery dates are - I expect they will be towards the end of March or into April.

Apple will not charge your card until the device ships, so it's quite safe to do as long as you keep checking the estimated delivery date for any changes.
 
Given the lead times for M1 Pro/Max orders, you would be better off ordering almost immediately, and cancelling if something happens in March. Have a look at what estimated delivery dates are - I expect they will be towards the end of March or into April.

Apple will not charge your card until the device ships, so it's quite safe to do as long as you keep checking the estimated delivery date for any changes.

That's a really good point....
 
I suspect that the March event will not result in something comparable to the M1 Pro/Max machines; it's simply too close to when they were released.

It will be an incremental version of the M1 for the MBA or entry-level 13" MacBook Pro (which tbh I think will be discontinued - there's just way too much overlap there).

So if you're wanting an M1 Pro-class machine or better - I say buy now - they're still at the start of their cycle. If you're more in the Air/13" Pro market - maybe wait.
 
I suspect that the March event will not result in something comparable to the M1 Pro/Max machines; it's simply too close to when they were released.

It will be an incremental version of the M1 for the MBA or entry-level 13" MacBook Pro (which tbh I think will be discontinued - there's just way too much overlap there).

So if you're wanting an M1 Pro-class machine or better - I say buy now - they're still at the start of their cycle. If you're more in the Air/13" Pro market - maybe wait.

I agree that they won't bring out an M2 Pro/Max in March... but they may bring out the M2 itself... giving a preview of what the eventual M2 Pro/Max for MBPs would be.
 
I don’t think we will see any M2 Macs until all the M1 Macs are out (i.e. end of this year).

Next event is likely iPhone SE, iPad Air, Mac mini M1 Pro/Max, and maybe iMac 27” Pro/Max, imho
 
They won't bring out a base M2 in March - no M2 before all the M1 launches are complete, unless the iMac gets an M2 Pro/Max, and if that happens, not before WWDC. Apple will more likely go for 1.5-2 years if possible because of the cost of making a new chip; this is basically written everywhere, so there's no real need to make a topic.
 
My 14” Pro is the best machine I’ve ever owned. It finally performs as I would expect.

I’ve owned Mac laptops since 2011 and used laptops for work (PCs) in various form factors from 10” to 15” since the late 90s. They’ve all been either hot, loud or inadequate in performance. Or all of the above.

This new 14” just rips through everything and the fan almost never even activates, let alone becomes annoying.

They aren’t cheap but… best computer purchase I’ve made I think.
Yeah, been using my M1 Pro hooked to my dual 4K displays since launch day, and I gotta say it's quite impressive because I have never heard the fan.

Heck, at this point I might just as well upgrade next time to an Air, since apparently I don't need that much CPU or GPU performance.

I suspect that the March event will not result in something comparable to the M1 Pro/Max machines; it's simply too close to when they were released.

It will be an incremental version of the M1 for the MBA or entry-level 13" MacBook Pro (which tbh I think will be discontinued - there's just way too much overlap there).

So if you're wanting an M1 Pro-class machine or better - I say buy now - they're still at the start of their cycle. If you're more in the Air/13" Pro market - maybe wait.

I'd expect the Air refresh later this year for sure. Honestly, I love this ProMotion display - I don't think I can go back to 60Hz after this.
 
I don’t think we will see any M2 Macs until all the M1 Macs are out (i.e. end of this year).

Next event is likely iPhone SE, iPad Air, Mac mini M1 Pro/Max, and maybe iMac 27” Pro/Max, imho
This sounds pretty reasonable. New M2 chips will probably be faster (at least single-core) than most (all?) M1 offerings (M1/M1 Pro/M1 Max/M1-12). And it's probably not optimal for goodwill if a company releases slower 'pro' things after releasing faster 'non-pro' things.

So that makes me think replacing the Intel Mac minis and iMac Pros will happen before new M2 devices.
 
This sounds pretty reasonable. New M2 chips will probably be faster (at least single-core) than most (all?) M1 offerings (M1/M1 Pro/M1 Max/M1-12). And it's probably not optimal for goodwill if a company releases slower 'pro' things after releasing faster 'non-pro' things.

So that makes me think replacing the Intel Mac minis and iMac Pros will happen before new M2 devices.

That thinking has not stopped them before (see Xeon chips in the iMac Pro and Mac Pro vs. the MacBook in single-core).

While supply chain stuff has screwed with their plans for sure, it's completely reasonable to have a cadence of M2 refreshes followed later by M2 xxx refreshes.
 
That’s horrible.
It’s not so bad in context.

The complicated x86 ISA makes decoding a bottleneck. An x86 instruction can have any length from 1 to 15 bytes, and it is quite complicated to calculate the length. And you need to know the length of one instruction before you can begin to decode the next one. This is certainly a problem if you want to decode 4 or 6 instructions per clock cycle! Both Intel and AMD now keep adding bigger micro-op caches to overcome this bottleneck. ARM has fixed-size instructions so this bottleneck doesn’t exist and there is no need for a micro-op cache.

Another problem with x86 is that it needs a long pipeline to deal with the complexity. The branch misprediction penalty is equal to the length of the pipeline. So they are adding ever-more complicated branch prediction mechanisms with large branch history tables and branch target buffers. All this, of course, requires more silicon space and more power consumption.

The x86 ISA is quite successful despite of these burdens. This is because it can do more work per instruction. For example, A RISC ISA with 32-bit instructions cannot load a memory operand in one instruction if it needs 32 bits just for the memory address.

It is more “damning with faint praise” than a true rebuttal. It almost seems like it was added for “balance”. He already admitted that the ISA is broken down into μ-ops, so the statement about more work per instruction doesn’t really mean anything.
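To make the decode-bottleneck point in that quote concrete, here's a toy sketch in C. It is entirely illustrative - the length encoding is made up, not real x86 decoding - but it shows why variable-length instructions serialize boundary-finding while fixed-length ones don't:

/* Toy illustration only - a made-up encoding, not real x86 decoding. */
#include <stddef.h>
#include <stdint.h>

/* Variable-length stream: pretend the low 4 bits of the first byte give
 * the instruction's length in bytes (1..16), loosely echoing x86's
 * 1..15-byte range. Each boundary depends on the previous instruction,
 * so a wide decoder has to predecode or speculate on lengths. */
size_t count_variable(const uint8_t *code, size_t nbytes)
{
    size_t count = 0, pos = 0;
    while (pos < nbytes) {
        size_t len = (code[pos] & 0x0F) + 1; /* toy length field */
        pos += len;                          /* loop-carried dependency */
        count++;
    }
    return count;
}

/* Fixed-length stream (every AArch64 instruction is 4 bytes): the
 * boundary of instruction i is just 4*i, so N decode slots can be
 * filled in parallel with no predecode step. */
size_t count_fixed(size_t nbytes)
{
    return nbytes / 4;
}

The counting itself is beside the point; what matters is that the loop-carried dependency in count_variable is exactly what a wide parallel decoder has to break, and the micro-op caches mentioned in the quote are one way of paying for that.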
 
What are the real-world advantages of RISC?

Back in the late 80s and early 90s, research went nuts around reduced instruction sets as a way to focus on simplicity on-chip while exporting complexity to the compiler and OS. Load-store instruction sets brought higher clock speeds but also memory bandwidth issues, which led to an explosion in caches and DMAs and a lot of other cool stuff. I'm an RTOS guy, so I defer to the chip folks on the forum for details, but from a compiler and OS perspective things got more complicated. For example, MIPS chips had two addresses for every memory location (the high bit determined cache access). Weird stuff in today's landscape, but it was the wild west in terms of behaviors back then.

At the time the 386 was king, but there were HP PA-RISC, DEC Alpha, Sun SPARC, SGI MIPS, and on and on. Everyone was getting in the game, and this started a 'CISC vs. RISC' sort of marketing talking point.

I forgot the name of the guy, but an Intel chip designer wrote a nice paper on the feature sets of RISC that were not available to CISC, and the conclusion was /dev/null - for example, they introduced caches (486?).

But the one thing Intel cannot shake is backwards compatibility. That ISA is going to be on their gravestone.

So RISC vs. CISC is not really a thing so much as Intel has to do x86, and it's affecting their designs. ARM had a blank slate in the modern era.
 
What are the real-world advantages of RISC?
Simpler hardware design that takes less power. The CISC philosophy was great when RAM was expensive and you had people coding in assembly language to minimize the use of instruction memory. That hasn’t been the case in a long time.
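To put that in concrete terms, here's an illustrative C fragment; the assembly in the comments is representative compiler output rather than exact, and will vary with compiler and flags. It shows the density-vs-decode trade-off behind the "more work per instruction" quote upthread:

/* One C statement, two lowerings - illustrative, not exact compiler output. */
extern int total; /* a global living at some static address */

void add_to_total(int x)
{
    total += x;
    /*
     * x86-64 can fold load + add + store into a single memory-operand
     * instruction (variable-length encoding):
     *     add dword ptr [rip + total], edi
     *
     * A load/store RISC ISA like AArch64 splits the same work into
     * several fixed-size 4-byte instructions:
     *     adrp x8, total              ; build the address
     *     ldr  w9, [x8, :lo12:total]  ; load
     *     add  w9, w9, w0             ; add
     *     str  w9, [x8, :lo12:total]  ; store
     *
     * Denser encoding was the win when instruction memory was the
     * scarce resource; simple fixed-size decode is the win now.
     */
}

That's the kernel of truth in the "more work per instruction" argument - it's about code density, not about the core doing anything fundamentally smarter.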
 