
grkm3

macrumors 65816
Feb 12, 2013
1,049
568
I was responding to the OP's post about the SC and MC speeds of the M2, not the M1. I even quoted the OP's post in my post, so that should have been clear.

What makes you think Intel won't follow with the 13th-gen CPUs? Keep in mind Intel is hitting 2200 on 10nm still.
 

jav6454

macrumors Core
Nov 14, 2007
22,303
6,264
1 Geostationary Tower Plaza
And yet as a CEO, Tim Cook has to plan for it. And I'm sure that he has.

You could say the same thing about Ukraine but we're all still around.
I will not comment further as it will derail the thread. But should Taiwan become closed off due to any conflict, Apple and several other companies will be screwed. There are barely any high-end fabs outside of Taiwan or Asia.
 

grkm3

macrumors 65816
Feb 12, 2013
1,049
568
I think his point was that the base M2 beats every one of those, and thus budget-friendly models offer great value.

A lot of people think the lower i5s and i7s don't have high single-core performance. The 13th-gen i9 will break 2300 single-core. That test is on very slow RAM, running 200MHz below its spec speed of 5700MHz, and all the other lower-spec CPUs will be right under the i9 in Geekbench.

People think Intel can't design a CPU, but they don't realize Intel is on 10nm and still supporting 32-bit and legacy code from 30 years ago. If the i9-12900K were on TSMC 5nm, it would consume 50 to 80% less power.

Intel's CEO was right about telling Apple to stick with them, because when they get 7nm and 5nm out they will beat TSMC in performance per watt.
 
  • Haha
Reactions: Argoduck

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
People think Intel can't design a CPU, but they don't realize Intel is on 10nm and still supporting 32-bit and legacy code from 30 years ago. If the i9-12900K were on TSMC 5nm, it would consume 50 to 80% less power.

Intel's CEO was right about telling Apple to stick with them, because when they get 7nm and 5nm out they will beat TSMC in performance per watt.
1. Intel's 13th gen is on Intel 7, roughly equivalent to TSMC's 7nm.
2. Intel's ST runs will likely consume 10-20x more power. The M2 consumes as little as 0.3W-5W during Geekbench 5 ST; Raptor Lake will likely boost to 50+ watts during ST testing.
3. It won't consume 50 to 80% less power, because TSMC's 5nm uses only about 30% less power than TSMC's 7nm, which is equivalent to Intel 7 (see the rough sanity check sketched below).
4. They're not beating TSMC in perf/watt, because TSMC doesn't design CPUs.
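
A minimal Python sketch of the sanity check behind point 3. The ~30% per-node power reduction is a commonly cited foundry figure, not something from this thread, so treat all the numbers as assumptions:

# Back-of-the-envelope node scaling (assumed ~30% power saving per full node, iso-performance)
PER_NODE_POWER_SCALE = 0.70  # assumed power multiplier per node shrink

for nodes in (1, 2):
    saving = 1 - PER_NODE_POWER_SCALE ** nodes
    print(f"{nodes} node shrink(s): ~{saving:.0%} less power")

# Output: 1 node shrink -> ~30% less power; 2 node shrinks -> ~51% less power.
# Even two full node jumps fall well short of the claimed 80%.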
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
What makes you think Intel won't follow with the 13th-gen CPUs? Keep in mind Intel is hitting 2200 on 10nm still.
How did you not understand that I was referring to exactly that in my post: "Apple and Intel will continue to swap places for fastest SC speed"? That means Intel will follow Apple with something faster, and Apple will follow Intel with something faster, etc.
 
  • Like
Reactions: Argoduck and R3k

zakarhino

Contributor
Sep 13, 2014
2,611
6,963
The PC struggled when scrubbing through and generally handling the footage. The Mac, using Adobe Premiere with the same footage and file, was like a hot knife through butter. Where the PC won was in the rendering and encoding.

I agree with others in saying the M chips are not magical, and often it makes sense to go with an Intel/Nvidia GPU setup instead.

However, your quote here is the key differentiator for M chips vs. the traditional Intel/Nvidia setup. For editing in particular having a fast workflow is arguably more important than having a faster render time (at least in my case). When I'm rendering a video I'm not touching the editor anymore, I'm doing something else so a few more minutes of render time is not that big of a deal for me. For others every minute counts and maybe if we're talking hours of render time vs. a few minutes I can see why one is obviously better than the other.

Additionally, at least when talking about M powered laptops, energy efficiency of the chips is more than just a nerd talking point because it actually translates into longer sessions away from the wall -- this has been pretty big for me and has genuinely changed my workflow vs. older Intel powered MacBooks.

Apple's performance gains over Intel/Nvidia are only noticeable in certain software conditions because Apple aren't technically winning on some objective 'pure performance' metric; rather, they are winning in specific workflows that they can optimize for with things Intel/Nvidia simply can't do right now (system-on-a-chip improvements like the inclusion of a neural chip, ProRes decoders/encoders, faster CPU/GPU communication and sharing of a common memory pool, etc.). Once Intel/Nvidia find a path to build their next-generation chip platforms (RISC-V for Intel?), the M chip competition gap will close pretty fast, mostly because the other guys will also have the same architectural benefits that M chips have. At least that's what I think; I could be wrong.
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
Well why isn’t an i9 workstation but a low end Xeon is? Saying the M1 Max is not workstation but gluing two together is doesn’t seem right. Can 10 i3 processors be workstation then? Is it just core counts?

Essentially yes, there are consumer- and workstation-class CPUs from the manufacturers. If I say I want a workstation Intel CPU, you say Xeon.

And I was comparing capabilities, which is why I DON'T think that just sticking two M1 Max chips together equals a workstation CPU. Workstation capabilities typically include ECC, among other things.

Let me just ask: what makes it a workstation to YOU? Why is a base M1 not a workstation, but you think the M1 Ultra is? Or better yet, like I mentioned above, if you don't think the M1 Max is workstation level, but putting two of them together is, why?

Perhaps it is just my age, but in the olden days Xeons and workstation-specific CPUs and GPUs were better suited for "workstation" levels of work: better geared towards 24/7 operation, with ECC memory added in, more specialized workflows, and access to more RAM and more memory channels (we got a server recently with 4TB of RAM; a Xeon was the only thing at the time that allowed that much RAM, not sure if that has changed). Workstation GPUs have historically been bad at gaming (I had a $5,000 Quadro that got beaten by a $500 GTX in gaming).

This is why I base it off the capabilities. If Apple says it is a workstation, I would still be questioning it. I was just curious why some people find it a workstation-level processor. If "powerful computer" is the metric we are going to use to classify workstations now, then that is just highly subjective. Someone might find their i3 a "workstation" since it is powerful for them :)
Same reasoning as "what is a Pro" machine. I support your questioning of it, as there are no absolute "Pro" or "workstation" machines: the performance required for a given profession differs. However, Xeons and the corresponding NVIDIA/AMD "workstation" GPUs exist and are conveniently forgotten when pricing a PC to beat Apple. The Xeon/Quadro advantage over i7/i9 and GTX has been questioned for a long time, and there is no good generic answer to this, as it highly depends on the software and usage pattern.
 
  • Like
Reactions: Argoduck

exoticSpice

Suspended
Jan 9, 2022
1,242
1,952
I agree with others in saying the M chips are not magical, and often it makes sense to go with an Intel/Nvidia GPU setup instead.

However, your quote here is the key differentiator for M chips vs. the traditional Intel/Nvidia setup. For editing in particular having a fast workflow is arguably more important than having a faster render time (at least in my case). When I'm rendering a video I'm not touching the editor anymore, I'm doing something else so a few more minutes of render time is not that big of a deal for me. For others every minute counts and maybe if we're talking hours of render time vs. a few minutes I can see why one is obviously better than the other.

Additionally, at least when talking about M powered laptops, energy efficiency of the chips is more than just a nerd talking point because it actually translates into longer sessions away from the wall -- this has been pretty big for me and has genuinely changed my workflow vs. older Intel powered MacBooks.

Apple's performance gains over Intel/Nvidia are only noticeable in certain software conditions because Apple aren't technically winning on some objective 'pure performance' metric; rather, they are winning in specific workflows that they can optimize for with things Intel/Nvidia simply can't do right now (system-on-a-chip improvements like the inclusion of a neural chip, ProRes decoders/encoders, faster CPU/GPU communication and sharing of a common memory pool, etc.). Once Intel/Nvidia find a path to build their next-generation chip platforms (RISC-V for Intel?), the M chip competition gap will close pretty fast, mostly because the other guys will also have the same architectural benefits that M chips have. At least that's what I think; I could be wrong.
Yep, it will be interesting in late 2023/early 2024, when Apple moves to ARMv9 and 3nm with the M3, to see how it competes with Meteor Lake and RTX 4000 GPUs.
 

exoticSpice

Suspended
Jan 9, 2022
1,242
1,952
A lot of people think the lower i5s and i7s don't have high single-core performance. The 13th-gen i9 will break 2300 single-core. That test is on very slow RAM, running 200MHz below its spec speed of 5700MHz, and all the other lower-spec CPUs will be right under the i9 in Geekbench.

People think Intel can't design a CPU, but they don't realize Intel is on 10nm and still supporting 32-bit and legacy code from 30 years ago. If the i9-12900K were on TSMC 5nm, it would consume 50 to 80% less power.

Intel's CEO was right about telling Apple to stick with them, because when they get 7nm and 5nm out they will beat TSMC in performance per watt.
But you cannot get that speed in a laptop. Plus, the 13th-gen 13900K will be power-hungry.

Intel's next node, Intel 4, will sit between TSMC N5 and N3. It will be interesting then to compare MTL with the M3.
 

MisterAndrew

macrumors 68030
Sep 15, 2015
2,895
2,390
Portland, Ore.
It's nice that Apple sparked some competitive development from Intel. Intel would love to have Apple's business back. It would be funny if Apple announced they were going back; they already have a couple of Macs to lead the way.

I also enjoyed reading the discussions about what makes a workstation a workstation. IMO a workstation is a computer designed to be used for extended periods, over a long lifetime, on complex tasks. It is built on server-grade hardware and has ECC RAM. Therefore, IMO a Mac Studio is not a workstation, even though it can be used for work. However, the Mac Pro is a workstation and always has been.

Apple's challenge is making the M series chips into powerful server grade hardware like the Intel Xeon.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
It is built on server-grade hardware and has ECC RAM. Therefore, IMO a Mac Studio is not a workstation, even though it can be used for work.
Well, the DDR5 spec requires on-die ECC, but I agree that computers (workstation/server or not) should come standard with ECC for the data channel, to guard against memory corruption in transit. IMHO this is especially important as the memory capacity of the computer grows large.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Let me just ask: what makes it a workstation to YOU? Why is a base M1 not a workstation, but you think the M1 Ultra is? Or better yet, like I mentioned above, if you don't think the M1 Max is workstation level, but putting two of them together is, why?

I think "workstation" is just a label, and not a very useful one. As you have noted, the i9 alder lake will outperform many currently sold Xeons. I use the label "workstation CPUs" to loosely refer to a certain class of CPU that is expected to deal with certain types of demanding sustained workloads.


This is why I base it off the capabilities. If Apple says it is a workstation, I would still be questioning it. I was just curious why some people find it a workstation-level processor. If "powerful computer" is the metric we are going to use to classify workstations now, then that is just highly subjective. Someone might find their i3 a "workstation" since it is powerful for them :)

Yes, it's highly subjective, and there is nothing we can do about it. I mean, a Xeon W-1390P is slower than an i7-12700. Relying on ECC support as the core differentiating factor is not enough, as it doesn't tell us anything about the ability to do work. I prefer to classify the M1 Ultra as "workstation class" because its CPU has sustained processing capability beyond that of the usual enthusiast desktop. But again, these are all loose terms and there is no "right or wrong" here. One can't have an objective conversation relying on subjective classification criteria; that's just the nature of the topic.

That 25% overhead is less than $2/GB. That's for all intents and purposes zero cost, except maybe if you are trying to buy a $200 laptop.

You are forgetting the memory bus. ECC has costs well beyond the RAM modules themselves: the memory bus, power consumption, etc. That said, we know very little about the M1's RAM and its capabilities.
 
  • Love
Reactions: Argoduck

Retskrad

macrumors regular
Original poster
Apr 1, 2022
200
672
Apple clearly won the PR game with the M1, both in the Apple and PC world. It cast a big shadow over Intel and AMD. Even Intel's CEO praised them. The arrival of the M2, on the other hand, has been very underwhelming. People in the media and tech enthusiasts aren't gushing about Apple Silicon like they used to. People in the PC world aren't respecting the M2 like the M1. Apple has left themselves vulnerable and the upcoming Intel and AMD chips will force people to reevaluate Apple's chips.
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,521
19,677
Apple clearly won the PR game with the M1, both in the Apple and PC world. It cast a big shadow over Intel and AMD. Even Intel's CEO praised them. The arrival of the M2, on the other hand, has been very underwhelming. People in the media and tech enthusiasts aren't gushing about Apple Silicon like they used to. People in the PC world aren't respecting the M2 like the M1. Apple has left themselves vulnerable and the upcoming Intel and AMD chips will force people to reevaluate Apple's chips.

This is all true. One of the main reasons, however, is that the only person in tech journalism who did in-depth reviews of Apple hardware (Andrei Frumusanu) has left AnandTech, leaving no replacement behind. Objectively, the M2 is a perfectly adequate improvement over the M1, albeit one that arrived a bit too late. It boasts moderate improvements in the CPU department, for a CPU that was already really fast, and great improvements in the GPU department, which further closes the gap to entry-level dedicated gaming GPUs.

Anyway, the big thing is going to be the M2 Pro/Max. If they use the same architecture as the M2, with similar performance improvements, yes, that's going to be underwhelming. If instead they are on 3nm as rumoured, with a GB5 single-core score of ~2200, that's going to be a killer.
 
  • Like
Reactions: Argoduck and pshufd

JouniS

macrumors 6502a
Nov 22, 2020
638
399
How did you come up with LPDDR5 6400 being less than $2/GB? I’m seeing more than 5x that for just the ICs.
The last time I checked, consumer prices for DDR5 modules were $7 to $8/GB. A 25% overhead on that is less than $2/GB. Market prices for high-speed LPDDR5 may be higher, but those are usually based on estimates of what people are willing to pay rather than on true costs. If those high-end features were a required part of the standard, prices for high-end modules would obviously drop.
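
A minimal sketch of that arithmetic in Python, using the $7-8/GB figures above and a 25% ECC bit overhead (the figures are the post's; the sketch is only illustrative):

# ECC overhead cost per GB, using the prices quoted above
ddr5_price_per_gb = (7.0, 8.0)  # consumer DDR5, $/GB
ecc_overhead = 0.25             # extra ECC bits relative to data bits

for price in ddr5_price_per_gb:
    print(f"${price:.2f}/GB -> ECC adds ~${price * ecc_overhead:.2f}/GB")

# Output: $7.00/GB -> ~$1.75/GB; $8.00/GB -> ~$2.00/GB, i.e. roughly "less than $2/GB".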

You are forgetting the memory bus. ECC has cost well beyond RAM modules itself. Memory bus, power consumption etc. That said, we know very little about M1 RAM and it's capabilities.
There are other costs involved, but they are also effectively free. Manufacturing is cheap, and you have to pay the R&D costs anyway if you want to have high-end features in high-end products.
 
  • Like
Reactions: pdoherty

pshufd

macrumors G4
Oct 24, 2013
10,149
14,574
New Hampshire
The last time I checked, consumer prices for DDR5 modules were $7 to $8/GB. A 25% overhead on that is less than $2/GB. Market prices for high-speed LPDDR5 may be higher, but those are usually based on estimates of what people are willing to pay rather than on true costs. If those high-end features were a required part of the standard, prices for high-end modules would obviously drop.


There are other costs involved, but they are also effectively free. Manufacturing is cheap, and you have to pay the R&D costs anyway if you want to have high-end features in high-end products.

I just checked Amazon, and 32 GB of CORSAIR Vengeance DDR5 5200 is under $6/GB. CORSAIR Vengeance is my preferred desktop RAM. G.Skill Trident Z5 RGB is about $10/GB for DDR5 6000, and some of that is for the fancy lighting. DDR4 is around $3-4/GB. So cost depends on exactly what you're getting, but I'd say it's pretty affordable for what you get.

This is why those who want a lot of RAM may still consider the 2020 iMac, which is still pretty easy to get. The overall system price, when you include the monitor, RAM expandability, and the support for dual external monitors, can make it an attractive option.
 

altaic

Suspended
Jan 26, 2004
712
484
The last time I checked, consumer prices for DDR5 modules were $7 to $8/GB. A 25% overhead on that is less than $2/GB.
#1 Cost-wise (and by every other metric), DDR5 at whatever throughput (today, commonly 4800 MT/s) != LPDDR5 @ 6400 MT/s (which is unsurprisingly more expensive).

#2 Discounting whatever you decide is "overhead" by 1/4 is fallacious to an extreme. GBs for whatever purpose are still GBs that you pay for.
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
#1 Cost-wise (and by every other metric), DDR5 at whatever throughput (today, commonly 4800 MT/s) != LPDDR5 @ 6400 MT/s (which is unsurprisingly more expensive).

#2 Discounting whatever you decide is "overhead" by 1/4 is fallacious to an extreme. GBs for whatever purpose are still GBs that you pay for.
1. We are talking about costs, not prices. High-speed memory modules typically use chips of the same size, manufactured on the same process as lower-speed modules; they have simply passed through somewhat stricter quality control. The differences you see in prices are mostly unrelated to costs.

2. ECC memory modules need more transistors per gigabyte than non-ECC modules. A 25% overhead in the number of transistors could plausibly translate to a 25% overhead in die area and a 25% overhead in costs (a sketch of where that 25% can come from is below).
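
Assuming the standard DDR5 ECC DIMM layout (my reading, not stated in the post: each 32-bit sub-channel gains 8 check bits), the 25% figure falls out directly:

# DDR5 ECC bit overhead (assumed standard ECC DIMM layout; illustrative)
data_bits = 32  # DDR5 splits a DIMM into two 32-bit sub-channels
ecc_bits = 8    # ECC DIMMs add 8 check bits per sub-channel (40 bits total)

print(f"ECC bit overhead: {ecc_bits / data_bits:.0%}")  # -> 25%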
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
We were talking about costs, not prices. You didn't read the discussion carefully enough. I wasn't careful enough to mention explicitly that I was using the market price for the cheapest product of the right type as an upper bound for the cost.
 

daavee80

Cancelled
Jul 17, 2019
77
132
I have a smoking hot girlfriend, make six figures, live in lower Manhattan, and love what I do.

I suggest you find some better things to focus on than processor performance scores.
Perhaps, instead of condescendingly criticising strangers on an internet forum, might I suggest you use your obvious gifts to find better things to focus on.
 
  • Like
  • Wow
Reactions: Argoduck and pshufd