I think his point was that the base M2 beats every one of those, and thus budget-friendly models offer great value.
Every single Intel CPU beats the M1 in single core on Geekbench. The i5, i7, and i9 all score higher than the M1.
I was responding to the OP's post about the SC and MC speeds of the M2, not the M1. I even quoted the OP's post in my post, so that should have been clear.
I will not comment further as it will derail the thread. But should Taiwan become closed off due to any conflict, Apple and several other companies will be screwed. There are barely any high-end fabs outside of Taiwan or Asia.
And yet as a CEO, Tim Cook has to plan for it. And I'm sure that he has.
You could say the same thing about Ukraine but we're all still around.
1. Intel's 13th gen is on TSMC's 7nm equivalent.
People think Intel can't design a CPU but don't realize they are on 10nm and still supporting 32-bit and legacy code from 30 years ago. If the i9-12900K was on TSMC N5 it would consume 50 to 80% less power.
Intel's CEO was right about telling Apple to stick with them, because when they get 7nm and 5nm out they will beat TSMC in performance per watt.
How did you not understand that I was referring exactly to that in my post: "Apple and Intel will continue to swap places for fastest SC speed"? That means Intel will follow Apple with something faster, and Apple will follow Intel with something faster, etc.
What makes you think Intel won't follow with the 13th gen CPUs? Keep in mind Intel is hitting 2200 on 10nm still.
The PC struggled when scrubbing through the footage and in overall handling of it. The Mac, using Adobe Premiere with the same footage and file, was like a hot knife through butter. Where the PC won was in rendering and encoding.
Same reasoning as "what is a Pro" machine. I support your questioning of it, as there is no single definition of "Pro" or "workstation"; the performance required for a given profession differs. However, Xeons and the corresponding NVIDIA/AMD "workstation" GPUs exist and are conveniently forgotten when pricing a PC to beat Apple. The Xeon/Quadro advantage over i7/i9 and GTX has been questioned for a long time, and there is no good generic answer to this as it highly depends on the software and usage pattern.
Well, why isn't an i9 a workstation CPU but a low-end Xeon is? Saying the M1 Max is not workstation-class but gluing two together is doesn't seem right. Can 10 i3 processors be a workstation then? Is it just core counts?
Essentially, yes, there are consumer- and workstation-class CPUs from the manufacturers. If I say I want a workstation Intel CPU, you say Xeon.
And I was comparing capabilities, which is why I DON'T think that just sticking two M1 Max chips together equals a workstation CPU. Workstation capabilities typically include ECC, among other things.
Let me just ask: what makes it a workstation to YOU? Why is a base M1 not a workstation, but you think the M1 Ultra is? Or better yet, like I mentioned above, if you don't think the M1 Max is workstation level but putting two of them together is, why?
Perhaps it is just my age, but in the olden days Xeons and workstation-specific CPUs and GPUs were better suited for "workstation"-level work: better geared towards 24/7 operation, with ECC memory added, more specialized workflows, and access to more RAM and more memory channels (I got a server recently with 4TB of RAM; a Xeon was the only thing at the time that allowed that much RAM, not sure if that has changed). Workstation GPUs have historically been bad at gaming (I had a $5,000 Quadro that got beat by a $500 GTX in gaming).
This is why I base it on capabilities. Even if Apple said it was a workstation, I would still question it. I was just curious why some people consider it a workstation-level processor. If "powerful computer" is the metric we are going to classify workstations by now, then that is just highly subjective. Someone might call their i3 a "workstation" since it is powerful enough for them.
Yep, it will be interesting in late 2023/early 2024 when Apple moves to ARMv9 and 3nm with the M3, and to see how it competes with Meteor Lake and RTX 4000 GPUs.
I agree with others in saying the M chips are not magical, and often it makes sense to go with an Intel/Nvidia GPU setup instead.
However, your quote here is the key differentiator for M chips vs. the traditional Intel/Nvidia setup. For editing in particular, having a fast workflow is arguably more important than having a faster render time (at least in my case). When I'm rendering a video I'm not touching the editor anymore; I'm doing something else, so a few more minutes of render time is not that big a deal for me. For others every minute counts, and if we're talking hours of render time vs. a few minutes I can see why one is obviously better than the other.
Additionally, at least when talking about M powered laptops, energy efficiency of the chips is more than just a nerd talking point because it actually translates into longer sessions away from the wall -- this has been pretty big for me and has genuinely changed my workflow vs. older Intel powered MacBooks.
Apple's performance gains over Intel/Nvidia are only noticeable in certain software conditions because Apple is not technically winning on some objective 'pure performance' metric; rather, they are winning in specific workflows that they can optimize for with things Intel/Nvidia simply can't do right now (system-on-a-chip improvements like the inclusion of a neural engine, ProRes decoders/encoders, faster CPU/GPU communication and sharing of a common memory pool, etc.). Once Intel/Nvidia find a path to build their next-generation chip platforms (RISC-V for Intel?) the M chip competition gap will close pretty fast, mostly because the other guys will also have the same architectural benefits that M chips have. At least that's what I think; I could be wrong.
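To make the "workflow-specific hardware" point concrete, here is a minimal sketch of how an editing pipeline could opt into Apple's hardware ProRes/HEVC encoders through FFmpeg's VideoToolbox backend. This is purely illustrative and not from the thread: it assumes FFmpeg is installed, and encoder names and availability depend on the build and the machine, which is why the script checks before using one (the file names are hypothetical).

```python
# Illustrative sketch only: prefer a hardware VideoToolbox encoder when FFmpeg
# exposes one, otherwise fall back to the software ProRes encoder.
import subprocess

def videotoolbox_encoders() -> list[str]:
    """Return the FFmpeg encoder names backed by Apple's VideoToolbox, if any."""
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", "-encoders"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split()[1] for line in out.splitlines() if "videotoolbox" in line]

def transcode(src: str, dst: str) -> None:
    """Transcode src to dst, using the hardware ProRes encoder when present."""
    hw = videotoolbox_encoders()
    codec = "prores_videotoolbox" if "prores_videotoolbox" in hw else "prores_ks"
    subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", codec, dst], check=True)

if __name__ == "__main__":
    print("VideoToolbox encoders found:", videotoolbox_encoders())
    # transcode("input.mov", "output.mov")  # hypothetical file names
```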
But you cannot get that speed in a laptop. Plus the 13th gen 13900K will be power-hungry.
A lot of people think the lower i5s and i7s don't have high single-core performance. The 13th gen i9 will break 2300 in single core. That test was run on very slow RAM and is missing 200MHz from its spec speed of 5700MHz, and all the other lower-spec CPUs will be right under the i9 in Geekbench.
Well, the DDR5 spec requires ECC on chip, but I agree that computers (workstation/server or not) should come standard with ECC for the data channel to guard against memory getting corrupted while in transit. IMHO this is especially important when the memory capacity of the computer grows large.
It is built on server grade hardware and has ECC RAM. Therefore, IMO a Mac Studio is not a workstation, even though it can be used for work.
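For context on the distinction being drawn here (not from the thread itself): DDR5's mandatory on-die ECC only protects bits inside each DRAM chip, while protecting the data channel takes extra DRAM on the module. A rough sketch of that overhead, assuming the usual side-band layouts of 64+8 bits for DDR4 and two 32+8-bit subchannels for ECC DDR5, which is where a figure like the 25% quoted below comes from:

```python
# Sketch (assumption: standard side-band ECC widths, nothing Apple-specific).
def sideband_ecc_overhead(data_bits: int, ecc_bits: int) -> float:
    """Fraction of extra DRAM needed to store ECC bits alongside the data."""
    return ecc_bits / data_bits

# DDR4 ECC DIMM: one 64-bit channel plus 8 ECC bits (72 bits total)
print(f"DDR4 side-band ECC overhead: {sideband_ecc_overhead(64, 8):.1%}")  # 12.5%

# DDR5 ECC DIMM: two 32-bit subchannels, each with 8 ECC bits (80 bits total)
print(f"DDR5 side-band ECC overhead: {sideband_ecc_overhead(32, 8):.1%}")  # 25.0%
```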
That 25% overhead is less than $2/GB. That's for all intents and purposes zero cost, except maybe if you are trying to buy a $200 laptop.
Apple clearly won the PR game with the M1, both in the Apple and PC world. It cast a big shadow over Intel and AMD. Even Intel's CEO praised them. The arrival of the M2, on the other hand, has been very underwhelming. People in the media and tech enthusiasts aren't gushing about Apple Silicon like they used to. People in the PC world aren't respecting the M2 like the M1. Apple has left themselves vulnerable and the upcoming Intel and AMD chips will force people to reevaluate Apple's chips.
The last time I checked, consumer prices for DDR5 modules were $7 to $8/GB. 25% overhead to that is less than $2/GB. Market prices for high-speed LPDDR5 may be higher, but those are usually based on estimates of what people are willing to pay rather than on true costs. If those high-end features were a required part of the standard, prices for high-end modules would obviously drop.
How did you come up with LPDDR5 6400 being less than $2/GB? I'm seeing more than 5x that for just the ICs.
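Just to spell out the arithmetic behind the claim above, using only the figures given in the thread ($7-8/GB consumer DDR5, ECC treated as roughly 25% extra DRAM):

```python
# Reproducing the thread's own numbers; not market data.
ddr5_price_per_gb = (7.0, 8.0)   # USD/GB consumer DDR5, as stated above
ecc_overhead = 0.25              # extra DRAM assumed for side-band ECC on DDR5

for price in ddr5_price_per_gb:
    extra = price * ecc_overhead
    print(f"${price:.2f}/GB -> ECC adds ~${extra:.2f}/GB "
          f"(~${extra * 32:.0f} extra on a 32 GB configuration)")
# Both cases come out under $2/GB, which is where the "less than $2/GB" figure comes from.
```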
There are other costs involved, but they are also effectively free. Manufacturing is cheap, and you have to pay the R&D costs anyway if you want to have high-end features in high-end products.
You are forgetting the memory bus. ECC has costs well beyond the RAM modules themselves: memory bus, power consumption, etc. That said, we know very little about the M1's RAM and its capabilities.
#1 Cost-wise (and by every other metric), DDR5 at whatever throughput (today, commonly 4800 MT/s) != LPDDR5 @ 6400 MT/s (which is unsurprisingly more expensive).
The last time I checked, consumer prices for DDR5 modules were $7 to $8/GB. 25% overhead to that is less than $2/GB.
1. We are talking about costs, not prices. High-speed memory modules typically use chips of the same size manufactured using the same process as lower-speed modules. They have simply passed through somewhat stricter quality control. The differences you see in prices are mostly unrelated to costs.
#1 Cost-wise (and by every other metric), DDR5 at whatever throughput (today, commonly 4800 MT/s) != LPDDR5 @ 6400 MT/s (which is unsurprisingly more expensive).
#2 Discounting whatever you decide is "overhead" by 1/4 is fallacious to an extreme. GBs for whatever purpose are still GBs that you pay for.
The last time I checked, consumer prices for DDR5 modules were $7 to $8/GB. […] Market prices for high-speed LPDDR5 may be higher
Nope.
1. We are talking about costs, not prices.
We were talking about costs, not prices. You didn't read the discussion carefully enough. I wasn't careful enough to mention explicitly that I was using the market price for the cheapest product of the right type as an upper bound for the cost.
Nope.
Where is the fun in that?
I have a smoking hot girlfriend, make six figures, live in lower Manhattan, and love what I do.
I suggest you find some better things to focus on than numbers of a processor performance score.
Perhaps, instead of condescendingly criticising strangers on an internet forum, you could use your obvious gifts to find better things to focus on.