Not really ignoring that. 3nm advantage would still be part of Apple's benefit. You don't just use the scale tool on a 5nm chip to shrink it to a 3nm chip, it needs to be redesigned specifically for 3nm architecture.

If AMD/Intel wants to do 3nm, sure, let's compare it. If AMD does a magical 2nm process, it's only fair AMD gets handed the crown, assuming it wins performance per watt metric. Until then, Apple wins.
Process node from the same company has the same characteristics for everybody. There are minor variants intended to reflect differences between architectures, but the PDK (Process Design Kit) is the same for everybody.

Apple was sitting on the 5 nm process for what, 3-4 years? Only just recently did AMD go to 5 nm with its Zen 4 designs. Not to mention Intel's process tech is absolute garbage compared to TSMC's, hence the power draw of Intel CPUs on their own process nodes. Ignoring the advantage that Apple has in terms of process node in any discussion about performance per watt, and claiming that AMD or Intel are straight up losing to Apple, is ... laughable. We already have 25W 7040U APU designs from AMD that are either beating or tying with the Apple M2.
 
Process node from the same company has the same characteristics for everybody. There are minor variants intended to reflect differences between architectures, but the PDK (Process Design Kit) is the same for everybody.

There is still a ton of work that goes into designing for 3nm. Saying "oh well, Apple just benefitted from the 3nm process, anyone can do that" does not paint the entire picture of how much work went into actually designing high-performance-per-watt chips. That's like saying thanks to Samsung for beautiful iPhone displays and thanks to Sony for the amazing iPhone photos.
 
I don't think an ARM Hackintosh will ever be a thing. Apple has special features on chip, so for example, Rosetta will never function on an off-the-shelf ARM chip.

You might be able to get by without the media engine since the original M1 chip didn't have one, but it'll be way worse than the Apple equivalent. Worse, there are almost no decent laptop-class ARM chips that can compete with even the original M1.
Hackintosh users don't care about performance or battery life and things like that; they only care about making OS X run on non-Apple hardware.
 
I'd bet one more; it seems likely they'll give the 2019 Mac Pro a final update in 2024, given they only just replaced it in the lineup. That will likely include the 2018 Mac mini and all the 2020 Intel machines as well.

2024 OS X tops; I doubt they will release a 2025 OS X for x86.
 
Hackintosh users don't care about performance or battery life and things like that; they only care about making OS X run on non-Apple hardware.

That’s a pretty broad and ridiculous claim

Indeed, many people run macOS on a PC precisely because they care about performance, among other things

That’s why I do it
 
That’s a pretty broad and ridiculous claim

Indeed, many people run macOS on a PC precisely because they care about performance, among other things

That’s why I do it
They care about running it cheaper than Apple. Of course, if you build your own computer it is likely going to be faster, but only for desktops. For laptops, people want them to run for hours without needing a single charge.

Hackintosh people don't care about consuming fewer watts; they only care about getting OS X to run as natively as possible.

So Hackintosh is pretty much 90% of desktop users.
 
There is still a ton of work that goes into designing for 3nm. Saying "oh well, Apple just benefitted from the 3nm process, anyone can do that" does not paint the entire picture of how much work went into actually designing high-performance-per-watt chips. That's like saying thanks to Samsung for beautiful iPhone displays and thanks to Sony for the amazing iPhone photos.
You clearly do not understand the process node topic.

The PDK is doing exactly what you are describing. You use the PDK to optimize your architecture for that process. But the process is the same for everybody, and everybody gets the same PDK, from TSMC. And yes, Apple benefits from the process node advantage they have had. Currently they are moving to N3, and everybody flaps their arms as if their products are the best in the business. They aren't, which AMD is proving with lower-TDP 5nm APUs compared to the M2. Apple simply benefited from the process node.
 
Hackintosh users don't care about performance or battery life and things like that; they only care about making OS X run on non-Apple hardware.
MacOS, which is by far the least performant OS on the planet, behind Linux and Windows.

The only thing Apple has is really good hardware.
 
You clearly do not understand the process node topic.

The PDK is doing exactly what you are describing. You use the PDK to optimize your architecture for that process. But the process is the same for everybody, and everybody gets the same PDK, from TSMC. And yes, Apple benefits from the process node advantage they have had. Currently they are moving to N3, and everybody flaps their arms as if their products are the best in the business. They aren't, which AMD is proving with lower-TDP 5nm APUs compared to the M2. Apple simply benefited from the process node.

You clearly are not understanding the process design-wise. I don't have a hardware engineering background, but even I know that power and heat management is not a given. When you go from 10nm to 5nm or even 3nm, it's much more difficult to manage power and heat. This is especially true when the complexity of the design affects yields, which is a shared concern between the designer and TSMC.

Apple clearly took on the extra work in designing for 3nm, therefore credit is given to them, not exclusively to TSMC.
 
You clearly are not understanding the process design-wise. I don't have a hardware engineering background, but even I know that power and heat management is not a given. When you go from 10nm to 5nm or even 3nm, it's much more difficult to manage power and heat. This is especially true when the complexity of the design affects yields, which is a shared concern between the designer and TSMC.

Apple clearly took on the extra work in designing for 3nm, therefore credit is given to them, not exclusively to TSMC.
Indeed, heat density increases with smaller nodes.

It doesn't change anything for Apple or for AMD. People were comparing 5 nm Apple products with 7 nm AMD/Nvidia products and claimed that Apple won everything. AMD brought Zen 4 to 5 nm APUs at 25W TDP, and suddenly the difference was either nil, or AMD won. Apple won that round only because of its process node advantage.
 
Indeed, heat density increases with smaller nodes.

It doesn't change anything for Apple or for AMD.
Completely disagreed. Two completely different architectures likely mean different design approaches to accommodate heat and power in a 3nm process. AMD has a much more complicated design due to its instruction set, I would assume.
 
Process node from the same company has the same characteristics for everybody. There are minor variants intended to reflect differences between architectures, but the PDK (Process Design Kit) is the same for everybody.
It sounds like you're saying the microarchitecture doesn't matter. It does, and in a very substantive way.
We already have 25W 7040U APU designs from AMD that are either beating or tying with the Apple M2.
You'll forgive me if I don't simply believe some random guy on the internet. Can you actually back up that claim with independent reviews? Honestly, I don't understand why people think they can make these claims without providing backup. Otherwise they're just meaningless.

Michael Larabel of Phoronix did a comparative review of the Ryzen 7 7840U vs. the M2 under Linux, and was not willing to make the claim you're making, because he realized his review methodology didn't give him sufficient information. That's a sign of someone trustworthy—someone who is willing to admit the limits of his knowledge:

"Most interesting would have been to see how the power efficiency of the M2 SoC compares to the AMD chips tested, but alas there is currently no SoC power information exposed under Linux for the M2 while just monitoring wall power would have been not too accurate given the sharp hardware differences. In any event here is the AMD power numbers overall. If the Apple M2 is only pulling 5~8 Watts on load as some claim, even with the current state of the Linux support for Apple Silicon it would be quite a competitive battle."


Apple M2 On Linux Performance Against AMD Zen 4 Mobile SoCs
 
MacOS, which is by far the least performant OS on the planet, behind Linux and Windows.

The only thing Apple has is really good hardware.

I don't think MacOS is the least performant OS on the planet, but I also don't think it's the best, as many people claim.

I would say the opposite; for example, any gaming motherboard has a way better sound card than even the Mac Pro's sound card. And the list goes on with keyboards, mice, etc.
 
I know, but again, we aren't debating whether or not there is a market for something higher end than the 2023 Mac Pro. There certainly is, and I'm not arguing against that. What I am saying is that Apple has beautifully addressed just about the entire market below the very highest-end professional workflow, where 192GB of RAM isn't sufficient. There are a very small number of users who truly need more than 192GB of RAM. I also don't think Apple is shutting the door on that, but I think, because it is such a small market share, Apple hasn't rushed to find a solution around using their M series SoCs for higher RAM and GPU requirements.
I didn’t say anything about RAM. The GPU on the Mac Studio just cannot compete with NVIDIA. I’m running Blender much faster on Windows, and I only have 32GB of RAM on it.
 
MacOS, which is by far the least performant OS on the planet, behind Linux and Windows.
I think there's a fundamental misunderstanding here of what a performant OS means, in its most important sense. It's not which OS boots faster. Nor is it which OS runs apps faster (particularly because that is overwhelmingly dependent on the app rather than the OS). Instead, the most performant OS is the one that lets the user get his or her work done with the greatest possible efficiency and effectiveness, and minimum possible hassle.

I.e., the most performant OS is not the one that makes apps run best, it's the one that makes *us* run the best.

And for most on MR that is, overwhelmingly, MacOS. I've used all of the big three (Linux, Windows, and MacOS) and find that my friction for getting tasks done is far less in MacOS than it is in Windows or Linux. Of course, this is entirely personal preference. Many scientists, especially physicists and chemists, feel the same, which is why if you were at, say, an APS (American Physical Society) convention in the early 00's, when MacOS's market share was pretty low, you'd find a surprisingly large fraction of the attendees using MacOS.
 
I know, but again, we aren't debating whether or not there is a market for something higher end than the 2023 Mac Pro. There certainly is, and I'm not arguing against that. What I am saying is that Apple has beautifully addressed just about the entire market below the very highest-end professional workflow, where 192GB of RAM isn't sufficient. There are a very small number of users who truly need more than 192GB of RAM. I also don't think Apple is shutting the door on that, but I think, because it is such a small market share, Apple hasn't rushed to find a solution around using their M series SoCs for higher RAM and GPU requirements.
The RAM issue really should be resolved within the next few years as LPDDR5x becomes increasingly available, since its specification allows for much more memory (up to 64 GB/package) than LPDDR5.

The M2 Ultra has 12 DRAM dies (https://wccftech.com/apple-m2-ultra...e-bigger-than-intel-sapphire-rapids-xeon-cpu/). If those were replaced with 64 GB LPDDR5x modules on the M3 or M4, that would mean 12 * 64 GB = 768 GB. Not that Apple would necessarily go that far, but that does show the potential capacities LPDDR5x offers.

The NVIDIA Grace superchip offers 960 GB RAM with 16 LPDDR5x modules (60 GB/module); see https://www.servethehome.com/nvidia...4-cores-and-128-pcie-gen5-lanes-arm-neoverse/

I think one of the big issues now is the strict coupling between max RAM size and CPU cores. For instance, there are people like me that use 128 GB RAM, but whose programs are mostly single-threaded, and thus really don't need an Ultra, but would be forced to buy it if they wanted to get that much RAM. [In practice, depending on their budget, what they might do is get a 96 GB Max, and hope they don't have to swap too much.] I'm thus hoping what Apple does with LPDDR5x's higher capacities is to make high-RAM options available on their Pro and Max machines. For instance, with 2 x 64 GB RAM modules, you could buy a Pro with 128 GB RAM. And with 4 x 64 GB on a Max, you could get 256 GB.
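The capacity arithmetic above can be sketched in a few lines. The 64 GB-per-package ceiling is the LPDDR5x figure quoted above; the 2- and 4-package Pro/Max counts are hypothetical configurations, not announced products:

```python
# Hypothetical RAM ceilings if each LPDDR5x package tops out at 64 GB.
# Package counts: 12 for the M2 Ultra (per the WCCFTech link), 16 for
# NVIDIA Grace, and 2/4 for the hypothetical Pro/Max configs above.
MAX_GB_PER_PACKAGE = 64

configs = {
    "M2 Ultra (12 packages)": 12,
    "NVIDIA Grace (16 packages)": 16,
    "Hypothetical Pro (2 packages)": 2,
    "Hypothetical Max (4 packages)": 4,
}

for name, n in configs.items():
    print(f"{name}: up to {n * MAX_GB_PER_PACKAGE} GB")
# M2 Ultra (12 packages): up to 768 GB
# Hypothetical Pro (2 packages): up to 128 GB
```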
 
The M2 Ultra has 12 DRAM dies (https://wccftech.com/apple-m2-ultra...e-bigger-than-intel-sapphire-rapids-xeon-cpu/). If those were replaced with 64 GB LPDDR5x modules on the M3 or M4, that would mean 12 * 64 GB = 768 GB. Not that Apple would necessarily go that far, but that does show the potential capacities LPDDR5x offers.

...along with its 12 DRAM dies that are scattered in groups of four on each side.

[Image: delidded Apple M2 Ultra SoC alongside an Intel Xeon Sapphire Rapids CPU]


I think WCCFTech is mistaken; how does one get 12 from what is obviously only 8...?

With 64GB RAM chips, a Mn Max would go up to 256GB, and the Mn Ultra up to 512GB RAM capacity...

The NVIDIA Grace superchip offers 960 GB RAM with 16 LPDDR5x modules (60 GB/module); see https://www.servethehome.com/nvidia...4-cores-and-128-pcie-gen5-lanes-arm-neoverse/

60GB RAM chips seem quite the odd number; I would think these are actually 64GB chips implementing in-line ECC, thus "reduced" to 60GB chips...?

With the whole "pro Macs need ECC RAM" thing, maybe LPDDR5X brings in-line ECC RAM to the Apple Silicon platform...?
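The in-line ECC guess above can be checked with quick arithmetic. The 1/16 reserve fraction is purely an illustrative assumption that makes the numbers land on 960 GB, not a published figure:

```python
# 16 LPDDR5x packages x 64 GB each = 1024 GB raw capacity.
raw_gb = 16 * 64
# Hypothetical: reserve 1/16 of each package for in-line ECC codes.
usable_gb = raw_gb * (1 - 1 / 16)
print(usable_gb)  # 960.0 -> matches Grace's advertised 960 GB (60 GB/package)
```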
 
The RAM issue really should be resolved within the next few years as LPDDR5x becomes increasingly available, since its specification allows for much more memory (up to 64 GB/package) than LPDDR5.

The M2 Ultra has 12 DRAM dies (https://wccftech.com/apple-m2-ultra...e-bigger-than-intel-sapphire-rapids-xeon-cpu/). If those were replaced with 64 GB LPDDR5x modules on the M3 or M4, that would mean 12 * 64 GB = 768 GB. Not that Apple would necessarily go that far, but that does show the potential capacities LPDDR5x offers.

The NVIDIA Grace superchip offers 960 GB RAM with 16 LPDDR5x modules (60 GB/module); see https://www.servethehome.com/nvidia...4-cores-and-128-pcie-gen5-lanes-arm-neoverse/

I think one of the big issues now is the strict coupling between max RAM size and CPU cores. For instance, there are people like me that use 128 GB RAM, but whose programs are mostly single-threaded, and thus really don't need an Ultra, but would be forced to buy it if they wanted to get that much RAM. [In practice, depending on their budget, what they might do is get a 96 GB Max, and hope they don't have to swap too much.] I'm thus hoping what Apple does with LPDDR5x's higher capacities is to make high-RAM options available on their Pro and Max machines. For instance, with 2 x 64 GB RAM modules, you could buy a Pro with 128 GB RAM. And with 4 x 64 GB on a Max, you could get 256 GB.
Apple will not replace memory modules with LPDDR5X and go straight up to 64 GB chips, simply because they use bog-standard memory chips that are available on the market and package them themselves into a certain form factor.

But they will increase the bus width with the M3 series by 50%, for which we already have proof in the form of the 36 GB M3 Pro and 48 GB M3 Max chips.

36 GB is impossible on a 256-bit bus with current memory modules. It can only happen on a 192- or 384-bit bus. 48 GB, analogously: a 384- or 768-bit memory bus.
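A minimal sketch of that divisibility argument, assuming each memory package occupies a 64-bit slice of the bus (the package width here is an illustrative assumption):

```python
# Capacity must split evenly across equal-density packages, and the
# resulting per-package density must be one that actually exists.
def per_package_gb(total_gb, bus_bits, package_bits=64):
    packages = bus_bits // package_bits
    return total_gb / packages

print(per_package_gb(36, 256))  # 9.0  -> no 9 GB LPDDR5 package exists
print(per_package_gb(36, 192))  # 12.0 -> a standard density, so 192-bit works
```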
 
It’s one of the largest markets in computers. Building your own… just because you don’t do it doesn’t mean it doesn’t exist.
Even prebuilts for around $1700 wipe the floor with the Studio.
It's the smallest market in computers, and it shrinks rapidly with each generation of GPUs. The only moments when there are spikes in buying volume were, and maybe still will be, crypto booms.

Desktop DIY is dying. And it will die even more when, very soon, Small Form Factor PCs that use AMD APUs or Intel SoCs become good enough for the majority of people that it will simply not be feasible to build your own computer. DIY will become a niche within the current niche.
 
It's the smallest market in computers, and it shrinks rapidly with each generation of GPUs. The only moments when there are spikes in buying volume were, and maybe still will be, crypto booms.

Desktop DIY is dying. And it will die even more when, very soon, Small Form Factor PCs that use AMD APUs or Intel SoCs become good enough for the majority of people that it will simply not be feasible to build your own computer. DIY will become a niche within the current niche.

Do either of you have any numbers or are you both just speculating?
 
It's the smallest market in computers, and it shrinks rapidly with each generation of GPUs. The only moments when there are spikes in buying volume were, and maybe still will be, crypto booms.

Desktop DIY is dying. And it will die even more when, very soon, Small Form Factor PCs that use AMD APUs or Intel SoCs become good enough for the majority of people that it will simply not be feasible to build your own computer. DIY will become a niche within the current niche.
DIY won't die, desktop prebuilts won't die, and Small Form Factor PCs don't mean people won't DIY them. It could be that SoCs (AMD/Intel) become better, but you will still need to add your own RAM, SSD, and case at least.

Not everybody likes a computer with no way to upgrade anything, as Apple makes. As long as there are people who want customization, DIY won't die. And corporations won't use laptops; they will use desktops for their users, and those computers will be prebuilts and won't be Macs in 99% of cases.
 
Do either of you have any numbers or are you both just speculating?
YOU are the only one who is speculating here.


Desktop is dying. Period. DIY is dying. Period. DIY is becoming only the highest-end, enthusiast-level niche. Nothing else.
DIY won't die, desktop prebuilts won't die, and Small Form Factor PCs don't mean people won't DIY them. It could be that SoCs (AMD/Intel) become better, but you will still need to add your own RAM, SSD, and case at least.

Not everybody likes a computer with no way to upgrade anything, as Apple makes. As long as there are people who want customization, DIY won't die. And corporations won't use laptops; they will use desktops for their users, and those computers will be prebuilts and won't be Macs in 99% of cases.
Considering where upcoming SoCs are going, you will NOT need to bring your own RAM.
 