Jack Neill

macrumors 68020
Sep 13, 2015
2,272
2,308
San Antonio Texas
I don’t miss the era in which Intel intentionally stagnated on progress for a decade until Apple and AMD came along to light a fire under their butts. Intel was trash. The only good thing about Intel Macs was Boot Camp.
I have some great Intel Macs that I have a strong emotional attachment to. I would say there are a lot of things, including Boot Camp, that made Intel Macs great.
 

winxmac

macrumors 68000
Sep 1, 2021
1,560
1,823
Unified memory is like a laptop with Intel integrated graphics and no discrete GPU [AMD or NVIDIA], so in this case I think the Intel version with 16GB RAM would be better... although it would still depend on the workload you'd run on each...

Correct me if I'm wrong, but to me a 32GB Apple Silicon machine is like an Intel version with 16GB RAM and a discrete GPU with at least 2GB of VRAM...
 

ctjack

macrumors 68000
Mar 8, 2020
1,553
1,569
I don’t miss the era in which Intel intentionally stagnated on progress for a decade until Apple and AMD came along to light a fire under their butts. Intel was trash. The only good thing about Intel Macs was Boot Camp.
I feel like everyone at my work is being Intel: so much talk left and right about the workload, everyone putting on a busy face, but at the end of the day the goal is to do less work and still get paid.
So I guess this is natural human behaviour - Intel is also run by humans: why do more if you get paid the same?
 

russell_314

macrumors 604
Feb 10, 2019
6,659
10,260
USA
If the M1 has to wait for paging activity, it will be slower than an i7 that doesn't.
I’m sure there’s some scenario where this could happen, but with how much faster swap is on the M1, I really don’t see it happening often. Sure, if you know you need 16 GB because you’re doing some specific task, then buy that, but either way, unless you need to run x86 Windows, there’s no reason to get the Intel model because it’s going to be slower for the majority of tasks.

You can find videos showing that even doing something like rendering 4K video, the base M1 destroys the Intel models.
 
  • Like
Reactions: chabig

Basic75

macrumors 68020
May 17, 2011
2,101
2,447
Europe
I’m sure there’s some scenario where this could happen, but with how much faster swap is on the M1, I really don’t see it happening often. Sure, if you know you need 16 GB because you’re doing some specific task, then buy that, but either way, unless you need to run x86 Windows, there’s no reason to get the Intel model because it’s going to be slower for the majority of tasks.

You can find videos showing that even doing something like rendering 4K video, the base M1 destroys the Intel models.
I'm not advocating for buying Intel Macs. I'm advocating for not crippling a fast new computer by giving it too little RAM.
 

russell_314

macrumors 604
Feb 10, 2019
6,659
10,260
USA
I'm not advocating for buying Intel Macs. I'm advocating for not crippling a fast new computer by giving it too little RAM.
Of course buy the specification you need. If you buy a computer with 16 GB of RAM when you really need 32 GB, then that’s a problem. By the same token, if you really only need 8 GB and you buy 32 GB, you’ve wasted some money, but it will still work. Your budget will determine if that few hundred dollars matters to you.

Since the question of this thread is about an Intel Mac with 16 GB versus Apple Silicon with 8 GB, my answer is I would take the Apple Silicon Mac any day of the week and twice on Sunday over an Intel Mac with 16 GB of RAM. I really don’t see any situation where an extra 8 GB of RAM is going to make an Intel Mac outperform an M1. I’m not saying there isn’t some niche scenario where that might happen, though. It would be an interesting test for benchmarks or even running some regular programs.
 

russell_314

macrumors 604
Feb 10, 2019
6,659
10,260
USA
I have some great Intel Macs that I have a strong emotional attachment to. I would say there are a lot of things, including Boot Camp, that made Intel Macs great.
I agree with you that Intel Macs are good even today. I still remember my old 13-inch MacBook Pro with dedicated graphics. If you currently own an Intel MacBook that was made within the past few years, I think you’re fine. Clearly Apple Silicon outperforms it, but what matters is whether it will get the job done. I think people forget about the MacBook Air made before Apple Silicon. At that time, if you needed computing power you would not buy a MacBook Air, because it was really underpowered for anything other than basic tasks.
 
  • Like
Reactions: Basic75

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
That really depends on what you are doing. Systems with more RAM almost always perform better since they do not have to use swap memory much.
It would be more accurate to say they rarely perform worse, assuming a definition of performance that doesn't include cost or power.
 

Analog Kid

macrumors G3
Mar 4, 2003
9,360
12,603
If your workload fits in 8GB of RAM, without a doubt. If it needs 16GB RAM, possibly.
Exactly this.

It's important to remember that "workload" doesn't mean a bunch of tabs cached away and only visited every few minutes, it means the data under active continuous CPU demand.
 
  • Like
Reactions: Sumo999 and leman

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
The only thing AS does for compatibility is the TSO mode (and a couple of minuscule internal features like a half-carry).
re: the other minor features, if folks are curious...

I don't remember the details, but some of the reverse engineering guys found features which help Rosetta emulate 4K pages for x86. There's also an FPU mode which makes it do rounding and denormal handling exactly the same way x86 CPUs do (or is that what you mean by half-carry?). Finally, there are three different flags extensions (two standardized by Arm as optional features, one Apple custom) to help Rosetta emulate x86 flag behaviors.

For those wondering, flags extensions are often very important to emulators. They're minor things in that they cost almost no gates, but have a disproportionately huge impact on emulator performance. Say the x86 program runs an integer add instruction. Arm has arithmetically equivalent add instructions, so great! Just substitute an Arm add. But the job isn't only about the arithmetic; all integer ALU instructions also generate flag outputs, and x86 flags are slightly different from Arm flags. Emulating these differences might cost two or three more Arm instructions. On Apple Silicon, thanks to the flags extensions, Rosetta doesn't need those extra instructions. This means it can get much closer to the ideal world where every x86 instruction translates 1:1 to an Arm instruction.
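
If it helps to see that overhead concretely, here's a toy Swift sketch (invented names, nothing to do with Rosetta's real code) of what emulating just the flag side of a single x86 add looks like in software, without the hardware assists described above:

Code:
struct X86Flags {
    var carry = false, zero = false, sign = false, overflow = false
}

// One emulated 32-bit x86 ADD: the arithmetic itself is one Arm add,
// but every flag below costs extra instructions unless the hardware
// can produce x86-style flags directly.
func emulatedAdd(_ a: UInt32, _ b: UInt32, _ flags: inout X86Flags) -> UInt32 {
    let (result, carryOut) = a.addingReportingOverflow(b)
    flags.carry = carryOut
    flags.zero = (result == 0)
    flags.sign = (result & 0x8000_0000) != 0
    let (_, signedOverflow) = Int32(bitPattern: a).addingReportingOverflow(Int32(bitPattern: b))
    flags.overflow = signedOverflow
    return result
}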

Rosetta 2 simply translates x86 code into ARM code, the same way Rosetta translated PPC to x86 (almost a mirror image process, except for the BE/LE thing).
There is a major difference between Rosetta 2 and Rosetta 1, though. Rosetta 1 was a pure JIT (just-in-time) emulator; translation is done at runtime, incrementally, as the program executes. Rosetta 2 primarily operates in ahead-of-time (AOT) mode. The first time you launch an Intel binary on an Arm Mac, Rosetta 2 attempts to recompile the entire binary to Arm, writes the result to disk so future launches can be faster, and finally launches it. This is why the first launch of an x86 binary has a long delay.

Rosetta 2 does have a JIT mode, because it's not possible to run all x86 software with AOT recompilation. The most common example is x86 software which embeds a JIT engine of its own, such as a browser engine with its Javascript JIT. Because the JIT generates x86 code at runtime, there is no way to do a static AOT translation of all the x86 code that will ever exist, and Rosetta 2 must fall back to JIT mode to handle it. JIT-in-JIT performs much worse than AOT, but at least it works.
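
To make the AOT/JIT distinction concrete, a purely illustrative Swift sketch (again invented, nothing like the real implementation): AOT pays the whole translation cost once, up front; JIT pays it block by block, interleaved with the program's own execution.

Code:
typealias X86Block = String   // stand-in for a block of x86 instructions
typealias ArmBlock = String   // stand-in for its Arm translation

func translate(_ block: X86Block) -> ArmBlock {
    "arm(" + block + ")"      // pretend this is expensive recompilation work
}

// AOT: translate the entire binary before it runs at all.
// This loop is the long first-launch delay.
func aotTranslate(_ binary: [X86Block]) -> [ArmBlock] {
    binary.map(translate)
}

// JIT: translate lazily, caching each block the first time it executes.
var jitCache: [X86Block: ArmBlock] = [:]
func jitExecute(_ block: X86Block) -> ArmBlock {
    if let cached = jitCache[block] { return cached }
    let translated = translate(block)
    jitCache[block] = translated
    return translated
}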
 

Choco Taco

Suspended
Nov 23, 2022
615
1,065
I'm sorry that Intel hurt you so badly. I hope you can recover one day and appreciate the 2006-2020 Apple era.
Yes, that's what's happening here. My hurt feelies. It's not that Intel stagnated their technology progression because they were the only game in town for so long, and didn't start innovating again until very recently, when Apple was forced to move forward with their own ARM technology because Intel wasn't going anywhere. AMD has caught up as well, and now Intel is finally making progress. Competition is good. The lack of competition is what forced Apple to move away from Intel's garbage, stagnating, power-hungry, surface-of-the-sun CPUs.

I'm basing my thoughts on something that actually happened. You're basing your thoughts about Intel on your emotional attachment to them ... in your words. So please spare me the condescension.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
That's not true. Chrome uses Blink, which was forked from WebKit about 10 years ago.

KHTML is the web browser engine from KDE, which is another desktop environment for Linux.

All these things are easy to find on the Internet.
You should have done some of that easy finding yourself; WebKit was forked from KHTML. @Sydde was being a bit loose in saying that Safari and Chrome are both KHTML, as WebKit has diverged from KHTML and Blink from WebKit, but they are all related codebases.
 

Schnort

macrumors regular
Oct 24, 2013
204
61
And all iOS browsers have to use the same WebKit rendering engine (though I hear that's about to change?)
 

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
There's also a FPU mode which makes it do rounding and denormal handling exactly the same way x86 CPUs do (or is that what you mean by half-carry?).

No, half-carry is a vestigial 8086 integer flag for the carry from bit 3 into bit 4. I think it was primarily used by the specific ASCII/BCD conversion instructions.
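
For the curious, emulators typically compute it with a small XOR trick (a sketch of the general technique, not anything specific to Apple's hardware):

Code:
// x86 auxiliary ("half-carry") flag after an 8-bit add: the carry
// out of bit 3 into bit 4, which DAA/AAA consult for BCD adjustment.
func auxiliaryFlag(_ a: UInt8, _ b: UInt8) -> Bool {
    let sum = a &+ b                      // wrapping 8-bit add
    return ((a ^ b ^ sum) & 0x10) != 0    // bit 4 differs iff bit 3 carried out
}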
 

TechnoMonk

macrumors 68030
Oct 15, 2022
2,604
4,112
Unified memory is like a laptop with Intel integrated graphics and no discrete GPU [AMD or NVIDIA], so in this case I think the Intel version with 16GB RAM would be better... although it would still depend on the workload you'd run on each...

Correct me if I'm wrong, but to me a 32GB Apple Silicon machine is like an Intel version with 16GB RAM and a discrete GPU with at least 2GB of VRAM...
That’s not how unified memory works.
 
  • Like
Reactions: Sydde

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
An MBA running its built-in screen at full res uses about 16MB for the screen itself and probably a little more than half that for GPU code and source data like font outlines. A 5K Studio Display would use around 56MB for the screen, but not any more source data or code.
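
Those figures are just framebuffer arithmetic; a quick back-of-the-envelope check, assuming a single framebuffer at 4 bytes per pixel (real systems double- or triple-buffer, so actual usage is a small multiple of this):

Code:
func framebufferMB(_ width: Int, _ height: Int) -> Double {
    Double(width * height * 4) / (1024 * 1024)   // 4 bytes per BGRA pixel
}
print(framebufferMB(2560, 1600))  // MacBook Air panel: ~15.6 MB
print(framebufferMB(5120, 2880))  // 5K Studio Display: ~56.3 MB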

That accounts for a large fraction of the memory on a discrete GPU. Since job source data (e.g., model specs and textures) has to be copied over to dGPU memory, that much is duplicated between VRAM and system RAM, so the real gain for a system using a dGPU is the transient data generated by the GPU during calculations. That is most likely not a really large amount of data – a tailored iGPU and its drivers will be set up to work in the most space-efficient way practical for the system.

Which is to say that UMA may cost the system one or two hundred megabytes compared to what would reside exclusively in dGPU VRAM. Not exactly trivial, as such, but not gigs and gigs. Couple this with the way Apple has designed their SoCs to use memory more efficiently and it becomes obvious that a Mac would most assuredly not need twice as much memory to match an x86 machine with a graphics card. Depending on what is being done.
 

Basic75

macrumors 68020
May 17, 2011
2,101
2,447
Europe
the way Apple has designed their SoCs to use memory more efficiently
What exactly did they do?

The only thing I can think of is that the system does not statically reserve a portion of RAM for the iGPU. It's like an Amiga with only chip RAM: you can have CPU and GPU data mixed in RAM and freely pass pointers between the two.

But does the iGPU have an MMU so that it can use the same translation tables as the CPU? Or do all GPU assets have to be in physically contiguous memory, with the application having to manually make sure the pointers passed to the GPU are the correct physical addresses?
 

leman

macrumors Core
Oct 14, 2008
19,520
19,671
Correct me if I'm wrong, but to me a 32GB Apple Silicon machine is like an Intel version with 16GB RAM and a discrete GPU with at least 2GB of VRAM...

How do you arrive at this conclusion? A 32GB AS Mac is like 32GB Intel, but you can also view it as a computer with up to 24GB GPU RAM, depending on your needs.
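
If anyone wants to check the "up to 24GB" part on their own machine, Metal reports the recommended GPU working-set limit directly; a minimal sketch (the exact fraction of RAM varies by model):

Code:
import Foundation
import Metal

if let device = MTLCreateSystemDefaultDevice() {
    // Recommended upper bound on the GPU's resident memory; on Apple
    // Silicon this is a large fraction of total unified memory.
    let gib = Double(device.recommendedMaxWorkingSetSize) / Double(1 << 30)
    print(String(format: "GPU working set limit: %.1f GiB", gib))
}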
 

leman

macrumors Core
Oct 14, 2008
19,520
19,671
But does the iGPU have an MMU so that it can use the same translation tables as the CPU? Or do all GPU assets have to be in physically contiguous memory with the application having to manually make sure the pointers passed to the GPU are the correct physicall addresses?

I did some (very superficial) investigation of this topic a while ago (by creating a large number of memory objects and examining their pointers on the CPU and GPU side). Apple GPUs use virtual memory with 16KB pages (larger page configurations may be possible, but I have no way to ascertain it), but with their own set of translation tables, which are different from those used on the CPU. Virtual addresses of the same memory object are thus different between the CPU and the GPU. Every Metal buffer gets its own virtual memory allocation (no surprise Apple recommends reducing the number of buffers). From the programmer's perspective it works like shared-memory IPC, but if your data contains pointers they will have to be marshalled to ensure the addresses are correct. What's also interesting is that (at least as of the Apple G13 GPU) the address space seems to be wider on the GPU: I've seen pointers on the GPU side that don't fit in ARM's 48-bit space.
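
For anyone wanting to repeat the experiment, here's a minimal sketch of that kind of probe using public API (my guess at how to reproduce it; MTLBuffer.gpuAddress requires Metal 3, i.e. macOS 13+):

Code:
import Foundation
import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}
// Allocate a few shared-memory buffers and compare each one's
// CPU-side pointer with its GPU-side virtual address.
for i in 0..<4 {
    guard let buffer = device.makeBuffer(length: 64 * 1024,
                                         options: .storageModeShared) else { continue }
    let cpuAddress = UInt(bitPattern: buffer.contents())
    let gpuAddress = buffer.gpuAddress
    print(String(format: "buffer %ld: CPU 0x%lx  GPU 0x%llx", i, cpuAddress, gpuAddress))
}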

As to whether the GPU can mount the same translation tables as the CPU, that's a good question. One should ask Asahi GPU hackers.
 