
Ethosik

Contributor
Oct 21, 2009
8,142
7,120
The problem Intel has is that even if it sat down with a clean sheet of paper and designed P and E cores that were meant to work together (e.g. they each support the same instructions, and neither is repurposed from another microarchitecture), both the P and the E cores would still need to decode variable-length instructions. That penalty can never be overcome except by a superior fab process, and there is no indication that Intel will beat TSMC any time soon. Long term, they need to come up with a RISC design that the market accepts.

They’d still suffer from not controlling the entire software stack, but they could get to “good enough.”
So with that long term RISC design you speak of, do you think x86 is essentially legacy? Something we need to move away from?
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
So with that long term RISC design you speak of, do you think x86 is essentially legacy? Something we need to move away from?

I think so. Emulate it for a while and then ditch it. When x86 started, it had the advantage of higher code density; there was a time when fitting the most instructions into the least amount of RAM was hugely important. That was a very long time ago.

When we extended x86 to 64-bit we did our best to make some improvements - I wrote the first draft of the integer instructions, and we simplified things, improved the register file, etc. But legacy is a bitch. The instruction decoding creates all sorts of issues, not the least of which is that you have to take at least a cycle just figuring out where instructions start and stop and deciding whether you need to read the microcode ROM. You can’t issue wide unless you spend a LOT of power pre-decoding or redundantly decoding all over the place. I *think* what Intel *could* do is get rid of all the 8, 16 and 32 bit instructions. This would still break a lot of legacy software (and the way operating systems boot would have to change), but it would get a lot closer to RISC in terms of power efficiency.
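A toy sketch of the boundary problem described above (purely illustrative Python, not real x86 or Arm decoding): with a fixed 4-byte encoding, every instruction's start address in a fetch window is known immediately, so a wide front end can decode them in parallel; with a variable-length encoding, each start depends on the lengths of everything before it, which is the serial dependency that costly pre-decoding tries to break.

```python
# Toy illustration of instruction-boundary finding (not real x86/Arm decoding).

def fixed_width_starts(code: bytes, width: int = 4) -> list[int]:
    # Fixed-width ISA: instruction N starts at N * width, known immediately,
    # so a wide front end can decode many instructions in parallel.
    return list(range(0, len(code), width))

def variable_length_starts(code: bytes, length_of) -> list[int]:
    # Variable-length ISA: instruction N's start depends on the lengths of
    # instructions 0..N-1, so boundaries must be found serially (or guessed
    # speculatively / pre-decoded redundantly, at a power cost).
    starts, pc = [], 0
    while pc < len(code):
        starts.append(pc)
        pc += length_of(code, pc)
    return starts

def toy_length(code, pc):
    # Hypothetical length rule, purely for the demo: a 0x66 lead byte means a
    # 3-byte instruction, anything else means 2 bytes.
    return 3 if code[pc] == 0x66 else 2

blob = bytes([0x66, 0x01, 0x02, 0x03, 0x04, 0x66, 0x05, 0x06])
print(fixed_width_starts(blob))                  # [0, 4]
print(variable_length_starts(blob, toy_length))  # [0, 3, 5]
```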
 

Ethosik

Contributor
Oct 21, 2009
8,142
7,120
I think so. Emulate it for a while and then ditch it. When x86 started, it had the advantage of higher code density; there was a time when fitting the most instructions into the least amount of RAM was hugely important. That was a very long time ago.

When we extended x86 to 64-bit we did our best to make some improvements - I wrote the first draft of the integer instructions, and we simplified things, improved the register file, etc. But legacy is a bitch. The instruction decoding creates all sorts of issues, not the least of which is that you have to take at least a cycle just figuring out where instructions start and stop and deciding whether you need to read the microcode ROM. You can’t issue wide unless you spend a LOT of power pre-decoding or redundantly decoding all over the place. I *think* what Intel *could* do is get rid of all the 8, 16 and 32 bit instructions. This would still break a lot of legacy software (and the way operating systems boot would have to change), but it would get a lot closer to RISC in terms of power efficiency.
Emulate it like how Windows on ARM is doing now? How effective do you think that can be? Will games written in x86 have major performance penalties due to being emulated? Do you think Windows will essentially and eventually lose the "run software from 20+ years ago on Windows 11" backwards compatibility?
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Emulate it like how Windows on ARM is doing now? How effective do you think that can be? Will games written in x86 have major performance penalties due to being emulated? Do you think Windows will essentially and eventually lose the "run software from 20+ years ago on Windows 11" backwards compatibility?

You could emulate it like WoA or Rosetta (which are, from a high level, not too different), or by sticking one x86 core on the die (like Intel did for the original Itanium).

No idea what *will* happen, but what it comes down to, in my opinion, is that Intel has to make a decision. Is the key to their survival that they are legacy compatible? Is it that they have the best designers? That they have the best fabs?

They *never* had the best designers. They used to have the best fabs. Maybe they can again. If not, do they have to rely on backwards compatibility? Will that be something that’s important in 10 years? It’s been getting less important over time as the world shifts toward mobile.
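For readers wondering what "emulate it like WoA or Rosetta" means mechanically, here is a deliberately tiny sketch of the translate-once-and-cache idea behind dynamic binary translation. Everything concrete below (the toy guest ops, the block layout) is an assumption for illustration, not how Rosetta 2 or Windows-on-ARM is actually implemented.

```python
# Minimal sketch of dynamic binary translation with a translation cache.
# Guest "instructions" are (op, operand) tuples grouped into basic blocks;
# a real translator would emit host machine code instead of Python closures.

translation_cache = {}

def translate_block(block):
    """Translate one guest basic block into a host-native callable (here: Python)."""
    def native(state):
        for op, arg in block:
            if op == "add":
                state["acc"] += arg
            elif op == "mul":
                state["acc"] *= arg
        return state
    return native

def execute(blocks, state):
    for pc, block in enumerate(blocks):
        fn = translation_cache.get(pc)
        if fn is None:                 # pay the translation cost once...
            fn = translate_block(block)
            translation_cache[pc] = fn
        state = fn(state)              # ...then reuse the cached translation
    return state

print(execute([[("add", 2), ("mul", 3)], [("add", 1)]], {"acc": 0}))  # {'acc': 7}
```

The point of the cache is that hot code pays the translation cost only on first execution, which is why this style of emulation can end up "good enough" in practice, as the posts below note.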
 

Ethosik

Contributor
Oct 21, 2009
8,142
7,120
You could emulate it like WoA or Rosetta (which are, from a high level, not too different), or by sticking one x86 core on the die (like Intel did for the original Itanium).

No idea what *will* happen, but what it comes down to, in my opinion, is that Intel has to make a decision. Is the key to their survival that they are legacy compatible? Is it that they have the best designers? That they have the best fabs?

They *never* had the best designers. They used to have the best fabs. Maybe they can again. If not, do they have to rely on backwards compatibility? Will that be something that’s important in 10 years? It’s been getting less important over time as the world shifts toward mobile.
Do you think that Microsoft and Intel/AMD will need to make the decision all together? Or do you think the processor manufacturers will make the decision and Microsoft will need to change to support their decision? When you worked with processors, did you work closely with Microsoft or is that typically what happens?
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Do you think that Microsoft and Intel/AMD will need to make the decision all together? Or do you think the processor manufacturers will make the decision and Microsoft will need to change to support their decision? When you worked with processors, did you work closely with Microsoft or is that typically what happens?

We worked with Microsoft, because when we did AMD64, if Microsoft didn’t support it, we wouldn’t be able to sell anything. Nobody wants to be a “Linux-only” processor.

I imagine that Intel and/or AMD, if they are thinking of doing something new, would talk to Microsoft. That said, the next question is whether they would try to come up with their own ISA or just give in and adopt Arm (or, far less likely, RISC-V). If they adopted Arm, and provided some sort of layer that made it transparent to x86 software, they wouldn't have to work with Microsoft very much.
 

tmoerel

Suspended
Jan 24, 2008
1,005
1,570
I am even playing the new HOT game at the moment on my GTX 1080. Take a look at Elden Ring's system requirements. So again, how is a 3090 NEEDED? It's a brand new game, released in 2022, that an old GTX 1080 can play at around 60 FPS on average. And the M1 Ultra is at least better than a 1080.
Who cares!!?? Macs are production machines. Gaming on Mac is totally irrelevant. How can you game on a MacBook if it (luckily) doesn't have RGB backlight on the keyboard??!! :p
 
  • Like
Reactions: AlphaCentauri

Abazigal

Contributor
Jul 18, 2011
20,392
23,893
Singapore
Again showing your ignorance - 'If the reports I am hearing', just lol. I am talking about laptop intel chips. Ones that are already in released laptops.

Here -
I watched those videos. That's 30 minutes I would like back.

The laptop referenced in the video is the MSI GE76. I looked up the specs.

The laptop is not cheap, at $4K.
- 17.3" 1080p display, 32GB RAM, an inch thick, 6.4 pounds.
- Pretty heavy-duty cooling (some kind of liquid metal pad or something), because apparently this thing generates enough heat to keep your house warm in the winter.
- 330W charger.
- Battery life seems to be sub-par (quoted at 4.5 hours of Chrome usage with the GPU turned off), and that is after making serious compromises on display resolution.

In contrast, that same money gets me a 16" M1 Max MBP variant, with the following:
- Way better display, 0.66 inches thick, 4.8 pounds (so about 75% of the weight and thickness).
- Fan noise that is pretty negligible, for users desiring a quieter working environment.
- Better battery life (especially if you are working away from a power source).

These observations are pretty much in line with the ones I made in the other thread about Alder Lake (I recall why I said what I did; I was specifically referring to the supposedly more energy-efficient chips slated for the March CES event). Intel basically finds itself in a lose-lose situation when trying to compete against the M1 Pro / Max chips.

In order to trade blows in CPU compute, Intel basically dedicated their entire die budget to CPU, resulting in 14 cores (compared to Apple's 10), allowing them to narrowly eke out a win in terms of performance, but at the expense of pretty much everything else. You have to throw in a pretty power-hungry GPU to fill that gap (which is where the 3080 comes in). But this also leaves the laptop with 2 extremely power-hungry chips in tow, effectively decimating battery life.

To summarise: it currently takes a hulking 17.3" laptop with massive cooling, a dedicated GPU, a gimped display, more thickness and weight, and way worse battery life just to narrowly beat the MBP in performance, while losing in pretty much every other metric. Meanwhile, an M1 MBP offers the best of all worlds: great sustained performance, long battery life, and a cool and quiet working environment.

Intel wants so badly to have the performance crown again, but it simply doesn't have the tech right now to pull this off gracefully without completely compromising the overall laptop experience. Which was the point I have been trying to make all along - nobody can match the specific experience that Apple is offering with their Mac lineup right now, because of the control over hardware and software that Apple currently wields, and it's not something Intel can replicate (much less replicate overnight), because they are providing just one of several parts of what forms the final experience of a PC.
 

Ethosik

Contributor
Oct 21, 2009
8,142
7,120
Who cares!!?? Macs are production machines. Gaming on Mac is totally irrelevant. How can you game on a MacBook if it (luckily) doesn't have RGB backlight on the keyboard??!! :p
I agree! But when people come and state that NVIDIA and AMD will beat an M1 Ultra and make it irrelevant in 6-7 months, that's just crazy. And I provided proof for my claim that a brand new 2022 game performs very well on a GTX 1080. Not a 30 series, not a 20 series, but a 10 series.

This is like saying the M1 Ultra cannot POSSIBLY compete with the 2019 Mac Pro because the Mac Pro offers 1.5 TB of RAM. Does SOMEONE need that much RAM? Yes. Is it relevant for a conversation on this website? Not really.

And the gaming attitude is crazy too. If a brand new hot game runs just fine on a GTX 1080, then I think gaming would be better if the DEVELOPERS even cared about macOS (speaking as a developer, I don't really care about macOS gaming either). The Field of Dreams "If they build it, games will come" attitude is just crazy talk.
 
  • Like
Reactions: Abazigal

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Who cares!!?? Macs are production machines. Gaming on Mac is totally irrelevant. How can you game on a MacBook if it (luckily) doesn't have RGB backlight on the keyboard??!! :p

Gaming on Mac is absolutely great if you like the kind of games that are available (mostly strategy, simulation and RPG genre). My M1 Max runs everything I want in 4K with high settings.
 
  • Like
Reactions: Colstan and Ethosik

Ethosik

Contributor
Oct 21, 2009
8,142
7,120
Gaming on Mac is absolutely great if you like the kind of games that are available (mostly strategy, simulation and RPG genre). My M1 Max runs everything I want in 4K with high settings.
Yep! Half of my gaming is on my Mac since I like playing games like Terraria, Factorio and Stardew Valley.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Regarding the original post… what does “will be beaten” mean in this context? Absolute performance? Well, you can buy a faster AMD/Nvidia config today. Performance for money? You can absolutely build a faster-than-M1-Max desktop for under $2K. More VRAM? That will take a couple of years at best. More performance per watt? That will take the better part of a decade.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Yep! Half of my gaming is on my Mac since I like playing games like Terraria, Factorio and Stardew Valley.

And even modern, fairly demanding games like the Total War series and Baldur's Gate 3 run very well.
 

Colstan

macrumors 6502
Jul 30, 2020
330
711
Regarding the original post… what does “will be beaten” mean in this context? Absolute performance? Well, you can buy a faster AMD/Nvidia config today. Performance for money? You can absolutely build a faster-than-M1-Max desktop for under $2K. More VRAM? That will take a couple of years at best. More performance per watt? That will take the better part of a decade.
I realize this is the boring answer, but because it's the MacRumors forum, the most important benchmark is which processor runs macOS the best. Seeing how x86 is dead on Mac, despite its zombie corpse still roaming the hallways, the answer is obvious.

The problem is, of course, that makes the discussion a dead-end and entirely uninteresting, particularly because it makes it impossible for the PC crowd to effectively counter. All the Tensor Cores, Infinity Fabrics, Lakes and Coves are meaningless if it can't run your preferred operating system.
 
  • Like
Reactions: tmoerel

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
I realize this is the boring answer, but because it's the MacRumors forum, the most important benchmark is which processor runs macOS the best. Seeing how x86 is dead on Mac, despite its zombie corpse still roaming the hallways, the answer is obvious.

The problem is, of course, that makes the discussion a dead-end and entirely uninteresting, particularly because it makes it impossible for the PC crowd to effectively counter. All the Tensor Cores, Infinity Fabrics, Lakes and Coves are meaningless if it can't run your preferred operating system.
Yeah I never understood the idea that someone could buy into MacOS and at the drop of a hat switch to Windows (and have to rebuy all their software).
 

caribbeanblue

macrumors regular
May 14, 2020
138
132
According to what I read in other forums, the Apple GPU doesn't seem to be able to keep up with full-fledged graphics cards outside of Apple-specific scenarios. The feature set is also not up to date and is perhaps more in line with DirectX 10.
Ray tracing hardware is also missing. This is why we get such results as here: https://osong.xyz/benchmarking-m1-pro-with-blender-3-1a-cycles-with-metal/.
In another forum someone wrote that he could render the OptiX scene in only 17 seconds instead of 43 with his 2.5-year-old laptop with a 2060 GPU, which cost €900 at the time.

I myself am not an expert. However, Apple's graphics are of course very efficient, but one should not assume they can completely replace dedicated graphics cards in terms of performance and features (depending on the application scenario).
https://devtalk.blender.org/t/cycles-apple-metal-device-feedback/21868/237 Read this response from an Apple engineer working on the Blender Cycles renderer. The Metal backend implementation for ASi (or for other Apple machines w/ AMD & Intel GPUs) isn't anywhere near fully baked. Of course you're going to get questionable results for now.
 

Ethosik

Contributor
Oct 21, 2009
8,142
7,120
  • Like
Reactions: caribbeanblue

throAU

macrumors G3
Feb 13, 2012
9,201
7,354
Perth, Western Australia
Height? No. Still useful, yes. Never gave a damn about Photoshop. Considering your view of Linux, you have zero experience with current HPC systems. You have fun with your toys.

MacOS had the potential to be that successor. Potential. Now, nothing more.

Why are you here discussing hammers when this site is for screwdrivers?

They're both tools. Linux does HPC well. It's a crappy desktop (yes I use it and have since 1994). Both platforms have their purposes.
 

jjcs

Cancelled
Oct 18, 2021
317
153
Why are you here discussing hammers when this site is for screwdrivers?

They're both tools. Linux does HPC well. It's a crappy desktop (yes I use it and have since 1994). Both platforms have their purposes.

Works better as a desktop for work than MacOS, for me. All depends on workflow and the tools required.
 

jjcs

Cancelled
Oct 18, 2021
317
153
So you like leaving massive amounts of performance on the table on your "HPC" gear?
When I'm using tools written against OpenGL, porting to Metal means expending time and resources better spent elsewhere, since the same tool will continue to run fine on Linux after OGL is finally dropped on the Mac.

If I'm developing GPU applications for use on HPC, Metal isn't all that helpful, either. In fact, it isn't at all. Apple isn't in that market and doesn't sell hardware for that market. An actual standard backed by the major players would be awesome, but we aren't really at that point since OpenCL more or less died.
 

throAU

macrumors G3
Feb 13, 2012
9,201
7,354
Perth, Western Australia
Emulate it like how Windows on ARM is doing now? How effective do you think that can be? Will games written in x86 have major performance penalties due to being emulated? Do you think Windows will essentially and eventually lose the "run software from 20+ years ago on Windows 11" backwards compatibility?
Emulate it like Rosetta 2 is doing on Apple Silicon right now.

It works fast enough to be an upgrade from any Intel MacBook, pretty much.
 

throAU

macrumors G3
Feb 13, 2012
9,201
7,354
Perth, Western Australia
When I'm using tools written against OpenGL, porting to Metal means expending time and resources better spent elsewhere, since the same tool will continue to run fine on Linux after OGL is finally dropped on the Mac.

If I'm developing GPU applications for use on HPC, Metal isn't all that helpful, either. In fact, it isn't at all. Apple isn't in that market and doesn't sell hardware for that market.

If you're using Linux, why are you not using Vulkan or CUDA or any of the other far more recent (open, if you plan to run on Linux) GPU APIs instead?

Writing for OpenGL is writing software for 1999...
 