
pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
No, you didn’t miss anything. Unless they plan to add a lot of pipelines to each core, this is just further evidence of the inefficiency of the cores. And if they *are* adding a lot of pipelines, that’s a bad idea.

Is Intel expecting to beat Apple Silicon in 2025 by targeting what Apple has today, or what Apple will have in 2025? Apple has been fairly consistent in improving performance in their A-series over a long period of time. I'm asking myself the question: will I ever need to buy another x86 computer (or CPU) again? The M1X will provide at least a partial answer to that.

I just looked through the list of processes on my system and found one program that runs via Rosetta 2, which is Synergy. I'm using the free version from 2018, so it's no surprise that there's no Apple Silicon version. If and when Rosetta 2 goes away, I'll just buy the paid version. I have one other production program which runs via CrossOver and Rosetta 2. Performance is awful, but I have a couple of nicely configured Windows boxes and can run that one program on those systems. I imagine that many others have gone through this exercise, or are going through it, and things should get better with time. It will be kind of like the old PowerPC days, when Apple users really were different.
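
As an aside, if you'd rather script that check than squint at Activity Monitor's Kind column: Apple exposes a sysctl, sysctl.proc_translated, that reports whether the calling process is running under Rosetta 2. A minimal sketch in C (note it reports on the calling process only):

```
#include <stdio.h>
#include <sys/sysctl.h>

/* Returns 1 if the calling process is running translated under
   Rosetta 2, 0 if native, and -1 if the sysctl doesn't exist
   (e.g., on an Intel Mac, where there is no translation). */
static int running_under_rosetta(void) {
    int translated = 0;
    size_t size = sizeof(translated);
    if (sysctlbyname("sysctl.proc_translated", &translated, &size,
                     NULL, 0) == -1)
        return -1;
    return translated;
}

int main(void) {
    switch (running_under_rosetta()) {
        case 1:  printf("Running under Rosetta 2 (translated x86_64)\n"); break;
        case 0:  printf("Running natively\n"); break;
        default: printf("Translation status unavailable\n"); break;
    }
    return 0;
}
```

On an Intel Mac the sysctl doesn't exist at all, which is why the -1 case is there.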

The stuff that Microsoft is doing to desupport hardware older than five years is also factoring into my decision to go strictly with Apple Silicon for PCs going forward. If they can do it with W11, there's nothing stopping them from doing it with W12.
 

jz0309

Contributor
Sep 25, 2018
11,392
30,074
SoCal
But of course Intel PLANS … they have been “planning” a lot of things. What they really need to do is DELIVER.
Only time will tell
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
No, you didn’t miss anything. Unless they plan to add a lot of pipelines to each core, this is just further evidence of the inefficiency of the cores. And if they *are* adding a lot of pipelines, that’s a bad idea.

Bull poo-poo.

First, there is nothing there indicating that Intel is rolling SMT4 out to its whole CPU product line.

Second, execution bubbles are coupled to the workloads far more than to the core designs. Not everything is about racking up the highest scores on Geekbench or mainstream tech-porn benchmarks.

Third, Alder Lake's thread management assists in allocating work to the big cores first, then to the small cores, and finally to SMT (hyperthreads) on the big cores. For mainstream CPU products, Intel is already rolling out a solution that throws wide multithreaded workloads at non-SMT cores (Intel's 'E' cores) rather than at an SMT-first strategy. So the notion that they have some grossly defective core designs here is rather weak.
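
To make that ordering concrete, here's a toy sketch in C of the preference order (my paraphrase of the public Alder Lake / Thread Director descriptions, not Intel's actual scheduler logic; pick_cpu() and the core counts are made up):

```
#include <stdio.h>

/* Allocation preference, in priority order: 1) idle P-core primary
   threads, 2) idle E-cores, 3) P-core SMT siblings (hyperthreads).
   A hypothetical model, not Intel's actual Thread Director. */
enum cpu_kind { P_PRIMARY, E_CORE, P_SMT };

struct cpu { enum cpu_kind kind; int busy; };

static int pick_cpu(struct cpu *cpus, int n) {
    static const enum cpu_kind order[] = { P_PRIMARY, E_CORE, P_SMT };
    for (int pass = 0; pass < 3; pass++)
        for (int i = 0; i < n; i++)
            if (!cpus[i].busy && cpus[i].kind == order[pass])
                return i;
    return -1;  /* everything busy: queue the thread */
}

int main(void) {
    /* 2 P-cores (each with an SMT sibling) + 2 E-cores */
    struct cpu cpus[] = {
        { P_PRIMARY, 0 }, { P_SMT, 0 },
        { P_PRIMARY, 0 }, { P_SMT, 0 },
        { E_CORE, 0 },    { E_CORE, 0 },
    };
    int n = sizeof(cpus) / sizeof(cpus[0]);
    for (int t = 0; t < 7; t++) {      /* place 7 threads */
        int c = pick_cpu(cpus, n);
        if (c < 0) { printf("thread %d: queued\n", t); continue; }
        cpus[c].busy = 1;
        printf("thread %d -> cpu %d (%s)\n", t, c,
               cpus[c].kind == P_PRIMARY ? "P-core" :
               cpus[c].kind == E_CORE    ? "E-core" : "P-core SMT sibling");
    }
    return 0;
}
```

Run it and the first threads land on the P-cores, the next on the E-cores, and only then do the hyperthread siblings get used, with the seventh thread left queued.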

The real central issue here is what you do when you run out of physical cores and have far more workload requests than cores available. There are also upsides in not trying to solve every problem for everybody with a single CPU core design.

If there is a much higher amount of non-SIMD, yet concurrent, work to do, then adding a limited number of additional pipelines is better. The length of the bubbles guides whether those should be virtual pipelines or real discrete pipelines.
Putting four cores' worth of pipelines inside one core would be wrong, but that is a misapplication of SMT. SMT is primarily about using underutilized resources, not about dragging more threads into a single core; that was a different concept that hit dead ends a long time ago.
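
To make the "underutilized resources" point concrete, here is a toy single-issue pipeline model in C (RUN and STALL are invented numbers, purely for illustration): one thread stalls every few instructions, and a second hardware thread issues into the cycles that would otherwise be bubbles.

```
#include <stdio.h>

/* Toy single-issue pipeline. Each thread issues RUN instructions,
   then stalls STALL cycles (a pretend cache miss). The numbers are
   made up purely for illustration. */
#define CYCLES 1000
#define RUN    3
#define STALL  3

static long simulate(int nthreads) {
    long issued = 0;
    int  run_left[4] = { RUN, RUN, RUN, RUN };
    long ready_at[4] = { 0, 0, 0, 0 };
    for (long cyc = 0; cyc < CYCLES; cyc++) {
        for (int i = 0; i < nthreads; i++) {
            if (ready_at[i] <= cyc) {        /* thread can issue */
                issued++;
                if (--run_left[i] == 0) {    /* hit a "miss" */
                    run_left[i] = RUN;
                    ready_at[i] = cyc + 1 + STALL;
                }
                break;                       /* one issue slot per cycle */
            }
        }
        /* if no thread was ready, this cycle is a bubble */
    }
    return issued;
}

int main(void) {
    for (int n = 1; n <= 2; n++) {
        long done = simulate(n);
        printf("%d thread(s): %ld instructions in %d cycles (%.0f%% utilization)\n",
               n, done, CYCLES, 100.0 * done / CYCLES);
    }
    return 0;
}
```

With these numbers, one extra thread recovers essentially all of the bubbles (roughly 50% to roughly 100% utilization). Stretch STALL well past RUN and two threads no longer suffice, which is exactly the regime where something like SMT4 earns its keep.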


Is the solution space where SMT4 has deep traction shrinking as large systems grow in max RAM and max SSD, and end-user network speeds get higher? Yes. Is it going to be approximately zero any time soon? Probably not.
Does this have anything to do with the space the Apple M-series operates in? Definitely not.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Bull poo-poo.

First, there is nothing there indicating that Intel is rolling SMT4 out to its whole CPU product line.

Second, execution bubbles are coupled to the workloads far more than to the core designs. Not everything is about racking up the highest scores on Geekbench or mainstream tech-porn benchmarks.

Third, Alder Lake's thread management assists in allocating work to the big cores first, then to the small cores, and finally to SMT (hyperthreads) on the big cores. For mainstream CPU products, Intel is already rolling out a solution that throws wide multithreaded workloads at non-SMT cores (Intel's 'E' cores) rather than at an SMT-first strategy. So the notion that they have some grossly defective core designs here is rather weak.

The real central issue here is what you do when you run out of physical cores and have far more workload requests than cores available. There are also upsides in not trying to solve every problem for everybody with a single CPU core design.

If there is a much higher amount of non-SIMD, yet concurrent, work to do, then adding a limited number of additional pipelines is better. The length of the bubbles guides whether those should be virtual pipelines or real discrete pipelines.
Putting four cores' worth of pipelines inside one core would be wrong, but that is a misapplication of SMT. SMT is primarily about using underutilized resources, not about dragging more threads into a single core; that was a different concept that hit dead ends a long time ago.


Is the solution space where SMT4 has deep traction shrinking as large systems grow in max RAM and max SSD, and end-user network speeds get higher? Yes. Is it going to be approximately zero any time soon? Probably not.
Does this have anything to do with the space the Apple M-series operates in? Definitely not.

Nothing you wrote has any technical basis. The words don't even make sense. What does "execution bubbles are coupled to the workloads far more than to the core designs" mean? If you are saying that it's not Intel's fault that there are execution bubbles, that is not true. Execution bubbles mean that they can't parallelize an incoming instruction stream enough to use all the execution units in a core. That means they either have too many execution units, or the instruction stream cannot be parallelized well. The latter is certainly the case, due to the horrible nature of x86 instruction decoding.
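
To spell out why decode is the bottleneck: x86 instructions are anywhere from 1 to 15 bytes long, so you don't know where instruction N+1 starts until you've at least partially decoded instruction N, whereas fixed 4-byte instructions can all be located in parallel. A toy C sketch of that dependency (the length function is invented, not a real x86 decoder):

```
#include <stdio.h>

/* Toy illustration of why variable-length decode serializes.
   The "lengths" here are invented; a real x86 decoder must parse
   prefixes, opcode, ModRM, SIB, etc. to learn each length. */
static int insn_length(const unsigned char *bytes) {
    return 1 + (bytes[0] % 15);  /* fake: anywhere from 1 to 15 bytes */
}

int main(void) {
    unsigned char code[16] = { 3, 7, 1, 12, 5, 2, 9, 4,
                               6, 1, 8, 2, 5, 3, 7, 1 };

    /* Variable length (x86-style): each boundary depends on the
       previous decode, so finding them is inherently sequential. */
    printf("variable-length boundaries:");
    for (int off = 0; off < 16; off += insn_length(&code[off]))
        printf(" %d", off);
    printf("\n");

    /* Fixed length (ARM-style): every boundary is known up front,
       so N decoders can each grab offset 4*i independently. */
    printf("fixed-length boundaries:   ");
    for (int i = 0; i < 4; i++)
        printf(" %d", 4 * i);    /* trivially parallel */
    printf("\n");
    return 0;
}
```

That serial chain is why wide x86 decoders are so expensive (length pre-decoders, boundary-marker bits in the instruction cache, and so on), while a fixed-length design can simply bolt on more decoders.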

Your first point is not relevant to the discussion, at all.
 