
bobcomer

macrumors 601
May 18, 2015
4,949
3,699
In this kind of situation, the overheads are much the same regardless of how many cores you have. So, divide those numbers all by 8? You're still context switching and quite possibly moving threads and processes from core to core, trying to maintain cache coherency, etc.
True, it doesn't sound like much of a difference, but divide those 8 cores by what is actively running and not in some kind of wait state. If that number is closer to 8, an 8-core processor is going to handle it better than a 2-core processor; if it's closer to 2, then the faster 2-core processor is going to be a lot better. I'll take 8, though.
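A minimal sketch of the churn being described (assuming the third-party psutil package; the one-second window is arbitrary):

```python
# Count system-wide context switches over a one-second window,
# to show how busy the scheduler is even on a lightly loaded desktop.
import time
import psutil

before = psutil.cpu_stats().ctx_switches
time.sleep(1.0)
after = psutil.cpu_stats().ctx_switches

print(f"{after - before} context switches in the last second")
```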
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
The difference between a single job and general usage is why. I have 2 email programs, two browsers with multiple tabs open (Edge and Chrome, about 10 tabs each), a PowerShell command line, 6 terminal sessions to our midrange machine, and 2 Excel spreadsheets, and it's a Friday, as light as it gets. A single-core machine won't be able to do it as well. 346 processes, 5500 threads. 8 cores. (Ryzen 7)


Only a few threads on your machine are actually running at any given time. If you tried to run 5000 compute-heavy threads on your 16-thread machine, it would simply lock up.
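For illustration, a rough sketch (again assuming the third-party psutil package; the 5% threshold and one-second window are arbitrary) that counts how many processes are actually burning CPU versus merely existing:

```python
# Compare total processes/threads against those actually using CPU.
import time
import psutil

procs = list(psutil.process_iter())
for p in procs:
    try:
        p.cpu_percent()              # prime the counter; first call returns 0.0
    except psutil.Error:
        pass

time.sleep(1.0)                      # measurement window

total_threads = 0
busy = 0
for p in procs:
    try:
        total_threads += p.num_threads()
        if p.cpu_percent() > 5.0:    # arbitrary "actively running" threshold
            busy += 1
    except psutil.Error:
        pass                         # process exited during the window

print(f"{len(procs)} processes, {total_threads} threads, "
      f"{busy} busy over the last second")
```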
 
  • Like
Reactions: Basic75

Romain_H

macrumors 6502a
Sep 20, 2021
520
438
I have 2 email programs, two browsers with multiple tabs open (Edge and Chrome, about 10 tabs each), a PowerShell command line, 6 terminal sessions to our midrange machine, and 2 Excel spreadsheets, and it's a Friday, as light as it gets. A single-core machine won't be able to do it as well. 346 processes, 5500 threads. 8 cores. (Ryzen 7)
That's where you are wrong. You couldn't tell the difference, given a single core clocked sufficiently high (which was the premise of this discussion).
 
  • Like
Reactions: Chuckeee

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
Only a few threads on your machine are actually running at any given time. If you tried to run 5000 compute-heavy threads on your 16-thread machine, it would simply lock up.
Yes, that's definitely true. With the load I described, I couldn't have had more than 4 active threads, and probably only a couple.
 

sack_peak

Suspended
Original poster
Sep 3, 2023
1,020
959
I wonder if one day Apple will sell their chips to third parties at a profit, without any design input from those parties, to run competitors' OSes or to be placed into devices or form factors that Apple has not branched out into.

As many here have pointed out, Apple's chips are designed for their needs only. No one else's.
 

MRMSFC

macrumors 6502
Jul 6, 2023
371
381
I wonder if one day Apple will sell their chips to third parties at a profit, without any design input from those parties, to run competitors' OSes or to be placed into devices or form factors that Apple has not branched out into.

As many here have pointed out, Apple's chips are designed for their needs only. No one else's.
I would put my money on pigs flying first.

Modern Apple's "founding myth" is the idea of providing the complete package: computer, operating system, retail, and service, all with quality. (Modern Apple being after Steve Jobs returned to the company.)

And, if I'm honest, the niches that exist are already served well enough by third parties, and significant development is already going into filling any that remain.

As for Apple's own niche, they have carved out a consumer/prosumer/professional space, and their product efforts are geared toward it. Selling their processors would only result in machines competing entirely within that same niche. (Shades of the Mac clone era.)

The markets where Apple doesn't compete or has no interest in competing (scientific and server, for example) wouldn't, I believe, be well served by their processors, and the competition is in a much better position to fill those roles than Apple.

For example, Nvidia already has extensive experience building ARM CPUs, is the market and performance leader in graphics, is so ingrained into AI that the two are practically synonymous, and has actively developed heterogeneous CPUs for those niches.

Amazon has also developed its own Graviton CPUs for its server needs.
 

FlyingTexan

macrumors 6502a
Jul 13, 2015
941
783
People who think Intel and AMD don't make comparable chips are not following along. If you mean ARM specifically, it's because they don't want to make ARM chips. Intel/AMD chips are still more diverse in what they can do. The real question should be why Apple doesn't push into the server market.
 
  • Like
Reactions: George Dawes

leman

macrumors Core
Oct 14, 2008
19,521
19,674
Separate CPU cores. Separate SSDs. Separate GPUs. And so on. When a task starts misbehaving or otherwise consumes more resources than expected, it doesn't consume all the resources in the system. Just the ones that have been allocated to it.

A CPU core is a convenient unit of resource allocation. A scheduler can just let a task use 100% of a core for extended periods of time without any special permissions, because many tasks have a legitimate need for that. If a task misbehaves, it doesn't matter much, as other cores remain for other purposes. An equally fast single-core system would be more fragile, as tasks that need CPU time for legitimate reasons would end up competing against ones that misbehave.

You are talking about threads, not cores. A core is not a unit of resource allocation - it’s the resource provider. A misbehaving thread can be shut down by the OS at any time.
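As a trivial sketch of that last point (the PID is hypothetical, and it's shown at process granularity, since that's what the standard signal APIs expose):

```python
# Signal a hypothetical runaway process; standard library only.
import os
import signal

os.kill(12345, signal.SIGTERM)    # ask the process to exit cleanly
# os.kill(12345, signal.SIGKILL)  # or force it, if SIGTERM is ignored
```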
 
  • Like
Reactions: wegster

leman

macrumors Core
Oct 14, 2008
19,521
19,674
I wonder if one day Apple will sell their chips to third parties at a profit, without any design input from those parties, to run competitors' OSes or to be placed into devices or form factors that Apple has not branched out into.

I don't see any benefit to Apple here. They can run a much better business selling the final product, and they are already supply-constrained just making chips for themselves.

That said, I’d love Apple partnering with Nintendo or Sony. Apple Silicon would make an exceptional gaming console.
 

sack_peak

Suspended
Original poster
Sep 3, 2023
1,020
959
That said, I’d love Apple partnering with Nintendo or Sony. Apple Silicon would make an exceptional gaming console.
I can see Apple & Google eroding Nintendo, Sony and Microsoft's gaming revenue by at most 20% by Oct 2028.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
Wrong again. It's easy actually to see that
Okay, let me try again: you have no authority or knowledge about me, and it is a forum rule that anything stated as fact needs some kind of proof, or at least third-party backup. Since that is not possible, you get ignored. I know, big deal to you, but at least I don't have to read such crud. Tah.
 
  • Haha
Reactions: Romain_H

thebart

macrumors 6502a
Feb 19, 2023
515
517
I admit I’ve never understood why single-core benchmark scores are given so much consideration.

Gaming, for one thing. Games vary in their ability to take advantage of multiple cores, but for the most part a higher single-core benchmark translates to better gaming performance.

From an end user's point of view, single-core performance matters a lot because even though the OS is always doing multiple things at once, the user is mainly interested in the foreground task. When they open an app, how fast does it start? When they export a large file, how long does it take? These tasks can be parallelized to some extent, but not perfectly; some are more or less linear. So single-core performance matters to the end user's perception of how fast their system is.
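That "to some extent, but not perfectly" is Amdahl's law. A quick sketch of why raw single-core speed still dominates when the parallel fraction p is modest (the example fractions are arbitrary):

```python
# Amdahl's law: with parallel fraction p, n cores give
#   S(n) = 1 / ((1 - p) + p / n)
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: 8 cores -> {speedup(p, 8):.2f}x speedup")

# A core that is simply 2x faster speeds up everything, serial part
# included, so it beats 8 cores whenever p is below about 0.57.
```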
 

George Dawes

Suspended
Jul 17, 2014
2,980
4,332
From what I've read, Apple might have messed up with Apple silicon, as the GPU figures ain't too impressive, certainly not in 3D rendering.

Maybe instead of concentrating on efficiency cores they should’ve prioritised performance ones
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
From what I've read, Apple might have messed up with Apple silicon, as the GPU figures ain't too impressive, certainly not in 3D rendering.

I don't know what you've read, but Blender performance is fairly good for a GPU that lacks hardware ray tracing. The next generation will likely improve it.

Maybe instead of concentrating on efficiency cores they should’ve prioritised performance ones

What does that have to do with rendering?
 

APCX

Suspended
Sep 19, 2023
262
337
From what I've read, Apple might have messed up with Apple silicon, as the GPU figures ain't too impressive, certainly not in 3D rendering.

Maybe instead of concentrating on efficiency cores they should’ve prioritised performance ones
Hmmm, you make some interesting points. I'll pass them on to the team. It's possible Johny Srouji and the team totally missed the idea of faster P cores as a way of making up for a lack of RT cores on their GPU.

Thanks.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
For anyone unfamiliar or who are familiar but like nostalgia…

It was one of the "bumpier roads" in the history of Apple transitioning Macs to a different hardware architecture, but it transitioned its Mac platform from the Motorola 68000[#] CPU to the Apple-IBM PowerPC RISC CPU, which had a VERY different ISA from the Motorola 68000[#] (and from the Intel x86 ISA, for that matter).

The 68000 wasn't a VERY different ISA. The 68000 was relatively clean for the era in which it was initially designed.

" ...
The design implements a 32-bit instruction set, with 32-bit registers and a 16-bit internal data bus.[4] The address bus is 24 bits and does not use memory segmentation, which made it easier to program for. Internally, it uses a 16-bit data arithmetic logic unit (ALU) and two more 16-bit ALUs used mostly for addresses,[4] and has a 16-bit external data bus.[5] For this reason, Motorola termed it a 16/32-bit processor.

As one of the first widely available processors with a 32-bit instruction set, large unsegmented address space, and relatively high speed for the era, the 68k was a popular design through the 1980s.
..."


The 68000 was 32-bit from the start. It didn't have any hocus-pocus memory segmentation at all. The narrow 16-bit parts were more on the 'inside' than the outside. It always had a decent number of registers (eight 32-bit data registers plus eight address registers), and there were only 56 instructions.

It was big endian, just like PPC (the default of PPC; PPC allows flipping). Sun and Apollo (later HP) workstations used it from the start to run Unix.

The design of the 68000 was done in the late '70s, around the same time IBM was doing ROMP. It was basically invented before RISC was invented, but it was largely on a similar track. Kind of hard to be exactly 'RISC' before RISC is even invented.

The 68000 was going to run into issues once the workstation market diverged from the more price-constrained systems. Same Wikipedia page:

"... By the start of 1981, the 68k was making multiple design wins on the high end, and Gunter began to approach Apple to win their business. At that time, the 68k sold for about $125 in quantity. In meetings with Steve Jobs, Jobs talked about using the 68k in the Apple Lisa, but stated "the real future is in this product that I'm personally doing. If you want this business, you got to commit that you'll sell it for $15."[27] Motorola countered by offering to sell it at $55 at first, then step down to $35, and so on. Jobs agreed, and the Macintosh moved from the 6809 to the 68k. The average price eventually reached $14.76.[
..."

[ Always an eye-roll whenever the 'Tim Cook is the bean counter who is ruining Apple' line is contrasted with the Steve 'spend whatever it takes' Jobs fantasy on these forums. ]




" ... Into this came the early 1980's introduction of the RISC concept. At first, there was an intense debate within the industry whether the concept would actually improve performance, or if its longer machine language programs would actually slow the execution through additional memory accesses. All such debate was ended by the mid-1980s when the first RISC-based workstations emerged; the latest Sun-3/80 running on a 20 MHz Motorola 68030 delivered about 3 MIPS, whereas the first SPARC-based Sun-4/260 with a 16 MHz SPARC delivered 10 MIPS. Hewlett-Packard, DEC and other large vendors all began moving to RISC platforms ..."

And not shooting for $15/processor prices.

PowerPC stripped some instructions out of POWER (a reduced 'RISC'?). PowerPC did bring more general-purpose registers (32, versus the 68000's 16), but it also has more instructions than the 68000 (>100 versus ~56... so which one is the 'reduced' one?).

It is 'different', but the 68000 never was a hyper-'CISC' instruction set. A compiler for the 68000 that wasn't trying to maximize code-footprint compression could emit 'RISCy' code that leaned on register load/store operations to do most of the work.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Ok, I will try to explain. Let's talk about server CPUs, and let's ignore the questions of memory/caches etc. for the moment, let's just talk about the CPU performance. What is your goal when designing an excellent server CPU? You have two constraints: ... .....I hope you can also see why AMD is not using the "big" Zen4 core for their server products — because it would be pointless. They simply don't have the thermal headroom to run these cores at the higher frequency. They know that they will run the cores at under 3Ghz anyway, and the compact Zen4c allows them to pack almost twice as many cores in the same die area. Not to mention that Zen4 would consume more power for the same performance.


Errrrr. No. AMD is using their 'big' Zen 4 cores for their server products. The bulk of the Epyc lineup is the regular "Genoa" class of packages. AMD is supplementing its server CPU lineup with "Bergamo" (with Zen 4c cores). Both of those are in the Epyc 9004 series. [ various 9_x4 for Genoa and 97x4 for Bergamo ]




The issue is that the CPU core chiplets used for the Genoa series are the same physical design used for Ryzen. The variation is in the I/O (and memory) die and the number of chiplets used (and therefore the size of the CPU package). The 'server' package is just physically much bigger because it has more 'stuff'.

There is little indication that AMD is going to fork the server chiplets from the desktop chiplets any time soon over the Zen 5 or Zen 6 evolution.

Bergamo is far more aimed at 'cloud' than at 'server'. Those two categories overlap, but they are pragmatically not equivalent. 'Cloud' pragmatically implies a large-scale, for-pay service, as opposed to a machine 'down the hallway' or 'in the next-door building'; it is diverse user aggregation (usually from multiple 'sources'/groups). A server is just anything that is not a client system. (The Mac Pro down the hall could be a file server.)


In the matrix, the "Cloud and HPC" rows are not necessarily a uniform set of CPU chiplets being used (8 x n vs 12-16 x n; 12 being a binned-down 16).
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
No, it's not, but it's huge in manufacturing.

Anyway, I'll take more useful or less useful every single time.

It is not more useful to the larger group. You are conflating the notion of 'usefulness' with 'convenience' (one hammer applied to every kind of nail or screw).


Probably wishful thinking. Removing stuff that relatively almost no one uses and applying those design and die-area resources to stuff that does get used most likely will help. There is a decent chance it contributes to fewer security defects as well (gratuitously complicated subsystems are substantive contributors to bugs).
Got any proof it will help PC-level hardware? I didn't think so.

You are kidding? Why is Intel using "E cores" in their PC-level hardware? A very large contributing factor is that their P cores are too big. Cores that are too big are a problem. Pragmatically, all of these PC-level dies are area-constrained. If your CPU cores are too big, then the area-constrained die will end up with fewer CPU cores, less of something else (GPU), or BOTH. Intel is in the BOTH category. That is a major reason you got E cores.

Smaller P cores are allowing AMD to be far more nimble than Intel.



100 or 200 core systems?????????? That's a whole different thing than I'm talking about, that's server class hardware.

But it isn't. The same P cores in Intel's 12th and 13th generations are in the server products. AMD physically uses the exact same chiplets for its mainstream Ryzen and Epyc lineups. That makes it even clearer that the cores are being reused across both server and mainstream products.

The fact they are trying to hand-wave around is that these cores are expensive to design, so they end up being used in multiple markets. They are nowhere near as decoupled as you are trying to assert. The same goes for Apple: the A-series CPU cores are not decoupled from the M-series ones. The same basic core-cluster building block is used in both, which leads to coupling between the different SoC packages.
 
  • Like
Reactions: wegster

JouniS

macrumors 6502a
Nov 22, 2020
638
399
You are talking about threads, not cores. A core is not a unit of resource allocation - it’s the resource provider. A misbehaving thread can be shut down by the OS at any time.
A core is a unit of resource allocation when you are dealing with virtual machines. The OS cannot shut down misbehaving threads, because it can't tell the difference between them and threads that use resources for a legitimate purpose. And it's often the OS itself that misbehaves. We all have stories about Windows / macOS / Linux feeling sluggish: you discover a process with a weird name consuming a lot of CPU time and/or memory, search for more information, and find that it's an OS service, and that the issue is common enough that the internet is full of pages describing it.
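A minimal sketch of cores as allocation units (Linux-only; the core IDs are arbitrary), fencing a process onto a subset of cores the way a VM host pins vCPUs:

```python
# Restrict the current process to cores 0-3, leaving the rest
# for other work; Linux-only standard-library call.
import os

os.sched_setaffinity(0, {0, 1, 2, 3})   # pid 0 = the calling process
print("now restricted to cores:", sorted(os.sched_getaffinity(0)))
```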
 
  • Like
Reactions: bobcomer