
deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Even if Thunderbolt had enough bandwidth, for Apple to go back to that kind of design is just asking for ridicule. For Thunderbolt 4 at 32 GB/s vs x16 PCIe 5 at 63 GB/s also is going to make a pretty big difference.

You have conflated your units.

Thunderbolt 4 is about 32 Gb/s (not 'bytes', but 'bits', so that is 4 GB/s). x16 PCI-e v5 is 512 Gb/s (or about 63-64 GB/s). There is an order of magnitude difference between those two bandwidths. That is more of a "huge" difference than a 'big' one. Handwaving that "Thunderbolt solves the problem" is exactly one of the 'traps' Apple explicitly said they fell into with the MP 2013 (6,1). There is an 'economy of volume' problem with overly relying on Thunderbolt: for folks who have two or three audio cards, a video capture card, or any collection of more than two devices, the pile of enclosures gets awkward to physically provision in limited space.

Highly unlikely that Apple is going to touch PCI-e v5, though. But even falling back to v4's 32 GB/s ... still basically an order of magnitude difference.
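For anyone who wants to sanity-check the arithmetic, here is a quick back-of-the-envelope sketch in Swift (raw spec link rates only; real-world throughput is lower after encoding/protocol overhead, and the 32 Gb/s TB4 figure is the one used above, which gets corrected to 40 Gb/s later in the thread):

```swift
// Raw link rates, spec numbers only (no encoding/protocol overhead).
let bitsPerByte = 8.0

let tb4Gbps = 32.0  // figure used above; the actual TB4 spec rate is 40 Gb/s
print("TB4: \(tb4Gbps / bitsPerByte) GB/s")  // 4 GB/s

// Approximate per-lane rates in Gb/s for each PCI-e generation.
let pcieLaneGbps = [("v3", 8.0), ("v4", 16.0), ("v5", 32.0)]
for (gen, lane) in pcieLaneGbps {
    print("x16 PCI-e \(gen): \(lane * 16.0 / bitsPerByte) GB/s")  // 16 / 32 / 64 GB/s
}

// The die-edge point made further down: four v4 lanes carry the same
// bandwidth as eight v3 lanes.
print("x4 v4 = \(4.0 * 16.0 / bitsPerByte) GB/s, x8 v3 = \(8.0 * 8.0 / bitsPerByte) GB/s")  // 8 GB/s each
```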


I think that Apple is going to take a hybrid approach. Unified memory for the first 64-128 GPU cores and then x16 PCIe 5 for any add-ons. PCIe isn't going to give 200-400 GB/s, but more GPU cores at 63 GB/s is still better than nothing.


If this is an "M1 <insert adjectives here>" SoC, the likelihood of PCI-e v5 making an appearance is more wishful thinking than likely outcome. Apple has done some extremely limited PCI-e v4 with the M1 SoCs so far. To jump to v5 with the same generation number attached would be 'odd'.

The other problem is that these discrete GPUs would require new drivers and more GPU customizations, because the unified memory assumptions buried in Apple-GPU-optimized code would be gone. Apple just herded folks into rewriting their GPU optimizations for the unified memory model. How happy are developers going to be to re-optimize for yet another one? Probably not very pleased.

Better chance that Apple is betting the whole farm on unified memory GPU cores. 128 GPU cores will cover a large amount of the GPU performance space, and M2, M3, M4 generation GPU cores will get improvements so that space stays at least as large, if not larger. Multiple-GPU attacks on AI/ML workloads will be pointed at bigger NPUs; video/image processing of standard camera codecs at bigger fixed-function image compute logic. High-end, interactive 3D doesn't really scale that well across multiple GPUs anyway.

Squeezing better GPUs into VR/AR headsets is also probably going to be higher on the priority list than chasing a market where Nvidia is far more deeply entrenched behind a wide, defensible moat. Their focus is likely going to be on more GPU performance in smaller volumes, not bigger ones.


Apple is much more clever than I am and I trust that they will solve this in a way that lets their customers with crazy GPU requirements get the most out of the Pro. Just for reference, the current Mac Pro has about 140 GB/s memory bandwidth and still uses x16 PCIe 3 @ 32 GB/s for GPUs.

Apple will probably have slots for stuff other than GPUs. GPU add-in cards are an overly narrow justification for provisioning slots; without drivers they aren't a complete solution, and there is zero movement on discrete GPU drivers in macOS on M-series.
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
You have conflated your units.

Thunderbolt 4 is about 32 Gb/s (not 'bytes', but 'bits', so that is 4 GB/s). x16 PCI-e v5 is 512 Gb/s (or about 63-64 GB/s). There is an order of magnitude difference between those two bandwidths.
Good catch. Thanks. Huge difference is right.

If this is an "M1 <insert adjectives here>" SoC, the likelihood of PCI-e v5 making an appearance is more wishful thinking than likely outcome. Apple has done some extremely limited PCI-e v4 with the M1 SoCs so far. To jump to v5 with the same generation number attached would be 'odd'.
The PCIe V4 on the M1 hasn't really been used for much. Getting the kind of performance needed out of the desktop CPUs might take PCIe V5, and since the competition (Intel Alder Lake) already supports it, I don't think it is too much of a stretch. We'll see.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
How much of a hit would we be looking at for off-package RAM and external MPX GPUs? That, and how many PCIe lanes does AS support now?

M1 supports four x1 PCI-e v4 lanes. Even if you cobbled those into an x4 PCI-e v4 bundle, it is way, way, way off from the >= 64 PCI-e v4 lanes that other contemporary workstation SoCs provision.

There is a zero-sum game between going "super duper" wide on RAM bandwidth connectivity and still having room to also go wide on other off-chip I/O. These I/O interfaces don't "shrink" as well as computation logic or eRAM does on the die, and their interfaces to the outside world ('pin'/'pad' external linkage points) even less so.

The push to x1 v4 lanes is about having fewer lanes coming out of the die, not Apple being on the 'war path' to higher bandwidth rates. Four v4 lanes have the same bandwidth as eight v3 lanes, so if you are trying to chop down on die edge space devoted to PCI-e, you move up a generation. The driver isn't overall bisection bandwidth increases.

The other issue is that much bigger PCI-e controller complexes aren't going to buy better "perf/watt", if that is Apple's #1 design criterion. If Apple goes to a chiplet/tile setup to get to higher-than-"Max" core counts, the die I/O edge space is far more likely to be assigned to an "inter-die" communication subsystem than to PCI-e. "Perf/watt" pressure will likely mean that is a very short-distance, on-package communication path (physically drawing the dies close together).


Everything you guys talk about adding internally could be done via Thunderbolt, so really there is no reason to keep the existing MP form factor and they could go back to the "Trashcan" design.

No. Largely because there is more to the current Mac Pro than just GPU cards. Even if Apple unwinds the discrete GPU card options, there are still hundreds of cards that have worked in previous Mac Pros, and there is still a substantial number of cards that work over Thunderbolt with macOS on M-series:

i. Audio and video production (capture and generation).

ii. Storage (another Apple-admitted 'flaw' in the MP 2013: one and only one internal drive isn't going to cut it for a large swath of the Mac Pro workload space).

iii. >10GbE networking, if trying to interface to SAN storage networks.

iv. Legacy I/O.


For example, go to an external TB enclosure like


and look at the PCI-e compatible card list. More than 30 cards there in the "M1 works with" column say "Yes".
[There are issues with moving over to the new driver model that retires kernel extensions, which will unwind over time if there is a market. For example, the external direct-attached storage drive cards.]


Unlike GPU cards, Apple has not blocked macOS-on-M-series drivers from being written for these groups of products. So it would make extremely little sense to block them from a Mac Pro. If Apple is going to support Thunderbolt PCI-e expansion boxes on the M-series MBPs, then there will be viable products to put inside a Mac Pro that skips Thunderbolt, as long as the slots are provisioned. eGPUs were not "everything" in the Thunderbolt market.


Are they going to have more than 4 slots? Probably not. Could it be as low as 2? Yes. But zero... that would be a 'fail' as a "Mac Pro". They could release a "Mac NeXT" or just a plain "Mac". However, as a "Mac Pro" it would be a 'fail' by the guidelines that Apple laid down in 2017. Apple could get up on stage and be hypocritical, but they would likely be BBQ'ed by a large segment of that user group.



If Apple chucks GPU cards, the AUX power provisioning would likely go away also (which also gets them to a better system "perf/watt" envelope).

P.S. If Apple had a clean separation between "compute" Metal and GUI graphics Metal (and had not put OpenCL on 'death row'), then perhaps they'd have a path for computational GPGPU cards. It is a matter of having a DriverKit 'model' for something that can farm out "compute" workloads too.

Similarly, Apple has ignored CXL so far. PCI-e v5 and v6 without CXL doesn't make much sense.

Maybe when we get to the M3/M4 generation Apple will come back to opening the door for a Mac Pro class system, but these first two generations are pretty doubtful. There are also lots of software abstractions (and driver API provisioning) missing too.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
Thunderbolt 4 is about 32 Gb/s (not 'bytes', but 'bits', so that is 4 GB/s).
Minor nitpick: it's actually about 40 Gb/s. One TB3/TB4 lane is 20 Gb/s, and there are two lanes in the connector.

Highly unlikely that Apple is going to touch PCI-e v5, though. But even falling back to v4's 32 GB/s ... still basically an order of magnitude difference.
PCIe gen 5 might not happen as soon as M2, but it will happen sooner or later.

After all, even the plain M1 already supports gen 4. Not sure what you meant by "extremely limited", either; it's there and is being used. One example: the M1 Mini's 10 gig ethernet option uses an Aquantia 10GbE NIC connected to the M1 by x1 PCIe gen 4.

With M1/M2/M3 etc in particular, it's important to adopt high speed I/O standards quickly. Everything on these chips that's only useful in a Mac is dead weight on iPad Pro. Moving to the latest and greatest PCIe spec helps Apple minimize overhead for iPads by reducing the number of SERDES required to support Mac-level I/O.
 

Andropov

macrumors 6502a
May 3, 2012
746
990
Spain
P.S. If Apple had a clean separation between "compute" Metal and GUI graphics Metal (and had not put OpenCL on 'death row'), then perhaps they'd have a path for computational GPGPU cards. It is a matter of having a DriverKit 'model' for something that can farm out "compute" workloads too.
Huh? They can already do that without much change. Make fragment and vertex functions available for Apple GPUs only (so the assumed UMA and TBDR still hold) and keep kernels available for external GPUs so they can be used for compute. Some other things like samplers may have to go too, but it can be done.

They wouldn't have UMA, but that's less of a problem for compute kernels. I'd say most of the developer work optimizing for Apple Silicon GPUs revolves around TBDR more than UMA, and TBDR isn't a concern for compute workloads. You would have to account for the extra latency, sync issues, and blitting to the external GPU's VRAM if you chose to support eGPUs in your app, but that's it. You already have to do that for macOS apps that run on Intel Macs, and it's more of a 'specialization' of the code you already have than 'I have to keep a separate rendering pipeline for Intel Macs because the GPU rendering uses a totally different architecture'.
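To make that concrete, here is a minimal sketch of such a compute-only Metal path (illustrative only: the kernel and buffer names are made up, and the explicit blit into a .private buffer stands in for the VRAM copy a discrete GPU would make mandatory; on UMA hardware that copy is simply redundant):

```swift
import Metal

// A tiny compute kernel, compiled from source for brevity.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void scale(device float *data [[buffer(0)]],
                  constant float &factor [[buffer(1)]],
                  uint id [[thread_position_in_grid]]) {
    data[id] *= factor;
}
"""

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "scale")!)

var input = [Float](repeating: 2.0, count: 1024)
let byteCount = input.count * MemoryLayout<Float>.stride

// Staging buffer the CPU can write, plus a .private buffer standing in
// for discrete-GPU VRAM that the CPU cannot touch directly.
let staging = device.makeBuffer(bytes: &input, length: byteCount, options: .storageModeShared)!
let gpuBuffer = device.makeBuffer(length: byteCount, options: .storageModePrivate)!

let cmd = queue.makeCommandBuffer()!

// Explicit upload: on a discrete GPU this is the PCIe transfer that
// would have to be accounted for.
let blit = cmd.makeBlitCommandEncoder()!
blit.copy(from: staging, sourceOffset: 0, to: gpuBuffer, destinationOffset: 0, size: byteCount)
blit.endEncoding()

var factor: Float = 3.0
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(gpuBuffer, offset: 0, index: 0)
enc.setBytes(&factor, length: MemoryLayout<Float>.stride, index: 1)
enc.dispatchThreads(MTLSize(width: input.count, height: 1, depth: 1),
                    threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
enc.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()
```

No fragment or vertex work and no render pipeline: nothing here assumes UMA or TBDR, which is why this subset could port to an external GPU with only the transfer/sync bookkeeping changing.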
 
  • Like
Reactions: singhs.apps

tdar

macrumors 68020
Jun 23, 2003
2,102
2,522
Johns Creek Ga.
I understand why people expect that a Mac Pro would be a Mac Pro, but I still expect that you are going to be disappointed. They have lost a tremendous amount of their real pro business. They understand that they are not all things to all people. It’s not hard to imagine how you could remove expansion slots and still support audio and video input and output: not with today’s expansion cards, but with a new kind. I still believe that they were on the path they wanted to be on with the trash can, but they were ahead of the times. It was too early to go in that direction. It’s not too early now.

I also said that there would be Thunderbolt 4 ports. I’m making the same mistake that I wonder if others are making. Maybe they have a new Thunderbolt? Thunderbolt 5? Maybe they have some other system that has an enormous amount of bandwidth. One should never count Apple out of going in a proprietary direction; on the other hand, there would be no Thunderbolt today if it wasn’t for Apple. So there are ways to build a workstation that works for the kind of people that Apple wants to sell to, and they don’t require you to build a traditional PC tower.

I believe that Apple is not looking to take over the high-end workstation business. They’ve already lost that opportunity. They aren’t going to beat HP and Dell and Lenovo; that’s not going to happen. So why not take this opportunity to reimagine what a high-end Mac could be like? I know that’s not what people want to hear, but Apple at its founding was not about giving people what they wanted to hear. What was that famous line that Steve Jobs said? “People don’t know what they want until we show them.”

Edit: here’s a system to add PCI cards, if you really need them in such a system:
 
Last edited:

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
I understand why people expect that a Mac Pro would be a Mac Pro, but I still expect that you are going to be disappointed. They have lost a tremendous amount of their real pro business. They understand that they are not all things to all people. It’s not hard to imagine how you could remove expansion slots and still support audio and video input and output: not with today’s expansion cards, but with a new kind. I still believe that they were on the path they wanted to be on with the trash can, but they were ahead of the times. It was too early to go in that direction. It’s not too early now.

I also said that there would be Thunderbolt 4 ports. I’m making the same mistake that I wonder if others are making. Maybe they have a new Thunderbolt? Thunderbolt 5? Maybe they have some other system that has an enormous amount of bandwidth. One should never count Apple out of going in a proprietary direction; on the other hand, there would be no Thunderbolt today if it wasn’t for Apple. So there are ways to build a workstation that works for the kind of people that Apple wants to sell to, and they don’t require you to build a traditional PC tower.

I believe that Apple is not looking to take over the high-end workstation business. They’ve already lost that opportunity. They aren’t going to beat HP and Dell and Lenovo; that’s not going to happen. So why not take this opportunity to reimagine what a high-end Mac could be like? I know that’s not what people want to hear, but Apple at its founding was not about giving people what they wanted to hear. What was that famous line that Steve Jobs said? “People don’t know what they want until we show them.”
Yeah, I feel that the ONLY thing the next Mac Pro HAS to be is faster than the current Mac Pro. That, plus MAYBE one PCI slot for a selection of Apple I/O cards, is all that’s required to carry forward from the current system. GPU options, Afterburner options, perhaps even expandable RAM, and other things like this lean towards the “exclusion” side because of what Apple Silicon is. Most everything else about a future Mac Pro is on the table for inclusion/exclusion.
 

CASMAS

macrumors regular
Jan 9, 2022
108
24
So, there are 2 problems I see here. The first is that although most of AS' advantages in laptops are due to its efficiency, efficiency is much less relevant in desktop machines. Nvidia and AMD proved this with their fall 2020 releases. Their cards drew much more power on the upper-end, with Nvidia going as far as releasing a stock triple-slot card. I was floored; the last time they'd done that was with the Titan Z in 2014. The crucial difference here is that the Titan Z had 2 Titans in it, while the 3090 is only 1 (modern, renamed) Titan. Despite all the efficiency improvement from Kepler to Ampere, Nvidia still realized that they'd be better off cranking the power for their flagship card through the roof. Because people don't care about performance per watt on desktops.

The other problem is the release timing. It doesn't look like the AS Mac Pro will be announced before WWDC or launched before the fall. It's rumored to be coming with M1 graphics cores from late 2020… in the same few months that Nvidia and AMD replace the GPUs that they also launched in late 2020. The M1 lineup's GPU cores should compete well when scaled up to the numbers we'd see in a desktop, but that's if we focus on the 2020 cards… which will be replaced by the time said desktop comes out. This used to be AMD's problem when they'd release competent cards to compete with Nvidia half-way through Nvidia's release cycle. Intel's being criticized for releasing their 1st gen cards in May, only 5 months before Nvidia and AMD release their next-gen ones. I don't want to see Apple one-up them by releasing their 2020-competing machine after the others release their 2022 cards.
1. Efficiency still matters for desktops. Even servers and supercomputers are very sensitive about efficiency. The latest 12900K consumes as much as 300W even with efficiency cores.

You can use efficiency cores to increase multicore performance dramatically, just like Intel did with 12th gen; their 13th gen will get 24 cores for both laptop and desktop. Since the 10 cores of an M1 Pro/Max consume around 30W at max, 40 cores would be about 120W, which is very low. You see, efficiency still works. Without efficiency, neither Intel nor AMD could even develop their CPUs. This is how it works.
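As a sketch of that scaling argument (treating the ~30 W figure as an assumption and the scaling as perfectly linear, which is the optimistic case):

```swift
// Back-of-the-envelope core/power scaling from the numbers above
// (assumptions, not measurements).
let measuredWatts = 30.0          // ~30 W for 10 M1 Pro/Max cores at max
let measuredCores = 10.0
let wattsPerCore = measuredWatts / measuredCores   // ~3 W per core

let projected = wattsPerCore * 40.0                // linear scaling to 40 cores
print("Projected 40-core draw: \(projected) W")    // 120.0 W, vs ~300 W for a 12900K
```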

Also, what if the Mac Pro uses MULTIPLE M1 Max dies (the rumored "Quadro")? Then it's a different story.

2. The release date might be an issue, but for workstations and servers it's quite common to use 1-2 year old tech for stability. You shouldn't compare with the latest consumer parts; you should compare with workstation or server grade CPUs and GPUs.
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
1. Efficiency still matters for desktops. Even servers and supercomputers are very sensitive about efficiency. The latest 12900K consumes as much as 300W even with efficiency cores.
Efficiency does not matter that much for desktops. It matters for laptops, because cooling is difficult and power is scarce. It matters for servers, because cooling is expensive when you have a large number of densely packed servers. Desktops avoid all these issues.

Intel could have easily limited the 12900K to 150 W or 200 W with a minimal performance loss, but they chose not to. They believe the increased performance is worth the power consumption, at least for people who are buying an i9 processor. The i5 and i7 families make other trade-offs between performance and efficiency.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
Efficiency does not matter that much for desktops. It matters for laptops, because cooling is difficult and power is scarce. It matters for servers, because cooling is expensive when you have a large number of densely packed servers. Desktops avoid all these issues.

Intel could have easily limited the 12900K to 150 W or 200 W with a minimal performance loss, but they chose not to. They believe the increased performance is worth the power consumption, at least for people who are buying an i9 processor. The i5 and i7 families make other trade-offs between performance and efficiency.

It still matters (especially per-core), but yes, it does matter less. Not every desktop case can appropriately cool a 250W desktop chip, especially if paired with an equally power-hungry GPU. And having good per-core efficiency means you can fit more cores running closer to peak frequency. This is why Intel E-cores exist on a desktop part: the P-cores are so big and power hungry that they can't compete with AMD in multicore workloads. You simply can't fit enough Golden Cove cores. The ultimate example of this was Rocket Lake, which had to decrease its max core count relative to the previous generation because the cores were too large and power hungry. Also, there are issues like fan noise and so forth...
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Intel could have easily limited the 12900K to 150 W or 200 W with a minimal performance loss, but they chose not to. They believe the increased performance is worth the power consumption, at least for people who are buying an i9 processor. The i5 and i7 families make other trade-offs between performance and efficiency.

I think in Intel's case they simply wanted the bragging rights for having the fastest processor. So they overclocked the hell out of these models, specs or thermals be damned :)
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
It still matters (especially per-core), but yes, it does matter less. Not every desktop case can appropriately cool a 250W desktop chip, especially if paired with an equally power-hungry GPU.

Being a SFF (Small Form Factor, chassis of 20L or less, using a bounding box measurement) aficionado, I know this all too well...

But then you got dudes like this...!

I think in Intel's case they simply wanted the bragging rights for having the fastest processor. So they overclocked the hell out of these models, specs or thermals be damned :)

150W+ CPUs, 300W OC'ed to near death...? Lovelace rumored to be 500W(+) reference, imagine what a Kingpin might draw, first 1kW GPU...?!? ;^p
 
  • Like
Reactions: crazy dave

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
Being a SFF (Small Form Factor, chassis of 20L or less, using a bounding box measurement) aficionado, I know this all too well...

But then you got dudes like this...!
Note that he only benched it open-air and never finished the build.

It's all about heat dissipation. The thermal capacity of the heatsinks/enclosure is reached rather quickly, and then you need to get rid of the heat at the same pace it is generated. Regardless of whether you use water as a carrier or not, that always ends up at a fan/radiator; the heat gets dumped into the room at a certain pace. The required pace translates to an air velocity/pressure at some combination of fan/radiator, and that air velocity/pressure translates into noise. The higher the air velocity/pressure, the higher the noise, basically. (As I'm sure you are aware.) The benefit of larger cabinets is basically that you can move the required volume of air slower.

From my personal experience trying to build quiet systems, provided you don't create airflow problems with, for instance, a bad cabinet design: you can cool 100 W silently, 300 W kinda quietish, and 500 W will be loud. Using myself as the yardstick, obviously. I'm not quite happy with my current system, and I'm already jumping through hoops for quiet cooling, so my next PC build will simply have to reduce total power draw. Unfortunately, self-built PC components are moving in the opposite direction, towards higher power draw, which inevitably leads to higher noise for the total system. Dagnabbit.

Ultimately, I don't want quiet computers, I want them to be silent. The response of the PC industry to slowing gains from lithography has so far been to increase power draw. I find that unfortunate, and the higher power draws get pushed, the harder it will be to backpedal to more ergonomically suitable levels. Current desktop PCs feel like dinosaurs: big and powerful, and an evolutionary dead end.
 
  • Like
Reactions: iPadified

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
Note that he only benched it open-air and never finished the build.

It's all about heat dissipation. The thermal capacity of the heatsinks/enclosure is reached rather quickly, and then you need to get rid of the heat at the same pace it is generated. Regardless of whether you use water as a carrier or not, that always ends up at a fan/radiator; the heat gets dumped into the room at a certain pace. The required pace translates to an air velocity/pressure at some combination of fan/radiator, and that air velocity/pressure translates into noise. The higher the air velocity/pressure, the higher the noise, basically. (As I'm sure you are aware.) The benefit of larger cabinets is basically that you can move the required volume of air slower.

From my personal experience trying to build quiet systems, provided you don't create airflow problems with, for instance, a bad cabinet design: you can cool 100 W silently, 300 W kinda quietish, and 500 W will be loud. Using myself as the yardstick, obviously. I'm not quite happy with my current system, and I'm already jumping through hoops for quiet cooling, so my next PC build will simply have to reduce total power draw. Unfortunately, self-built PC components are moving in the opposite direction, towards higher power draw, which inevitably leads to higher noise for the total system. Dagnabbit.

Ultimately, I don't want quiet computers, I want them to be silent. The response of the PC industry to slowing gains from lithography has so far been to increase power draw. I find that unfortunate, and the higher power draws get pushed, the harder it will be to backpedal to more ergonomically suitable levels. Current desktop PCs feel like dinosaurs: big and powerful, and an evolutionary dead end.
A water-cooled system with large (or many) radiators and Noctua fans can be pretty quiet and dissipate lots of heat. Though in a SFF chassis it could be a bit tight.
 

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
A water-cooled system with large (or many) radiators and Noctua fans can be pretty quiet and dissipate lots of heat. Though in a SFF chassis it could be a bit tight.
I’d say!
My current custom Gfx-card cooler takes four (!) PCI-e slots. (I don’t like water pump drone.) But I’ve decided not to follow that path anymore when it comes to my private PC. Don’t really need top shelf performance at home. Next system will be optimized for niceness instead. Which, incidentally, will also save a lot of money.
 

CASMAS

macrumors regular
Jan 9, 2022
108
24
Efficiency does not matter that much for desktops. It matters for laptops, because cooling is difficult and power is scarce. It matters for servers, because cooling is expensive when you have a large number of densely packed servers. Desktops avoid all these issues.

Intel could have easily limited the 12900K to 150 W or 200 W with a minimal performance loss, but they chose not to. They believe the increased performance is worth the power consumption, at least for people who are buying an i9 processor. The i5 and i7 families make other trade-offs between performance and efficiency.
Efficiency still matters, especially for multicore performance. And we are talking about the Mac Pro, which is a workstation, not just a desktop. Having many cores is more important than the single-core performance Intel is chasing. Also, Intel is using high power consumption to achieve high performance, and this trend is also happening with GPUs.
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
Efficiency still matters, especially for multicore performance. And we are talking about the Mac Pro, which is a workstation, not just a desktop. Having many cores is more important than the single-core performance Intel is chasing. Also, Intel is using high power consumption to achieve high performance, and this trend is also happening with GPUs.
Desktops and workstations are usually constrained by the number of CPU cores the manufacturer is willing to sell for the price you are willing to pay. Because the number is likely to be in the low hundreds at most and because you are probably not going to have that many computers in the same room, power consumption is unlikely to be a serious issue.

Also, you are confusing what Intel is trying to achieve with what the customer is trying to achieve. People who buy i9 want to push the current technology to the limit. They want as much performance as possible from a cheap consumer chip by running as many cores as possible at as high frequency as possible, and they achieve that by using unreasonable amounts of power. People who want performance at reasonable efficiency buy i5 or Xeon W.
 
  • Like
  • Haha
Reactions: jinnyman and CASMAS

CASMAS

macrumors regular
Jan 9, 2022
108
24
Desktops and workstations are usually constrained by the number of CPU cores the manufacturer is willing to sell for the price you are willing to pay. Because the number is likely to be in the low hundreds at most and because you are probably not going to have that many computers in the same room, power consumption is unlikely to be a serious issue.

Also, you are confusing what Intel is trying to achieve with what the customer is trying to achieve. People who buy i9 want to push the current technology to the limit. They want as much performance as possible from a cheap consumer chip by running as many cores as possible at as high frequency as possible, and they achieve that by using unreasonable amounts of power. People who want performance at reasonable efficiency buy i5 or Xeon W.
You still misunderstand the efficiency point. You can put in MORE cores than Intel or AMD based on that efficiency. The M1 Max is already beating the Mac Pro 2019 in some ways, so I don't see your logic.

Also, the Intel i9 was not faster than the M1 until they announced 12th gen. Even with far better efficiency the M1 beat the mobile i9 easily, so what's the point? The clock speed was only around 3 GHz and yet the M1 did that. You see, your logic will eventually fail because of next-gen Apple Silicon chips, and efficiency is still important.
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
You still misunderstand the efficiency point. You can put in MORE cores than Intel or AMD based on that efficiency. The M1 Max is already beating the Mac Pro 2019 in some ways, so I don't see your logic.
My point was that other constraints are more important than efficiency in the desktop/workstation market. There are already workstations with 128 CPU cores. Apple Silicon is more efficient than what Intel and AMD are offering, but because there are other constraints, we are unlikely to see more than 40 CPU cores in the foreseeable future. In other words, the efficiency gains are almost irrelevant.
 
  • Haha
Reactions: CASMAS

CASMAS

macrumors regular
Jan 9, 2022
108
24
My point was that other constraints are more important than efficiency in the desktop/workstation market. There are already workstations with 128 CPU cores. Apple Silicon is more efficient than what Intel and AMD are offering, but because there are other constraints, we are unlikely to see more than 40 CPU cores in the foreseeable future. In other words, the efficiency gains are almost irrelevant.
The CPU and GPU are not everything; it's the whole SoC. Clearly, you know nothing about it.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
The CPU and GPU are not everything; it's the whole SoC. Clearly, you know nothing about it.

What does it have to do with SoC or not SoC? Look, the problem is theory vs. reality. Yes, in theory Apple could build a very power-efficient 128-core SoC that would put everyone to shame. But with the way Apple chooses to design their chips, such a system would be very expensive. Their current SoC model is great for laptops, not really scalable to enthusiast desktops. I am sure that eventually we will see excellent performance with multi-chip technology, but that will come at an extreme premium.
 

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
With M1/M2/M3 etc in particular, it's important to adopt high speed I/O standards quickly. Everything on these chips that's only useful in a Mac is dead weight on iPad Pro. Moving to the latest and greatest PCIe spec helps Apple minimize overhead for iPads by reducing the number of SERDES required to support Mac-level I/O.
I wonder.

The systems getting the base level M1 aren't exactly starved for I/O right now. So it depends more on what Apple wants to do next. I tend to agree that Apple will want the latest PCIe spec, but I suspect it would be less because of the low end and more because of the high end, especially if they are going multi-die like the rumor mill has been saying.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
I wonder.

The systems getting the base level M1 aren't exactly starved for I/O right now. So it depends more on what Apple wants to do next. I tend to agree that Apple will want the latest PCIe spec, but I suspect it would be less because of the low end and more because of the high end, especially if they are going multi-die like the rumor mill has been saying.
Apple's PCIe plans should not be entangled with their multi-die plans the way you seem to be implying. You can't use PCIe as a CPU interconnect in a NUMA system; its latency is far too high and it doesn't support cache coherency. PCIe is strictly an I/O interface.

Even CXL (PCIe with cache coherency layered on top) can't really be used as a CPU interconnect; everything I've heard about it positions it only as I/O for peripherals that benefit from cache coherency.
 
  • Like
Reactions: EntropyQ3

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
Apple's PCIe plans should not be entangled with their multi-die plans the way you seem to be implying. You can't use PCIe as a CPU interconnect in a NUMA system; its latency is far too high and it doesn't support cache coherency. PCIe is strictly an I/O interface.

Even CXL (PCIe with cache coherency layered on top) can't really be used as a CPU interconnect; everything I've heard about it positions it only as I/O for peripherals that benefit from cache coherency.

I wasn’t trying to suggest it would. More that whatever Apple might have in mind as a die interconnect places demands on footprint, routing, and component positioning that can make it more complex to also route PCIe traces. Mix that in with more things wanting to use PCIe on the Mac Pro than on other devices, and the suggestion is that there might be a bigger win on the more complex dies to simplify things. Larger logic boards also give rise to more opportunities for off-package components that can help here, such as multiplexers and/or dedicated I/O dies that improve utilization of the PCIe lanes, pushing down the number required to feed the system.

Perhaps I’m wrong about how limited access to off-die traces actually is these days, but it at least seems logical that the more different types of I/O (and the greater number of buses/lanes of each type) you want to route off the die, the more contention you have for the space needed to route all the traces and connections between the die and the rest of the system (and for routing on the die as well).

It’s more that, looking at the M1’s use of PCIe and how limited it generally is, I’m wondering what overhead PCIe 5 can actually solve here if peripherals are already using a minimal number of lanes.

Now I’m curious what all Apple currently uses PCIe for on and off die, and whether they are doing anything on the die to improve utilization of the PCIe 4 capability when feeding on-die components like the Thunderbolt controllers, in the face of possible underutilization.
 