
CWallace

macrumors G5
Aug 17, 2007
12,528
11,543
Seattle, WA
I would bet the Mac Pro Jr. will have no slots, then.

I believe the Apple Silicon Mac Pro model will have PCIe slots, just fewer. My guess would be four (so half of what MP 7,1 can hold) so that it can handle up to two MPX cards or an MPX card and two single-slot cards.

Laaaame. I think it puts a damper on the Apple Silicon transition if they’re gonna release another Intel machine.

Apple spent a mint to bring Mac Pro 7,1 to market. Customers then spent a mint to purchase them. Neither Apple nor the customers are going to want MP 7,1 to be a "one and done" model like MP 6,1 was and Apple clearly engineered MP 7,1 with sufficient overhead to take successor generations of CPUs and GPUs.

The only drawback for existing MP 7,1 owners is that W-3300 will use a new socket and therefore will require a new systemboard so a "drop-in" CPU upgrade will not be possible.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
I think it’s the only model where it doesn’t put a damper on their conversion.

People that buy those have long buying periods and need stuff to be native before they switch.
I have difficulty reconciling this with Apple dropping 32-bit support and OpenGL with Catalina. Surely Apple would have special plugins for these types of customers if they were insistent on backwards compatibility.


I believe the Apple Silicon Mac Pro model will have PCIe slots, just fewer. My guess would be four (so half of what MP 7,1 can hold) so that it can handle up to two MPX cards or an MPX card and two single-slot cards.



Apple spent a mint to bring Mac Pro 7,1 to market. Customers then spent a mint to purchase them. Neither Apple nor the customers are going to want MP 7,1 to be a "one and done" model like MP 6,1 was and Apple clearly engineered MP 7,1 with sufficient overhead to take successor generations of CPUs and GPUs.

The only drawback for existing MP 7,1 owners is that W-3300 will use a new socket and therefore will require a new systemboard so a "drop-in" CPU upgrade will not be possible.
Then that sort of defeats the purpose of a socketable CPU if it can’t be upgraded to successive gens. Not that that’s entirely Apple’s fault; Intel is notorious for that.

Likewise, over in the dark realm of the Mac Pro forums, many decried the 7,1 for its price-to-performance ratio. I don’t think they would clamor for an Ice Lake upgrade.

And Apple Silicon is poised to beat the pants off Intel in 2022. It seems silly to have your lower-end laptops and desktops run circles around your highest-end machine.

I can’t honestly justify such a product in my mind unless Apple Silicon isn’t as scalable as we thought (which is possible). To me it reads like Apple is not confident in their new architecture, which definitely puts a damper on my enthusiasm for it.
 

robco74

macrumors 6502a
Nov 22, 2020
509
944
There are still a number of pro apps, plug-ins, and interfaces that haven't yet been updated to work with Big Sur and/or Apple Silicon. It may be some time before that happens, if ever. Apple may decide to give those users one last powerful Intel machine before moving on to the next gen products that may not support all the hardware and software they need. Some users will require x64 compatibility for the foreseeable future.
 
  • Like
Reactions: opeter and CWallace

CWallace

macrumors G5
Aug 17, 2007
12,528
11,543
Seattle, WA
Then that sort of defeats the purpose of a socketable CPU if it can’t be upgraded to successive gens. Not that that’s entirely Apple’s fault; Intel is notorious for that.

Indeed they are. In addition to a new socket, the new supporting chipset adds faster memory (DDR4-3200) and PCIe Gen 4.0.

And Apple Silicon is poised to beat the pants off Intel in 2022. It seems silly to have your lower-end laptops and desktops run circles around your highest-end machine.

I can’t honestly justify such a product in my mind unless Apple Silicon isn’t as scalable as we thought (which is possible). To me it reads like Apple is not confident in their new architecture, which definitely puts a damper on my enthusiasm for it.

Ice Lake is expected to top out at 36 cores, and the JadeC-4Die said to be going into the Apple Silicon Mac Pro ("8,1") will be 40 cores (32P and 8E), so I don't think there is any real risk of an "MP 8,1" with JadeC-4Die being outclassed by an "MP 7,2" with Ice Lake W-3300 Xeons.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
Ice Lake is expected to top out at 36 cores
Unless Apple changes the Darwin kernel to support more than 64 logical CPUs, it's unlikely Apple will use any Hyperthreading Intel CPU with more than 32 cores/64 threads.

In any case, I do not think Apple will release another Intel Mac. Any new Macs coming out from Apple will be Apple Silicon Macs.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
but it's probably not PCIe-based
That, I think, is probably a given. PCIe's maximum bandwidth is only about half of what the M1's LPDDR4X RAM is able to achieve, and that's not even counting PCIe bus overhead.

Apple will just improve their GPU tech with an ever wider and faster memory bus.
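For a rough sense of the gap, here is a back-of-the-envelope comparison (my own numbers, assuming a 128-bit LPDDR4X-4266 bus on the M1 and a full PCIe 4.0 x16 link; none of this is from Apple):

#include <stdio.h>

int main(void) {
    /* Assumed: M1 unified memory is LPDDR4X-4266 on a 128-bit (16-byte) bus. */
    double m1_mem_gbs = 4266e6 * 16 / 1e9;                           /* ~68.3 GB/s */

    /* PCIe 4.0 x16: 16 GT/s per lane, 128b/130b encoding, 16 lanes. */
    double pcie4_x16_gbs = 16e9 * (128.0 / 130.0) / 8.0 * 16 / 1e9;  /* ~31.5 GB/s */

    printf("M1 LPDDR4X   : %.1f GB/s\n", m1_mem_gbs);
    printf("PCIe 4.0 x16 : %.1f GB/s\n", pcie4_x16_gbs);
    return 0;
}

That works out to roughly 31.5 GB/s versus 68 GB/s, so "about half" holds even for a full Gen 4 x16 slot, before protocol overhead.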
 
  • Like
Reactions: 09872738

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Unless Apple changes the Darwin kernel to support more than 64 logical CPUs, it's unlikely Apple will use any Hyperthreading Intel CPU with more than 32 cores/64 threads.

Can you provide more information about this limitation? I was not aware that Darwin is limited by the number of CPU cores it can support...

I doubt that. It looks like Apple is doing away with AMD/Nvidia to embrace their own approach to GPU computing.
I have no idea what it might look like, but it's probably not PCIe-based

Not just Apple. Nvidia have their NVLink and AMD has Infinity Fabric. I suspect PCIe is approaching end of life where high-performance applications are concerned. It will probably still be used for a while in consumer PCs, where the high latency of CPU/GPU communication is tolerable, but the high-end pro market will see more and more integration, with cache-coherent memory sharing and other goodies. In fact, the future of processing (especially for ML) seems to be moving more and more towards in-memory processing, as traditional architectures simply don't scale. You can see this effect in the CPU market as well, with AMD testing humongous caches and Intel announcing that they are going to integrate HBM2 in future Xeon releases.

Apple Silicon will probably be somewhere between the consumer PC and the high-end datacenter system. It will use a similar system architecture to future supercomputers (wide unified memory, large caches, heterogeneous processors), but deployed at a much more humble scale, in consumer hardware. At any rate, I am looking forward to my next ultracompact system having more effective memory bandwidth to the CPU than some mid-range graphics cards.
 
  • Like
Reactions: JMacHack

diamond.g

macrumors G4
Mar 20, 2007
11,437
2,665
OBX
Can you provide more information about this limitation? I was not aware that Darwin is limited by the number of CPU cores it can support...



Not just Apple. Nvidia have their NVLink and AMD has Infinity Fabric. I suspect PCIe is approaching end of life where high-performance applications are concerned. It will probably still be used for a while in consumer PCs, where the high latency of CPU/GPU communication is tolerable, but the high-end pro market will see more and more integration, with cache-coherent memory sharing and other goodies. In fact, the future of processing (especially for ML) seems to be moving more and more towards in-memory processing, as traditional architectures simply don't scale. You can see this effect in the CPU market as well, with AMD testing humongous caches and Intel announcing that they are going to integrate HBM2 in future Xeon releases.

Apple Silicon will probably be somewhere between the consumer PC and the high-end datacenter system. It will use a similar system architecture to future supercomputers (wide unified memory, large caches, heterogeneous processors), but deployed at a much more humble scale, in consumer hardware. At any rate, I am looking forward to my next ultracompact system having more effective memory bandwidth to the CPU than some mid-range graphics cards.
https://techteamgb.co.uk/2020/05/13/hackintosh-parts-guide-dont-buy-threadripper/ Looks like macOS has a hard limit of 64 threads? This person says using the 3990 would require SMT to be disabled.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841

Icelus

macrumors 6502
Nov 3, 2018
422
579
Straight from the horse's mouth, so to speak ... heh heh:

Seems a flexible setting.
CPU quiescing generation counter implemented with a checkin mask

This bitfield currently limits MAX_CPUS to 32 on LP64.
In the future, we can use double-wide atomics and int128 if we need 64 CPUS.

 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
Seems a flexible setting.
I have not really studied the source code, but kernels typically use bitmasks for many operations that track CPU usage, e.g. schedulers, IRQs, etc. So I imagine it will take a while for Apple's engineers to move the entire kernel to larger bitmasks just to support more than 64 logical CPUs.

The MAX_CPUS value should be a constant that is probably not used much beyond defining the limit for those bitmasks. Increasing that value alone will likely not do much, and would probably crash the XNU kernel under extreme conditions.

If Apple is planning to increase this limit, it would more likely be for Apple Silicon than for Intel CPUs.
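As a simplified sketch of that pattern (my own illustration, not actual XNU code): a per-CPU mask packed into a single 64-bit word caps the number of logical CPUs the kernel can track. With one bit per CPU the cap is 64; the checkin mask quoted above tops out at 32 in a 64-bit field, hence the comment about needing double-wide atomics to reach 64.

#include <stdint.h>
#include <stdbool.h>

/* Illustration only, not XNU source: one bit per logical CPU in a
   single 64-bit word caps the kernel at 64 CPUs. */
#define MAX_CPUS 64

typedef uint64_t cpu_mask_t;          /* bit i represents logical CPU i */

static cpu_mask_t cpu_checkin_mask;   /* bit set => that CPU has checked in */

static void cpu_checkin(unsigned cpu)
{
    /* Real kernels update masks like this atomically; cpu must be < 64 here. */
    cpu_checkin_mask |= (cpu_mask_t)1 << cpu;
}

static bool all_cpus_checked_in(unsigned ncpus)
{
    cpu_mask_t all = (ncpus == MAX_CPUS) ? ~(cpu_mask_t)0
                                         : (((cpu_mask_t)1 << ncpus) - 1);
    return (cpu_checkin_mask & all) == all;
}

Raising the limit means widening every mask like this throughout the kernel (or switching to double-wide atomics, as the quoted comment suggests), which is why it is more than just bumping a constant.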
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Straight from the horse's mouth, so to speak ... heh heh:


Thanks! This seems to be a consequence of them using an int64 as a CPU bitmask. The code would probably work if they used a wider bitmask (as pointed out by @Icelus), so I wouldn't read too much into this specific limitation. Linux does something similar, but they start with a max CPU count they want to support and then define the CPU mask as a bitfield of an appropriate size.

I just had a very quick look at the arm64 part of the kernel, and the code organization seems to be very different there. I didn't see any mention of cpu masks or predetermined CPU count, but I admit that I didn't look long.
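For comparison, the Linux-style pattern looks roughly like this (just a sketch of the general approach, not the actual Linux or XNU source): pick the maximum CPU count first, then size the mask as an array of words.

#include <stdint.h>
#include <stdbool.h>

/* Sketch of the "choose a CPU limit, then size the bitmask from it"
   pattern (loosely modeled on Linux's cpumask; not real kernel code). */
#define NR_CPUS        256                     /* chosen limit, no longer tied to 64 */
#define BITS_PER_WORD  64
#define MASK_WORDS     ((NR_CPUS + BITS_PER_WORD - 1) / BITS_PER_WORD)

typedef struct {
    uint64_t bits[MASK_WORDS];                 /* one bit per logical CPU */
} cpumask_t;

static inline void cpumask_set(cpumask_t *m, unsigned cpu)
{
    m->bits[cpu / BITS_PER_WORD] |= (uint64_t)1 << (cpu % BITS_PER_WORD);
}

static inline bool cpumask_test(const cpumask_t *m, unsigned cpu)
{
    return (m->bits[cpu / BITS_PER_WORD] >> (cpu % BITS_PER_WORD)) & 1;
}

Usage is the same regardless of NR_CPUS, e.g. cpumask_set(&mask, 150); the trade-off is that updates spanning multiple words can no longer be done with a single atomic operation.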
 

Joe The Dragon

macrumors 65816
Jul 26, 2006
1,031
524
Can you provide more information about this limitation? I was not aware that Darwin is limited by the number of CPU cores it can support...



Not just Apple. Nvidia have their NVLink and AMD has Infinity Fabric. I suspect PCIe is approaching end of life where high-performance applications are concerned. It will probably still be used for a while in consumer PCs, where the high latency of CPU/GPU communication is tolerable, but the high-end pro market will see more and more integration, with cache-coherent memory sharing and other goodies. In fact, the future of processing (especially for ML) seems to be moving more and more towards in-memory processing, as traditional architectures simply don't scale. You can see this effect in the CPU market as well, with AMD testing humongous caches and Intel announcing that they are going to integrate HBM2 in future Xeon releases.
PCIe is big for storage and networking, and high-performance people may want RAID and storage they can swap / destroy if needed.

Also, networking choice is big unless you want boards with Apple-only network transceivers.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
PCIe is big for storage and networking, and high-performance people may want RAID and storage they can swap / destroy if needed.

Also, networking choice is big unless you want boards with Apple-only network transceivers.

Yes, you are right of course. Sorry, I was too lost in the context of CPU/GPU interconnects. What I mean is that PCIe is not scalable enough to satisfy future (or even modern) processing needs. Storage and I/O are still much slower than processors, so PCIe will of course remain a great choice in that domain. Although we are slowly seeing experiments with mapping the SSD directly into the system address space, bypassing an interconnect layer (Apple does it with the M1, and if I understand correctly, that's also how the PS5 operates).
 

CWallace

macrumors G5
Aug 17, 2007
12,528
11,543
Seattle, WA
Process doesn't matter nearly as much as design, so the A16 at 4nm will still crush Snapdragon at 3nm. It also gives TSMC time to work out any production issues, so when they need to start pumping out tens of millions of these a month for Apple, they will be ready.
 

macsplusmacs

macrumors 68030
Nov 23, 2014
2,763
13,275
Process doesn't matter nearly as much as design, so the A16 at 4nm will still crush Snapdragon at 3nm. It also gives TSMC time to work out any production issues, so when they need to start pumping out tens of millions of these a month for Apple, they will be ready.

Good point. At the end of the day, experience counts: actual boots on the ground, building things. Apple and TSMC are literally years ahead of anyone else in this respect.

Intel or Qualcomm cannot go from zero to TSMC/Apple killer in a day, a year, or even half a decade.
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
  • Like
Reactions: macsplusmacs

CWallace

macrumors G5
Aug 17, 2007
12,528
11,543
Seattle, WA
Or Apple could have a 3nm design for the Mac ready to go into production as soon as TSMC is ready in 2022. Not all Apple silicon has to start with the iPhone Axx SoCs. Apple could start with the M3 at 3nm and use it as the basis for the A17 later in the fall of 2023.

The A series is the most important SoC as the iPhone is the most important product. So unless Apple significantly diverges the M series from the A series with the next generation (something I do not see happening), A will continue to precede M, so the A17 would be the first 3nm SoC used by Apple, with the "M3" following later.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Process doesn't matter nearly as much as design, so the A16 at 4nm will still crush Snapdragon at 3nm. It also gives TSMC time to work out any production issues, so when they need to start pumping out tens of millions of these a month for Apple, they will be ready.
Never say never. I’d bet that other companies are already putting the M1 under a microscope and copying what they can.
 

CWallace

macrumors G5
Aug 17, 2007
12,528
11,543
Seattle, WA
Never say never. I’d bet that other companies are already putting the M1 under a microscope and copying what they can.

One imagines they have been doing that since the release of A4 in 2010 and yet every year the A series handily outclasses every other smartphone-class ARM SoC on the market.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
One imagines they have been doing that since the release of A4 in 2010 and yet every year the A series handily outclasses every other smartphone-class ARM SoC on the market.
I didn’t argue that, and I do have faith in Apple’s current microchip development. However, it’s entirely possible that the competition could come up with something surprising.
 

CWallace

macrumors G5
Aug 17, 2007
12,528
11,543
Seattle, WA
I didn’t argue that, and I do have faith in Apple’s current microchip development. However, it’s entirely possible that the competition could come up with something surprising.

It is true no lead tends to last forever, but I am confident Apple will still be comfortably in the lead come 2022-2023. :)
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
The A series is the most important SoC as the iPhone is the most important product. So unless Apple significantly diverges the M series from the A series with the next generation (something I do not see happening), A will continue to precede M, so the A17 would be the first 3nm SoC used by Apple, with the "M3" following later.
It doesn’t matter which is most important. The schedule matters. Apple isn’t likely to change from their September iPhone release, but Mac release schedules are much more flexible. You have to explain why Apple would be reluctant to release an M3 before an A17. From a consumer point of view, there isn’t much chance of confusion. Everyone would see an iPhone and a Mac SoC as completely separate product lines, and everyone already expects a Mac to be faster than an iPhone. I don’t see any conflict.
 