iOS?? Then your pro app must follow App Store rules and be sandboxed? That may also mean no non-Apple storage. Be ready to pay $200+ per TB, with RAID 0 as the only choice!!
- I tried to squeeze some info out of him about 3rd-party GPUs; he didn't tell me anything, just replied with a like emoji.
https://9to5mac.com/2023/02/22/excl...le-device-mac-pro-reality-pro-something-else/ said:
ComputeModule13,1 and ComputeModule13,3
Ladies and gentlemen, those are likely the Mac Pro's solutions over PCIe 5 to address the GPU/compute power issue: M2 Pro and M2 Max based compute-module add-on peripherals running a stripped-down iOS.
IMHO the Mac Pro 8,1 will arrive as a slightly smaller cheesegrater than the 7,1, with M2 Ultra and M2 Extreme options as the CPU and up to 4 TB of RAM on DDR5 ECC DIMMs; and instead of options for traditional GPUs beyond the one that is part of the M2 complex, there would be compute accelerator cards, likely running overclocked M2 Pro and M2 Max (or M2 Max and M2 Ultra) as slave peripherals.
"A stripped-down iOS would not have coverage for M-series elements. Only iPadOS and macOS have M-series SoC-specific element coverage in them. It could be a 'blended' strip-down that was bigger. However, most often a stripped-down iOS looks like watchOS, tvOS (used in the Apple TV and HomePod now), or the Studio Display."
Have you ever been in HPC, and are you familiar with the Xeon Phi? https://www.pugetsystems.com/labs/hpc/top-5-xeon-phi-misconceptions-508/
"The Phi itself runs an embedded Linux OS in memory on the card. The card has a boot-loader flashed onto non-volatile memory, and when the card starts up it loads a file system and Linux kernel that are stored on the host system. It's called uOS."
What I realize is that said iOS would be a uOS doppelgänger.
"The huge problem with reality here is that Apple really already has a 'stripped down' macOS, which is the One-True-Boot system image that recovery now runs on."
Not the same purpose, and the macOS One-True-Boot image is even too complex for a compute-accelerator micro system.
"Pretty good chance this is another 'new Apple device', which typically takes a fork off of iOS.
iOS -> iPadOS"
And then iPadOS -> the iPad Pro's ASi OS; actually, absolutely nothing prevents Apple from following a fast track from iOS to a 'cOS' for the M2.
"A smaller chassis really doesn't 'buy' Apple all that much."
The 5 inches of height that could come off the MP 7,1 are related exclusively to the Xeon cooling solution plus a little room for an HDD cage; it's 2023, and neither is needed.
"Ladies and gentlemen, those are likely the Mac Pro's solutions over PCIe 5 to address the GPU/compute power issue: M2 Pro and M2 Max based compute-module add-on peripherals running a stripped-down iOS.
IMHO the Mac Pro 8,1 will arrive as a slightly smaller cheesegrater than the 7,1, with M2 Ultra and M2 Extreme options as the CPU and up to 4 TB of RAM on DDR5 ECC DIMMs; and instead of options for traditional GPUs beyond the one that is part of the M2 complex, there would be compute accelerator cards, likely running overclocked M2 Pro and M2 Max (or M2 Max and M2 Ultra) as slave peripherals."
It absolutely makes sense with all the information I've been gathering and posting here.
I think the Mac Pro may follow a design similar to the MP 5,1: a main, upgradeable processor module linked through some custom PCIe 5 interconnect to a peripherals section that takes only PCIe-form-factor peripherals. In those slots you could install a compute card, but not to run its own macOS; it would only run compute kernels, which may have more in common with an iOS kernel than with a CUDA or Radeon kernel, since it could manage all the M2 features in slave mode. That also allows lower code overhead, as such uOS kernels could be crafted not to require protected-mode paging, which increases performance. Of course, it's a very complex topic with lots of room for speculation.
"IF what Apple is instead making is something more like an Intel big-NUC system, where it's just a backplane with MPX+ slots, the only processing is those compute modules, and you can mix & match compute modules with GPUs...
I'll just lay claim that I pointed to the outfield with my MacStation musings 2+ years ago."
"If it's a flat hierarchy, with all processor modules being the same hardware: my prediction. If there's a special hierarchical command module that is different hardware from the others: your prediction."
I had just assumed the MPX compute modules would all be the same: you add them as you like, only the first one runs full macOS, and the surrogates run that stripped-down iOS, acting as compute accelerators. It makes sense; let's see what happens. I buy your concept, and it doesn't conflict with the information I've gathered. Very interesting times ahead.
"I think it would make more sense to keep the current Mac Pro design but have an upgradable Arm setup."
The "Building Blocks" for the ASi Mac Pro...
M3 Max SoC:
- N3B
- 16-core CPU (12P/4E)
- 44-core GPU
- 16-core Neural Engine
- 256GB LPDDR5X SDRAM (maximum)
M3 GPU-specific SoC:
- N3B
- 80-core GPU
- 16-core Neural Engine
- 256GB LPDDR5X SDRAM (maximum)
Symmetrical multi-die SoCs:
- Two regular dies for an M3 Ultra (32C/88G/32N)
- Four regular dies for an M3 Extreme (64C/176G/64N)
Asymmetrical multi-die SoCs:
- One regular die & one GPU-specific die for an M3 Ultra-C (16C/124G/32N)
- Two regular dies & two GPU-specific dies for an M3 Extreme-C (32C/248G/64N)
ASi (GP)GPUs:
- Two GPU-specific dies for a ComputeModule (160G/32N)
- Four GPU-specific dies for a ComputeModule Duo (320G/64N)
Maximum ASi Mac Pro CPU Edition:
- M3 Extreme SoC (N3B)
- 64-core CPU (48P/16E)
- 176-core GPU
- 64-core Neural Engine
- 1TB LPDDR5X SDRAM
- Two ComputeModule Duo add-in cards (640G/128N) with 1TB LPDDR5X SDRAM each
Maximum ASi Mac Pro GPU Edition:
- M3 Extreme-C SoC (N3B)
- 32-core CPU (24P/8E)
- 248-core GPU
- 64-core Neural Engine
- 1TB LPDDR5X SDRAM
- Two ComputeModule Duo add-in cards (640G/128N) with 1TB LPDDR5X SDRAM each
Available for pre-order after WWDC 2023 keynote presentation...!
Oh, and One More Thing; the all-new ASi Mac Pro Cube, available with the M3 Extreme or the M3 Extreme-C; we think you're going to love it...!
;^p
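For what it's worth, the multi-die math in that list checks out; here's a quick Swift sketch using the hypothetical per-die counts from the post above (nothing official, just the speculated figures):

```swift
// Back-of-the-envelope check of the multi-die combinations listed above.
// All core counts are the speculative per-die figures from this post.
struct Die { let cpu: Int; let gpu: Int; let neural: Int }

let regular = Die(cpu: 16, gpu: 44, neural: 16)   // hypothetical "M3 Max" die
let gpuOnly = Die(cpu: 0,  gpu: 80, neural: 16)   // hypothetical GPU-specific die

func combine(_ dies: [Die]) -> Die {
    Die(cpu: dies.map(\.cpu).reduce(0, +),
        gpu: dies.map(\.gpu).reduce(0, +),
        neural: dies.map(\.neural).reduce(0, +))
}

let ultra     = combine([regular, regular])                     // 32C / 88G  / 32N
let extreme   = combine(Array(repeating: regular, count: 4))    // 64C / 176G / 64N
let ultraC    = combine([regular, gpuOnly])                     // 16C / 124G / 32N
let extremeC  = combine([regular, regular, gpuOnly, gpuOnly])   // 32C / 248G / 64N
let module    = combine([gpuOnly, gpuOnly])                     // 160G / 32N
let moduleDuo = combine(Array(repeating: gpuOnly, count: 4))    // 320G / 64N

print(ultra, extreme, ultraC, extremeC, module, moduleDuo)
```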
"Small detail... but based on the M2 Max, I thought we extrapolated that an M2 Ultra would peak at 192 GB RAM, so a 4x module solution for a Mac Pro would top out at 768 GB RAM, not the 1 TB I have seen thrown around. Granted, I guess that is an educated guess based on M2, not M3."
The tips I received hint at Apple using standard DDR5 LP-DIMM or SO-DIMM modules plus CXL 2, which means server-grade DDR5 RAM in modules. Since each M2 Max should be configured with at least two of those DIMMs, it could grow to 8-16 RAM slots for the main CPU complex (not the add-on compute accelerators). Depending on the flavor, DDR5 tops out at 48 to 512 GB per module; assuming 48 GB x 16, that would allow 768 GB, and up to 8 TB at the high end. But no: if Apple decides to support the interface for those 512 GB modules, the Mac Pro is likely to include only 8 RAM slots, for 4 TB.
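To put rough numbers on the slot/module combinations being floated (all speculative figures from the posts above):

```swift
// Speculative DIMM capacity scenarios from the discussion above.
let moduleSizesGB = [48, 512]   // smallest and largest DDR5 module sizes mentioned
let slotCounts = [8, 16]        // possible DIMM/SO-DIMM slot counts for the CPU complex

for slots in slotCounts {
    for size in moduleSizesGB {
        let totalGB = slots * size
        print("\(slots) slots x \(size) GB = \(totalGB) GB (\(Double(totalGB) / 1024) TB)")
    }
}
// 8  x 48 =  384 GB |  8 x 512 = 4096 GB (4 TB)
// 16 x 48 =  768 GB | 16 x 512 = 8192 GB (8 TB)
```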
This from the news discussion, an intriguing match... Maybe 🤔??
The fact that the module has a model identifier of "ComputeModule13,x" suggests that it is likely based on M1-generation silicon.
I can tell this because, among other things, the iPhone 12 (with A14, the same generation as M1) is iPhone13,x, and the Mac Studio (M1 Max/M1 Ultra) is Mac13,1 and Mac13,2.
... Or an M1 Ultra (r) / M2 Ultra (r).
Interesting times.
Edit: if said compute module is based on the Mac Studio, it is likely to offer M1 Max and M1 Pro options with 16 to 128 GB of RAM on board.
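For what it's worth, the identifier reasoning quoted above boils down to a simple lookup; a small sketch, where the iPhone and Mac Studio rows are known identifiers and the ComputeModule rows are pure speculation from this thread:

```swift
// Known Apple model identifiers vs. silicon generation, plus the rumored modules.
let siliconGeneration: [String: String] = [
    "iPhone13,2": "A14 Bionic (iPhone 12, same generation as M1)",
    "Mac13,1": "M1 Max (Mac Studio)",
    "Mac13,2": "M1 Ultra (Mac Studio)",
    "ComputeModule13,1": "presumably M1-generation silicon (speculation)",
    "ComputeModule13,3": "presumably M1-generation silicon (speculation)"
]

for (identifier, chip) in siliconGeneration.sorted(by: { $0.key < $1.key }) {
    print("\(identifier) -> \(chip)")
}
```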
According to sources, the "compute module" is not a card, but just a Mac Studio, Mac mini, etc. running, instead of macOS, a minimal iOS that would be included with macOS as an alternative boot: some kind of "Thunderbolt hyper Target Mode", where the Mac in target mode provides all of its capabilities headlessly to its master device.
An ASi M2 Extreme is likely to run below 400 W at full load, so its cooling solution should align perfectly with MPX-module thermals; or (TBC depending on the RAM configuration) an M2 Extreme could sit in between the DIMMs and be cooled in the same airstream, similar to the DIMMs inside the MP 7,1. Either way, the Mac Pro saves a bunch of the volume needed to cool its CPU complex (CPU plus related support chips, which for the Xeon W account for at least 40 W).
A barebones Mac Pro could arrive without compute modules, with only the M2 SoC complex (plugging up to 4 M2 Max dies on top of each other like dominoes). It would likely include PCIe 5 slots in MPX fashion, but almost mandatorily proprietary, to avoid said compute modules migrating into non-Apple workstations (unlikely, but it's Apple's way); or maybe said compute modules add an extra management extension so they won't work on non-Apple systems, while not preventing the PCIe 5 slot from being used by industry-standard peripherals such as NVMe banks, specialized I/O, etc.
"Target Disk Mode already exists (without iOS)."
Things sometimes evolve; maybe now it's its turn.
"iOS devices do not have any Thunderbolt abilities."
Err, the iPad Pro with ASi M1 (which inherited iOS) includes Thunderbolt 4 support.
"iOS unique how here?"
Lower overhead than macOS.
"Did you even read that FAQ?
FAQ 1: the Phi cores are not 'additive' to the host set of cores. The Phi runs as a different computer and is not unified into a completely transparent, coherent whole."
So? That's subject to app tune-ups; FYI, DaVinci Resolve already supports distributed multiprocessing on ASi.
"One place where iOS has a 'lead' is in more pervasive and heavyweight sandboxing by default."
Sandboxing is imperative in a compute accelerator: you don't want to upload corrupt code into a device where it could run in kernel mode, as a GPU kernel does. But it's not actually a must to run aarch64 code in kernel mode, since the iOS threading model is less expensive than macOS's; it's just an interesting capability.
"via a virtual ethernet interface"
Or data pipes; that's where ASi unified memory shines: you can upload everything into RAM and run both GPU and CPU code on it.
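To make the unified-memory point concrete, here is a minimal Metal sketch of that pattern: one .storageModeShared buffer that the CPU fills, a GPU kernel transforms, and the CPU reads back with no copies in between. A generic Apple Silicon illustration, not anything specific to the rumored compute modules:

```swift
import Metal

// On Apple Silicon, CPU and GPU share one pool of memory, so a
// .storageModeShared buffer is visible to both without staging copies.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else { fatalError("No Metal device") }

// A trivial compute kernel that doubles each element in place.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void doubleValues(device float *data [[buffer(0)]],
                         uint id [[thread_position_in_grid]]) {
    data[id] = data[id] * 2.0;
}
"""
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "doubleValues")!)

// CPU writes directly into the shared buffer.
let count = 1024
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!
let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { values[i] = Float(i) }

// The GPU reads and writes the very same memory.
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeeComputeCommandEncoder()
```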
"But as a supercomputer node it has had a bumpy history."
Not as bumpy as Windows; it seems you forgot the "Big Mac" supercomputer cluster from Virginia Tech, built on Power Macs. Its benchmark peak was about 29 TFLOPS (2009), barely better than a Mac Studio now.
"The iPad Pro with ASi M1 (which inherited iOS) includes Thunderbolt 4 support."
The M1 and M2 iPad Pros use iPadOS, which split from iOS 13. While it is based on iOS, it has a lot that iOS does not, like proper cursor support and Thunderbolt drivers. iOS does not have such things. It would make more sense to run iPadOS as a lower-overhead OS for some sort of advanced target mode.
No, Thunderbolt support is just a kernel add-on; you don't need to move to a more complex and bigger OS when you don't need all that functionality. iOS officially "not supporting" Thunderbolt is a hardware thing (no iPhone yet includes USB4), not a kernel restriction. macOS, iPadOS and iOS all share the same XNU kernel and DriverKit driver model; indeed, each can run on the others' hardware as long as it is provided with device drivers and a boot image for the CPU architecture (yes, you actually could run macOS on your iPad, or on an iPhone if an M1/M2 iPhone is ever released).