That's a slim advantage over the Studio M1 Ultra. And how many PCIe lanes could an M2 Ultra have? What would be the point of releasing such a product?
If they stick a narrow shim I/O die between the two laptop dies, they could add one or two x16 PCI-e v4 provisioning bundles relatively easily. If they went to non-identical twin "Max sized" dies and dumped Thunderbolt from one of those, they could probably eke out one x16 PCI-e v4 bundle.
Apple could add some x16 lane provisioning and still use it in both the Mac Studio and Mac Pro. The Studio would just have 'dead' lanes. That wouldn't be new. The iMac Pro had about one x16 bundle 'dead'/unused. (Apple hung the GPU (x16), Thunderbolt (2x x4), and Ethernet (?) off the CPU PCI-e lanes.)
With one or two x16 bundles, Apple can attach them to a PCI-e switch and provision 6 slots easily. The MP 2019 does exactly that with two x16 bundles (basically tossing Slot 1 and Slot 3's 'straight to CPU' characteristics).
If those six slots hanging off a switch are 'useless', then why are they not 'useless' on the MP 2019? They are a useful value-add and a differentiator.
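To put rough numbers on how far one or two x16 bundles can stretch behind a switch, here is a minimal back-of-envelope sketch. The per-lane figure is the usual PCIe 4.0 estimate; the six slot widths are purely assumed for illustration, not any actual Apple layout.

```python
# Back-of-envelope math for fanning two x16 PCIe 4.0 bundles out to six
# slots behind a switch. All slot widths are assumed for illustration.
PCIE4_GBPS_PER_LANE = 1.97          # approx usable GB/s per PCIe 4.0 lane

uplink_lanes = 2 * 16               # two x16 bundles feeding the switch
uplink_bw = uplink_lanes * PCIE4_GBPS_PER_LANE

slot_widths = [16, 16, 8, 8, 8, 4]  # hypothetical widths for six slots
downstream_bw = sum(slot_widths) * PCIE4_GBPS_PER_LANE

print(f"uplink:     {uplink_bw:.0f} GB/s across {uplink_lanes} lanes")
print(f"downstream: {downstream_bw:.0f} GB/s across {sum(slot_widths)} lanes")
print(f"oversubscription: {downstream_bw / uplink_bw:.2f}x")
```

The downstream side comes out oversubscribed (~1.9x here), but that is the same trade the MP 2019's switch makes: most cards aren't saturating their links at the same instant, so the shared uplink rarely is the practical bottleneck.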
If Apple adds nothing and just tries to 'fake the funk' with some TB controllers hooked to internal peripheral TB controllers and some low-budget TB PCI-e enclosure switches ... yeah, that would be in the 'why bother' zone. But the huge presumption there is that the M2 Ultra has to be 100% exactly the same construct as the M1 Ultra. That seems highly doubtful if Apple had an M1/M2 Extreme also working but decided to punt because of manufacturing, costs, and some other issues. Pretty good chance that 'quad die' would have had some general PCI-e bandwidth solution coupled to it. It shouldn't be a showstopper to move that down to the two "main die" setup. Even more so if Apple always intended to offer a two-die solution there.
The advantage wouldn't be about massive CPU/GPU core count uplift. It would be in adding better I/O. That better I/O doesn't have to be stuffed into the MBP 14/16, where it would be worse than useless. So it isn't on the 'main die' used in laptops.
... More than likely though, the Mac 'feature' in the dog-and-pony show will be an M2 MBP 15". (Another M2 device that needs to shuffle out the door before there is awkward overlap with the M3.) ......
The vanilla M2 MBP 15" is a bit of a snooze fest, isn't it? The Pro and Max versions have been out for some time. Admittedly it would find quite a few buyers, if the price is right - though Apple will need to sufficiently gimp it to avoid taking sales from higher-end MBPs.
That was a bit of a typo. Should have been MBA 15". It hasn't even shipped yet, so it is a bit early to declare it a snooze fest. Priced right, this system can be as successful as the MBP 13" Touch Bar has been over the last 2+ years. Apple stated that is the #2 best-selling Mac they have had since the transition started.
It is the screen and the price point that will sell. Sales don't have to be 99% dependent on folks solely looking at the CPU cores. The plain M2 is fast enough for a very wide variety of folks. Battery life (if better than the MBP 14"), screen size, weight (if better than the MBP 14"), and cost (better than the MBP 14") are likely the dominating factors.
The Nvidia T4 has 16GB of VRAM. Apple can run past lots of the Nvidia lineup in corner cases where the working-set data memory footprint is 1.5-2x as big as the Nvidia VRAM. The more data gets pushed into active shuttling between main DDR RAM and GDDR6 VRAM, the more time the Apple GPU can make up by just pulling from the Mac's main Unified RAM that is 32-96GB big. These are not the 'affordable' M2 systems. It is the ones maxed out on Unified RAM that create a gap with Nvidia at the lower-midrange VRAM sizes.
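As a rough illustration of that shuttling cost, here is a minimal sketch. The working-set size, effective PCIe rate, and pass count are all assumed placeholder numbers, not benchmarks.

```python
# Rough model of the host<->VRAM shuttling penalty (all figures assumed).
# A working set 2x the dGPU's VRAM forces the overflow to be re-sent over
# PCIe on every pass; a unified-memory GPU just reads it in place.
working_set_gb = 32     # hypothetical model + data footprint
vram_gb = 16            # e.g. an Nvidia T4-class card
pcie_gb_per_s = 12      # assumed effective PCIe 3.0 x16 throughput
passes = 10             # iterations that touch the whole working set

overflow_gb = max(0, working_set_gb - vram_gb)
shuttle_s = passes * overflow_gb / pcie_gb_per_s   # transfer time only
print(f"extra transfer time: ~{shuttle_s:.0f} s over {passes} passes")
# With 32-96GB of Unified RAM, overflow_gb is 0 and this term vanishes.
```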
The Mac M2 solutions aren't going to be 'hopeless' for AI/ML training/inference. Reasonably sized models can be worked on with some of these systems, for systems that are primarily going to deploy inference on local, modest-sized clients.
It isn't something that is going to get deep leverage if the primary deployment use case is to put the inference part of the AI/ML system into the cloud/remote from clients (suck up data and deliver just inference results).
Yes, but Blender isn't being optimised for Metal at the expense of e.g. OptiX or CUDA. They just needed to get off OpenGL, as it's deprecated in macOS. And having a Mac port of something doesn't mean that version will lead the industry.
If ported to Metal, it can be ported 'down' the OS cohort stack also: off to iPadOS, iOS, xrOS, etc. That is the point. DirectX on both the Xbox and Windows PCs benefits both sides of that equation. It doesn't have to "lead the industry". It just has to generate much higher synergy in the Apple product ecosystem.
Apple not only deprecated OpenGL, they have not done much of anything for Vulkan, SYCL, etc. (and they are deprecating OpenCL also). If Apple is going to 'blow up' any open standards library efforts, then they SHOULD be spending more money helping limited-budget/resourced open source projects get around the potholes that Apple created in their roadmaps. That isn't going to get them industry-leading status, but it can help them keep some status quo positioning as they blow up standards. Apple is betting that the iOS/iPadOS/macOS inertia is going to deliver the DirectX-like effect here that Windows commands. (DirectX isn't about 'winning' the industry. It is about keeping a broader ecosystem moving forward.)
That would make sense if it was just an update, but not for a long-delayed, supposedly flagship product. Either people care about this machine or not. If hardly anyone's bothered, why release it at all?
If doing it here at WWDC, it would make sense primarily because it is long delayed. The prelude of the WWDC keynote is going to have an introduction covering how the developer ecosystem has done over the last year (since the last WWDC): developers generated XX billion in revenue, YY new apps shipped, ZZ new developers joined the program. Probably going to be "XX users have adopted and deeply use Xcode Cloud", etc. It is all upswings from last year.
WWDC 2022 didn't wrap up the transition. Apple couldn't 'check the box' there. Doing macOS and the Mac Pro first is not so much about the Mac Pro and far more about 'checking the box'. Transition done. There is about zero rational reason to treat that as "Oh, one more thing". Apple is late. Everyone knows they are late. If they can 'end' that, just get that 'fart' out of the way first so the rest of the show can walk around like their farts don't smell.
If they are not going to 'check the box' on the transition and all they have is the MBA 15", then macOS would likely end the first hour, to kind of refresh people's attention spans so they can rebuild for another crescendo in the second hour (xrOS).
Well, the platform is macOS, which still supports the 7,1 just fine (and will for a couple more versions).
I expect they went all-in on it, as they didn't intend to revisit it again until the rest of the Mac range had gone through the ASi transition. As an x86 platform, I doubt it's possible to integrate ASi SoCs with it in any meaningful way though.
I mean like doing it in a combined hardware + software fusion fashion. The PSVR2 doesn't do the display computations in the headset. It is attached to a box that is primarily designed and optimized to drive a flat screen. So it still is primarily driving just a flat screen with the headset on. Same swamp as it has been in for a very long while.
I don't think the eye-tracking inferencing is done in the headset either, so it has to send that sensor data all the way back down to the host system (how fast?). Versus two SoCs talking to each other over a very low-latency, high-speed link, a couple of inches (or less) apart on the same logic board.
It is eye tracking and foveated rendering not bolted on afterward like a sidecar. It is there from the start of the initial design specs for the silicon, as a 100% integrated element (not optional).
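A minimal sketch of why that co-location matters, framed as a latency budget; every figure here is an assumed placeholder, not a PSVR2 or Apple spec.

```python
# Latency budget for gaze-driven foveated rendering (all numbers assumed).
frame_ms = 1000 / 90                 # ~11.1 ms per frame at 90 Hz

# Tethered headset: camera data goes down the cable, gaze result comes back.
tethered_ms = 2.0 + 1.5 + 2.0        # cable out + inference + cable back

# Two SoCs inches apart on one logic board over a low-latency link.
on_board_ms = 0.1 + 1.5 + 0.1        # link out + inference + link back

for name, used in (("tethered", tethered_ms), ("on-board", on_board_ms)):
    print(f"{name:8s}: {used:.1f} ms on tracking, "
          f"{frame_ms - used:.1f} ms left to render the frame")
```

Every millisecond the gaze result spends in transit is a millisecond the renderer can't use, which is the whole case for putting the two SoCs on the same board.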
I expect it is, but Apple's not going to let AMD back into the range at this point. I think they'd rather just delay the MP until e.g. 2025, or whenever they have a working chip.
That would be too late. Apple can let compute accelerators back in without letting display GPUs from 3rd parties back in.
If Apple primarily canceled the "quad die Extreme" because it ended up too expensive for too small a market, then how much market is going to be left over in 2025 after the TR 8000 (and competitors) + dGPU onslaught (riding on N3) in 2024-2025? By 2025 Apple would likely have lost most of any fab node advantage. Apple is likely going to burn off a substantive number of hypermodular-priority folks with the soldered-in RAM of whatever they do before 2025... again, how much target market is going to be left in 2025?
Apple still has a >$1B cellular modem that hasn't shipped. Apple has 'stretched too thin' silicon division issues. If the xrOS headset also has a very highly custom silicon package of substantive size, that too just adds to the 'too thin'.
The Rip van Winkle approach of doing nothing for the six-year gap between the MP 2013 and 2019 happened to coincide with an era where Intel/AMD/Nvidia/others were not competing as heavily with one another. Same thing in the smartphone SoC space. The competitive 'heat' on Apple over the next two years is likely going to be much, much higher. 'Easy wins' are going to get much harder in the current SoC space that Apple is trying to cover. Expanding to something else... eh... may not happen.
Apple could shift into a mode where they just try to help lash multiple Macs together in a more cost-effective fashion. For example, instead of a 'quad' SoC, have four Maxes on add-in cards hooked together. (Somewhat how the "attack of the killer micros" subsumed the supercomputer market in the late '90s/early 2000s. Not going to work for everyone's workload without major software changes in some cases. But it sells more of the same SoC.)
I agree. The MP reveal is going to be fascinating on many levels. Can't wait for WWDC 2024
Even more than last year, this WWDC 2023 will be extremely telling if Apple keeps discrete computational acceleration off the table, even if only for their own stuff.