
Disappointed with Mac Pro 2023?


  • Total voters: 534

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
M3 Ultra/Extreme would probably be out Q1 2025. Enough time for Apple to figure it out. The Extreme would likely have more than 384GB unified memory, Thunderbolt 5 80Gb/s & allow for PCIe 5.0 slots.
Agree about TB5 and PCIe 5.0. But they should also be using LPDDR5x, and Apple should be able to leverage its much higher memory density to offer > 1 TB RAM. Further, they may offer unidirectional TB5, which combines the duplex 80 Gb/s channels into a single 160 Gb/s output.
 

Longplays

Suspended
May 30, 2023
1,308
1,158
Agree about TB5 and PCIe 5.0. But they should also be using LPDDR5x, and Apple should be able to leverage its much higher memory density to offer > 1 TB RAM. Further, they may offer unidirectional TB5, which would be 160 Gb/s

21 months is more than enough time for Apple to figure that out.

At this performance trajectory, getting to 0.7nm (A7) by, say, 2030 would make Intel et al. look like they're standing still.

[Attached chart: perf-trajectory.png]
 
  • Like
Reactions: AlphaCentauri

Ethosik

Contributor
Oct 21, 2009
8,142
7,120
Look at it this way: With the Mac Pro now on Apple Silicon it's gonna get refreshes a lot more often now.
Yep, that's another way to look at the pressure on Apple. The 2019 Mac Pro was running old Intel silicon and hadn't been upgraded in a very long time. So pressure from multiple sides probably forced Apple's hand here. Hopefully M3 will be better. M2 hasn't been selling well anyway.
 

Stevenyo

macrumors 6502
Oct 2, 2020
310
478
Returning to the Mac Pro: This would be the case if a, say, "M3 Extreme" (4x Max) required an entirely separate chip, but it doesn't. I asked a former AMD chip designer (cmaier) who posts on another site about this, and he said if Apple did this, they would probably join two M3 Ultras using something akin to the interposer that is currently used to join two M2 Max's into an M2 Ultra, and the development costs for that would be relatively modest (they'd still be using the same M3 Max chip in the Max, Ultra, and Extreme SoC's).

He said there would be strips of the Max that would be used only for connecting to other Max's, but that would only slightly increase the chip size, and thus the cost, over a Max that didn't have these.

What could be high would be the manufacturing costs (e.g., if the yield for the interposer were low), but these would be reflected in the selling price.

He added using the Max in this way would necessitate some performance upgrades not required if it were used alone (high off-chip bandwidth); but while not required, these would benefit all Max chips.
Totally understand and agree. My point was more for all those who were hoping for some SoC that could work with ECC DDR5, PCIe GPUs, etc. Making that is a lost cause.

Adding a second edge interposer is a possibility, and it will likely happen someday, though a 4-way connection is a lot more complicated to manage as a unified system than a 2-way one. E.g., what if a GPU core in the bottom-left die needs data in the RAM attached to the top-right SoC? They'd have no direct connection in a basic 2-edge interposer setup.

I do expect that someday there will be a 4x Max in a Pro, but I'm not at all shocked it didn't happen with the M2 generation or this case/motherboard design.
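As a toy illustration of why the 4-way case is messier to present as one uniform machine, here is a hop-count sketch; the die names and link layouts below are made-up assumptions for the example, not anything Apple has disclosed.

```python
# Toy model: minimum die-to-die hops under different interposer topologies.
# Die names and link layouts are illustrative assumptions, not Apple's design.
from itertools import combinations

def hops(links, src, dst):
    """Breadth-first search for the minimum number of die-to-die hops."""
    frontier, seen, steps = {src}, {src}, 0
    while dst not in frontier:
        frontier = {b for a in frontier for (x, y) in links for b in (x, y)
                    if a in (x, y) and b not in seen}
        if not frontier:
            return None  # unreachable
        seen |= frontier
        steps += 1
    return steps

dies = ["bottom_left", "bottom_right", "top_left", "top_right"]

# "2-edge" style: each die only bridges to its two neighbours (a 2x2 ring).
ring_2x2 = [("bottom_left", "bottom_right"), ("bottom_right", "top_right"),
            ("top_right", "top_left"), ("top_left", "bottom_left")]

# Fully connected: every die has a direct link to every other die.
full = list(combinations(dies, 2))

for name, topo in [("2-edge ring", ring_2x2), ("fully connected", full)]:
    print(f"{name}: bottom_left -> top_right in {hops(topo, 'bottom_left', 'top_right')} hop(s)")
# 2-edge ring: 2 hops (no direct diagonal link, traffic crosses a neighbour die);
# fully connected: 1 hop, but it costs more interposer edges per die.
```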
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Turns out that ALL of the Mac Pro 2023's PCIe slots are piped through a PCIe switch with a single 16-lane connection to the SoC: https://social.treehouse.systems/@marcan/110493886026958843

Very disappointing.

Apple has support docs for the system. It's approximately the same PCI-e Expansion Slot Utility as in the 2019 model.

[Screenshot: PCIe slot configuration pane in macOS Ventura System Settings (General > About > PCIe Cards)]



The fact that you can assign some slots to either Pool A or Pool B suggests there isn't just one feed here. It's basically the same kind of higher-end, server-level switch where you can assign some slots to either the x16 feed or the x8 feed (and 'rogue' operating systems are probably left with the default configuration). It isn't the cheapest switch they could have chosen.

So if you had one x16 PCI-e v2 (or v3) card, you could probably put that on Pool B and an x16 PCI-e v4 card on Pool A, and then get full bandwidth to both at the same time. Some folks have sunk costs in cards that are wider (x8-x16) but older. Putting two x8 v3 cards on an x8 v4 backhaul runs them at basically full speed.

So it isn't just $3K for PCI-e slots. You can 'slice and dice' the bandwidth to those slots to juggle the bandwidth budget you get.
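Rough numbers for why that pool assignment trick works, using standard approximate per-lane PCIe payload rates; the card/slot combinations are just the examples from this post.

```python
# Back-of-the-envelope PCIe bandwidth math for the Pool A / Pool B idea.
# Approximate per-lane payload rates in GB/s, after encoding overhead.
PER_LANE_GBPS = {2: 0.5, 3: 0.985, 4: 1.969}

def link_bw(gen, lanes):
    """One-direction payload bandwidth of a PCIe link, in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# Example from the post: an x16 Gen 3 card on Pool B (x8 Gen 4 backhaul),
# and an x16 Gen 4 card on Pool A (x16 Gen 4 backhaul).
print(f"x16 Gen 3 card wants ~{link_bw(3, 16):.1f} GB/s")
print(f"Pool B (x8 Gen 4)  = ~{link_bw(4, 8):.1f} GB/s")    # essentially a wash
print(f"x16 Gen 4 card wants ~{link_bw(4, 16):.1f} GB/s")
print(f"Pool A (x16 Gen 4) = ~{link_bw(4, 16):.1f} GB/s")

# Two older x8 Gen 3 cards sharing an x8 Gen 4 backhaul also fit:
print(f"2 x (x8 Gen 3) = ~{2 * link_bw(3, 8):.1f} GB/s vs x8 Gen 4 = ~{link_bw(4, 8):.1f} GB/s")
```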
 

Mimiron

macrumors 6502
Dec 12, 2017
391
400
Yeah, I'm extremely disappointed. I was waiting on a 27-inch iMac or iMac Pro announcement, but nothing, yet another year. I don't want a Mac Studio or a Mac Pro; all I want is a 27-inch iMac with a dedicated GPU.
 
  • Like
Reactions: George Dawes

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
Yeah, I'm extremely disappointed. I was waiting on a 27-inch iMac or iMac Pro announcement, but nothing, yet another year. I don't want a Mac Studio or a Mac Pro; all I want is a 27-inch iMac with a dedicated GPU.
There will never be a dedicated GPU in the Apple Silicon era...

... as for a 27" iMac, in a world where they sell the Studio display at barely less than what they used to sell a base model 27" iMac for, I think it's unlikely to come back.

I am still curious about one thing though - Dell had announced a few months back a monitor that had a panel that is the perfect resolution for a 6K 32" retina display. Not sure if it's shipping yet. Everybody knows Dell and Apple both get their panels from LG Display. Is Apple potentially planning the return of the iMac Pro, say, with that panel? M3 Pro/Max with a 32" retina 6K screen could be nice... and would fit the lineup better than a 27" when the base model is now 24".
 

Longplays

Suspended
May 30, 2023
1,308
1,158
I am still curious about one thing though - Dell had announced a few months back a monitor that had a panel that is the perfect resolution for a 6K 32" retina display. Not sure if it's shipping yet. Everybody knows Dell and Apple both get their panels from LG Display. Is Apple potentially planning the return of the iMac Pro, say, with that panel? M3 Pro/Max with a 32" retina 6K screen could be nice... and would fit the lineup better than a 27" when the base model is now 24".

 

Macintosh IIcx

macrumors 6502a
Jul 3, 2014
625
612
Denmark
There will never be a dedicated GPU in the Apple Silicon era...

... as for a 27" iMac, in a world where they sell the Studio display at barely less than what they used to sell a base model 27" iMac for, I think it's unlikely to come back.

I am still curious about one thing though - Dell had announced a few months back a monitor that had a panel that is the perfect resolution for a 6K 32" retina display. Not sure if it's shipping yet. Everybody knows Dell and Apple both get their panels from LG Display. Is Apple potentially planning the return of the iMac Pro, say, with that panel? M3 Pro/Max with a 32" retina 6K screen could be nice... and would fit the lineup better than a 27" when the base model is now 24".
I think Apple wants to move to miniLED, miniOLED and eventually microLED screens across their ecosystem in order to gain a retina+HDR advantage over the competitors, so I don’t expect to see new screens before they can create a decent miniLED 27” screen at least. Retina resolution is not enough in itself anymore. We won’t see a 27” iMac before that is done, if ever.
 

Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command
I doubt we will ever see another iMac beyond the entry-level 24" Mn iMac...

Larger models would be too expensive; 5K 27" Studio Display starts at $1599 and the 6K 32" Pro Display XDR starts at $4999; not to mention the whole "Double-Disposable" issue inherent with the iMac form factor...
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
I am still curious about one thing though - Dell had announced a few months back a monitor that had a panel that is the perfect resolution for a 6K 32" retina display. Not sure if it's shipping yet. Everybody knows Dell and Apple both get their panels from LG Display. Is Apple potentially planning the return of the iMac Pro, say, with that panel? M3 Pro/Max with a 32" retina 6K screen could be nice... and would fit the lineup better than a 27" when the base model is now 24".
I wonder why Apple didn't use the IPS Black tech in its ASD, since IPS Black monitors were first released in 2022, and the ASD didn't come out until early 2023--maybe LG wasn't yet ready to produce it with the needed ~220 ppi pixel density.
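For reference, the ~220 ppi figure falls straight out of the panel geometry; a quick check using the published resolutions and nominal diagonal sizes:

```python
# Pixel density check for the "Retina" ~220 ppi class of panels.
from math import hypot

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch from resolution and diagonal size."""
    return hypot(width_px, height_px) / diagonal_in

print(f"27-inch 5K (5120x2880): {ppi(5120, 2880, 27):.0f} ppi")   # ~218 ppi
print(f"32-inch 6K (6016x3384): {ppi(6016, 3384, 32):.0f} ppi")   # ~216 ppi at a nominal 32-inch diagonal
```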
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Agree about TB5 and PCIe 5.0. But they should also be using LPDDR5x, and Apple should be able to leverage its much higher memory density to offer > 1 TB RAM.

Memory density without ECC is a waste of time. Hence, they're not going to get anywhere near >1 TB of RAM at all.

I wouldn't bet on PCI-e v5 either. The asymmetric backhaul on this MP (x8 vs. x16) points to QoS issues to juggle at v4, and doubling down isn't going to make the QoS issues go away. A '> 2 compute die' setup would solve it for a larger-than-Ultra offering, but it's unlikely Apple is going to split a Mac Pro into a 'you get 5.0 and you get 4.0' arrangement between two SoCs. Apple is on a track where v4 means they can keep the lane count out of the package down (~32 v4 lanes versus 64 v3 lanes). There isn't a lot of upside to collapsing to just x16 v5 if that comes with more power overhead than they want to deal with. And I'm very skeptical that Apple is in a rush to get to CXL 2.0+, so PCI-e v5 isn't going to 'buy' a whole lot without that.
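Rough arithmetic behind that lane-count point, using approximate per-lane PCIe payload rates; the 64/32/16 lane counts are illustrative, echoing the figures above, not confirmed Apple specs.

```python
# Same aggregate backhaul bandwidth, fewer lanes per PCIe generation.
# Approximate per-lane payload rates in GB/s.
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

for gen, lanes in [(3, 64), (4, 32), (5, 16)]:
    print(f"PCIe {gen}.0 x{lanes}: ~{PER_LANE_GBPS[gen] * lanes:.0f} GB/s aggregate")
# All three land at roughly 63 GB/s: moving up a generation mostly buys
# fewer pins/lanes out of the package, not more total bandwidth.
```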

Apple puts the x16 slots as far away from the SoC as physically possible. That doesn't scream 'I'm chasing bleeding-edge PCI-e versions' at all!


Further, they may offer unidirectional TB5, which combines the duplex 80 Gb/s channels into a single 160 Gb/s output.

How is unidirectional TBv5 better than DPv2.1? Unidirectional TB isn't really even TB anymore. All that is, is 'video out', and there are already 'video out' standards with much broader adoption. (Intel and AMD are already on DPv2.1. Nvidia is late as usual, but will likely pick it up next iteration. Apple is looking like the even bigger laggard here.)
 

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
Memory density without ECC is a waste of time. Hence, they're not going to get anywhere near >1 TB of RAM at all.

I wouldn't bet on PCI-e v5 either. The asymmetric backhaul on this MP (x8 vs. x16) points to QoS issues to juggle at v4, and doubling down isn't going to make the QoS issues go away. A '> 2 compute die' setup would solve it for a larger-than-Ultra offering, but it's unlikely Apple is going to split a Mac Pro into a 'you get 5.0 and you get 4.0' arrangement between two SoCs. Apple is on a track where v4 means they can keep the lane count out of the package down (~32 v4 lanes versus 64 v3 lanes). There isn't a lot of upside to collapsing to just x16 v5 if that comes with more power overhead than they want to deal with. And I'm very skeptical that Apple is in a rush to get to CXL 2.0+, so PCI-e v5 isn't going to 'buy' a whole lot without that.

Apple puts the x16 slots as far away from the SoC as physically possible. That doesn't scream 'I'm chasing bleeding-edge PCI-e versions' at all!

How is unidirectional TBv5 better than DPv2.1? Unidirectional TB isn't really even TB anymore. All that is, is 'video out', and there are already 'video out' standards with much broader adoption. (Intel and AMD are already on DPv2.1. Nvidia is late as usual, but will likely pick it up next iteration. Apple is looking like the even bigger laggard here.)
Doesn’t TB 5 directly support DPv2.1?

 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I wonder why Apple didn't use the IPS Black tech in its ASD, since IPS Black monitors were first released in 2022, and the ASD didn't come out until early 2023--maybe LG wasn't yet ready to produce it with the needed ~220 ppi pixel density.

Which alternative universe did this happen in? The Apple Studio Display (ASD) was released with the first Mac Studio. They are Apple's 'Dynamic Duo' pair.

That was March 2022. Not 2023.


Given that the M1 Ultra appeared well over a year after the other iMac (2021), there is a pretty good chance that that pair was supposed to launch in late 2021, not early 2022.

So timing-wise it isn't a match with IPS Black at all, on top of the density mismatch.

The other issue is that Apple was aiming to keep costs down (even while adding in a whole A13 and other stuff), so they were highly likely to do something minimal to keep the panel costs down and undercut the XDR as much as possible while still generating large margins. [The 27" iMac guaranteed a lot more panels sold. The number of panels Apple was going to sell was definitely going to drop dramatically lower, so it is extremely likely they were going to push for higher margins on fewer units.]

There were some reports that LG wanted to decommission the whole line producing this density/size of panel. So Apple wasn't going to be able to haggle them down into even thinner margins; to keep it around, Apple was going to have to make it worthwhile.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Doesn’t TB 5 directly support DPv2.1?


DPv2.1 isn't Thunderbolt. Thunderbolt v5 is really a layering on top of other standards, making some things in USB4v2 no longer optional. Making DPv2.1 alternate mode 'required' isn't really adding it to Thunderbolt; it is an effectively mandatory alternate mode. DisplayPort has been around as an alternate mode since Thunderbolt started. That doesn't make DP traffic Thunderbolt traffic.

That DPv2.1 mode would work with any DPv2.1 monitor, and those won't need Thunderbolt certification in the slightest. So claiming that is 'Thunderbolt' is really not truthful.
 

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
DPv2.1 isn't Thunderbolt. Thunderbolt v5 is really a layering on top of other standards, making some things in USB4v2 no longer optional. Making DPv2.1 alternate mode 'required' isn't really adding it to Thunderbolt; it is an effectively mandatory alternate mode. DisplayPort has been around as an alternate mode since Thunderbolt started. That doesn't make DP traffic Thunderbolt traffic.

That DPv2.1 mode would work with any DPv2.1 monitor, and those won't need Thunderbolt certification in the slightest. So claiming that is 'Thunderbolt' is really not truthful.
Forgive me if I don’t fully understand this.

My current understanding (could be wrong):

Thunderbolt is an interface.
DPv2.1 is a protocol that can be passed through the TB interface.


If that’s correct (again, not sure), then what does it matter if you use a TB cable or DisplayPort cable if they both support 2.1? Is there a performance hit or something?
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Forgive me if I don’t fully understand this.

My current understanding (could be wrong):

Thunderbolt is an interface.
DPv2.1 is a protocol that can be passed through the TB interface.


If that’s correct (again, not sure), then what does it matter if you use a TB cable or DisplayPort cable if they both support 2.1? Is there a performance hit or something?

Over time, Thunderbolt has evolved in what it covers. There is both the Thunderbolt standard (a set of requirements covering the physical port, the electrical side, connectivity to other subsystems, and the 'transport' protocol) and the baseline transport protocol used for signaling on the wires. Compliance with the standard is what gives a system the right to put the Thunderbolt 'lightning bolt' symbol next to the port on a device. The protocol has forked out a bit over time.

As far as the physical interface port goes, Thunderbolt (or Light Peak before it) never had its own. In the early Light Peak phase, Intel was looking to 'superset' the USB port; the USB-IF nixed that. So Thunderbolt merged in with the, at that time, new Mini DisplayPort. Pragmatically, part of the bargain for that was that Thunderbolt had to help DisplayPort become a more widespread video-out standard by making 'alternate mode'/'DP pass-through' always present on TB implementations.

As Thunderbolt grew and got a lot more inertia behind it, there was another stab at merging in with a USB-IF port. Intel/Apple/others developed the USB Type-C socket and Thunderbolt adopted that. Of course, they didn't drop their first 'bargain', and Alt Mode DisplayPort was very much part of the Type-C rollout. That happened at Thunderbolt v3.

With more momentum, Intel decided to hand the basics of the underlying transport protocol to the USB-IF. That was woven into USB4, but the USB-IF made a much wider scope of things 'optional'. There was no guarantee that a host computer system that is USB4-certified actually implements the TBv3 standard fully at all. Same Type-C port, USB4, but plug in a TBv3 device and it won't work. The only USB4 devices absolutely required to implement all of TBv3 were USB hubs, I think. Host systems and peripherals can all grab a 'get out of jail free' card.

At Thunderbolt 4, the standard became more 'no surprises', with fewer optional features and 'get out of jail free' cards: fully use the underlying transport protocol, and in a more compatible way. Thunderbolt 5 will be USB4v2 with fewer loopholes as well.

At this point the Thunderbolt transport protocol is somewhat like the IP protocol the Internet uses: both TCP and UDP can be layered on top of IP. If you want data guaranteed to finally arrive in order, without lost packets, in a standards-compliant way, you use TCP/IP. If you want to build a custom layer to deal with lost/damaged/out-of-order packets, you can roll your own with UDP.
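To make the analogy concrete, here's the textbook distinction in socket terms (plain standard-library Python, nothing Thunderbolt-specific): the IP transport underneath is shared, and the guarantees depend on which layer you ask for.

```python
# TCP vs UDP on top of the same IP layer: ordered/reliable vs roll-your-own.
import socket

# TCP (SOCK_STREAM): the stack guarantees in-order, lossless delivery.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# UDP (SOCK_DGRAM): bare datagrams; any ordering/retransmission is up to you.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

print(tcp.type, udp.type)  # SocketKind.SOCK_STREAM SocketKind.SOCK_DGRAM
tcp.close(); udp.close()
```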



Along the way, DisplayPort was trying to catch up to HDMI 2.0+. They decided to use this now 'license free' base transport protocol of Thunderbolt, but throw away all the bidirectional features of the protocol and some parts of the routing. They are using a subset. But several implementers have built cables and transmit/receive silicon to deal with those basic packets at that higher speed, so it was cheaper than doing something completely from scratch.

DisplayPort never completely dropped its full-size DP port when it picked up mini-DP and, later, Alt Mode Type-C. DisplayPort v2.1 is converging onto the Type-C physical interface.

There are lots of problems with trying to crank up the speed over these physical copper connections, and again it is just way more affordable to piggyback on the work other folks are doing, with shared R&D spend.

A Type-C port in and of itself doesn't mean the protocols are the same. Under USB-IF standards, a Type-C port could deliver just USB 2.0 and that is it, and still be in compliance. Just a protocol from the last century and done (USB 2.0 passed in 2000, but was formulated in the previous century). That's a gross underutilization of the port's potential capabilities, but it is perfectly 'legal' under USB-IF standards. The 'U' in USB is for 'Universal', which leans very much toward the 'ubiquity' connotation. It is trying to be a 'do everything for everybody' standard. That drifts into the 'one port to rule them all' zone, which is a double-edged sword.

DisplayPort v2.1 uses the same physical socket as something that meets Thunderbolt standards, and the underlying protocols have some base-level transport similarities. But to pass the Thunderbolt standard you have to do more than that: you have to meet the completeness requirements that Thunderbolt lays out.

If a program on one computer on the internet 'talks' TCP/IP and the program on another computer does UDP/IP, they aren't going to talk to one another. Same thing here.

There is a high degree of commonality being forced here because doing very high speeds over copper wires is problematic. There is much more 'transport tech' sharing going on, but not more homogeneous standards.
 
Last edited:

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
Over time, Thunderbolt has evolved in what it covers. There is both the Thunderbolt standard (a set of requirements covering the physical port, the electrical side, connectivity to other subsystems, and the 'transport' protocol) and the baseline transport protocol used for signaling on the wires. Compliance with the standard is what gives a system the right to put the Thunderbolt 'lightning bolt' symbol next to the port on a device. The protocol has forked out a bit over time.

As far as the physical interface port goes, Thunderbolt (or Light Peak before it) never had its own. In the early Light Peak phase, Intel was looking to 'superset' the USB port; the USB-IF nixed that. So Thunderbolt merged in with the, at that time, new Mini DisplayPort. Pragmatically, part of the bargain for that was that Thunderbolt had to help DisplayPort become a more widespread video-out standard by making 'alternate mode'/'DP pass-through' always present on TB implementations.

As Thunderbolt grew and got a lot more inertia behind it, there was another stab at merging in with a USB-IF port. Intel/Apple/others developed the USB Type-C socket and Thunderbolt adopted that. Of course, they didn't drop their first 'bargain', and Alt Mode DisplayPort was very much part of the Type-C rollout. That happened at Thunderbolt v3.

With more momentum, Intel decided to hand the basics of the underlying transport protocol to the USB-IF. That was woven into USB4, but the USB-IF made a much wider scope of things 'optional'. There was no guarantee that a host computer system that is USB4-certified actually implements the TBv3 standard fully at all. Same Type-C port, USB4, but plug in a TBv3 device and it won't work. The only USB4 devices absolutely required to implement all of TBv3 were USB hubs, I think. Host systems and peripherals can all grab a 'get out of jail free' card.

At Thunderbolt 4, the standard became more 'no surprises', with fewer optional features and 'get out of jail free' cards: fully use the underlying transport protocol, and in a more compatible way. Thunderbolt 5 will be USB4v2 with fewer loopholes as well.

At this point the Thunderbolt transport protocol is somewhat like the IP protocol the Internet uses: both TCP and UDP can be layered on top of IP. If you want data guaranteed to finally arrive in order, without lost packets, in a standards-compliant way, you use TCP/IP. If you want to build a custom layer to deal with lost/damaged/out-of-order packets, you can roll your own with UDP.

Along the way, DisplayPort was trying to catch up to HDMI 2.0+. They decided to use this now 'license free' base transport protocol of Thunderbolt, but throw away all the bidirectional features of the protocol and some parts of the routing. They are using a subset. But several implementers have built cables and transmit/receive silicon to deal with those basic packets at that higher speed, so it was cheaper than doing something completely from scratch.

DisplayPort never completely dropped its full-size DP port when it picked up mini-DP and, later, Alt Mode Type-C. DisplayPort v2.1 is converging onto the Type-C physical interface.

There are lots of problems with trying to crank up the speed over these physical copper connections, and again it is just way more affordable to piggyback on the work other folks are doing, with shared R&D spend.

A Type-C port in and of itself doesn't mean the protocols are the same. Under USB-IF standards, a Type-C port could deliver just USB 2.0 and that is it, and still be in compliance. Just a protocol from the last century and done (USB 2.0 passed in 2000, but was formulated in the previous century). That's a gross underutilization of the port's potential capabilities, but it is perfectly 'legal' under USB-IF standards. The 'U' in USB is for 'Universal', which leans very much toward the 'ubiquity' connotation. It is trying to be a 'do everything for everybody' standard. That drifts into the 'one port to rule them all' zone, which is a double-edged sword.

DisplayPort v2.1 uses the same physical socket as something that meets Thunderbolt standards, and the underlying protocols have some base-level transport similarities. But to pass the Thunderbolt standard you have to do more than that: you have to meet the completeness requirements that Thunderbolt lays out.

If a program on one computer on the internet 'talks' TCP/IP and the program on another computer does UDP/IP, they aren't going to talk to one another. Same thing here.

There is a high degree of commonality being forced here because doing very high speeds over copper wires is problematic. There is much more 'transport tech' sharing going on, but not more homogeneous standards.
That’s a lot of information, but I was asking if Thunderbolt 5 supports DisplayPort 2.1, which Intel says it does. So I’m trying to parse out why you think that’s not true?
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
That’s a lot of information, but I was asking if Thunderbolt 5 supports DisplayPort 2.1, which Intel says it does. So I’m trying to parse out why you think that’s not true?

TBv5 'supports' probably doesn't mean it is absolutely required. I think it means there is minimally support for 'up to' DPv2.1 pass-through. DPv2.1 may, or may not, be present on any single TBv5 port. You can more reliably count on DP v'something'. For Thunderbolt 1.1 there was DPv1.2 pass-through before there was full DPv1.2 transport over Thunderbolt; good chance that is the minimal requirement here.

DPv2.1 tops out at approximately 80 Gb/s, and the video-priority asymmetric mode of TBv5 will be 120 Gb/s outbound. They could fill that either with more DPv1.4 video streams or with a subset of DPv2.1.

"... VESA announced version 2.1 of the DisplayPort standard on 17 October 2022.[33] This version incorporates the new DP40 and DP80 cable certifications, which test DisplayPort cables for proper operation at the UHBR10 (40 Gbit/s) and UHBR20 (80 Gbit/s) speeds introduced in version 2.0. ..."
https://en.wikipedia.org/wiki/DisplayPort

Is that two DP40 + 40 Gb/s for the 120, or DP80 + 40 Gb/s for the 120? Up in the air. The initial 'DPv2.1' GPUs are mostly capping out at DP40. I don't see Intel actually implementing a DP80 'encoding' controller if there are no GPUs pumping that out. Intel's GPUs can do two 'DP40'-like streams, so there's a decent chance they'll cover that with encoding.
(Intel making two DP40 streams 'required' would only decrease the number of TBv5 deployments they get. It would probably make almost all of them Intel, but leaning too hard on trying to drive more Intel SoC sales is problematic. Apple 'should' implement some DPv2.1 next iteration, but I'm not so sure they will do it for all of the streams. It depends on transistor budget increases.)


DPv2.1 is a bit like HDMI 2.1 in that what resolutions you get out of a particular implementation is a bit of a crapshoot. Again, as the output speeds get 'harder', the standards leave more 'optional' stuff at the top so that implementers can cut costs as they wish.

DPv2.1 over TBv5 doesn't necessarily mean 8K 120 Hz ProMotion displays, though.


P.S. Some folks look at the 120 Gb/s video mode and think it has to be forced by soaking up maximal DPv2.1 traffic. There are a couple of USB4 docks out that 'expand' the display output of the dock by using DisplayLink support for a normal-resolution display to add another port. If the goal is to catch up with demand for 3-4 video streams to normal displays out of a dock, then you may not need encoded/transported DPv2.1 to drive the increase. Four DPv1.4 streams also add up to more.
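A quick enumeration of how a 120 Gb/s outbound budget could be carved up, using the raw link rates quoted above (DP40 = 40 Gb/s, DP80 = 80 Gb/s, DP 1.4 HBR3 ≈ 32.4 Gb/s); which combination any vendor actually wires up is the open question.

```python
# How the 120 Gb/s asymmetric TB5 outbound budget could be carved up (raw link rates, Gb/s).
BUDGET = 120

options = {
    "DP80 + DP40":       [80, 40],
    "2 x DP40":          [40, 40],
    "3 x DP40":          [40, 40, 40],
    "4 x DP 1.4 (HBR3)": [32.4, 32.4, 32.4, 32.4],
}
for name, links in options.items():
    total = sum(links)
    verdict = "fits" if total <= BUDGET else "needs more than 120"
    print(f"{name:20s} {total:6.1f} Gb/s  ({verdict})")
```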
 
Last edited:
  • Like
Reactions: NT1440

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
TBv5 'supports' probably doesn't mean it is absolutely required. I think it means there is minimally support for 'up to' DPv2.1 pass-through. DPv2.1 may, or may not, be present on any single TBv5 port. You can more reliably count on DP v'something'. For Thunderbolt 1.1 there was DPv1.2 pass-through before there was full DPv1.2 transport over Thunderbolt; good chance that is the minimal requirement here.

DPv2.1 tops out at approximately 80 Gb/s, and the video-priority asymmetric mode of TBv5 will be 120 Gb/s outbound. They could fill that either with more DPv1.4 video streams or with a subset of DPv2.1.

"... VESA announced version 2.1 of the DisplayPort standard on 17 October 2022.[33] This version incorporates the new DP40 and DP80 cable certifications, which test DisplayPort cables for proper operation at the UHBR10 (40 Gbit/s) and UHBR20 (80 Gbit/s) speeds introduced in version 2.0. ..."
https://en.wikipedia.org/wiki/DisplayPort

Is that two DP40 + 40 Gb/s for the 120, or DP80 + 40 Gb/s for the 120? Up in the air. The initial 'DPv2.1' GPUs are mostly capping out at DP40. I don't see Intel actually implementing a DP80 'encoding' controller if there are no GPUs pumping that out. Intel's GPUs can do two 'DP40'-like streams, so there's a decent chance they'll cover that with encoding.
(Intel making two DP40 streams 'required' would only decrease the number of TBv5 deployments they get. It would probably make almost all of them Intel, but leaning too hard on trying to drive more Intel SoC sales is problematic. Apple 'should' implement some DPv2.1 next iteration, but I'm not so sure they will do it for all of the streams. It depends on transistor budget increases.)

DPv2.1 is a bit like HDMI 2.1 in that what resolutions you get out of a particular implementation is a bit of a crapshoot. Again, as the output speeds get 'harder', the standards leave more 'optional' stuff at the top so that implementers can cut costs as they wish.

DPv2.1 over TBv5 doesn't necessarily mean 8K 120 Hz ProMotion displays, though.
Got it, thanks for the clarification.

Remember when we thought for a brief moment with USB C that the cable standard nonsense might have had a glimmer of coming to an end? 😂
 

Longplays

Suspended
May 30, 2023
1,308
1,158
M2 hasn't been selling well anyway.
It seems that way because COVID in March 2020 forced everyone doing WFH to upgrade their 4-6yo or older computer to a 2020 model.

The next replacement cycle would occur by 2024-2026.
 

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
It seems that way because COVID in March 2020 forced everyone doing WFH to upgrade their 4-6yo or older computer to a 2020 model.

The next replacement cycle would occur by 2024-2026.
It also forced businesses to replace things that might not have been laptops (e.g. desktops or thin clients) with laptops, etc.

It's hard to speak of a 'replacement cycle' in the modern world, though - once upon a time, the replacement cycle was driven by the demands of newer software. e.g. if you bought a Mac in 1993, it would have been 68K, Office 98 released in 1998 was PowerPC-only, so if you had somehow managed to make do with your Centris 610 until 1998, if you wanted the current software in 1998-1999, you had to get a PowerPC. Today, well, most productivity software doesn't really push the hardware envelope, e.g. here are the requirements for Office 2021:

Computer and processor
• Windows OS: 1.1 GHz or faster, 2-core
• macOS: Intel or Apple Silicon (as supported by the three most recent versions of macOS)

Memory
• Windows OS: 4 GB RAM
• macOS: 4 GB RAM

Hard disk
• Windows OS: 4 GB of available disk space
• macOS: 10 GB of available disk space

Display
• Windows OS: 1280 x 768 screen resolution (64-bit Office required for 4K and higher)
• macOS: 1280 x 800 screen resolution

Graphics
• Graphics hardware acceleration requires DirectX 9 or later, with WDDM 2.0 or higher for Windows 10 (or WDDM 1.3 or higher for Windows 10 Fall Creators Update)

Operating system
• Windows OS: Windows 10 or Windows 11
• macOS: Office for Mac is supported on the three most recent versions of macOS. As new major versions of macOS are made generally available, Microsoft will drop support for the oldest version and support the newest and previous two versions of macOS. Product functionality and feature availability may vary on older systems. For the best experience, use the latest version of any operating system specified above.
Fundamentally, those requirements mean nothing - any machine capable of running Windows 10 or macOS Big Sur will easily, easily meet those requirements.

So I think we've moved on to a world of 'artificial' replacement cycles driven by OS vendors' business decisions and, to a lesser extent, laptop vendors' willingness to provide certain parts such as batteries.

For example, I had a mid-2014 MacBook Pro with 16GB of RAM, 512GB SSD, and a quad-core i7. Last supported version of macOS is Big Sur unless you want to play with OCLP, etc. I traded it in on my M1 Max MBP because i) the battery was starting to swell (again), ii) Apple offered a generous trade-in offer, and iii) the M1 Max MBP sure looked more appealing than the USB-C Intels, but that machine would have perfectly handled 95% of computing needs in 2023. And, as demonstrated by the fact that the OCLP folks have done it, that machine could run Ventura just fine if Apple felt like having it run Ventura. Really, OCLP + a battery replacement and that machine could still be going strong.

Similarly, in Windowsland, Microsoft dealt with this challenge with their artificial processor requirement in Windows 11. I can run Windows 11 just fine on a ~2008-era C2Q, and in fact I recently dug up a 'vintage' C2Q out of the closet and installed Windows 11 on it, but Microsoft threatens that any monthly patch could brick that install, so, oops, replacement time. And if any parts fail on a desktop, well, the computer store is full of PSUs, SATA SSDs/HDDs, etc that will go just fine in those machines. So again, it's really battery availability that is the killer, especially if you don't trust aftermarket batteries.

I would actually argue that if it wasn't for battery issues and OS support issues, the replacement cycle for a 'high-quality' machine (i.e. not some bargain basement piece of junk) used for normal productivity apps, web stuff, etc should be around 10 years. But I think both Apple and Microsoft are terrified of a move in that direction and so they're going to continue cutting OS support to maintain a much more frequent replacement cycle, especially as all those covid-era purchases start to reach 4-5 years old.
 

Longplays

Suspended
May 30, 2023
1,308
1,158
It also forced businesses to replace things that might not have been laptops (e.g. desktops or thin clients) with laptops, etc.

It's hard to speak of a 'replacement cycle' in the modern world, though - once upon a time, the replacement cycle was driven by the demands of newer software. e.g. if you bought a Mac in 1993, it would have been 68K, Office 98 released in 1998 was PowerPC-only, so if you had somehow managed to make do with your Centris 610 until 1998, if you wanted the current software in 1998-1999, you had to get a PowerPC. Today, well, most productivity software doesn't really push the hardware envelope, e.g. here are the requirements for Office 2021:


Fundamentally, those requirements mean nothing - any machine capable of running Windows 10 or macOS Big Sur will easily, easily meet those requirements.

So I think we've moved on to a world of 'artificial' replacement cycles driven by OS vendors' business decisions and, to a lesser extent, laptop vendors' willingness to provide certain parts such as batteries.

For example, I had a mid-2014 MacBook Pro with 16GB of RAM, 512GB SSD, and a quad-core i7. Last supported version of macOS is Big Sur unless you want to play with OCLP, etc. I traded it in on my M1 Max MBP because i) the battery was starting to swell (again), ii) Apple offered a generous trade-in offer, and iii) the M1 Max MBP sure looked more appealing than the USB-C Intels, but that machine would have perfectly handled 95% of computing needs in 2023. And, as demonstrated by the fact that the OCLP folks have done it, that machine could run Ventura just fine if Apple felt like having it run Ventura. Really, OCLP + a battery replacement and that machine could still be going strong.

Similarly, in Windowsland, Microsoft dealt with this challenge with their artificial processor requirement in Windows 11. I can run Windows 11 just fine on a ~2008-era C2Q, and in fact I recently dug up a 'vintage' C2Q out of the closet and installed Windows 11 on it, but Microsoft threatens that any monthly patch could brick that install, so, oops, replacement time. And if any parts fail on a desktop, well, the computer store is full of PSUs, SATA SSDs/HDDs, etc that will go just fine in those machines. So again, it's really battery availability that is the killer, especially if you don't trust aftermarket batteries.

I would actually argue that if it wasn't for battery issues and OS support issues, the replacement cycle for a 'high-quality' machine (i.e. not some bargain basement piece of junk) used for normal productivity apps, web stuff, etc should be around 10 years. But I think both Apple and Microsoft are terrified of a move in that direction and so they're going to continue cutting OS support to maintain a much more frequent replacement cycle, especially as all those covid-era purchases start to reach 4-5 years old.
The four-year replacement cycle figure is from Apple, and the 5-6 year figure is from Intel.

Those replacement cycles are what Apple and Intel observe their average customers doing.

They base their R&D cycles on that information.

In the 90s it was 3 years.

Ever wonder why the iPhone's industrial design only changes every 3 years? It's largely because the replacement cycle has lengthened to every 3 years.

Be aware that businesses also cost things out and schedule purchases. So they peg their x86 computers at 5-6 years.

Everything you mentioned about OCLP, etc. just applies to non-corporate or even non-work environments.

- macOS security updates have ended over 9 years after release, for Macs going back as early as 2007 models
- Windows EOL has been about 122 months for versions since Vista

But the above aren't indicative of the typical replacement cycle; they're just support windows.

People who do OCLP are mostly nerds and hobbyists who want to extend the useful life of their hardware because they may not have the financials to buy a Mac right now.

Users here prefer not to point out the obvious under threat of being Reported/Banned for being disrespectful.
 
Last edited:

JouniS

macrumors 6502a
Nov 22, 2020
638
399
The replacement cycles of four years is from Apple and those of 5-6 years is from Intel.

The replacement of Apple and Intel are what they notice their average customers do.
It would be interesting to know if "average" means mean, median, or mode here.

In a corporate environment, "replacement cycle" may also be a misleading term, as many (most?) computers never get replaced. A new employee gets a new computer when they start, and if it doesn't break, they will often continue using it until they switch to another job.
 