
What Are the Possibilities of Seeing Industry-Standard PCIe Slots in the New Mac Pro?


(e.g., if the target is 430W then do 500W and have a slack buffer of capacity.)
But 70 watts of headroom is nonsense.

Today Nvidia announced a card with two Volta GPUs with doubled memory - 64 GiB HBM2 RAM and 10K CUDA cores per card.

Gonna squeeze that into that 70 watts? Heck, gonna squeeze that into 500 watts?
 
But 70 watts of headroom is nonsense.

Today Nvidia announced a card with two Volta GPUs with doubled memory - 64 GiB HBM2 RAM and 10K CUDA cores per card.

Gonna squeeze that into that 70 watts? Heck, gonna squeeze that into 500 watts?

The point isn't to squeeze it into that 70W gap at all. The issue is to have a gap between your designed target and what you supply. Most of the mid-to-higher-end GPU cards have usage spikes (even lower-end ones, like some instances of the RX 460, couldn't stay in the 75W slot envelope). Thunderbolt/USB 3.1 is now an external device power plug (50W could be going out the door that way). What folks plug in, and what spike loads you get from arbitrary stuff, makes it prudent to put in a buffer.
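
To make the budgeting arithmetic concrete, here's a back-of-the-envelope sketch; every component number, the Thunderbolt figure, and the spike factor are illustrative assumptions, not actual Mac Pro measurements:

```python
# Illustrative power-budget sketch; all numbers are assumptions for the example.
NOMINAL_LOADS_W = {"cpu": 140, "gpu": 225, "ssd_ram_logic": 40}  # steady-state draws
TB_PORT_BUDGET_W = 50   # worst-case power exported to bus-powered peripherals
SPIKE_FACTOR = 1.15     # transient spikes above steady state; GPUs often exceed TDP briefly

nominal = sum(NOMINAL_LOADS_W.values())                  # 405W steady state
worst_case = nominal * SPIKE_FACTOR + TB_PORT_BUDGET_W   # ~516W with spikes and peripherals

print(f"nominal: {nominal}W, worst case: {worst_case:.0f}W")
# Sizing the supply exactly at the nominal target leaves nothing for spikes or
# peripheral draw; hence "target ~430W, build 500W with a slack buffer".
```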

Super mega Volta cards won't work in a 420-450W workstation system anyway. The vast majority of the workstation market doesn't need that particular kind of card for the machine to be viable as a workstation. This is pure fanboy smack-talking tangent junk that typically takes these threads sideways into the swamp.

If Apple pushed the Mac Pro power (and cooling) limit up to 950W, a single second card in that class (the secondary slot has a 290-300W window) would work just fine. That would still leave plenty for a primary GPU and CPU. But for that system too, the nominal targeted level is 900W and below, with some slack.
 
Not no one. If Apple is trying to literally put the Mac Pro on the desktop, most users have system footprint constraints for their desktops (largely because there is other stuff they need on their desks besides just the base computer system). The Mac Mini has a reduced footprint because of this need. All-in-ones place the system behind the largish monitor to minimize (effectively reduce to zero) the actual physical footprint on the desk. To say that no one cares about the size of stuff on their desk is an exaggeration. More than a few do; "no one" simply isn't true at all.

Sorry, I didn't mean to claim that utterly no one cares. My point is that for the desktop workstation, the need to shave a 1" thick box down to only 0.75" thick is utterly absurd.

For example, when you get to work today, take a look at how many inches of "wasted" desk space lies behind your monitor - - I don't think I've had an office in the last 20 years where it wasn't at least four inches. There's an example of how much "thicker" the iMac Pro could have been to help with its thermals (and an access door for customer-replaceable RAM, etc.).

Another way of getting to zero footprint on the desk is to move the whole system deskside (or under the desk, or just plain remote). Monitor, keyboard, and mouse (and perhaps a dock for front-facing ports) are on the desk, and the whole system is moved to a slightly more remote location (side/under desk). In that context, "small as possible" isn't an issue. Apple probably isn't going to do "large as possible", but there are fewer constraints.

Sure. And when we look at today's use cases, we also can find where there's a ~10" x 14" space that's 'wasted' to the side for where the laptop is docked - - can't really put anything on top of it, either. FYI, this is what's happening in my office now, with the advent of new laptops that use USB-C instead of the old docking stations that we could hide underneath LCD display shelf.

Random off-the-shelf components with a mishmash of thermal design philosophies also run counter to good thermal design principles. The notion of just being an overly large container that can deal with arbitrary thermal designs is something that Apple is probably going to be resistant to.

To which I say "Deal with it, Apple".

The problem that Apple has is that there's a basic binary choice they have to make: they either take on the responsibility of a 'box' design that can accommodate the works of other vendors, or they have to take on the responsibility of being that subsystem's developer/manufacturer/supplier, for both HW & SW.

And while the latter path is what leads to wonderfully optimized packaging for "thin!" products like the MBA, there needs to be the corporate insight to know & understand when & why it is necessary to go spend that money, versus just buying a much cheaper existing off-the-shelf component.


The MP 2013 was designed right up to the thermal tolerances. That was a bad move. If Apple had put just a reasonable bit of slack into the overall system design, there would have been less drama with some thermal failures. (e.g., if the target is 430W then do 500W and have a slack buffer of capacity.)

Agreed. Reminds me of a design review meeting I had with a contractor two years ago:

V: "And this part of the design does this and that..."
Me: "Okay, but where does the coolant go when the 'leakproof' connector starts to leak?
V: "...um, its a leakproof connector..."
Me: "Yes, got that - but when it does start to leak, where does it go? it will fry the electronics".
V: "...um, its a leakproof connector..."
Me: "Yes, got that - but it will eventually leak. What's the failure mode?"
V: "...um, its a leakproof connector..."
Me: "How about you put a drain in that corner there, a 'Sorb pack and a system alarm?"
V: "...um, its a leakproof connector..."
Me: (Sheeze!)


That's the same kind of baseline myopic mantra that keeps BIOS support alive. Gotta design for the lowest common denominator in the race-to-the-bottom market. Intel is the only one who has announced dropping BIOS, and that's years from now...

Contextually, I think I was alluding here to how off-the-shelf GPU cards are limited because of how Apple wants to have the Thunderbolt protocol piggybacked on top of an otherwise bog-standard video signal to a display. This is an Apple constraint that limits the availability of GPU cards and raises prices.

What's the real benefit to the customer? FWIW, I'm not suggesting being a luddite who still demands a VGA port, but what I'm really getting at is why Apple is (once again) insisting on being incompatible with even the new industry standards.

That said, if Apple has their own slot format, that is not as big of an issue if they choose something and consistently stick with it across 10-15 years of upgrades. Once there is a long-term flow, there will be a market (if only in newer cards for newer systems being placed in older ones).

Agreed - - but since my one office desk drawer has close to a dozen "Apple dongles" for just the video connections that Apple has used over the years -- it's been a chronic PITA -- I can't see this ever happening at Apple.

The baseline mainstream market hasn't shown any sign of solving the core technical design issue. The PCI SIG wants to be PCIe-pure. The general add-in card market is oblivious to Thunderbolt and points to Rube Goldberg kludges to get around the problem. MXM cards are really somewhat of a pseudo-standard that only a subset of solutions are produced for. Etc., etc.

If the mainstream market were actually solving the issue with a solid standard and Apple were not falling in line, then sure... that would be a pain. The reality is that they aren't trying at all. Apple isn't a "monkey see, monkey do" company that is going to simply pile into the same straitjackets. That is the fast track to becoming a dead, formerly large, leading tech company.

Sure, but back to basic use cases: just what problem is the Thunderbolt protocol really trying to solve?

To what degree is it an entirely self-inflicted wound (and self-imposed constraint) that's as simple as that an egotistical "designer" {bleep hole} wants the PC's video cable and data cable to be merged into one cable?

...The other issue is what Apple alluded to: produce regular updates. If that turns into "I refuse to buy those from Apple" then that really isn't a flexibility issue at all.

True, but this sidesteps an observation that I've made, which is that if Apple chooses to NOT sell these incremental upgrades to _existing_ Mac Pro customers, then it's not really addressing the customer's needs for their own lifecycle management: all that has happened is that Apple has made their new product upgrade cycles cheaper.


Really? You haven't been in forums about USB Type-C cables where folks are yelping about why there are 2-3 different kinds of cables? Some do mainly just power (and only get USB 2), some can do USB, power, and TBv3, and others do TBv3 but not power.

There is a huge segment of users that has been highly conditioned into a mindset of "if it fits, it should work".

As they damn well should. The USB-C mess is utterly inexcusable. Literally EVERY VP who approved that junk should be taken out into San Francisco Bay and keelhauled until they've lost 90% of their body mass.

Internal-to-the-system backups seem like an odd duck. Most of the single points of failure within the system being backed up are exactly the same single points of failure of the backup itself (same power supply, same controller, ...).
It is better than none, but some backup needs to be decoupled.

Internal to the system is for the general convenience of the small/independent shop that lacks a robust IT dept with gobs of automated off-site backups and the like. True, an external is still needed for decoupling, but the point here is that the "clean" desktop aesthetic stems from not having a pile of externals on the desktop for everything, so accommodate putting one of these back into the box.

Heck, in a much more ideal world, I'd like to copy what some of our Windows PC Towers have for running on secure enclaves, which is a Slot built right into the front of the machine that fits a 3.5" HDD, which allows the entire drive to be pulled in/out. But I do recognize that that's a different use case.
 
So just reading the "tea leaves", where does that leave the mMP?

It seems from Apple's POV, the iMac Pro is the "perfect" desktop workstation for 90% of that market (that would use a Mac). Most of the market the iMP is not serving is either 1) people who like to use their own displays (i.e. want nothing to do with a glossy built-in display no matter how many other external displays they can hook up to it), but Apple doesn't care about that (as any computer they offer without a built-in display languishes for years without updates), or 2) users who like to do their own internal upgrades, again, a market Apple doesn't care about or have any interest in encouraging.

The market for *Mac* computers with more power than the iMac Pro is tiny... not the kind of sales figures Apple's interested in.

So is the mMP ultimately mostly for "show"... to keep the tiny niche happy (along with the press that likes to focus on these things) because it has outsized influence in the success of Apple?

It seems that much of the challenge in guessing at what the mMP will be is in figuring out what Apple's "true" motivation for developing the mMP is... and even more challenging, has that motivation shifted at all since they (internally or publicly) committed to a mMP?
There is one other option. Anywhere that the iMP is discussed, you inevitably see post after post of people wanting it for gaming performance (and complaining about crappy boot camp drivers). That ANYONE would be willing to drop $5-8K on a gaming computer shows how underserved that part of the Macintosh market is.
So Apple could well decide that they've captured enough of the high-end niche with the iMP, pivot on their statements a little, and work on a consumer-level i7/i9 box with 1-2 upgrade slots in order to capture most of the rest of the "Hackintosh" community. It would essentially be a large Mac Mini.
Because I'm betting that Apple originally panicked last year when they saw a sudden spike in MacOS installations on hardware in strange configurations that they never actually sold. Ergo, Hackintoshing was going mainstream, and they needed to both offer better high-performance options, and start making architectural changes (the T1 and T2 "security" chips) that they could use to shut down the Hackintosh crowd after they recaptured some large percentage of it.
I would wager that a bunch of people who have switched to Z-series and Precision are rather awe-struck at the cheap next-day in-home/in-office support. No more "haul it to the (so-called) Genius" - just "the tech will be there tomorrow".

And not just enterprise customers - free-lancers and small shops should be thrilled at the warranties. Anyone who gets some income from their PC should be thrilled at real support.

I got the five-year next-day-service contract on the Dell T3610 (same CPU/chipset as hex MP6,1) workstation that I bought to be my home PC. Haven't needed it, don't expect to, but still a great investment.
Had this happen for me. My company bought that support for a Dell workstation, and when the motherboard failed (twice), they had a tech out the next day to my home office for the repair.
 
For example, when you get to work today, take a look at how many inches of "wasted" desk space lies behind your monitor - - I don't think I've had an office in the last 20 years where it wasn't at least four inches. There's an example of how much "thicker" the iMac Pro could have been to help with its thermals (and an access door for customer-replaceable RAM, etc.).

The iMac Pro doesn't particularly have thermal issues. Rampant overheating isn't a big complaint so far.


And when we look at today's use cases, we also can find where there's a ~10" x 14" space that's 'wasted' to the side for where the laptop is docked - - can't really put anything on top of it, either. FYI, this is what's happening in my office now, with the advent of new laptops that use USB-C instead of the old docking stations that we could hide underneath LCD display shelf.

If the Thunderbolt display docking station had a USB Type-C connector, what would be the problem with placing the clamshell, docked MBP on the display's pedestal foot? When the laptop is gone, store the keyboard there. When the laptop arrives, trade the keyboard for the Mac in clamshell mode (plug in, wake up on desk, then store the clamshell under the display).


The problem that Apple has is that there's a basic binary choice they have to make: they either take on the responsibility of a 'box' design that can accommodate the works of other vendors, or they have to take on the responsibility of being that subsystem's developer/manufacturer/supplier, for both HW & SW.

Binary is only necessary if you don't want to do any work. If you're willing to do some engineering, there is a continuum of solutions, not just either zero work or 100% work. It is like saying Apple should either buy every chipset augment that Intel sells or build an x86 clone from scratch. There is a vast range in between that is useful.

The flaw, a bit, with Apple is that they have been binary. Either total control over every GPU possible or very little control. Again, overly simplistic for no good reason. Apple has a "total control" system in the iMac Pro. They don't need another.


Sure, but back to basic use cases: just what problem is the Thunderbolt protocol really trying to solve?

Thunderbolt is trying to tackle a couple of things, and that is a double-edged sword at times. There are two points. However, one of the major things is to be a docking station connection standard that can be used across a wide range of devices. Thunderbolt carries everything that a docked system needs to connect to an immobile desktop device with a power cord that stays stuck in the wall all the time. [If it had remained Light Peak then that probably wouldn't have been true, but the push to add copper to the mix was partially to carry power also. Cheaper also, but power has been a target since the name change.]


A laptop dock historically supplied power, a monitor connection, a network connection, and more ports that stayed with the desktop (keyboard, mouse, sketch pad, etc.).

USB 3.1 alt modes try to cover that but TB does a better job. You can have up to a 5K screen and all the normal mainstream desktop ports on one single cable. Some mishmash of the other USB type-C alt modes can't. And probably won't any time soon with reasonable latency all around.

Between a deskside/under-desk system and the monitor, the incremental push to increase USB speeds makes the usable cable length shorter (USB 3.1 was a decrease; USB 3.2 pragmatically probably is also in more cases; USB 3.2+n would probably be another shrink). If you have power cables at each of the relatively widely separated ends, Thunderbolt skips over the distance gap.

The second point is that Thunderbolt had an alt mode before USB Type-C got alt mode. Thunderbolt is capable of doing pure DisplayPort pass-through (in the TBv3 context that is pragmatically Type-C DisplayPort Alt Mode). So this whole thing about the Thunderbolt protocol is a bit of a misnomer. Thunderbolt ports allow you to go "TB protocol-less". That has always been a feature through all three versions. If you want "pure" video out then plug in a "pure" video out cable and you will get it. If the cable fits, it works.

The question is why folks want to create more contexts where you plug in the appropriate cable that should work and it does not. How is that more intuitive user interaction and better design?


To what degree is it an entirely self-inflicted wound (and self-imposed constraint) that's as simple as that an egotistical "designer" {bleep hole} wants the PC's video cable and data cable to be merged into one cable?

In the context of the predominantly laptop-oriented Mac ecosystem it isn't self-inflicted at all. A wide range of peripherals tap into it. As USB Type-C rolls out in the Windows portion of the PC ecosystem (which is also dominated by laptops) that is only going to grow larger.

I think Apple would be wrong to push every single possible display output into Thunderbolt. A mix of 4 video-output-capable TBv3 Type-C ports and either two HDMI 2.1 or one HDMI 2.1 and an "old school" mini-DisplayPort would work fine for 3rd party monitors. (Since Apple likes symmetry, they'd probably pick two HDMI 2.1.)



True, but this sidesteps an observation that I've made, which is that if Apple chooses to NOT sell these incremental upgrades to _existing_ Mac Pro customers, then its not really addressing the customer needs for their own lifecycle management: all that has happened is that Apple has made their new product upgrade cycles cheaper.

What existing Mac Pro customers? The 2009 MP is on the vintage list. It isn't even getting OS updates, let alone hardware augments. The 2010 is now on the list too, so it is getting no more macOS updates. The 2012 probably is going on the list in October to join them in the same state. So brand new cards for vintage and obsolete hardware? That isn't an Apple thing; no other folks are doing updates for systems in that status either. Pointing back at the vintage stuff and saying Apple has to do it because they did it before is rather doomed to failure.

The MP 2013 pragmatically doesn't have an option. Apple should not pick that as an update baseline at all because, frankly, they didn't appear to put any thought into it being one. So those don't count either.

Apple could start a new line of Mac Pros that could be updated, but the existing base is dead in that dimension. Apple also has the pragmatically TBv3 mandated external standard PCI-e slot enclosure boxes to deal with too. The secondary x16 slot in the Mac Pro probably should have some overlap with the "add in" cards that the rest of the Mac ecosystem will pick up with secondary GPUs. (hence be a standard PCI-e x16 slot ). The normalization would be that it was a secondary ( or 3rd-4th depending upon boxes and contexts ) GPU across the whole line up.

[Old timers extending past vintage status could collect some of that fallout if it happens to work, but that isn't a viable sales market for Apple or really 3rd parties. It is a dead market that is only going to shrivel away at this point.]


As they damn well should. The USB-C mess is utterly inexcusable. Literally EVERY VP who approved that junk should be taken out into San Francisco Bay and keelhauled until they've lost 90% of their body mass.

USB-C simply means people have to use some common sense. I realize that is in short supply these days, but it is pretty simple. Sub-1m cables that are high quality and robust are pretty much universal. Those can cover all the power, TBv3 40Gb/s, and USB 3.1 Gen 2 you want. At limited lengths you get "everything" (over those three). The longer lengths get more special-case, but at the lengths that folks will likely use to connect two items both on their desktop, it isn't that complicated.
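
To illustrate why "if it fits it should work" breaks down, here's a rough sketch of the cable classes involved; the category names and figures are simplified assumptions, since real cables are e-marked and vary by vendor:

```python
# Simplified map of USB Type-C cable classes to what they actually carry.
# Categories and figures are illustrative, not a spec table.
CABLE_CAPS = {
    "usb2_power_only":   {"data": "USB 2.0",       "power_w": 60,  "tb3": False},
    "usb31_gen2":        {"data": "USB 3.1 Gen 2", "power_w": 100, "tb3": False},
    "tb3_passive_short": {"data": "TB3 40Gb/s",    "power_w": 100, "tb3": True},  # sub-1m
    "tb3_active_long":   {"data": "TB3 40Gb/s",    "power_w": 60,  "tb3": True},  # longer runs
}

def will_it_work(cable: str, needs_tb3: bool, needs_power_w: int) -> bool:
    """Same plug on every cable, but the capabilities differ."""
    caps = CABLE_CAPS[cable]
    return (caps["tb3"] or not needs_tb3) and caps["power_w"] >= needs_power_w

print(will_it_work("usb2_power_only", needs_tb3=True, needs_power_w=60))    # False
print(will_it_work("tb3_passive_short", needs_tb3=True, needs_power_w=87))  # True
```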

USB race-to-the-bottom vendors selling stuff that doesn't work or is mainly "cheaper" is an issue.... but that has been true of USB all along. That's not a new Type-C thing.
 
There is one other option. Anywhere that the iMP is discussed, you inevitably see post after post of people wanting it for gaming performance (and complaining about crappy boot camp drivers). That ANYONE would be willing to drop $5-8K on a gaming computer shows how underserved that part of the Macintosh market is.
So Apple could well decide that they've captured enough of the high-end niche with the iMP, pivot on their statements a little, and work on a consumer-level i7/i9 box with 1-2 upgrade slots in order to capture most of the rest of the "Hackintosh" community. It would essentially be a large Mac Mini.
Because I'm betting that Apple originally panicked last year when they saw a sudden spike in MacOS installations on hardware in strange configurations that they never actually sold. Ergo, Hackintoshing was going mainstream, and they needed to both offer better high-performance options, and start making architectural changes (the T1 and T2 "security" chips) that they could use to shut down the Hackintosh crowd after they recaptured some large percentage of it.
It's a good idea to keep in mind that generally speaking, the "forum" is not representative of the wider market. I'm not seeing any evidence of your suggestion that people are dropping $5K+ for a gaming computer... it's not uncommon for people to be into gaming, or have children who are, so it makes far more sense that Video Guy buys iMP for work but wants to see how it plays Call of Duty (or again, their younger offspring do... don't forget that a large portion of the people who post here are teens and college-aged kids).

That being said, in this case, we KNOW there's a HUGE market for the machine you describe. The big box brands sell tens of millions of these type of machines every year. It's the Mac I've been pining for since forever. And yet it's the computer Apple has never made. It's practically the opposite of every computer Apple has offered the last decade. So what would give you reason to believe they have ANY interest in pursuing such a machine now when ALL the evidence of their product development would suggest otherwise? o_O
 
That being said, in this case, we KNOW there's a HUGE market for the machine you describe. The big box brands sell tens of millions of these type of machines every year. It's the Mac I've been pining for since forever. And yet it's the computer Apple has never made. It's practically the opposite of every computer Apple has offered the last decade. So what would give you reason to believe they have ANY interest in pursuing such a machine now when ALL the evidence of their product development would suggest otherwise? o_O

Nothing gives me a reason to expect that. But if I'm trying to figure out what *I* would do next in their place, it wouldn't be to make a headless iMac Pro, and split that niche. It would be the mid-to-high range headless consumer Mac that users have been wanting for years, and as you point out, sell by the millions from other manufacturers.

Apple doesn't want that market, because it's low-margin and could cannibalize some iMac sales. But it's precisely the market slice that could take them from 10% of the market to 25% very easily. Even if they don't make gobs of money on such a machine, it would massively expand their market share and attract more developers/businesses away from Windows.

Hell, if I were them, I'd sell T2-protected motherboards to Dell or HP, and let them build/sell/support the thing.
 
Fixed that for you. ;)

Standard SSDs from Samsung already have AES encryption built in. For the other functions, Apple could put the T2 on a standard PCIe card.
I meant "protected", as in protecting Apple from having Hackintoshers be able to run MacOS without it. :p

Though the T2 chip seems to be the root of some of the most annoying bugs in the iMP.
 
I meant "protected", as in protecting Apple from having Hackintoshers be able to run MacOS without it. :p

Though the T2 chip seems to be the root of some of the most annoying bugs in the iMP.
Bingo! The T2 is a Trojan horse - providing a new level of the vendor lock-in and forced obsolescence that Apple is famous for.
 
Apple certainly fails that test today - if you're looking at Apple OSX systems.

Apple is clearly on top as far as forcing people to buy lots of expensive dongles in order to actually use their laptops, however.
Naw, Apple’s just fat and lazy. They need somebody to chase ‘em around the block a bit to clarify their priorities. Microsoft could, if they stopped trying to be Apple and remembered what once made them strong. Google knows, but that means they need to drink Microsoft’s milkshake while their back is still turned.
 
Nothing gives me a reason to expect that. But if I'm trying to figure out what *I* would do next in their place, it wouldn't be to make a headless iMac Pro, and split that niche. It would be the mid-to-high range headless consumer Mac that users have been wanting for years, and as you point out, sell by the millions from other manufacturers.

Apple doesn't want that market, because it's low-margin and could cannibalize some iMac sales. But it's precisely the market slice that could take them from 10% of the market to 25% very easily. Even if they don't make gobs of money on such a machine, it would massively expand their market share and attract more developers/businesses away from Windows.

Hell, if I were them, I'd sell T2-protected motherboards to Dell or HP, and let them build/sell/support the thing.
Bingo! The T2 is a Trojan horse - providing a new level of the vendor lock-in and forced obsolescence that Apple is famous for.
And it caps disk I/O by being only PCIe x4.
 
The iMac Pro doesn't particularly have thermal issues. Rampant overheating isn't a big complaint so far.

The reviews I've read on the iMac Pro have revealed a couple of interesting points on its engineering:

* Thermal management is pretty darn good. Not 100%, but a ~98% solution.

But...

* Part of this "98%" was achieved by down-rating subsystems performance

* Part of this was also achieved through an expensive cooling design, which may have also been what was responsible for killing the RAM access door.

The big-picture ramification of all of this is that the current iMac Pro's design has ZERO thermal headroom for future iterations (updates). That's effectively the same design flaw that the tcMP ran into: since it leaves effectively no room for future redesigns, it's a design dead end.


If the Thunderbolt display docking station had a USB Type-C connector, what would be the problem with placing the clamshell, docked MBP on the display's pedestal foot?

Tried this already. With the likes of Lenovo, Dell, they've been designed such that the user "slides" the laptop in until it hits a stop, then drops it down & pushes it to engage the connection port (typically, it's on the middle of the bottom of the laptop). Pretty much a 2-3 step process. Removal is a release lever on the dock, then grab the laptop & pull it out. In engineering design terminology, this is employing a "blind" connector interface. In layman's terms, "blind" means that I don't need to see the point of connection to make/break the connection - they typically auto-align and so forth.

In contrast, I've not yet seen any dock that uses USB-C in a similarly "blind" configuration. As such, to pragmatically use an under-display shelf system, the notional operator would need to:
(a) move the keyboard out of the way (or put the laptop down on top of the keyboard),
(b) manually fish out the (loose) USB-C cable from where it was stashed underneath,
(c) plug the USB-C cable into the laptop,
(d) push the now-connected laptop into the shelf,
(e) manage (push clear) the "bird nest" loop of the extra USB-C cable,
(f) put the keyboard back into its place.

True, this isn't end-of-the-world horrible, but it also isn't any cleaner than the historical Windows Laptop solution of the last decade.

When the laptop is gone, store the keyboard there. When the laptop arrives, trade the keyboard for the Mac in clamshell mode (plug in, wake up on desk, then store the clamshell under the display).

With the conventional blind dock interface, it depends on how tall the shelf unit is for if you need to move the keyboard from its normal operating position on your desk. FWIW, you can see a (old, pretty lame) video example here (skip to ~1:08).

Binary is only necessary if you don't want to do any work. If you're willing to do some engineering, there is a continuum of solutions, not just either zero work or 100% work. It is like saying Apple should either buy every chipset augment that Intel sells or build an x86 clone from scratch. There is a vast range in between that is useful.

Okay, fair enough from a pedantic point of view. However, these two do represent the endpoints, and in my defense, I did say "basic binary choice".

The flaw, a bit, with Apple is that they have been binary. Either total control over every GPU possible or very little control. Again, overly simplistic for no good reason. Apple has a "total control" system in the iMac Pro. They don't need another.

And Apple has made this mistake repeatedly in the past, such as how they've been slow to manage costs by adopting existing industry standards... internal SCSI hard drives being a classic example, but similarly today, their proprietary version of the M.2 SSDs.

Thunderbolt is trying to tackle a couple of things, and that is a double-edged sword at times. There are two points. However, one of the major things is to be a docking station connection standard that can be used across a wide range of devices. Thunderbolt carries everything that a docked system needs to connect to an immobile desktop device with a power cord that stays stuck in the wall all the time....

...The question is why folks want to create more contexts where you plug in the appropriate cable that should work and it does not. How is that more intuitive user interaction and better design?

All reasonably good points, although the experience I'm having today with a new Dell laptop with USB-C is that I'm stuck using the laptop's Ethernet port too, because the Ethernet in the USB-C dock has its own (+different) MAC address which also requires yet another (+different) Windows driver to be able to use it ... granted, this probably isn't a big deal for a home user, but our Windows 10 IT folks haven't broken the code on just how to get this driver installed & working through their security protocols (3 months & counting). As such, I'm on a USB-C laptop with dock, but I still have to plug in more than just the USB-C cable (neither of which are a blind connection, either).

In the context of the predominantly laptop-oriented Mac ecosystem it isn't self-inflicted at all. A wide range of peripherals tap into it. As USB Type-C rolls out in the Windows portion of the PC ecosystem (which is also dominated by laptops) that is only going to grow larger.

I agree that the Windows USB-C market will grow, but will the Mac USB-C market grow too? If and only if they are 100% interchangeable. So then are they really 100% interchangeable in all aspects? Personally, I don't know - - but knowing Apple, I doubt it.

I think Apple would be wrong to push every single possible display output into Thunderbolt. A mix of 4 video-output-capable TBv3 Type-C ports and either two HDMI 2.1 or one HDMI 2.1 and an "old school" mini-DisplayPort would work fine for 3rd party monitors. (Since Apple likes symmetry, they'd probably pick two HDMI 2.1.)

FWIW, my thought process here is really bearing down on how Apple will probably - as a policy - demand that they'll only support PCIe GPU cards that include TBv3, and actively disallow all others.

What existing Mac Pro customers? The 2009 MP is on the vintage list. It isn't even getting OS updates, let alone hardware augments. The 2010 is now on the list too, so it is getting no more macOS updates. The 2012 probably is going on the list in October to join them in the same state. So brand new cards for vintage and obsolete hardware?

Which explains how & why for the past three (3) years, I've not been able to find any Apple "OEM" graphics cards for sale on the Apple Store for either my 2009 (before it went vintage) or my 2012 (which supposedly still is supported)?

Point here is that Apple can yammer about 'modular' stuff, but that doesn't mean that they're going to sell upgrade components to the Mac Pro customer who bought his hardware two months before Apple shipped this newer modular GPU card or whatever.

That isn't an Apple thing; no other folks are doing updates for systems in that status either.

I ended up buying a 3rd Party GPU card in 2017, because there's effectively not been any Apple parts for sale (and I'm not about to drag a cMP down to the local Apple Store to merely ask for a 'repair').

Pointing back at the vintage stuff and saying Apple has to do it because they did it before is rather doomed to failure.

Not my intent: my point is that they didn't even do this when their cMPs weren't vintage yet, so the current (well, April 2017) promises of 'modularity' don't sound credible to me, at least in the terms by which I interpret what "modular" means (i.e., centric to my customer needs - - I'm not about to disrupt a high-value worker by sending his one-year-old mMP out to Apple for ~two weeks to have it upgraded ... even if Apple were to offer this service - - which I also doubt)

USB-C simply means people have to use some common sense. I realize that is in short supply these days, but it is pretty simple. Sub-1m cables that are high quality and robust are pretty much universal. Those can cover all the power, TBv3 40Gb/s, and USB 3.1 Gen 2 you want. At limited lengths you get "everything" (over those three). The longer lengths get more special-case, but at the lengths that folks will likely use to connect two items both on their desktop, it isn't that complicated.

USB race-to-the-bottom vendors selling stuff that doesn't work or is mainly "cheaper" is an issue.... but that has been true of USB all along. That's not a new Type-C thing.

Agreed, the race-to-the-bottom has been around for years, but philosophically, that just makes its propagation into USB-C (where in some ways it seems to be worse?) less excusable: the vendors don't have the excuse of this being a new problem - especially with Apple's "It Just Works" marketing history.
 
...
The big-picture ramification of all of this is that the current iMac Pro's design has ZERO thermal headroom for future iterations (updates). That's effectively the same design flaw that the tcMP ran into: since it leaves effectively no room for future redesigns, it's a design dead end.

It isn't a dead end. Future components that fit that envelope will extremely likely be faster. So the iMac Pro will cover even more workloads in the future than it does now. That's all it needs to do. To have a future it just needs a path. 420-450W is a decent amount of power. Moore's law isn't completely dead. 7nm is coming online in the next year or so, and 5nm is being queued up in the labs.

Having a hard stopping point on headroom only opens the door for a revised Mac Pro that doesn't have one. Apple could assign the next Mac Pro an 800-900W headroom. The iMac Pro is "bigger" than the iMac, and the Mac Pro would be bigger than both.

Getting back to the slot issue the thread starts off with: if adding another 300+ watts to the thermal headroom, why wouldn't Apple add an x16 PCIe slot to use it? At least one. There are more than a few items with that slot (and 8-pin power) that fit that range. That slot would be a primary reason why someone didn't buy the iMac Pro instead.

On the display GPU side: if you just take the iMac Pro GPUs (driver, boot, etc. work already done) and put a bigger cooler on them, you can take the same baseline design and crank it back up to the "normal" range with a different cooling system. A hefty chunk of the GPU R&D is spread over both machines. If the budget is tight, then design it back into the same video subsystem as on the iMac Pro (4 TBv3 ports out). Detached from the logic board perhaps, but the same hookups, just via some connector(s).

Tried this already. With the likes of Lenovo, Dell, they've been designed such that the user "slides" the laptop in until it hits a stop, then drops it down & pushes it to engage the connection port (typically, it's on the middle of the bottom of the laptop). ...

True, this isn't end-of-the-world horrible, but it also isn't any cleaner than the historical Windows Laptop solution of the last decade.

Form over function docking station. Billions of folks are familiar with "docking" their mobile phones. It is basically the same. Plug in the cord and you have power (and data if you want it). If not looking for proprietary lock-in, then a cord is simpler and well within the vast majority of users' knowledge of use. Even laptop users' experience away from the desk: plug in the cord to power the laptop and use it.

With the conventional blind dock interface, it depends on how tall the shelf unit is for if you need to move the keyboard from its normal operating position on your desk. FWIW, you can see a (old, pretty lame) video example here (skip to ~1:08).

The metal plate at the bottom of the Thunderbolt Display is there just to keep the thing from tipping over. If the keyboard is in the normal usage position then that pedestal foot is empty. But it is a nominal place to "store" the keyboard out of the way if you need to use the prime space on the desk to do old-fashioned writing/drawing on paper, or to read.

FWIW, my thought process here is really bearing down on how Apple will probably - as a policy - demand that they'll only support PCIe GPU cards that include TBv3, and actively disallow all others.

The eGPUs being TBv3 only? Probably, but technically there are reasons for that. One, most modern GPU cards are PCIe v3; pragmatically, TBv2 is PCIe v2, so it is a laggard in performance. The gap only gets worse if PCIe v4 GPUs arrive later in the future. Second, eGPU was never officially adopted in the TBv2 standard (is Apple going to support something the standard doesn't, if it serves no huge added value?). Third, if Apple eventually does something like mapping the display back in through the host/primary GPU, they'll need the bandwidth for larger displays.
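
Rough numbers behind the "laggard" point; the per-lane rates come from the PCIe specs, but the comparison itself is just a sketch, not a benchmark:

```python
# Effective per-lane PCIe data rates: raw transfer rate times encoding efficiency.
GBPS_PER_LANE = {
    "pcie2": 5.0 * 8 / 10,     # 8b/10b encoding    -> 4.0 Gb/s per lane
    "pcie3": 8.0 * 128 / 130,  # 128b/130b encoding -> ~7.88 Gb/s per lane
}

tb2_link = 20                            # TBv2 link, Gb/s (shared with DisplayPort traffic)
tb3_link = 40                            # TBv3 link, Gb/s
gpu_slot = GBPS_PER_LANE["pcie3"] * 16   # ~126 Gb/s for a full x16 PCIe v3 slot

print(f"x16 slot: {gpu_slot:.0f} Gb/s | TBv3: {tb3_link} Gb/s | TBv2: {tb2_link} Gb/s")
# Even TBv3 gives an eGPU only about a x4 PCIe v3 link's worth of tunnel;
# TBv2 is half that again, which is why it lags for GPU work.
```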


Point here is that Apple can yammer about 'modular' stuff, but that doesn't mean that they're going to sell upgrade components to the Mac Pro customer who bought his hardware two months before Apple shipped this newer modular GPU card or whatever.

It kind of boils down to whether Apple can learn from mistakes or not. They tried to punt the upgrade thing completely with the MP 2013. That did not work for a sizable subset of the Mac Pro market. So if they want to address that subsegment of the market, they are going to have to do something different this time than they have done in the past (because the past really did not work well. Either style.).

The pool of TBv3 eGPU Macs growing larger over the next several years may help. Even if a small fraction of those use eGPUs, that could double the pool of "modular GPUs" you could sell to Mac users. (e.g., Mac Pro about 1-2% ... eGPU usage of the Mac market about 1-2% makes for 2-4%. 4% of 12M is 480K, a market size which is probably big enough to be healthy. 10% of that (upgraders in a specific year) is 48K.)
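
Spelling out that estimate; the share and upgrade-rate inputs are the guesses from the paragraph above, not market data:

```python
# Back-of-the-envelope sizing of the "modular GPU" market.
mac_units_per_year = 12_000_000
mac_pro_share = 0.02   # "about 1-2%"; top of the range
egpu_share    = 0.02   # assumed eGPU adoption across the rest of the Mac base

addressable = mac_units_per_year * (mac_pro_share + egpu_share)
upgraders   = addressable * 0.10   # fraction buying an upgrade card in a given year

print(f"addressable: {addressable:,.0f}, upgraders/year: {upgraders:,.0f}")
# -> addressable: 480,000, upgraders/year: 48,000
```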


One of Apple's primary problems is the perception that they don't do anything; that the Mac Pro is forgotten most of the time. Iterating more often on GPUs is easier to do than iterating on the whole system. It is a smaller, more focused logic board. They couldn't build every GPU card that anyone wanted, but if making 1-2 cards every 1-2 years is way too hard... it's questionable whether they should be in the Mac Pro business at all. As pointed out above, they could just shift iMac Pro GPUs into a wider thermal zone as they update that system. They just do two iterations on the same GPU: one soldered to the main logic board (iMPro) and another on a card (MPro). If Apple can't pull that off they are in the lame category. Extremely lame. (The recently announced iPad takes the iPhone 7 SoC and a legacy screen with an iPad Pro digitizer and sells it as a new system. Flipping between logic board and card is about the same thing: repackage tech you already know.)
 
It isn't a dead end. Future components that fit that envelope will extremely likely be faster. So the iMac Pro will cover even more workloads in the future than it does now. That's all it needs to do.
The same had been said for the nMP when it came out. Yes, there are thermal limits but Apple decided not to invest in upgrades. Saying that it was a dead end was to cast blame elsewhere. Apple made a choice not to produce new video cards in the trash can form factor. They decided not to upgrade to TB3. They decided not to upgrade to new CPUs and chipsets. Apple's management team just wasn't interested. They simply had no interest in the MP market.

So just like the MP, any future iMacPro upgrades will be an afterthought of a marketing decision. If they decide not to then the thermal issues or something else will be pointed to as the reason.

Jobs could make a decision to cut off old tech but not Cook. The languishing list: MP, mini, Airport/Time Capsule, iPod, Air.
 
The big-picture ramification of all of this is that the current iMac Pro's design has ZERO thermal headroom for future iterations (updates). That's effectively the same design flaw that the tcMP ran into: since it leaves effectively no room for future redesigns, it's a design dead end.

Well, future redesigns are only impossible from that aspect if you assume all future GPUs will use more power. The most powerful iMac Pro GPU configuration is the AMD Radeon Pro Vega 64, which has a TDP of 225W. That's fairly high, so I don't think you can say that it is impossible to have future powerful GPUs that use 225W of power or less.

For example my GTX 980 uses far less power than my GTX 680 did, despite being a massively faster card.

The 6,1 was different because even in its fastest configuration (D700), the TDP was limited to a measly 108W. That's way too low a limit, and thus they had painted themselves into a corner. On the other hand, finding powerful cards in the 225W range is no problem.
 
They couldn't build every GPU card that anyone wanted, but if making 1-2 cards every 1-2 years is way too hard... it's questionable whether they should be in the Mac Pro business at all.

Apple's existing policy has been that they'll only upgrade the GPU if there is a new CPU available. So they only get upgraded together. This is a problem considering Intel's slower cycle, and it doesn't work well with Apple being the sole supplier of GPU upgrades. They'll at least do GPU upgrades with MHz bumps if there isn't a new architecture (like the 2012 Mac Pro, and some MacBook Pro revisions), but if AMD has a new GPU now and the next Intel cycle is another year away, it's a problem.

Apple just isn't set up for regular GPU upgrades. It's why I believe they could ship their own GPU slot, but why I don't believe they could make that a successful product.

And history is a good guide too. Even on the cMP, Apple's GPU upgrade options were pretty meh.

I also think it was a mistake to couple the GPU to some main system cooler. It makes it harder to upgrade. The GPUs should keep their own coolers.
 
The same had been said for the nMP when it came out.

Because it is generally true. Take a look at the tables here in this article covering the RX500 series release.

AMD Radeon R9 280 250W --> RX 580 195W
AMD Radeon R9 270 150W --> RX 570 150W
AMD Radeon R7 260 95W --> RX 560 60-80W
AMD Radeon R7 250 65W --> RX 550 50W
https://www.anandtech.com/show/11280/amd-announces-the-radeon-rx-500-series-polaris/2

However, the top end of the AMD lineup went "expansionist" ...

AMD Radeon R9 290 250W --> AMD Fury 275W --> AMD Vega 64 295W

https://www.anandtech.com/show/9421/the-amd-radeon-r9-fury-review-feat-sapphire-asus
https://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review


Nvidia generally did better across a broader range.
The Apple A-series GPUs have done better in the iPhone-bounded target range.


Yes, there are thermal limits but Apple decided not to invest in upgrades. Saying that it was a dead end was to cast blame elsewhere. Apple made a choice not to produce new video cards in the trash can form factor.

Although AMD's top-end GPUs went up, Apple could have downclocked them to fit (the incremental rise in the top end was largely due to out-of-the-box overclocking; AMD reference designs pushed the GPUs out of the reasonable power/performance tradeoff curve because they were chasing Nvidia bragging rights). The other problem was time. That D300-level 570 equivalent was stuck in time from 2013 to 2016 (and stuck in time on OpenCL too).

The bigger thermal issue for the Mac Pro turned on the two GPUs being coupled to the same heat sink that the CPU used (and to each other). That was borked at the top-end range. They needed a new case to band-aid their solution (to move along a bit better) or a whole new basic case design.

They decided not to upgrade to TB3.

Apple rolled out TBv3 closer to a year after TBv3's launch because the initial set of solutions was a bit borked. Power management was off, and Apple laptops at the top end were going to need higher-than-average power.


They decided not to upgrade to new CPUs and chipsets. Apple's management team just wasn't interested. They simply had no interest in the MP market.

It appears when it got to the "new case" option that they opened the door to the iMac Pro (even higher integration). The Mac Pro went from 2010 to 2013 with no significant upgrade, so it is highly doubtful Apple had any plans on doing anything in 2014 and 2015 anyway (even with no thermal problems). It isn't just the Mac Pro that was lacking updates; several systems in the Mac lineup went comatose too.


So just like the MP, any future iMacPro upgrades will be an afterthought of a marketing decision. If they decide not to then the thermal issues or something else will be pointed to as the reason.

Part of the "afterthought" is a lower targeted upgrade pace. Users are upgrading at a slower pace too. One WWDC I think Schiller made a dig at Windows by saying the average Windows system is something like 5-6 years old and the average Mac system is 3-4 years old. I think that is partially driven by Apple focusing on the ones that do update quicker and putting less effort in the slower moving ones. Sell more to the people who buy quicker.

This whole protest movement by a lot of old-school Mac Pro users was "boycott the MP 2013, and when the sales tank we'll get what we want." Well, actually, no. Buy less and Apple responds even slower. You might eventually get a response, but you're also getting an elongated upgrade cycle too.


Jobs could make a decision to cut off old tech but not Cook. The languishing list: MP, mini, Airport/Time Capsule, iPod, Air.

Jobs was in charge when the overall PC market generally wasn't sliding backwards or stagnant. For the Mac Pro hiatus from 2010 to 2013, there is a very good chance he had a hand in throttling or killing it. The "5 year plan" for the Mac Pro in 2008, 2009, 2010 would have covered those gap years.

The Mac market isn't immune to the general trend and has dropped down to the 1-2% growth zone on occasion. If Cook blindly used the "if it isn't growing, kill it" approach Jobs used, he would wipe out most of the Mac lineup. Wipe out too many models and the product line decline only gets steeper.
 
It isn't a dead end. Future components that fit that envelope will extremely likely be faster.

True, but that was also true of the 2013 Mac Pro .. and we know how that turned out.

What much of this still hearkens back to is the design philosophy. If you have a system that benchmarks today at, say, {1, 1, 1}, do you choose to put it into a box that measures {1.001, 1.001, 1.001} so that all future design iterations have to meet all of those constraints, or do you build the box to be {1.25, 1.25, 1.25} so that even if your objective is to take advantage of future components being better, if they don't hit *all* of the points, it doesn't result in a catastrophic failure - - i.e., you're much more likely to successfully build & ship a product that's {0.8, 0.7, 1.2} when your box wasn't arbitrarily set at {1, 1, 1}.
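
A tiny sketch of that argument, using the hypothetical normalized axes from the paragraph above (thermal, power, volume):

```python
# The "{1, 1, 1} box" argument: a future part must fit EVERY axis of the
# enclosure's envelope; one overrun sinks the upgrade.
from typing import NamedTuple

class Envelope(NamedTuple):
    thermal: float
    power: float
    volume: float

def fits(part: Envelope, box: Envelope) -> bool:
    return all(p <= b for p, b in zip(part, box))

tight_box = Envelope(1.001, 1.001, 1.001)  # designed right up to the tolerances
roomy_box = Envelope(1.25, 1.25, 1.25)     # ~25% slack on every axis

next_gen = Envelope(0.8, 0.7, 1.2)         # cooler and leaner, but bulkier
print(fits(next_gen, tight_box))           # False -- the volume axis kills it
print(fits(next_gen, roomy_box))           # True
```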

So the iMac Pro will cover even more workloads in the future than it does now. That's all it needs to do. To have a future it just needs a path. 420-450W is a decent amount of power. Moore's law isn't completely dead. 7nm is coming online in the next year or so, and 5nm is being queued up in the labs.

Sure, although the classical history of performance-centric upgrades is that there's always the demand for more, and attributes such as thermal, size, cost ... often take second place. If the souped-up card has 40% more horsepower, but comes in at 465W, the power user customers will want that horsepower bump, even when their system is technically only rated for 450W max dissipation...

On the display GPU side: if you just take the iMac Pro GPUs (driver, boot, etc. work already done) and put a bigger cooler on them, you can take the same baseline design and crank it back up to the "normal" range with a different cooling system. A hefty chunk of the GPU R&D is spread over both machines. If the budget is tight, then design it back into the same video subsystem as on the iMac Pro (4 TBv3 ports out). Detached from the logic board perhaps, but the same hookups, just via some connector(s).

Sure, this is a good management of R&D resources approach and time will tell if it is going to be employed or not. However, the other half of it is the "..and what's the next step up?" question of higher performance for that customer niche.


Form over function docking station. Billions of folks are familiar with "docking" their mobile phones...

Yeah, including the regular wooden toothpick to clean the lint out of the recessed Lightning port.

The metal plate at the bottom of the Thunderbolt Display is there just to keep the thing from tipping over. If the keyboard is in the normal usage position then that pedestal foot is empty. But it is a nominal place to "store" the keyboard out of the way if you need to use the prime space on the desk to do old-fashioned writing/drawing on paper, or to read.

True, although this is still missing my point. My point was that the desk space behind my desktop LCD display is functionally wasted, so a "bigger box" doesn't represent a disadvantage.

FWIW, to put some IRL numbers on this, my current setup ('workgroup manager') uses one of the newer (narrower) 24" wide cubicle desks where I have the PC set up, with a generic 25" Samsung LCD display. "Wallhanger" kits to mount a display on a cubicle wall aren't allowed, so the display is shoved back until its pedestal is hitting the cubicle wall. Thus arranged, the pedestal dictates that my side of the LCD display is 5" away from said wall. Since the LCD display is ~1" thick, this means that if this were an iMac, its case could be 4" thicker with basically no negative consequences for desktop space utilization, since the space it is occupying is wasted space in this office setup: if not for ease of access to a few buttons, I could put a small form factor Dell desktop behind here too (I'm in the process of getting one of those set up in my other office; ETA next week).


... The eGPUs being TBv3 only? Probably, but technically there are reasons for that. One, most modern GPU cards are PCIe v3; pragmatically, TBv2 is PCIe v2, so it is a laggard in performance. The gap only gets worse if PCIe v4 GPUs arrive later in the future. Second, eGPU was never officially adopted in the TBv2 standard (is Apple going to support something the standard doesn't, if it serves no huge added value?). Third, if Apple eventually does something like mapping the display back in through the host/primary GPU, they'll need the bandwidth for larger displays.

Still is all a mess, which was my basic point. What's the tangible benefit of making things harder for yourself?

It kind of boils down to whether Apple can learn from mistakes or not. They tried to punt the upgrade thing completely with the MP 2013. That did not work for a sizable subset of the Mac Pro market. So if they want to address that subsegment of the market, they are going to have to do something different this time than they have done in the past (because the past really did not work well. Either style.).

I'm increasingly concerned that Apple isn't learning from their mistakes.


The pool of TBv3 eGPU Macs growing larger over the next several years may help...

I seem to recall the same mantra being claimed for TB1 ... and then again for TB2 ...

One of Apple's primary problems is the perception that they don't do anything; that the Mac Pro is forgotten most of the time. Iterating more often on GPUs is easier to do than iterating on the whole system. It is a smaller, more focused logic board. They couldn't build every GPU card that anyone wanted, but if making 1-2 cards every 1-2 years is way too hard... it's questionable whether they should be in the Mac Pro business at all.

True, although much of the hurt on these lack-of-iterations is functionally self-inflicted because of their effective opposition to designing a system that can just use off-the-shelf Windows PC GPU cards. Even if it comes back around to just writing drivers (because Nvidia won't, or whatever), I should be able to go to Apple as the OEM to download the stuff, rather than to rely on the "shoestring resources" of a random 3rd Party such as MacVidCards in his day.


As pointed out above they could just shift iMac Pro GPUs into a wider thermal zone as they update that system.

Sure, but to do that will take a new case & fan design, since the current one was left with zero headroom - - and that's precisely the sort of thing that I'm pointing at as profoundly BAD Corporate Leadership in lifecycle engineering management ... they've already painted themselves into a corner with the iMac Pro where their "upgrade" path has already been constrained - - unnecessarily - - for future tech to provide higher GPU performance only within the same thermal envelope *AND* same power envelope *AND* same physical envelope ... {1, 1, 1}.
Well, future redesigns are only impossible from that aspect if you assume all future GPUs will use more power...

True, but what I'm really saying is that when GPU horsepower growth is a future goal, to design your system to only be able to accommodate upgrades that use less power / dissipate less heat, you've introduced additional engineering constraints that you'll have to pay for.

The 6,1 was different because even in its fastest configuration (D700), the TDP was limited to a measly 108W. That's way too low a limit, and thus they had painted themselves into a corner...

And they paid for it.

On the other hand, finding powerful cards in the 225W range is no problem.

And if the 6,1 had been designed with a 250W TDP to accommodate future growth options, we obviously would have had upgrades (and not be having this discussion in the first place).
 
Sure, but to do that will take a new case & fan design, since the current one was left with zero headroom - - and that's precisely the sort of thing that I'm pointing at as profoundly BAD Corporate Leadership in lifecycle engineering management ... they've already painted themselves into a corner with the iMac Pro where their "upgrade" path has already been constrained - - unnecessarily - - for future tech to provide higher GPU performance only within the same thermal envelope *AND* same power envelope *AND* same physical envelope ... {1, 1, 1}
Given that the iMac Pro runs silently 99.6% of the time, how can you say that they're at their maximum heat dissipation already? All they would have to do is set the fan to normal i7 iMac noise levels in order to shift more heat than they currently do. That also ignores the fact that they have full control over the iMac's case design; the back could be made deeper, so there's nothing preventing them from modifying it with bigger fans or even a liquid cooling system.

When you’e not constrained to fitting everything into a tube of a specific size, all sorts of incremental changes are possible.
 