goMac was saying that fingers at Apple are pointed at AMD for very hot GPUs.

My take on the finger pointing: Everyone at Apple is just as frustrated as everyone here, they really wanted to do an update, they had a dumb case design, they unrealistically thought GPUs would get cooler, and AMD over promised.

There are fingers being pointed at the case design too, but as the interview showed, Apple management is still having trouble coming to terms with that.
 
The adapters aren't quite that cheap, but another problem is once you've moved off of TB2 you're just going to have a bunch of wasted PCIe lanes.

Well, if you wait around for a 200-series-like PCH, then you have lanes to waste. ;-) 16-24 on the PCH chipset. http://www.anandtech.com/show/10959...ration-kaby-lake-i7-7700k-i5-7600k-i3-7350k/7

At this point, targeting the Xeon E5 v4 and its associated PCH limitations makes no sense. No product until at least 2018 means baseline assumptions of 200-series capabilities are, if anything, conservative.

I'd be suspicious there isn't a firmware/GPIO issue with having both TBv3 and TBv2 controllers to support in the boot context. Even if there isn't, it is just less complexity to do only one, and (since it's 2018+) a TBv3 one.

What would be practical, but also un-Apple-like, would be to provide some mini-DisplayPort-only ports. More than a fair number of Mac Pro 2009-2013 users are using that connector for external monitors. Just "plug in and use" would be viable. Same with having some Type-A USB ports. There are tons of deployed, sunk-cost external "modules" to connect to. No PCIe lanes are "wasted" because none are used.
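A rough sketch of that port-vs-lane argument, with made-up counts (the controller split and lane figures below are my assumptions for illustration, not anything Apple has specified):

[CODE=python]
# Hypothetical back-panel budget: mDP hangs off the GPU's DisplayPort
# outputs and Type-A USB hangs off the PCH's built-in xHCI, so neither
# consumes extra PCIe lanes; each Thunderbolt controller needs an x4 uplink.
ports = {
    # port type: (number of ports, extra PCIe lanes consumed)
    "mini-DisplayPort (direct from GPU)": (2, 0),
    "USB Type-A (PCH xHCI)":              (4, 0),
    "Thunderbolt 3 (one x4 controller)":  (2, 4),
}

extra_lanes = 0
for name, (count, lanes) in ports.items():
    print(f"{name:36s} ports={count}  extra lanes={lanes}")
    extra_lanes += lanes

print("Extra PCIe lanes spent in this sketch:", extra_lanes)
[/CODE]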

That would just leave the folks with TBv2 equipment with these somewhat expensive adapters to buy/use. However, there are probably far fewer of them than folks with DP monitors in this user base.


P.S. A TBv3 controller capped internally with TB2-to-3 adapter electronics... yeah, that would be a waste. Not only in cost but in expectations, since the legacy DisplayPort pass-through mode doesn't work on it. So the expectation mismatch at the system's ports would be extremely high and not TB-standard.
OK, I was under by a small amount.

:p

$73 from Amazon

A 600+% increase from $10 .... yeah, small. :)
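(For anyone checking the math, a couple of lines of Python, using the ~$10 figure from earlier in the thread:)

[CODE=python]
# Percent increase from an assumed ~$10 adapter to the $73 one above.
old_price, new_price = 10.0, 73.0
print(f"{(new_price - old_price) / old_price * 100:.0f}% increase")  # -> 630% increase
[/CODE]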
My take on the finger pointing: Everyone at Apple is just as frustrated as everyone here, they really wanted to do an update, they had a dumb case design, they unrealistically thought GPUs would get cooler, and AMD over promised.

If they were really testing upgrades to the exact same case, what they have done doesn't make a lot of sense. Yes, they could not have done a full upgrade, but the entry/mid-level cards should have fit. They were under the top end of the "GPUs would get cooler" assumption of the original design. The RX 570 and RX 580 fit even under the original assumptions. The 4-year gap between what they are still shipping and what is available now means they could hit the same to slightly better performance with the next-level-down cards. Instead of labeling them D310 and D510, just label them D510 (RX 570) and D710 (RX 580).

That would have been some action to aid the 'mea culpa' discussion: "Everything we wanted to do didn't work (so we're going in a different direction in the future), but here is part of what we did get working." That's immensely better than "Oh well, down the rabbit hole for another year... we have nothing, absolutely nothing, working."

Users are repeatedly "blowing up" their dual D700 systems. How about: "Hey, try these D710s while we work on a much better overall system solution. Those will keep your investment going until we eventually figure this out." Apple management can't see the value in that? Seriously?



There are fingers being pointed at the case design too, but as the interview showed, Apple management is still having trouble coming to terms with that.

Gripe ... how about a Fortune 100 computer company (the Mac business) that manages to get out only one new product in all of 2016? I seriously kind of wonder if this group would know good engineering if it jumped up and bit them on the ass.

Clue stick. .... they aren't trying very hard to do upgrades.
 
If they were really testing upgrades to the exact same case, what they have done doesn't make a lot of sense. Yes, they could not have done a full upgrade, but the entry/mid-level cards should have fit. They were under the top end of the "GPUs would get cooler" assumption of the original design. The RX 570 and RX 580 fit even under the original assumptions. The 4-year gap between what they are still shipping and what is available now means they could hit the same to slightly better performance with the next-level-down cards. Instead of labeling them D310 and D510, just label them D510 (RX 570) and D710 (RX 580).

That would have been some action to aid the 'mea culpa' discussion: "Everything we wanted to do didn't work (so we're going in a different direction in the future), but here is part of what we did get working." That's immensely better than "Oh well, down the rabbit hole for another year... we have nothing, absolutely nothing, working."

Users are repeatedly "blowing up" their dual D700 systems. How about: "Hey, try these D710s while we work on a much better overall system solution. Those will keep your investment going until we eventually figure this out." Apple management can't see the value in that? Seriously?

Maybe they did, and just felt it would not sell anywhere near enough to make it worth it. They were asked whether mild (probably sub-10%) performance improvements were investigated, and they said yes, but they felt it wasn't enough to justify offering them since the retail price of the updated system would be the same (if not more).

And how many would have paid the Apple premium for upgraded GPUs? Especially if they had experienced multiple issues with the baseline ones? And what if they were closing in on the end of AppleCare? Or if the upgraded GPUs only offered a one-year (or less) warranty?
 
My take on the finger pointing: Everyone at Apple is just as frustrated as everyone here, they really wanted to do an update, they had a dumb case design, they unrealistically thought GPUs would get cooler, and AMD over promised.

There are fingers being pointed at the case design too, but as the interview showed, Apple management is still having trouble coming to terms with that.

Seriously, you are going to blame AMD for this? Apple had 3 years to design something, and don't forget this is just a small part of their revenue; they really don't care about this market segment, it's just for saving face. No one is to blame except Apple; there are a ton of design concepts they could use with modern GPUs. Look at the PC space: they are all boxes, so if you want a modern GPU you will have to put up with a box, plain and simple, no matter who the vendor is.
 
Vega is not a "savior" low-TDP GPU. The primary reason the 'Vega' subset in Raven Ridge has a lower TDP is that many of the aspects AMD crows about in the Vega dog-and-pony show are gone, and so are the clock rates: HBM, the memory cache, etc. It is a much smaller and substantially different implementation of the same underlying microarchitecture, but it isn't in the same class as the Polaris 10 or 20 implementations.

As for Polaris 20 XTX .... that 'XTX' may be as much bluster and puffery as the '20' .

As I posted in another thread though, it is kind of puzzling, if Apple was trying to make Polaris 10/20 work in the Mac Pro, why they would bury it. Something in the RX 580/570 range would fit thermally and not be a thermal mismatch to the E5 v2. It probably would be more stable under high workloads than the D500 and D700 they are using now. They could get off the OpenCL 1.2 constraints in the hardware implementation.

It would not solve all of the problems they discussed at their roundtable, but at least they would have done something. Even the 2012 Mac Pro got a firmware bump and some more CPU options. What they have done now is a whole lot of nothing, and they announced they are going back down into the rabbit hole again for another year or so. They could at least demonstrate that they can execute on a "Plan B". [There had to be some kind of entry and/or mid-level card they were working on. No way none of those fit the current configuration either. Maybe the mid-range didn't fit, and definitely the top end... sure. At least one of the options they were working on fit to a decent extent.] What Apple needed most for the last 6-9 months was something to limp along into the future on while they figured out what they really wanted to do. "We got nothing, and back to the rabbit hole" only really magnifies the problem.

GPUs weren't their only problem with the Mac Pro. With Broadwell-EP they don't have enough PCIe bandwidth available for multiple Thunderbolt 3 controllers, dual GPUs, and an SSD.
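Here's the back-of-the-envelope lane budget behind that claim; the device mix is my assumption of a TB3-equipped layout, not a known Apple design:

[CODE=python]
# A single Broadwell-EP (Xeon E5 v4) socket exposes 40 PCIe 3.0 lanes.
CPU_LANES = 40

devices = {
    "GPU #1":               16,
    "GPU #2":               16,
    "NVMe SSD":              4,
    "Thunderbolt 3 ctrl A":  4,
    "Thunderbolt 3 ctrl B":  4,
    "Thunderbolt 3 ctrl C":  4,  # 6 ports at 2 ports per controller
}

used = sum(devices.values())
print(f"Lanes requested: {used} / {CPU_LANES} available")
if used > CPU_LANES:
    print(f"Over budget by {used - CPU_LANES}; something drops to x8 or moves to the PCH")
[/CODE]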

But I do agree they could have stuck a couple of Polaris 10 chips in there and hit 10ish TFLOPS, up from the current 6. I wonder if they didn't want to sacrifice the double-precision compute performance or something. Tahiti had an unusually high amount of DP compute for a consumer chip. Maybe that's what FCP performance relies on or something.
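Rough spec-sheet math behind those TFLOPS figures (the shader counts and clocks are approximate public numbers, so treat the output as ballpark only):

[CODE=python]
# Single-precision throughput estimate: shaders x 2 ops (FMA) x clock (GHz).
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

gpus = {
    "FirePro D700 (Tahiti)":  (2048, 0.85),
    "Polaris 10 (RX 480ish)": (2304, 1.27),
}

for name, (shaders, clock) in gpus.items():
    one = tflops(shaders, clock)
    print(f"{name:24s} ~{one:.1f} TFLOPS each, ~{2 * one:.1f} for a pair")
# Dual D700 lands around 7; a dual Polaris 10 setup lands around 11-12.
[/CODE]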
 
I've said it before and I'll say it again: the biggest problem Apple has is graphics drivers.

Nvidia even came out with Mac drivers for the 1080's.

Apple needs to go with Nvidia.
I’d prefer Apple get rid of their crappy implementation and let us choose either.
 
Nvidia even came out with Mac drivers for the 1080's.

Nvidia has been supplying Web and CUDA drivers for the Mac for some time now. And yes, Pascal drivers were just released this week.

AFAIK AMD (ATI) has not supplied their drivers for their cards since before the G5 went out of production.

Lou
 
It's not just a driver problem. Apple hasn't been tuning its OpenGL API for a long time, and since the beginning Apple has been content just that it works (by slowly removing bugs), not with how fast it is. For the past three to four years Apple hasn't added any new features to the GL or CL APIs. Drivers have been just part of the Apple graphics story... and now we have Metal, which is still under construction. Metal should be the answer to the speed problem, but let's see... I have a feeling that Vulkan and DX12 are still going to be faster than Metal.
 
Nvidia has been supplying Web and CUDA drivers for the Mac for some time now. And yes, Pascal drivers were just released this week.

AFAIK AMD (ATI) has not supplied their drivers for their cards since before the G5 went out of production.

Lou
I suspect that Apple have ‘asked’ them not to.
 
I just had a "vision."

I think the Mac Pro 7,1 will indeed be more modular.

Remember the Mac Pro 5,1? The tower? Yeah. Well, the Mac Pro 7,1 will be like the tower's internals.

You know, how the Mac Pro 5,1 and 4,1's motherboard trays are "modular," removable trays?

Well, imagine that the Mac Pro 7,1 will have the same idea, except Apple separates the MP 4,1's "chambers" (the CPU tray and the PCIe slots in the compartment above it) into separate "modular" components.

Do you see it now?

I'll try to draw a rough sketch of it later.

Maybe one of the two "modular" components will house the "CPU tray" with the heatsinks, PSU, RAM, and M.2 slots, while the other component will house PCIe slots for graphics cards. If Apple feels generous, they can even include SATA ports and HDD cages there....

What do you guys think?
 
With all this talk about what the MP7,1 will be, why can't it just be two things? A small form factor like the MP6,1, but with a single GPU, a 6-to-10-core CPU selection, plus slots for 2-4 SSD sticks. The second MP7,1 could be an MP5,1 respin with at least 2 TB3 ports, but no more than 4. The big MP7,1 could house 2 full-size, non-proprietary GPUs and a wider range of CPU choices, possibly even dual sockets. Thunderbolt support could be provided via a modest mobile GPU on the mainboard to provide the required Thunderbolt DisplayPort signal and support boot screens if no PCIe GPUs are present.

This seems like the best way to serve the widest "pro" user group possible without creating an overly complicated, one-size-fits-all solution that is likely not going to exactly fit anyone's needs. I realize Apple will never do this, for multiple reasons, but if they did it might be the best apology Apple could make.
 
Maybe they did, and just felt it would not sell anywhere near enough to make it worth it. They were asked whether mild (probably sub-10%) performance improvements were investigated, and they said yes, but they felt it wasn't enough to justify offering them since the retail price of the updated system would be the same (if not more).

I'm not talking about updating the whole system. The change propagation of a CPU and PCH chipset change would spread over multiple logic boards. That would cost a decent amount of money. I'm talking about two graphics cards, where the changes are all localized to those two boards. The actual GPU, and likely the RAM, in the upgraded D510 version would be in almost exactly the same place. If they went to 8GB of VRAM on the D710 and the density of the RAM chips didn't go up, then there might be a minor change.

How well are these 4+ year old systems going to sell even with the price change? Either way, Apple is probably close to not making money on these. There is only so slow these contract factories can run, and if the demand at the new prices is lower than the minimal factory output, then Apple is running a loss on these too. The Osborne Effect is likely going to do that next year, since many folks will stop buying even at the new prices, even if they think it had value this year. Apple has probably chopped their margins down to very slim with these price cuts, and if their projections are off by even a small amount they are likely going to lose money on the MP 2013 over the next year or so until the replacement comes.

So it is more a question of what they are "buying" with this. The price cuts are going to buy hardly any goodwill. The folks who put a high value on "doing something" (action) are going to bolt; that is also lost money. [The next-generation Xeon E5 1xxx v5 workstations are all going to launch this Fall 2017 and Apple is going to have nothing; not even the slightest, minimalist movement. You don't think they are going to lose a decent chunk of customers? Dell and HP folks are probably salivating. This is going to be like shooting ducks in a barrel. New GPUs would be deep protection, but Apple isn't even trying to dodge the onslaught at all.]

If Apple has already been working on entry- and mid-level cards, the R&D costs on those are already sunk. That money is gone. It is 100% lost if they don't sell any of the cards they put work into. This current move loses that money too. The only way they don't lose money is if they had not done the work at all. (It would have made no sense to start on a Plan B like this in Sept '16 - January '17.)

No, this really isn't about losing money. It is more about ego and lowest risk. Having taken a risk and gotten burnt, Apple is doing the safe thing. There is probably a higher-accuracy estimate of how much money they are going to lose with the "do nothing" option. The ego part is being OCD about the "best ever", "can't innovate my ass", "bold", "courage enough" bragging that they are dogmatically committed to. Most are mad about being embarrassed and not really thinking about solving the customers' problems; more of a "cover my ass" syndrome when the blame game starts. Internal politics.

New cards have a higher likelihood of costing a bit more money, but probably bounded by the $2-10M range. At some point, though, Apple has to think about what they are going to spend on customer retention for this screw-up. That's not a lot. It probably means several folks' bonuses are toast, but for Apple overall that isn't a major write-off.

This also ignores the life-cycle costs for Apple. The longer Apple stretches the MP 6,1, the longer they have to support it. Again, go back to those D700s dying on a regular basis. 2-3 years from now, is AMD going to charge reasonable prices for those specially binned GPU chips? Or will Apple have to stock a major warehouse inventory of this stuff for 4-6 years? Apple could do a cheesy relabel of the updated GPUs as a 6,2, so technically it got a model change. That would put the old ones into retirement. Or just keep the same model, but use the new GPUs as replacements long-term if they run out of D500/D700 stock.



And how many would have paid the Apple premium for upgraded GPUs?

As pointed out above, it is not just upgrades but failures. If it is a significantly more stable system, then probably a decent number. Not a high percentage, but it would boost the run rate. If Apple is going to keep at least one embedded GPU in the Mac Pro, they really do need to figure out the depot upgrade service model. If Apple replaces two D700s on AppleCare and then has to replace them again six months later... that's costing Apple money too. Apple doesn't need dubious replacement parts either.

Especially if they had experienced multiple issues with the baseline ones? And what if they were closing in on the end of AppleCare? Or if the upgraded GPUs only offered a one-year (or less) warranty?

That has a hidden assumption that sales of the Mac Pro collapsed after 2014. They dropped lower, but I don't think it was the tremendous collapse most of the hand-wavers here have pronounced, because otherwise it is hard to imagine how Apple would have kicked the decision down the road so far. Part of the issue was that sales were probably worse than expected, but not disastrous. AppleCare sales in 2015 and 2016 still have windows on them where it is on Apple's 'dime'. The assumption is that the overwhelming majority of AppleCare sales are pre-2015. I don't think there is that much to back that up.

The end of AppleCare doesn't end Apple's hardware liability. It just ends the period where Apple pays for the majority (or all) of it. Sticking your "out of AppleCare" customers with dubious replacement parts isn't going to win any customers long-term either. A customer may grumble when there is time lost and AppleCare pays. However, when there is time lost, the repair fails, and the customer paid for it too? Good luck selling them another Apple workstation.
 
I just had a "vision."
Too much pot...

I've another vision:

The new modular Mac Pro resembles a cube, with cartridge-like GPU modules integrating GPU and cooling, easily user-upgradable but not ISA/PC-compatible. Hopefully Apple uses an open-source GPU connector and hardware design so 3rd-party GPU upgrades for the mMP are guaranteed to be available. Apple shows us GPUs from the Nvidia GTX 1060 up to an expensive, exclusive dual Pascal GP100, a $10,000 option that is very cheap since it costs $14,000 for regular PCs.

In my vision there is only a single socket, but Apple offers everything from a 6-core Core i7 to a 32-core Xeon. There is also a new cooling system based on pipes filled with a strange, extremely efficient gas, so efficient the CPU fan only operates when required. The same system is available on the GPUs, and there is also a configuration known as Stealth Mode: with dual Nvidia GTX 1060s and a low-TDP Xeon, this Mac doesn't need to turn on any of its fans at full load.

Also in my vision, the modular Mac Pro has two x8-PCIe-lane NVMe modules. Apple tells us these are the fastest in the world and the first to integrate Intel Optane and commodity 3D NAND, coming in sizes from 512GB up to 4TB, enabling up to 8TB of SSD in lightning-fast RAID 0, or 4TB with RAID 1 safety.

Also in my vision, this Mac Pro has 8 Thunderbolt 3 ports plus 4 Thunderbolt 2 ports, all of them fully video-output capable, but it nonetheless also has 4 full-size DisplayPort 1.4 ports allowing easy connection to 5K and 8K panels in single-stream mode.

In my vision, the modular Mac Pro has all-black aluminum and all-black glass, and a shiny iridescent Apple logo.

The strangest thing is that it wasn't called a Mac; it was named the New Macintosh.

Also in my vision, Phil Schiller asks us to kick his ass if this Mac isn't the greatest and most powerful Mac ever.
 
GPUs weren't their only problem with the Mac Pro. With Broadwell-EP they don't have enough PCIe bandwidth available for multiple Thunderbolt 3 controllers, dual GPUs, and an SSD.

For a "Plan B" system were only needs a new system to limp along for another 12-20 months, they could skip Xeon E4 v4 and just reuse most of the system.

Bluntly, I thought 6 TB ports were dubious before the Mac Pro came out, when the rumors started hinting toward 6. It is still more than a little loopy. Most of what Apple talked about in their roundtable does not really defend 6 ports all that well.

" Craig Federighi ...We said: ‘a lot of this storage can be achieved with very high performance with Thunderbolt. So we built a design in part around that assumption, ...
... . If you wanted a great RAID solution in there, it probably made a lot more sense to put it outside the box than actually be constrained within the physical enclosure that contained the CPU. So, I think we went into it with some interesting ideas, and not all of them paid off.
https://techcrunch.com/2017/04/06/t...-john-ternus-on-the-state-of-apples-pro-macs/

More than 2 TB ports? Yes. More than 4 TB ports? That gets onto a slippery slope. x4 PCIe SSDs actually took off faster than I think Apple suspected. Placing those externally on TB and restricting the Mac Pro to one and only one storage drive... I think that is one of the bets that didn't pay off. The articles hyped the GPUs, but I think that one too is pretty clear. Whether the next Mac Pro goes to 2, 3, or 4 PCIe SSDs I'm not sure, but I feel pretty safe in saying that it likely won't be just one.

Much of the driver for 6 TBv2 ports was that 3rd-party new and legacy displays were going to need DisplayPort, and that at least 1-2 TBv2 ports would get covered by pass-through DP duties. So pragmatically you needed 6 to get to a useful 4 for TBv2 work. That was the argument often trotted out for 6 TBv3 ports. TBv3 requires new cables/adapters to do DP connections. It may make more sense for the next Mac Pro to just use plain mDP, or add more HDMI ports, for displays that will simply be doing 100% video-out duties. [TBv3 really isn't going to help all that much with 8K displays that demand two cables. Covering TBv3 ports for that is kind of dubious; two ports "down" in count and only one display.]

If Apple shoots for a literal desktop computer again, then I doubt 3.5" or 2.5" bays will come back. But 2-3 SSD sockets? That would make sense. An OS/Apps drive plus a scratch/workload/samples-cache drive. It isn't RAID so much as more capacity at reasonable prices that is lacking.

If Apple significantly reverses course and goes deskside, then we may see up to a couple of HDDs come back. I wouldn't count on that. I don't think they are going to target more than several-TB archives. Those will probably still be TBv3-targeted.



But I do agree they could have stuck a couple of Polaris 10 chips in there and hit 10ish TFLOPS, up from the current 6. I wonder if they didn't want to sacrifice the double-precision compute performance or something. Tahiti had an unusually high amount of DP compute for a consumer chip. Maybe that's what FCP performance relies on or something.

Tahiti also had a lot of heat that didn't fit either. That's what they were stuck on... having their cake and eating it too, versus picking the best part that fits the constraints you set. They already don't have ECC VRAM, so large, iterative DP workloads aren't really a good match anyway. For FCPX, and data where the individual bits don't have much value... DP is not a loss. (I think Apple talked up DP workloads, but I'd be surprised if they held onto much penetration there at all.)
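To put numbers on the DP point (the FP64:FP32 ratios below are the commonly cited 1:4 for Tahiti and 1:16 for Polaris; the exact D700/RX 580 figures are my approximation, not an Apple or AMD spec sheet):

[CODE=python]
cards = {
    # name: (approx FP32 TFLOPS, FP64 fraction of FP32)
    "FirePro D700 (Tahiti)": (3.5, 1 / 4),
    "RX 580 (Polaris 20)":   (6.2, 1 / 16),
}

for name, (fp32, ratio) in cards.items():
    print(f"{name:22s} FP32 ~{fp32:.1f} TFLOPS, FP64 ~{fp32 * ratio:.2f} TFLOPS")
# The newer card roughly doubles FP32 yet delivers well under half the FP64.
[/CODE]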

At its core, Polaris would be a stop-gap solution, not a complete one for all the problems they outlined. It is something to 'ride out' the coming 2H 2017, and probably almost all of 2018, storm on. The dike would still be leaking... this would just make it leak less. The stuff coming out this upcoming Fall is going to walk all over what Apple has.
 
I don't know whom on earth the report considers "community" or classifies as "pro" ...

Team :apple: specifically mentioned high-end cinema, VR content creation, and scientific work as the Pros who need the power a modular Mac Pro can provide, versus the iMac Pro they have been working on for other Pros who need more than a MacBook Pro.
 
I've said it before and I'll say it again: the biggest problem Apple has is graphics drivers.

Nvidia even came out with Mac drivers for the 1080's.

Apple needs to go with Nvidia.

Apple has been going with Nvidia, AMD, Intel, and, up until relatively recently, Imagination Tech***. The Mac graphics stack has multiple layers to it. The lowest level has been done by the GPU vendors. Apple has controlled the upper levels of the OpenGL stack (and up through their libraries). Even Metal has some split. Where subcontractors sit might vary by vendor, but it is really a vendor assignment.

What Nvidia is doing is releasing drivers asynchronously from the macOS distribution. That is a double-edged sword, because the APIs to do that aren't pragmatically there, and that's why there is an explicit dance and incantation that must be done on each OS upgrade.

AMD, Intel, and Imagination Tech aren't running rogue over macOS releases and Apple's schedule, because doing that doesn't help win new design bake-offs. Nvidia has users hooked on CUDA like crack, so it appears they are willing to blow up those design wins to keep dropping crack rocks. When the Mac Pro 5,1 is put into obsolete status in a year or so (and OS updates end), that approach may have blowback.


*** Since it is technically Apple's shader implementation, asking ImagTech to do drivers for something they didn't do is a task misalignment. Typically, low-level implementers do what they have deeper ties with.
With all this talk about what the MP7,1 will be, why can't it just be two things?

Same fundamental reason why there were not two 2008, 2009, 2010, or 2012 models. The 2009-2012 models used a daughter card to differentiate between one- and two-socket systems. That meant that sales volume for the core model was shared across those two.

If split, there is little in what Apple talked about that makes that viable for Apple. Apple cut off the 2-processor model because it is extremely likely that it was the much smaller "half" of the two submarkets. Folks yelp about how they and 100 of their buddies all have 2P systems..... but who likely has more detailed, comprehensive data about how many of each were sold: you or Apple? There is no rational reason at all for Apple to cut off the larger half. As much as many folks moan and groan about costs, the lower half was likely larger.

It isn't like the 2008-2009 models were monster volume sellers either.

There is a notion that once split, both "halves" will grow faster. There is a lot in what Apple talked about, regarding their marketing data in that round-table session, that says otherwise. Trend-lines of folks moving "down market" to iMacs are one. The sore spot of a single mega GPU (and, implicitly, workloads going GPU) is another.

There are folks whose workloads are heading toward 20-40 x86 CPUs and/or 4 GPUs but much of that is heading toward computer farms rather than sitting on top of individual desks in close proximity to users.


A small form factor like the MP6,1, but with a single GPU, a 6-to-10-core CPU selection, plus slots for 2-4 SSD sticks. The second MP7,1 could be an MP5,1 respin with at least 2 TB3 ports, but no more than 4. The big MP7,1 could house 2 full-size, non-proprietary GPUs and a wider range of CPU choices, possibly even dual sockets. Thunderbolt support could be provided via a modest mobile GPU on the mainboard to provide the required Thunderbolt DisplayPort signal and support boot screens if no PCIe GPUs are present.

IMHO, far more likely is a compromise between the two. Room for two Apple custom GPUs, but the thermal zone the "compute GPU" is in would have a standard PCIe slot (perhaps shared with that custom slot) and an optional card edge out. One CPU. Same baseline design shared by both variants. No mobile GPU. The "video" GPU would be present in each version, but would be a decent to very good desktop option (at least at major product updates).

No proprietary SLI/CrossFire. No 2-3-way compute/inference/training GPUs.

This seems like the best way to serve the widest "pro" user group possible without creating an overly complicated, one-size-fits-all solution,

Apple isn't trying to cover the widest possible group. They said so explicitly in the round-table discussion. They are trying to cover the most people with a highly limited set of products. It is a balancing act. They are probably just looking to cover "enough" Pro subgroups to have a viable product. Their objective is not to become top 3 in the comprehensive workstation category. So covering almost all users at higher cost isn't the strategy. There are pockets of pros they know in advance they are going to miss.

Two radically different motherboards with two radically different cases probably do not meet the "highly limited set of products" criterion at all. A Mac Pro with a default video-out mechanism that is completely out of alignment with the rest of the Mac systems (which have order-of-magnitude bigger deployments) doesn't lower overall ecosystem complexity either. I don't think Apple wants any Mac product that is radically different from the others. That isn't going to help build a cohesive ecosystem long-term.

What Apple needs to let go of is funneling all video, in every circumstance (even optional add-on, non-standard configurations), out through TB. That is a major component of the 'box' they have put themselves in, given that Apple can't seem to assign enough good engineering resources to keep up with the market every year. They need an 'out' for when they screw up. I think Apple knows there isn't enough Mac Pro volume and sales to keep a large team permanently assigned to the Mac Pro. They could hope to get to a cadence where there were just occasional gaps, but I doubt they want to commit to doing a lot more than they already are. The long-term growth projections probably are just not there to support that. (There is no data showing the workstation market is growing at 2002-2005-era rates. It isn't quickly imploding, but it isn't particularly growing either.)
 
They are probably just looking to cover "enough" Pro subgroups to have a viable product.
They named engineers and scientists, but actually the biggest interest is in serving 3D/VR/AR, FCPX, and Pro Tools; luckily, they will throw a bone to AI/ML. So most likely they won't offer dual-CPU or quad-GPU capability for the latter; maybe dual GPUs, supporting a 300W GPU in single-GPU setups by limiting the PSU. But who knows, it's cheap to upgrade the PSU to 1000W when you are at the planning stage, so there is a chance for a full-power dual-GPU setup, 6 DIMMs, and multiple NVMe drives, but dual CPUs are unlikely, as are more than 6 memory slots (maybe even no BTO option beyond 96GB RAM), and there is little chance for GPU upgrades using industry-standard PCIe rather than a proprietary or Apple-over-engineered PCIe-DisplayPort GPU bus...
 
*** Since it is technically Apple's shader implementation, asking ImagTech to do drivers for something they didn't do is a task misalignment. Typically, low-level implementers do what they have deeper ties with.
You mean ImagTech writes the AX video drivers for Apple? I was sure Apple did it themselves. In WWDC sessions, we see Apple employees discussing driver development for AX chips.
(One of the Apple driver devs is Fiora Aeterna. She's funny. She may be 30 years old, but her voice sounds like a grandmother's :D).
 
You mean ImagTech writes the AX video drivers for Apple? I was sure Apple did it themselves. In WWDC sessions, we see Apple employees discussing driver development for AX chips.

Now that the shader logic is Apple's, yes. However, in the initial years 1-4 of the iPhone, did you see this for non-OpenGL-ES stuff? Apple could have pointed out some optimizations (e.g., ImagTech compressed texture formats, etc.), but WWDC commentary on that doesn't mean they are actually writing the lowest-level driver.

Before actually taking over the whole stack at the hardware level, Apple probably started to push in at the software level so they could make a clean crossover. However, that is not the usual approach taken with macOS, where there is always more than one GPU target.
 
No way you'll get standard graphics cards in mMP.
However modular it will be, I don't see Apple wanting you to put any horrendous card inside any computer they make now. I'm pretty sure they'll prefer to lock the design so that you can't use different cards, with different coolers, different outputs. You could with the tower design but those days are gone now.

It would be nice to know what they were doing all this time. Did they really have a prototype that went south? Or did they just wake up the other day and remember that the nMP was long past its upgrade time, and so they had to come up with something?
I guess we'll never know, unless someone from the engineering team spills the beans or leaves Apple and talks about it in the future.

Funny, here you still can't choose to buy the top model (8-core + D700) directly, but you can choose the 6-core + D500 and customize it up to the higher-specced model.
 
They named engineers and scientists, but actually the biggest interest is in serving 3D/VR/AR, FCPX, and Pro Tools; luckily, they will throw a bone to AI/ML. So most likely they won't offer dual-CPU or quad-GPU capability for the latter; maybe dual GPUs, supporting a 300W GPU in single-GPU setups by limiting the PSU. But who knows, it's cheap to upgrade the PSU to 1000W when you are at the planning stage, so there is a chance for a full-power dual-GPU setup, 6 DIMMs, and multiple NVMe drives, but dual CPUs are unlikely, as are more than 6 memory slots (maybe even no BTO option beyond 96GB RAM), and there is little chance for GPU upgrades using industry-standard PCIe rather than a proprietary or Apple-over-engineered PCIe-DisplayPort GPU bus...
I can see 8 RAM slots just to keep up with newer CPUs. Say quad-channel, or quad-channel dual-CPU.

Now, with Intel, they may need dual CPUs to get a good number of PCIe lanes.

They need to have Ethernet, maybe even 10GbE ports.

For storage, at least 2 PCIe cards, up to 4 (it needs 16 lanes to fill 4), and maybe 1 SATA bay; some workloads need a big storage disk / cheaper SATA SSD. eSATA would be nice, and it's basically free with the chipset.

For video, at least one PCIe x16 slot, maybe a choice for a 2nd one (SLI / CrossFire / compute card). Even if you don't have the lanes for x16/x16, dropping both to x8/x8 when slot 2 is in use is OK.

USB-A 3.x ports, and maybe 4-6 TB3 buses based on how many free lanes you have.

What about an Apple display with an external PCIe cable, not TB3 but up to a full x16, with a built-in video card?
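If it helps, here's the lane arithmetic behind a config like that as I read it; the 48-lane CPU, x4-per-TB3-controller, and x4-per-NVMe-card figures are my assumptions for illustration:

[CODE=python]
def tb3_buses_that_fit(cpu_lanes, gpu_lanes, nvme_cards):
    """How many x4 Thunderbolt 3 controllers fit in the leftover lanes."""
    leftover = cpu_lanes - gpu_lanes - 4 * nvme_cards
    return max(leftover // 4, 0)

# Two GPUs at x16/x16 vs. both dropping to x8/x8 when slot 2 is populated:
for gpu_lanes, label in [(32, "x16 + x16"), (16, "x8 + x8")]:
    buses = tb3_buses_that_fit(cpu_lanes=48, gpu_lanes=gpu_lanes, nvme_cards=2)
    print(f"GPUs at {label:9s} + 2 NVMe cards -> room for {buses} TB3 buses")
# -> 2 buses at x16/x16, 6 buses at x8/x8; free lanes decide how many TB3 buses you get.
[/CODE]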
 
No way you'll get standard graphics cards in mMP.
However modular it will be, I don't see Apple wanting you to put any horrendous card inside any computer they make now. I'm pretty sure they'll prefer to lock the design so that you can't use different cards, with different coolers, different outputs. You could with the tower design but those days are gone now.


If they are willing to go back to a "deskside" general design baseline, then it's not in "no way" territory: a single card is more than tractable to do. The secondary compute card could have no edge outputs, but that doesn't mean an optional edge plate couldn't be there. Put some extra blowers around the single slot to scoop a TDP in the 300W range out of the case within a limited noise threshold, and that would allow a range of stuff to be put inside the box. One double-wide slot in the volume that 3 (maybe 4) slots take up should allow a cooling system that is relatively quiet even with that upper limit. Apple's now-optional compute cards could easily operate under that limit, so they would have a bit of "overkill" on cooling in that context (that won't hurt anything). Multiple fans for widely different thermal zones just suck up more volume.


If they are going to stick strictly with a literal desktop design, then it gets more tricky. Throwing volume at working around the somewhat random directions in which 3rd-party cards discharge heat starts eating up desktop volume that typically needs to be shared with other stuff. The thermal limit would probably get tighter (275W), or the allowed noise limit would go up, in the non-standard configuration (or some combination of both).

However, the current Mac Pro's footprint being smaller than the Mac Mini's was unnecessary (roughly 6.6 x 6.6 = 43.56 vs 7.7 x 7.7 = 59.29 square inches). Marginally bigger than the Mini would still be relatively small (e.g. 75-100 square inches). That is still a pretty big drop from the 2010 Mac Pro's 195 square inches (and a Dell 3000-series mini tower is 117, so a bit smaller than that; the 3000 SFF is 42 but squeezes out normal PCIe slots). A larger footprint allows a taller device with similar relative proportions. All of that adds up to more volume to work with. Fill that with multiple thermal zones and Apple can squeeze a card in.
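The footprint numbers above, computed out (square footprints in inches; the 2010 Mac Pro and Dell figures are quoted as given in the text):

[CODE=python]
footprints_sq_in = {
    "Mac Pro 2013":               6.6 * 6.6,   # 43.56
    "Mac mini":                   7.7 * 7.7,   # 59.29
    "hypothetical 'bit bigger'": 10.0 * 10.0,  # upper end of the 75-100 range
    "Mac Pro 2010":              195.0,
    "Dell 3000 mini tower":      117.0,
}
for name, area in footprints_sq_in.items():
    print(f"{name:26s} ~{area:6.2f} sq in")
[/CODE]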


It would be nice to know what they were doing all this time. Did they really have a prototype that went south? Or did they just wake up the other day and remember that the nMP was long past its upgrade time, and so they had to come up with something?
I guess we'll never know, unless someone from the engineering team spills the beans or leaves Apple and talks about it in the future.

There is another path, where the deep, hard-core fans of "box with slots" hardware designs simply left. Apple bleeds talent too sometimes. It is probably a combination of factors. I highly doubt they started on a new Mac Pro in 2014 (or even early 2015). So yes, there was a delay in starting to really think hard about the issues of the next generation. (A bit of "some future magic will fix this" speaks of doing nothing in the short term.)

I also highly doubt OpenCL proceeded on the highly accurate path mapped out during the early design phases of the current Mac Pro. It isn't just the Mac Pro itself but the environmental context that changed.



Funny, here you still can't choose to buy the top model (8-core + D700) directly, but you can choose the 6-core + D500 and customize it up to the higher-specced model.

If most of the buyers drop down into the BTO configurations, then having two starting points is probably enough. I don't think it saves much to crank out standard configurations if most folks aren't buying standard configurations. Retail "off the shelf" sales in a shopping-mall Apple Store were not going to be the primary sales driver.
 
Are you really so hopeful that Apple, even after this admission, will really go back to a tower design with slots? Am I just being a pessimist in believing that nothing will actually change? They're mostly a design-focused company now, and control freaks, so why on earth would they go to great lengths to do something out of their scope just to please those few % of "Pro" users that need it (or keep complaining but don't actually "need" it)?
One graphics card and one optional compute card sounds great, but put yourself in Apple's shoes and imagine the support nightmare it would be. I don't think they want, or ever will again, to have to work with all GPU vendors in developing specific Mac cards. And even less so to let people put in any standard card. To me, that's a given. Might be wrong though.
And can you imagine the Apple of today allowing you to insert a graphics card in a Mac Pro (or any Mac for that matter) with flashing LEDs, or funny blowers, or VGA, DVI, or even HDMI ports? You can even make a case for HDMI, but I'll bet you that even that will go away in the next iteration of the MP.
They want a clean back (and front) with a nice-looking layout, not a PC-like colored multi-port mess that looks unprofessional at best. They will minimize the number of different ports until they can get down to a single one for everything. And that might be TB3. And maybe the new display will hold the rest of the ports, like the Thunderbolt Display did.
But this is my view, again, I might be wrong.
I have some doubts it will come next year. If it indeed does, I'm still holding to Basin Falls (C620?/X299) and SKL-W; again 2 GPUs (more options this time around, easier to exchange, hence modular), 2 SSDs (the 2nd as a BTO), and 6 TB3 ports. I'd even bet USB-A is gone. That would be a clean design: 48 PCIe CPU lanes fully used, the 2nd SSD on the PCH, as well as everything else that will still exist.
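A quick check of that "48 lanes fully used" split as I read it (two x16 GPUs, one x4 NVMe on the CPU, three x4 TB3 controllers for 6 ports, with the second SSD on the PCH); this is my reading of the post above, not a leaked spec:

[CODE=python]
SKL_W_CPU_LANES = 48  # Skylake-W CPU PCIe lanes

allocation = {
    "GPU #1 (x16)":               16,
    "GPU #2 (x16)":               16,
    "NVMe SSD #1 (x4, CPU)":       4,
    "TB3 controller A (2 ports)":  4,
    "TB3 controller B (2 ports)":  4,
    "TB3 controller C (2 ports)":  4,
    # NVMe SSD #2 hangs off the PCH, so it costs no CPU lanes here.
}

used = sum(allocation.values())
print(f"CPU lanes used: {used} / {SKL_W_CPU_LANES}")  # -> 48 / 48
[/CODE]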
dec, I was saying there are still the 2 base models to choose from; only the higher one doesn't have the Buy button. You can still BTO the other one, which you would anyway for a higher-capacity SSD.
 