If you read the interview, it's more subtle than that: the problem they perceive with the nMP thermal design is that the triangular core assumes the load will be spread roughly equally over a single CPU and two similar GPUs. What they can't do with the existing design is build a system with a single, higher-powered GPU, which is what some workloads demand.

Nothing said in that interview was a lie... but Apple execs also weren't totally transparent.

It's very true that the 2013 design wouldn't have worked with a single, high-power GPU.

But the part they left out is that the design also didn't work well with two mid-to-upper-end GPUs either. It never really worked well for the 7870-class cards they used, and there wasn't much else on the market it worked well with.

They made the same assumption they made when they switched to Intel: that GPUs would get more and more efficient each year and grow into the design. Intel was able to deliver on that promise in 2006; AMD couldn't over the last four years.
 
Modular, in this instance, might just mean a desktop computer that you plug a separate monitor into.

Or Apple's desktop plan might be two models: an iMac and a single other desktop. No more separate 'Pro' and 'mini' models.
 
The future for Apple in the pro market isn't the current professionals. It's the FUTURE professionals they are focusing on. Most creative-pro vendors don't own an entire platform (operating system/desktop/phone/tablet/pro applications) and depend on others to make their products work. Present professionals jumping ship will hurt Apple in the short term; long term, not so much.
That's if they manage to turn around their current slump.
 
If Apple released a cheese grater Mac Pro with the latest internals by the end of the year, I think 97% of the "Pro" community would be thrilled. I don't see it happening because it would not be "innovative", but it seems doable.
I don't want the cheese grater back. It was a very loud machine. I love the whisper quietness of my trashcan. I hope Apple can keep the noise level down with the new design as well.
Hobbyists upgrade their cMP CPUs and GPUs, not businesses. Graphic design studios buy these machines and use them until they are dead. Not once have I seen a graphic designer open up their cMP and add a new GPU unless the old one was dead, and in that case they'd send it back to Apple for a replacement card.
 
I don't want the cheese grater back. It was a very loud machine. I love the whisper quietness of my trashcan. I hope Apple can keep the noise level down with the new design as well.
I am sure people are only referring to the good things about the Cheesegrater; no one would like to see its (few) shortcomings back. The small holes of that design actually helped the horizontal ventilation; the noise was instead caused by the fans and sometimes by the airflow hitting the cables/internals/dust.

If the mMP contains some form of internal PCI card slots then it will be challenging to make it as quiet as the trashcan. But then the hardware ceiling will probably be higher, which means the computer won't need to run at high temperatures as often, so the fans can spin at low speed or some of them can even stay off. Anyway, noise is one area Apple pays unhealthy attention to, so I am not worried. The actual fear is the opposite: that they will nerf the case's flexibility for the sake of quietness (again).
 
Liquid cooling solutions have actually matured a lot since the G5 days, and people sometimes even add DIY heat pipes to help. Apple has the advantage of fully controlling the motherboard design (at least the socket area), so that is probably the easier part.

The hard part will be the card slots. If PCI slots are present, it means a chassis with enough room to accommodate varying combinations of total heat and card shapes, which hampers airflow. A typical full case, including the Cheesegrater, leaves a lot of unused headroom for this purpose. Apple will need to show some self-restraint and not try too hard to compact this PCI cage, or shape it into something particularly inefficient.
 
I'll admit I haven't read the whole thread, so my apologies if someone has already floated this theory... But I suspect this all has to do with the failure of Apple's "post-PC" philosophy—or more specifically, with the market's demonstrated lack of interest in an entirely post-PC world. I've noticed that nobody at Apple is using that term anymore... Since the iPad Pro had relatively modest sales (not terrible, but not through the roof), Apple has had to re-think the whole post-PC mindset. So they're having to figure out how to gain ground (again) in a world that refuses to let go of the PC. This was obvious enough to pro users, but apparently not to Apple. I openly admit that I would love to be able to compose meaningfully on an iPad Pro, but I would still always have a MacBook Pro or Mac Pro for final production.

Also, there has been a significant rise of the so-called "prosumers" (a ridiculous f***ing term that never should have been coined), who are rapidly outgrowing their hardware. My guess is that Apple now recognizes that there is actually money to be made in converting those prosumers over to pros, if they do it right (i.e., make it scalable).

So, my guess about the "modular" approach is that they will be taking the idea very far indeed—as in, totally scalable modules of CPU (Intel and/or ARM), GPU, SSD, and perhaps even RAM (though this will probably be integrated into some kind of CPU module, for throughput). They may even enable some of these modules to be used with MacBook Pros, so that components of the "studio" rig can be taken on site. It's the only reason they'd need such a huge amount of time. Doing it this way, they'll be able to keep things up to date more effectively, at less expense, and perhaps even at a more reasonable cost to the customer. I know everybody will scream about that last point, but Apple has never got it in the neck about price as much as they did with the TB MacBook Pros. They're feeling it, even if they're not mentioning price as a sticking point. Will such a system ever be as cheap as a Hackintosh? Obviously not. But if it's entirely customizable, and entirely plug-and-play, with totally native macOS support, it will definitely sell. (And I'm saying this as someone who has built and run about 5 Hacks over the years... Including a laptop... which was hell... bought a TB MacBook Pro after 3 months. Ha! You can't replace the user experience of a MacBook Pro... Apple's laptops are just very, very well made. Sorry. Cue the customary MacRumors bitching and moaning.)

Seems to me that a refreshed Mac Pro line would have a lower starting point to take advantage of economies of scale for Apple, picking up the Mac mini users who constantly harp on about the 2012 i7 quad model because the nMP entry price went up to silly amounts.

Rather than making everyone buy two GPUs, they could start with no GPUs and allow people to buy approved GPUs on daughter cards later if they don't spec up the machine at purchase.
 
Seems to me that a refreshed Mac Pro line would have a lower starting point to take advantage of economies of scale for Apple, picking up the Mac mini users who constantly harp on about the 2012 i7 quad model because the nMP entry price went up to silly amounts.

Rather than making everyone buy two GPUs, they could start with no GPUs and allow people to buy approved GPUs on daughter cards later if they don't spec up the machine at purchase.

This. Ever since the announcement this has made the most sense to me. Develop something that can work for people who would traditionally want a Mac Mini right up to the fastest Mac Pro. Sounds mad, but that's "modular" in its extreme sense. I accept that I may be taking "modular" too literally (as some have pointed out) but there's no way Apple will simply roll out another cheese grater, however much we loved those machines.

(And for the record, I owned a cheese grater and currently have a trashcan and I love them both for different reasons.)
 
How dare you even think "Apple's laptops are just very, very well made"! Where did you get that crap from? I just so happen to be posting with my 2006 1.83GHz. I added an SSD for the main drive, pulled the burner and replaced it with a large-sized spinner. I use a Targus laptop cooler and the MBP has been running flawlessly since 2006! It has never been to the shop or had any "work" done to it. So you should rethink the "Apple's laptops are just very, very well made" angle! Hey, wait a minute! You're right! :p

I gotta disagree - my $1,700 Apple laptop had every single internal board replaced at 30 months & it died again with the exact same problem 190 days later.

The $250 Aspire D1 netbook I used while waiting on the laptop to get back from the repair center is still crunching SETI blocks.
 
Has anybody just disconnected one of the GPUs on the MP 6,1? It would be handy to be able to run it with a single D500 for studio use and only power up card 2 for rendering, etc.

I'll see what happens at WWDC, but frankly, this USB-C/Thunderbolt 3 thing (along with dongle-gate) has turned into a diaper fire. :eek:
 
I don't want the cheese grater back. It was a very loud machine. I love the whisper quietness of my trashcan. I hope Apple can keep the noise level down with the new design as well.
Hobbyists upgrade their cMP CPUs and GPUs, not businesses. Graphic design studios buy these machines and use them until they are dead. Not once have I seen a graphic designer open up their cMP and add a new GPU unless the old one was dead, and in that case they'd send it back to Apple for a replacement card.
What on earth would make you think it needs to be loud? A 1984 240D with modern internals (i.e. a V6 common-rail turbo diesel) wouldn't be loud, would it?
 
I think the cMP is pretty.

I agree the cMP is a sophisticated and studied example of engineering and industrial design. I also have a working G5, and I can say from experience that the cMP is a much improved design (inside) in every way. The only downside to both is that moving them around the studio requires a little muscle. They both needed wheels.
 
Hobbyists upgrade their cMP CPUs and GPUs, not businesses. Graphic design studios buy these machines and use them until they are dead.

That might be true for graphic design, but for motion design or video it's pretty far off. My last GPU upgrade, two years ago, took my playback (render) from 4 to 23 fps. If that doesn't impress you, GPUs nowadays are 3 times as fast as that card. Compared to my old 5770, my 970 is 8 times faster.

I didn't bother upgrading my CPU though; it would only add about 30% more power.
 
Has anybody just disconnected one of the GPUs on the MP 6,1? It would be handy to be able run it with a single D500 for studio use and only power up card 2 for rendering, etc.

This is sort of how it works now. Generally the first card does all the work and handles all of the display outputs. The second card only fires up for compute tasks; otherwise it sits idle and doesn't use much power.
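
For anyone curious which card is which on their own machine, here's a minimal sketch in Swift (my own illustration, and it assumes the compute-only FirePro reports itself to Metal as headless, i.e. with no display attached):
[CODE]
// Minimal sketch: list the GPUs macOS exposes through Metal and flag
// the one with no display attached. On a 6,1 the second FirePro would
// be expected to show up as the headless, compute-only device.
import Metal

for device in MTLCopyAllDevices() {
    let role = device.isHeadless ? "compute-only (no display attached)"
                                 : "driving displays"
    print("\(device.name): \(role)")
}
[/CODE]
Either way, there's nothing to physically disconnect: the OS already treats the second card as an on-demand compute device.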
 
Unless they announce it in the next two months and ship in EARLY 2018, stick a fork in it. It's done. Don't even bother.
If they're designing around Skylake Xeons, especially Skylake-SP, they've probably only got sample quantities and will have to wait until the chips are available in volume before they can officially launch a redesigned Mac Pro.
 
If Apple released a cheese grater Mac Pro with the latest internals by the end of the year, I think 97% of the "Pro" community would be thrilled. I don't see it happening because it would not be "innovative", but it seems doable.

That is where the issue is. People say Apple is not innovating anymore, so they tried to innovate with the 2013 Mac Pro and it failed for most people. It is a good system for me since I use it for FCP and it is still the best in that area. But people want Apple to innovate more than they want products that simply work, and sometimes that innovation is going to be a failure. I bet review sites and tech sites would say a new cheese grater is bad because Apple is not innovating. They can't win here. Obviously A LOT of people wanted Apple to innovate, since that "Can't innovate anymore, my ass" quote was clearly a response to all the sources saying they are lacking innovation. If people would stop with that, we would have an updated cheese grater right now.
 
That is where the issue is. People say Apple is not innovating anymore, so they tried to innovate with the 2013 Mac Pro and it failed for most people. It is a good system for me since I use it for FCP and it is still the best in that area. But people want Apple to innovate more than they want products that simply work, and sometimes that innovation is going to be a failure. I bet review sites and tech sites would say a new cheese grater is bad because Apple is not innovating. They can't win here. Obviously A LOT of people wanted Apple to innovate, since that "Can't innovate anymore, my ass" quote was clearly a response to all the sources saying they are lacking innovation. If people would stop with that, we would have an updated cheese grater right now.
Apple hit the PCIe wall. Right now their choices are:

Switch the two video cards to sharing an x16 link, either statically as x8/x8 or as x16/x16 behind a PCIe switch.

Dump a lot of stuff onto the x4 DMI link.

Go dual Intel CPU.

Go AMD.
 
Apple hit the PCIe wall. Right now their choices are:

Switch the two video cards to sharing an x16 link, either statically as x8/x8 or as x16/x16 behind a PCIe switch.

Dump a lot of stuff onto the x4 DMI link.

Go dual Intel CPU.

Go AMD.
Or, push 32 lanes into a switch, and put a lot of stuff on it.

Statically splitting 16 lanes into x8 + x8 for the GPUs wouldn't be smart. Each GPU would never get more than about 8 GB/sec. With a 16-lane-to-dual-16-lane switch, each GPU would get 16 GB/sec whenever the other GPU wasn't transferring data.

With a 32-lane-to-<many more>-lane switch you'd get about 32 GB/sec of IO, dynamically allocated to active devices.
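
To put back-of-the-envelope numbers behind those figures, here's a quick sketch (my own arithmetic; it assumes PCIe 3.0 at 8 GT/s per lane with 128b/130b encoding and ignores protocol overhead):
[CODE]
import Foundation

// Rough PCIe 3.0 throughput per link width: 8 GT/s per lane with
// 128b/130b encoding works out to roughly 0.985 GB/s per lane.
let gbPerSecPerLane = 8.0 * (128.0 / 130.0) / 8.0

for lanes in [8, 16, 32] {
    let gbPerSec = Double(lanes) * gbPerSecPerLane
    print("x\(lanes): ~\(String(format: "%.1f", gbPerSec)) GB/s")
}
// x8  ≈  7.9 GB/s -> the per-GPU ceiling of a static x8/x8 split
// x16 ≈ 15.8 GB/s -> what one GPU can burst to behind a 16-to-dual-16 switch
// x32 ≈ 31.5 GB/s -> the shared uplink if 32 CPU lanes feed the switch
[/CODE]
The switch doesn't create bandwidth; it just lets whichever device is active use the whole uplink at any given moment.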
 
Or, push 32 lanes into a switch, and put a lot of stuff on it.

Statically splitting 16 lanes into x8 + x8 for the GPUs wouldn't be smart. Each GPU would never get more than about 8 GB/sec. With a 16-lane-to-dual-16-lane switch, each GPU would get 16 GB/sec whenever the other GPU wasn't transferring data.

With a 32-lane-to-<many more>-lane switch you'd get about 32 GB/sec of IO, dynamically allocated to active devices.
That would be a problem with previous-gen memory tech. Vega changes this a bit: it requires the minimal PCIe bandwidth to be precisely the same size as the memory you have on the GPU. So if you have 8 GB of HBM, you can go with x8. If you have more than that, you have to go higher.
 
That would be a problem with previous-gen memory tech. Vega changes this a bit: it requires the minimal PCIe bandwidth to be precisely the same size as the memory you have on the GPU. So if you have 8 GB of HBM, you can go with x8. If you have more than that, you have to go higher.
This is incomprehensible.

PCIe bandwidth is not measured in GiB.

Your "dark web" sources are on very good drugs.
 
This is incomprehensible.

PCIe bandwidth is not measured in GiB.

Your "dark web" sources are on very good drugs.
You're mistaking me for Mago, that's the first thing. Second, those are game developers' words, and the words of people who analyzed the information about the memory paging system in Vega GPUs.
 