Well, it sounds like more of the "good stuff" is coming in the back half of the year (AMD Vega vs. Polaris, higher-spec Kaby Lake CPUs and chipsets)

The Gen 7 (Kaby Lake) processors for the iMac are already out. They aren't coming in the second half of '17; they are already here.

http://ark.intel.com/products/family/95544/7th-Generation-Intel-Core-i7-Processors#@Desktop

Frankly, although the Gen 8(?) (Coffee Lake) parts were tagged for a 2017 time frame, it is likely the 6-core models will slide into 2018. Coffee Lake is almost the same process (perhaps a minor fin-height tweak and/or layout optimization to squeeze out an incremental clock bump) on essentially the same microarchitecture.
There are few good reasons for Intel to bump the chipset significantly for Coffee Lake. (Same microarchitecture plus a clock bump is basically what happened on the "tick" of the old tick-tock cycle, which usually stayed on the same socket with perhaps a relatively minor chipset bump.)

And Vega's HBM2 targeting doesn't particularly match well with mobile GPUs (which is what the iMac would likely adopt given TDP constraints). So the whole "wait until late Q4" idea really makes no sense. It makes far more sense to release something now so that Apple can release again in mid-to-late 2018 if Intel and/or AMD stay on course.
Polaris is geared toward the mobile end of the overall spectrum. (There may be a process-tweak revision of Polaris coming, but that is unlikely to be a huge game changer for the iMac.)


So it might be better for Apple to wait for the best stuff they can get, to maximize usability over the longer product life-cycle, than to ship something now and have the forums go ballistic later this year when the new parts start hitting and we have to wait years for them in the next update cycle.

No iMac by mid-to-late June is an Apple clusterf*ck, not a 'brilliant' strategy.

No MacBook? That's in the same boat. (The multi-chip-module server processors are coming first on 10nm, so the 10nm mobile stuff is highly likely to slide off into 2018. There is no "Coffee Lake" stopgap in the Y processor range.)

For the Mac Pro, waiting until Q4 '17 only makes sense if it is going to disappear down a hole for another 3 years. They are waaaaaay past late and doing substantive damage now. On the schedules from a year ago, Vega and Xeon E5 v4 would have been out at this point. Unless they were working on a "backup" Xeon E5 v5 logic board in parallel with an E5 v4 one (which seems extraordinarily unlikely, because it appears only the absolute minimal resources are assigned to the product's development), they would not have laid the foundation to do a shift.
 
I'm getting the feeling that Tim and co. want to put out products based on design constraints that Intel and AMD just can't meet. They'd rather make us wait and sell old tech at what must be amazing margins at this point.

The product lineup right now is "good enough" for 70-80% of the user base. They are gambling on losing the other 20-30%.

Waiting till 2018 is nuts.
 
Seems that Apple will release a new Mac Pro this year, same design but with user-replaceable CPU, RAM, SSD, and both GPUs
rofl. good one.

OWC offers CPU and SSD upgrades, so they're both user-replaceable (or you can have OWC do the work for you with a warranty). The GPUs are not (they are on cards, but the interface is proprietary), but I imagine it is possible for Apple to design them to be user-upgradeable as well.
 
Seems that Apple will release a new Mac Pro this year, same design but with user-replaceable CPU, RAM, SSD, and both GPUs
Strictly speaking, the old trashcan had user-replaceable everything; nothing was soldered to it, and 3rd-party teardowns confirm this.

All of the parts were Apple custom architecture though, so it was de facto un-upgradeable.
 
The D700 is at 274 W

This is incorrect. Each D700 is in the ballpark of 125-150 W max. The whole computer has a 450 W power supply, so it's not possible for both GPUs combined to exceed the power supply.
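For the sake of argument, here is that budget math as a quick back-of-envelope sketch (the wattages are the estimates quoted in this thread, not official Apple figures):

```python
# Back-of-envelope power budget for the Mac Pro 2013 (trashcan).
# Assumptions from this thread: 450 W supply, ~130 W CPU (Xeon E5 v2),
# and a 125-150 W max-draw estimate per FirePro D700.
PSU_WATTS = 450
CPU_TDP = 130

for gpu_tdp in (125, 150):
    total = CPU_TDP + 2 * gpu_tdp
    print(f"2 x {gpu_tdp} W GPUs + {CPU_TDP} W CPU = {total} W "
          f"({PSU_WATTS - total} W left for RAM, SSD, fans)")

# A 274 W D700 would mean 2 * 274 + 130 = 678 W, well past the 450 W
# supply -- which is why the 274 W figure can't be per-card.
```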

At this point I think Apple is waiting on AMD's Vega and Intel's Skylake-EP. It wouldn't surprise me to see a WWDC announcement with a September release.
 
The TDP in the BIOS of the GPUs is 129 W for the D500 and D700, and 136 W for the D300.

No idea why it is this way. Polaris GPUs would fit in it, just slightly underclocked, with a reference RX 470 fitting perfectly in the TDP limit (the reference RX 470 appears to have a 125 W TDP power gate).
 
This is incorrect. Each D700 is in the ballpark of 125-150 W max. The whole computer has a 450 W power supply, so it's not possible for both GPUs combined to exceed the power supply.

At this point I think Apple is waiting on AMD's Vega and Intel's Skylake-EP. It wouldn't surprise me to see a WWDC announcement with a September release.

The TDP in the BIOS of the GPUs is 129 W for the D500 and D700, and 136 W for the D300.

No idea why it is this way. Polaris GPUs would fit in it, just slightly underclocked, with a reference RX 470 fitting perfectly in the TDP limit (the reference RX 470 appears to have a 125 W TDP power gate).

Whoops, my bad. Thanks.
 
This is incorrect. Each D700 is in the ballpark of 125-150 W max. ....

At this point I think Apple is waiting on AMD's Vega and Intel's Skylake-EP. It wouldn't surprise me to see a WWDC announcement with a September release.

Apple is highly unlikely to be waiting on Skylake-EP (basically what is the Xeon E5 2000 series and a big part of what is the E5 4000 series). Skylake-W, yes (what is/was the Xeon E5 1600 series). Since it looks like there will be a minor socket shift between the 1- versus 2+-CPU packages (https://en.wikipedia.org/wiki/LGA_2066), they probably won't share the same "5" as a socket designation. E4? (-W) and E5 (-EP).

Likewise, everything about Vega 10 (which is the first to roll out) points to >200 W TDPs. And certainly substantively higher prices (HBM2 isn't going to come in at affordable low-to-mid price points).
The lower-to-mid range of AMD's rollout is highly likely to be Polaris-based for the vast majority, if not all, of 2017. Whether there is a Polaris 12 (custom-tuned for Apple system applications) that is a tweaked update of Polaris 10 is still a bit fuzzy, but the Mac Pro needs at least one mid-range card. There is no way AMD can do a completely Vega lineup from top to bottom of the price range. Between the TDP and pricing problems, Vega being the only blocker is extremely dubious.

The Polaris lineup fits the TDP profile without having to heavily downclock to adjust the envelope. Apple could tweak the enclosure (more diameter -> more airflow throughput) so there was an incrementally bigger tolerance range. That would allow a better safety zone. But adjusting to a bigger buffer and then filling it all the way to the brim will likely lead to the same issue that drove expanding the TDP tolerance range in the first place.
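To put a rough, illustrative number on the "more diameter -> more airflow" point (a simplistic sketch that treats the cylinder as a plain duct; real thermals are messier):

```python
# Duct cross-section grows with the square of the diameter, so even a
# small enclosure-diameter bump buys a disproportionate airflow margin.
for bump in (0.05, 0.10, 0.15):            # 5-15% wider enclosure
    area_gain = (1 + bump) ** 2 - 1        # area scales with d^2
    print(f"+{bump:.0%} diameter -> +{area_gain:.0%} cross-section")
# +5% -> +10%, +10% -> +21%, +15% -> +32%
```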

Skylake-W only makes sense if Apple 'bet the farm' on it in the Mac Pro design process 1+ years ago (and scrapped some early E5 v4 work).
 
Apple is highly unlikely to be waiting on Skylake-EP (basically what is the Xeon E5 2000 series and a big part of what is the E5 4000 series). Skylake-W, yes (what is/was the Xeon E5 1600 series). Since it looks like there will be a minor socket shift between the 1- versus 2+-CPU packages, they probably won't share the same "5" as a socket designation. E4? (-W) and E5 (-EP).

Likewise, everything about Vega 10 (which is the first to roll out) points to >200 W TDPs. And certainly substantively higher prices (HBM2 isn't going to come in at affordable low-to-mid price points).
The lower-to-mid range of AMD's rollout is highly likely to be Polaris-based for the vast majority, if not all, of 2017. Whether there is a Polaris 12 (custom-tuned for Apple system applications) that is a tweaked update of Polaris 10 is still a bit fuzzy, but the Mac Pro needs at least one mid-range card. There is no way AMD can do a completely Vega lineup from top to bottom of the price range. Between the TDP and pricing problems, Vega being the only blocker is extremely dubious.

The Polaris lineup fits the TDP profile without having to heavily downclock to adjust the envelope. Apple could tweak the enclosure (more diameter -> more airflow throughput) so there was an incrementally bigger tolerance range. That would allow a better safety zone. But adjusting to a bigger buffer and then filling it all the way to the brim will likely lead to the same issue that drove expanding the TDP tolerance range in the first place.

Skylake-W only makes sense if Apple 'bet the farm' on it in the Mac Pro design process 1+ years ago (and scrapped some early E5 v4 work).
Polaris 12 is a 640-GCN-core chip. It's not meant as a P10 replacement ;).

A few months ago, in a conference call with investors, Lisa Su said that they will have a "sort of top to bottom" release of Vega GPUs.

Thirdly, I think people overestimate the manufacturing costs of HBM2. The reason AMD went with a 2048-bit memory bus design for all of the Vega GPUs is to reduce manufacturing costs. The bandwidth is sufficient to feed the GPUs, only two stacks are required, and that in turn simplifies the interposer design and reduces its cost.
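The bandwidth arithmetic behind the two-stack point is easy to check; a minimal sketch, assuming stacks at the HBM2 spec's 2.0 Gbps/pin:

```python
# Two HBM2 stacks = the 2048-bit bus mentioned above.
stacks = 2
bits_per_stack = 1024     # HBM2 interface width per stack
gbps_per_pin = 2.0        # HBM2 spec max (assumed for Vega)

bandwidth = stacks * bits_per_stack * gbps_per_pin / 8  # GB/s
print(f"{stacks} stacks x {bits_per_stack}-bit @ {gbps_per_pin} Gbps "
      f"= {bandwidth:.0f} GB/s")   # -> 512 GB/s
```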

I can very easily see a GPU built on the Vega architecture, with 2 stacks of HBM2, at a $349-399 price target.
 
Polaris 12 is a 640-GCN-core chip. It's not meant as a P10 replacement ;).

A few months ago, in a conference call with investors, Lisa Su said that they will have a "sort of top to bottom" release of Vega GPUs.

Top to bottom over how many years? Eventually the NCU and graphics pipeline will trickle down, but over the short-to-medium term the GDDR5 versus HBM2 pricing gap is going to drive differences. Both Nvidia's and AMD's graphics rollouts over the last 3-4 years have consisted of "rebadges"/"retreads" of the previous design iteration to hit the lowest cost points. The AMD RX 400 series has some GCN 1.0 entries at the very bottom of the lineup!!!

https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units#Radeon_RX_400_Series

Eventually they'll be flushed out but "top to bottom" takes years. The Mac Pro update doesn't have years.




Thirdly, I think people overestimate the manufacturing costs of HBM2. The reason AMD went with a 2048-bit memory bus design for all of the Vega GPUs is to reduce manufacturing costs. The bandwidth is sufficient to feed the GPUs, only two stacks are required, and that in turn simplifies the interposer design and reduces its cost.

There is no interposer with GDDR5. None. It's going to be hard to beat zero cost when the part isn't even there. Likewise, zero stacks is going to be cheaper than two. Yes, two is cheaper than four, but zero is even less.



I can very easily see a GPU built on the Vega architecture, with 2 stacks of HBM2, at a $349-399 price target.

The entry-level D300 equivalent (~R9 270 in the consumer space) was priced at $179-199. So that is about a 100% increase. If we're talking about the entry-level Mac Pro's GPU increasing in cost by 100% .... yeah, that would be a problem. It will drive the system price higher.
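Taking the price points above at face value, the "about a 100% increase" figure checks out:

```python
# Entry-GPU cost jump implied by a $349-399 two-stack Vega card
# replacing a $179-199 D300-class (R9 270-class) part.
for old, new in ((179, 349), (199, 399)):
    print(f"${old} -> ${new}: +{(new - old) / old:.0%}")
# -> +95% and +101%: roughly a doubled entry GPU cost
```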

At the top end of the Mac Pro pricing range (top-end GPU cards, etc.) there is way more slack in the pricing constraints than at the bottom. The entry MP 2013 shipped gimped out the door with just 12GB of RAM (one unfilled DIMM slot) and chopped-down VRAM (versus the W7000). There isn't tons of slack there.
 
Top to bottom over how many years? Eventually the NCU and graphics pipeline will trickle down, but over the short-to-medium term the GDDR5 versus HBM2 pricing gap is going to drive differences. Both Nvidia's and AMD's graphics rollouts over the last 3-4 years have consisted of "rebadges"/"retreads" of the previous design iteration to hit the lowest cost points. The AMD RX 400 series has some GCN 1.0 entries at the very bottom of the lineup!!!

https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units#Radeon_RX_400_Series

Eventually they'll be flushed out but "top to bottom" takes years. The Mac Pro update doesn't have years.
The GPUs you quoted are not part of the RX 4XX lineup. They are the R7 4XX line. Polaris has its own branding starting with RX 4XX.

About the time span: that is a good question. The fact that the Polaris line is going to be rebadged shows that this is what AMD offers for the low-end and mainstream markets, however.

Vega 10 - high-end
Vega 11 - mid-range (GTX 1070/1080 level of performance)
This year we are also supposed to see some Ryzen CPUs with integrated GPUs. They are going to use the Vega architecture, so that would account for the "sort of top to bottom" launch ;).

Right now, we know that Vega 10 is the first to be released from the Vega lineup: 4096 GCN cores, with 8 GB of HBM2.

There is no interposer with GDDR5. None. It's going to be hard to beat zero cost when the part isn't even there. Likewise, zero stacks is going to be cheaper than two. Yes, two is cheaper than four, but zero is even less.
You have to use 12 memory chips to get 512 GB/s of bandwidth with GDDR5X. To get the same effect with HBM2 you have to use just 2 memory stacks ;). Do you think the manufacturing costs of HBM2 are 6 times higher in this particular case? When you consider everything required to manufacture each GPU, the manufacturing costs should be similar to a GDDR5X GPU with similar bandwidth. It's just a question of the scale you have to consider. And the HBM2 memory subsystem will use less power. MUCH less.
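A minimal sketch of the device-count math (assuming ~10.7 Gbps/pin GDDR5X on 32-bit chips and 2.0 Gbps/pin HBM2 on 1024-bit stacks):

```python
# How many memory devices each technology needs for 512 GB/s.
TARGET_GBS = 512

gddr5x_chip = 32 * 10.67 / 8    # ~42.7 GB/s per 32-bit chip
hbm2_stack = 1024 * 2.0 / 8     # 256 GB/s per 1024-bit stack

print(f"GDDR5X chips needed: {TARGET_GBS / gddr5x_chip:.1f}")  # ~12.0
print(f"HBM2 stacks needed:  {TARGET_GBS / hbm2_stack:.1f}")   # 2.0
```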

The entry-level D300 equivalent (~R9 270 in the consumer space) was priced at $179-199. So that is about a 100% increase. If we're talking about the entry-level Mac Pro's GPU increasing in cost by 100% .... yeah, that would be a problem. It will drive the system price higher.
The base GPU for the Mac Pro could very well be Polaris 10 XTX/XT2/whatever_it's_called.
 
Apple is highly unlikely to be waiting on Skylake-EP (basically what is the Xeon E5 2000 series and a big part of what is the E5 4000 series). Skylake-W, yes (what is/was the Xeon E5 1600 series). Since it looks like there will be a minor socket shift between the 1- versus 2+-CPU packages (https://en.wikipedia.org/wiki/LGA_2066), they probably won't share the same "5" as a socket designation. E4? (-W) and E5 (-EP).

Right, I can't keep track of Intel code names. Dual-socket Skylake-EP is getting its own separate platform, while -W is going to be the workstation, single-socket variant on Basin Falls.

Likewise, everything about Vega 10 (which is the first to roll out) points to >200 W TDPs. And certainly substantively higher prices (HBM2 isn't going to come in at affordable low-to-mid price points).
The lower-to-mid range of AMD's rollout is highly likely to be Polaris-based for the vast majority, if not all, of 2017. Whether there is a Polaris 12 (custom-tuned for Apple system applications) that is a tweaked update of Polaris 10 is still a bit fuzzy, but the Mac Pro needs at least one mid-range card. There is no way AMD can do a completely Vega lineup from top to bottom of the price range. Between the TDP and pricing problems, Vega being the only blocker is extremely dubious.

Vega seems like a very compute-oriented architecture, like Tahiti was with its high DP ratio. It's probably feasible that they could fit a couple of Vega 10s in the Mac Pro, but it's certainly not cost-effective. I could see Vega 11 (assuming it's smaller than 10 and bigger than Polaris 10) being a good fit, but very little is known about it at this point. All the talk of Vega being able to use high-speed SSDs directly attached to the GPU as extra VRAM just screams Apple/Mac Pro. That makes me believe it's got to be in the next Mac Pro.

Polaris 12 has already been outed as smaller than Polaris 11. I could see Polaris 10 being the D300 replacement, though.

I can very easily see a GPU built on the Vega architecture, with 2 stacks of HBM2, at a $349-399 price target.

You and a lot of the other AMD fanboys are living in fantasy land if you think that Vega 10 is going to be priced this low. This is a GPU bigger than GP102, with exotic memory and an interposer. It's going to be priced > $500, probably up close to $700 alongside the GTX 1080 Ti, assuming it's competitive.

You have to use 12 memory chips to get 512 GB/s of bandwidth with GDDR5X. To get the same effect with HBM2 you have to use just 2 memory stacks ;). Do you think the manufacturing costs of HBM2 are 6 times higher in this particular case? When you consider everything required to manufacture each GPU, the manufacturing costs should be similar to a GDDR5X GPU with similar bandwidth. It's just a question of the scale you have to consider. And the HBM2 memory subsystem will use less power. MUCH less.

Yes, it is cheaper to use GDDR5X. That's why GP100 uses HBM and is very expensive, but GP102 uses GDDR5X and is available for $700 apiece.
 
You and a lot of the other AMD fanboys are living in fantasy land if you think that Vega 10 is going to be priced this low. This is a GPU bigger than GP102, with exotic memory and an interposer. It's going to be priced > $500, probably up close to $700 alongside the GTX 1080 Ti, assuming it's competitive.
First you accused me of being a fanboy, which I am not; then you showed complete and utter ignorance, or a lack of logical thinking.

Why did it have to be Vega 10 priced at $399? Maybe it would be a smaller Vega chip? Did I write which Vega GPU would be priced at $399? The quote:
I can very easily see a GPU built on the Vega architecture, with 2 stacks of HBM2, at a $349-399 price target.
Do you get the context now? It was supposed to illustrate which market(s) can be affected by HBM2 technology.

Think, before you write.

Yes, it is cheaper to use GDDR5X. That's why GP100 uses HBM and is very expensive, but GP102 uses GDDR5X and is available for $700 apiece.
No, because consumer Pascal is the same uArchitecture as Maxwell, and is designed to work with GDDR5(X). GP100 is the true new Pascal architecture, and was designed to work with HBM2. As simple as that.
 
No, because consumer Pascal is the same uArchitecture as Maxwell, and is designed to work with GDDR5(X). GP100 is the true new Pascal architecture, and was designed to work with HBM2. As simple as that.
Oh no - now we have FAKE PASCAL :rolleyes:

How'd Nvidia get 49-bit memory addressing, unified memory, and FP16 into Maxwell for the GeForce products?

Links to support FAKE PASCAL, please.
 
Oh no - now we have FAKE PASCAL :rolleyes:

How'd Nvidia get 49-bit memory addressing, unified memory, and FP16 into Maxwell for the GeForce products?

Links to support FAKE PASCAL, please.
What you are describing is the GP100 chip, which is the real Pascal uArchitecture. Consumer Pascal (GP102, GP104, etc.) is shrunken-down Maxwell.

Want the best possible proof? Take similarly clocked GPUs from both architectures, with similar core counts and similar bandwidth - you will get EXACTLY the same level of gaming performance. GP100, on the other hand, is 30-40% faster clock for clock, core for core, compared to the GP102 chip in that same scenario.

Nvidia even described the differences between the architectures in their GP100 blog post at launch. The whole uArchitecture structure for consumer Pascal cards is exactly the same as for Maxwell, apart from better memory compression and a few meaningless features that nobody benefits from. GP100 - that is a completely different story.

In other words: the biggest impact on the performance of Nvidia architectures comes from register file (RF) sizes. The jump from Kepler to Maxwell took the SM from 192 cores to 128, and each SM kept the same size RF. The effect? Nvidia claimed that 128 Maxwell cores have 90% of the performance of 192 Kepler cores. People believed that tile-based rasterization (TBR) increased the performance of the GPU, but that is not entirely the case. TBR increases efficiency, but it does not increase the throughput of the GPU. Adding TBR to Kepler alone would only have made the GPUs more efficient; it would not have ended in a situation where the GTX 980 was faster than the GTX 780 Ti, both in gaming and compute, despite the Maxwell GPU having fewer cores. Consumer Pascal has exactly the same RF size per 128 cores as Maxwell. On GP100, on the other hand, that same RF size is available to 64 cores, so those 64 cores should again have 90% of the performance of 128 Maxwell/Pascal cores.
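For what it's worth, the ratio this argument leans on is easy to tabulate; a small sketch using the 256 KB per-SM register file size that Nvidia's whitepapers list from Kepler's SMX through GP100's SM:

```python
# Register file (RF) capacity per FP32 core across Nvidia SM designs.
RF_PER_SM_KB = 256          # per-SM RF size (Nvidia whitepapers)

cores_per_sm = {
    "Kepler SMX": 192,
    "Maxwell / consumer Pascal SM": 128,
    "GP100 SM": 64,
}

for name, cores in cores_per_sm.items():
    print(f"{name:>29}: {RF_PER_SM_KB / cores:.2f} KB RF per core")
# ~1.33, 2.00, 4.00 KB/core -- each GP100 core has twice the register
# budget of a consumer Pascal core, which is the gap this post
# attributes the clock-for-clock difference to.
```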

Expect Nvidia to use the GP100 uArch for consumer and professional Volta GPUs.

P.S. One more thing. If you want the biggest proof, search for benchmarks comparing the GP100 chip with GP102 in compute. There are already reviews of both GPU dies. The GP100 chip is 30% faster on average in FP32 than GP102, despite having the same core count and slightly lower core clocks. That is the effect of the RF size being shared by a smaller number of cores in the GP100 chip. I hope that gave you an understanding of what is happening.
 
At this point I think Apple is waiting on AMD's Vega and Intel's Skylake-EP. It wouldn't surprise me to see a WWDC announcement with a September release.

Who will be left to buy it that hasn't moved on? And who would be crazy enough to stick with a company that never talks about its "Pro" flagship, facing another possible 4-year upgrade cycle? People are tired of spending a lot of money only to be treated like mushrooms: kept in the dark and fed b******t.
 
What you are describing is the GP100 chip, which is the real Pascal uArchitecture. Consumer Pascal (GP102, GP104, etc.) is shrunken-down Maxwell.
....

No links, like to "their GP100 blog post at launch"? And note that GP100 is designed for the scientific HPC market, which requires top FP64. The other Pascal chips are focused on gaming and machine learning - where FP32 and FP16 are very important.

Are you really basing your argument on the (obvious) fact that one would expect higher end Pascal chips with HBM to outperform cheaper ones with GDDR5?

Why don't you compare a P40 to a GTX 1080, and let us know what you find? (With links, of course.)
 
Who will be left to buy it that hasn't moved on? And who would be crazy enough to stick with a company that never talks about its "Pro" flagship, facing another possible 4-year upgrade cycle? People are tired of spending a lot of money only to be treated like mushrooms: kept in the dark and fed b******t.

If you hate it so much and have given up, why are you still posting in a Mac Pro forum?

No links, like to "their GP100 blog post at launch"? And note that GP100 is designed for the scientific HPC market, which requires top FP64. The other Pascal chips are focused on gaming and machine learning - where FP32 and FP16 are very important.

Are you really basing your argument on the (obvious) fact that one would expect higher end Pascal chips with HBM to outperform cheaper ones with GDDR5?

Why don't you compare a P40 to a GTX 1080, and let us know what you find? (With links, of course.)

Yeah, this is the right answer. GP100 is only faster "clock for clock" in tasks that require FP64 performance. It's got a monster 5 TFLOPS of FP64. I would bet the "fake" Pascal GTX 1080 Ti is faster in gaming than the "real" GP100, because it has more single-precision compute. Not that anyone would buy a GP100 for gaming. This whole real-Pascal-versus-fake-Pascal thing is just a bunch of crap.
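The throughput math behind that bet, as a quick sketch (core counts from Nvidia's public specs; boost clocks approximate):

```python
# Peak TFLOPS = units * 2 FLOPs/cycle (fused multiply-add) * clock.
def tflops(units, mhz):
    return units * 2 * mhz / 1e6

# GP100 (Tesla P100): 3584 FP32 cores, FP64 at 1/2 rate, ~1480 MHz.
print(f"P100 FP64:    {tflops(3584 // 2, 1480):.1f} TFLOPS")   # ~5.3
print(f"P100 FP32:    {tflops(3584, 1480):.1f} TFLOPS")        # ~10.6

# GP102 (GTX 1080 Ti): 3584 FP32 cores, FP64 at 1/32 rate, ~1582 MHz.
print(f"1080 Ti FP32: {tflops(3584, 1582):.1f} TFLOPS")        # ~11.3
print(f"1080 Ti FP64: {tflops(3584 // 32, 1582):.2f} TFLOPS")  # ~0.35
```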
 
If you hate it so much and have given up, why are you still posting in a Mac Pro forum?

I don't hate anything. I'm angry at what Apple has done to their once wonderful lineup of computers. If Apple doesn't do a 180 and announce a fantastic group of desktops at WWDC, I'll spare you my exasperation and spend more time on high-end PC, photo, video, gaming, and 3D-specific forums. This is a great forum and group of people around here, but like you say, I've just about given up all hope for a powerful, good-value, but not cheap, desktop from Apple.
 
I drive a 2005 Infiniti FX35 and it is a terrific car. But it's beginning to show its age (150K+ miles), and I would love to have basic niceties like USB, Bluetooth, and CarPlay. Unfortunately, Infiniti hasn't bothered to update it since 2009, and even that was essentially a facelift and a power bump, nothing more. Then they renamed it the QX70 a few years later, but technically it has remained the same since '09.
What's even more crazy, they're still selling this eight-to-thirteen-year-old tech (the original FX was released in 2003) at 2017 prices!

I'm really waiting for Infiniti to pull their heads out of the sand and release a new FX/QX, but at this point I doubt it will ever happen. They showed a concept car for a new QX50 but it looks like the QX70 is dead and buried at this point. I guess people like me are expected to just go for a QX50 when they come out, but I haul gear on a regular basis and really need the extra cargo space.

If my trusty old FX kicks the bucket (knock on wood...) I may just have to go look for an Acura or Mercedes...
 