Just BTW, apple.com (not the mobile version) now says the Mac Pro supports up to three 5K displays (under tech specs). Wonder if that means something is coming, or if it's a huge mistake.

Perhaps not an error. With six TB2 ports, there could (theoretically) be one 5K monitor running from each pair/bus on the nMP(6). Almost certainly not at 60Hz, though... and it would leave limited bandwidth for anything else.
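Quick sanity check on that 60Hz point. This little Python sketch assumes uncompressed 24-bit RGB and ignores blanking overhead (which only adds a few percent on top, making 60Hz even less plausible):

```python
# Back-of-envelope check: does one 5K stream fit in a Thunderbolt 2 bus?
# Assumes uncompressed 24-bit RGB; blanking overhead ignored.

def stream_gbps(width, height, refresh_hz, bits_per_pixel=24):
    """Raw video bandwidth of one display stream, in Gbit/s."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

TB2_GBPS = 20  # Thunderbolt 2 aggregate link rate per bus

for hz in (30, 60):
    need = stream_gbps(5120, 2880, hz)
    fits = "fits" if need < TB2_GBPS else "does not fit"
    print(f"5K @ {hz} Hz needs ~{need:.1f} Gbit/s -> {fits} in a TB2 bus")
```

So 5K at 30Hz (~10.6 Gbit/s) squeaks through a 20 Gbit/s TB2 bus, but 60Hz (~21.2 Gbit/s) does not, even before overhead.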

But could this be a clue that the Apple/5k/27" monitors may arrive before the nMP(7)? Curious.

[Screenshot: Mac Pro tech specs on apple.com, 2015-09-21, listing three 5K displays]
 
Yes, Mantle/Vulkan/Metal is a software technology that is not dependent on AMD hardware.

Mantle is dependent on AMD hardware.

Sure it has been optimized for Radeon, but it can be used with any other GPU tech.

Nope. Vulkan is the generic version of it that can run on other hardware. But it's not exactly the same.

Perhaps AMD agreed to help Apple build Metal under an agreement where Apple becomes a Radeon-only house for, let's say, five years. And it started in 2013 with the nMP.

Mantle and Metal are not at all related or similar. Metal was an in house project for Apple's iOS GPUs. Nothing to do with Radeons. Nothing to do with AMD. If AMD had been involved, we'd be looking at Mantle or Vulkan, not Metal. Metal was not originally developed with any intention for the Mac, and even on the Mac it still has a mobile GPU feature set.
 
Much of the copy still says three 4K. The only place I see them actually changing it is the tech specs section on the US site.
 
A bit early to call TB3 MIA. Most of the Gen 6 (Skylake) options aren't shipping yet. If it's still missing by mid-to-late November, then that would be odd. However...

MIA at Apple; but any Mac movement post June is also MIA. Dell, not so much.

The Alienware 13, 15, and 17 have Thunderbolt v3 ports:
http://www.alienware.com/landings/laptops.aspx

Hmm, it doesn't look like real TB3. It claims "USB Type-C™ with support for SuperSpeed USB 10 Gbps and Thunderbolt™ 3 technologies". I wonder what functionality it may have.
 
Hmm, it doesn't look like real TB3. It claims "USB Type-C™ with support for SuperSpeed USB 10 Gbps and Thunderbolt™ 3 technologies". I wonder what functionality it may have.

Of course, it would also be great to have something like the Alienware Graphics Amplifier that might work with OS-X/Macs/TB. But it looks like that day is still a long way off. Plus... Drivers!

http://www.anandtech.com/show/8653/alienware-graphics-amplifier
Quote: "The good news is that Alienware has more or less solved the problem (of external GPUs), but the bad news is that the interface is a proprietary Alienware/Dell design that is only available on their laptops."
 
Hmm, it doesn't look like real TB3. It claims "USB Type-C™ with support for SuperSpeed USB 10 Gbps and Thunderbolt™ 3 technologies". I wonder what functionality it may have.

Thunderbolt v3 controllers have a USB 3.1 Gen 2 controller embedded inside, and Thunderbolt v3 uses the USB Type-C connector as an additional 'Alternate Mode'. In the quoted list above, if the first two were not present and someone claimed it had TB v3, then that would be a mystery. If it has "USB Type-C" and "SuperSpeed USB 10 Gbps", then there is absolutely no disconnect with TB v3 following those two.

The folks who think TB is evil can just use the port as a USB 3.1 Gen 2 port, or as a Mini DisplayPort port.

Thunderbolt never was a 'USB killer'. Nor vice versa. Thunderbolt v3 subsumes USB and adds more. The whole 'killer' obsession is to completely miss the point.
 
All rumors now point very clearly to Apple skipping the Xeon E5-2600 v3 for the Mac Pro update, which means adoption of the imminent Xeon E5-2600 v4 early next year; the updated Mac Pro will arrive by next WWDC.

Note: the Xeon E5-2600 v4 ditches 4-core CPUs; the base model starts at 6C/12T, jumping up to 22C/44T (servers; desktops up to 16C/32T).

This also implies skipping the current AMD FirePro W9100-based GPUs for next-gen GPUs built on the same process as the R9 Nano, with up to a 40% increase in processing power at lower TDP (nearly doubling the D700).

Apple will move to standard M.2 NVMe as well, or at least use a proprietary M.2 socket carrying NVMe signaling.

TB3/USB-C of course comes included, with fewer TB2 "legacy" ports, no USB 3 ports, no TB2 with video output; up to four 5K monitors could be connected to the updated new Mac Pro.

The LAN port will at last adopt the 10GBASE-T standard over RJ45/Cat 6a infrastructure.
 
HBM2 seems to be going well, with final testing in Q4 and mass production in Q1 '16. We could see boards in early Q2, maybe in time for the nMP.

Source? I've seen some articles hand-wave about Q4 final testing, but when you dig into their supposed sources, all the references are to general 2016 shipping, not "really early Q1 '16". That "early Q1 '16" seems driven more by Nvidia fanboys wanting Pascal to drop earlier than it probably will.

Samsung ramping up HBM2 without having done HBM1 in volume would be surprising. SK Hynix hasn't been targeting Q1 '16 in most info leaked from them. With multiple suppliers there may be some "ship first" contest with HBM2, but I'm a bit skeptical of stable, high-volume shipments arriving quicker than HBM1's did. Throw the new 14-16nm rollouts for the associated GPUs on top and there are lots of moving pieces here that could get logjammed. If volume is constrained, it is doubtful that Apple would get a special supply if they're going to leverage binned (even tighter power) models for their custom boards.

I'd say Skylake Xeons will be out only by mid-2017 at best, but then again maybe not, since most of the groundwork was done with Broadwell.

Xeon E5 v5 probably won't be that late. There is potentially some coupling between it and a subset of the late-2016 Xeon Phi models dropping. The bandwidth increases in the server space should open quite a gap between it and anything AMD could come up with. Just as the desktop/laptop Gen 5 to Gen 6 (Broadwell to Skylake) transition got compressed, the server/workstation one probably will too. However, compression in the server space means 8-12 months (instead of the usual 14-18).

The E5 v5 comes with a whole new socket, chipset, and interconnect, so v4 (Broadwell) doesn't really lay any groundwork.

The other probable hiccup is that there are some indications that Intel is going to decouple E5 1600 v5-like (single-CPU workstation) functionality from the E5 2600 v5 offerings. Perhaps the workstation parts will get their own socket number, E5 1700 v5, or prefix, E4 1600 v5 (which implicitly may mean a de facto electrically different socket). That too may shrink the timeline between v4 and v5. If they max out at 8-10 cores (so the die is a smaller verification task) and slightly mod the desktop chipsets, they may save time.
 
All rumors now point very clearly to Apple skipping the Xeon E5-2600 v3 for the Mac Pro update, which means adoption of the imminent Xeon E5-2600 v4 early next year; the updated Mac Pro will arrive by next WWDC.

March (E5 v4) means a June (WWDC) release? There is little to no good rationale for squatting on this after what is likely a long delay since the principal work was done on this design.


Note: the Xeon E5-2600 v4 ditches 4-core CPUs; the base model starts at 6C/12T, jumping up to 22C/44T (servers; desktops up to 16C/32T).

The current Mac Pro doesn't use 4-, 6-, or 8-core E5 2600 models now. What difference will changes at the low end of the 2600 series lineup make to a future Mac Pro? Extremely likely, absolutely none. In a single socket, the 2600 series models are largely a huge mismatch for the selection.

Intel can either drop 4-core from the 1600 v4 lineup or stop kneecapping the base and turbo speeds. For the Core i7 4900 and 5900 models they dropped the 4-core but also kneecapped the I/O of the entry 6-core (or dropped the entry model in the 5900 series). You can have 6 cores only if those 6 are throttled back in some contexts. Intel doesn't want to sell a ~$300 6-core processor without baggage. Perhaps 14nm makes the margin high enough that they'll break past that barrier.

The 1600's die probably scales up to 8 cores max, which means doing 4 is likely quite straightforward. The question is whether they can and want to uncork those 4 toward running more 1-2 core workloads at a more optimized rate. They certainly did not with v3; the 4-core performance stagnated. I'm not sure if that was on purpose (kneecapping to hit a lower price point) or internal interconnect and/or thermal management issues (infrastructure too skewed toward core count and slower clock rates). The latter could be fixed with a move to 14nm. The former probably would not get any better moving from 4 cores to 6; the cores would just be clock-capped to hit the price point.


TB3/USB-C of course comes included, with fewer TB2 "legacy" ports, no USB 3 ports, no TB2 with video output; up to four 5K monitors could be connected to the updated new Mac Pro.

No USB 3-only ports is silly. The new chipset has USB 3 built in (cheaper than the discrete controller in the current Mac Pro). The number of TB ports is likely going down if they use TB3, so taking away 'plain' USB 3 ports at the same time is beyond goofy. You'd actually want to move as much potential USB 3-only traffic as possible off of those limited TB3 ports.

Also, with the reduction in TB ports, you are not going to get more 5K external monitors connected. Just not going to happen.
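Rough numbers on why, assuming uncompressed 24-bit RGB streams (real DisplayPort stream limits only make this tighter):

```python
# How many uncompressed 5K@60 streams fit on one Thunderbolt 3 link?
# 24-bit RGB assumed; blanking overhead ignored.

TB3_GBPS = 40
stream_5k60 = 5120 * 2880 * 60 * 24 / 1e9  # ~21.2 Gbit/s per display

per_bus = int(TB3_GBPS // stream_5k60)
print(per_bus)  # each 5K@60 display eats an entire TB3 bus
```

One 5K@60 display per TB3 bus, so four 5K displays need four buses; with fewer ports/buses, the display count goes down, not up.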
 
4-core was never the 2600 realm; it's the 1600's.
And Broadwell will have the same 4-core SKUs as far as I can tell.
The 4- and 6-core options on the nMP are 1600 parts; only 8 cores and higher get 2600 SKUs.
Now the 8-core can be a 1600 too; maybe the cost will drop, though that's unlikely with Apple.
Apple using M.2? Don't bet on it.
Also, ditching the integrated USB 3 controllers would be really stupid. 4 or maybe even 6 ports will be there.
USB 3.1 will come as well, courtesy of the TB3 controllers.
And how exactly would you connect the four 5K displays? Hardly.
And 10GbE shouldn't come just yet, with Purley later on.

Yes, Skylake is a completely different beast, but I was referring to the 14nm process, which was the primary reason this all got delayed. Now with that sorted out, Skylake will probably have a much easier ramp-up.
The 1600 is gonna be sort of a distant cousin, it seems, which will set 1S workstations even further apart from 2S and up; the nMP will be constrained to that setup or not. The 1600 will continue on the current path or even come closer to desktop parts, while the 2600 and up seem to target servers only.
 
...
Now the 8-core can be a 1600 too; maybe the cost will drop, though that's unlikely with Apple.

Depends upon what they are shooting for.

The current 8-core option is the E5 1680 v2 (really a mutated 10-core 2600 with two cores turned off to crank the clock speed a bit more... but still priced like a 10-core 2600 model):

base 3.0GHz, turbo up to 3.9GHz, price $1723
http://ark.intel.com/products/77912/Intel-Xeon-Processor-E5-1680-v2-25M-Cache-3_00-GHz


E5 1680 v3 (v4 probably has better clocks, but just to set pricing against clock speed):
base 3.2GHz (gain), turbo up to 3.8GHz (backslide), price $1723 (still no change)
http://ark.intel.com/products/82767/Intel-Xeon-Processor-E5-1680-v3-20M-Cache-3_20-GHz

E5 1660 v3 (again, v4 is better...):
base 3.0GHz (backslide), turbo up to 3.5GHz (backslide), price $1080 (~$700 cheaper)
http://ark.intel.com/products/82766/Intel-Xeon-Processor-E5-1660-v3-20M-Cache-3_00-GHz

What Apple needs with an updated Mac Pro is more CPU options. There are more than just three trade-offs to make. The current Mac Pro would have been better with a 10-core option (in the gulf between 8 and 12). Some folks need more cores than max turbo; others need a better mix of both. With the old single- and dual-package line they had six CPU offerings. It wouldn't hurt to have 4-5. That isn't much more complexity.

[Same with GPUs. If there's only going to be one supplier, they need more than three options to cover the relatively broad group they're trying to target. They need a more affordable GPU option, even if they just chop the price on the D300 pair (if the GPU sockets are staying mostly the same).]

If Apple can have five different Watch band implementations, then they can have around the same number of CPU/GPU options.




Apple using M2? Don't bet on it.

Now that they optionally allow 3rd-party TRIM in the OS, the case for a slightly mutated M.2 slot is a lot weaker. Apple isn't doing much by making the key notches slightly less physically compatible; electrically they don't seem to be doing much at all beyond M.2. If they were simply trying to keep out 3rd-party TRIM, proprietary key notches might have made sense. For this next iteration of the Mac Pro, it would be surprising if they stubbornly clung to that.

For the next iteration, whose design started after Apple enabled 3rd-party TRIM... it borders on a waste of time and effort. Additionally, if they open up dual internal SSDs at that point (when more internal PCIe lanes are available), it makes even less sense (the slot will ship open in more than a few configs).


Yes, Skylake is a completely different beast, but I was referring to the 14nm process, which was the primary reason this all got delayed. Now with that sorted out, Skylake will probably have a much easier ramp-up.

Process-wise? Yes. The technology that is going into E5 v5, not so much, especially in the 2600 v5 space. A new CPU package interconnect is coming, plus integrated very high speed networking (10GbE), a bump in PCIe bandwidth, and substantially higher internal bandwidth. That has more moving parts than the "bigger, faster" GPU that the mainstream desktop/laptop got.

Even if they fork the 1600 off from the "big iron" stuff (new CPU package interconnect and integrated 10GbE), they still have higher internal bandwidth to deal with. That may be closer to an incremented iGPU. It depends on whose "internal comm ring" they pull to do the job: the mainstream one or one of the 2600 ones.

Similar issues with the chipset. The DMI link is changing, and the services integrated in the chipset are changing. That isn't going to get you a shortened test cycle. If they're still on the Xeon partners' more rigorous test/QA evaluation protocols, then those just take more time than the "hurry up and ship" desktop ones.
 
I do remember that Xeons are divided into 3 tiers.
With Ivy Bridge, currently in the nMP, the lower tier was 4- and 6-core only, while 8-core was already in the mid tier, which goes up to 10 cores, and finally a 12-core at the high end.
With Haswell, the lower tier already goes to 8 cores, which means the 1680 v3 should be a real 1600 SKU, not a rebadged 2600. Still, Apple wouldn't lower the cost just for that reason.
I agree on the number of available options, but having too many options doesn't seem to be the way Apple wants to go. They don't seem to want to spread out when it comes to available configurations, maybe for support reasons, or plainly because of economies of scale with the parts they buy from suppliers: maximize volume with a minimum of different SKUs.
Good point on the Watch bands though :)

Don't you think that now that PCs are all on M.2, Apple would seem to surrender to a better (more universal) solution that wasn't created by them? Wouldn't that be considered sort of a loss? Not a trend set by them? I find it highly unlikely. They started using PCIe SSDs as a standard solution first and now would have to bend to the newcomers' standards? Hell, that would be like admitting failure or a lack of vision in the first place.
And not to mention that this way anyone could replace drives on any Mac, which would most likely be (or they see it as) a service nightmare. But that's just me wondering.

OK, Skylake is a completely different platform, but it also should be quite far along. And precisely because there's so much new stuff in there, test and validation should take quite a while.
But who knows, maybe with all the delays they have been working on it.
Skylake was also mentioned as having PCIe 4 some time back, and apparently it got canned, so a few things might still change.
 
Mantle and Metal are not at all related or similar. Metal was an in house project for Apple's iOS GPUs. Nothing to do with Radeons. Nothing to do with AMD. If AMD had been involved, we'd be looking at Mantle or Vulkan, not Metal. Metal was not originally developed with any intention for the Mac, and even on the Mac it still has a mobile GPU feature set.

What makes you think that Metal is not rebranded Mantle with some local spices? Mantle is a DICE and AMD co-production; the planning stage started around 2008 and the coding project started in early 2012. This interview (in German, link at the end; use Google Translate or similar if needed) with DICE's rendering architect Johan Andersson was released in late 2013, and in it he says that he wants to see Mantle everywhere (smartphones, tablets, and Mac OS). He says that any GPU manufacturer can implement it! It is noteworthy that Apple went AMD-only in Macs in 2013, and Metal for iOS came one year later. To speed up the project, Apple has most likely done it in cooperation with AMD. And Apple ported it to iOS first, because the mobile devices would benefit most from the new API.

Mantle was not meant to be AMD-only, but they were the only GPU manufacturer who actually joined the project at the beginning.

http://www.heise.de/newsticker/meld...ber-AMDs-3D-Schnittstelle-Mantle-2045398.html
 
The bands are cheaper than CPUs :) or not?

I know you're joking, but Intel has done the overwhelming majority of the work here in the CPU-socket-filling context.

So from an Apple R&D perspective? No. It is 1-3 more models that fit in the same single CPU socket they are already doing the work for. They have largely the same TDP (and Apple can choose to make that constraint tighter), so they don't really need to design a new cooling system. It is another set of validation tests to run, but they are quite similar to the tests already running. Jony Ive and his band of merry men are not needed in any way, shape, or form in this subcontext; that cost overhead is gone. If OS X works on the 12-core (or whatever the new upper max is) model, then a number between 4 and 12 probably won't surface any significant extra work in the kernel, drivers, and an extremely high percentage of Apple apps.

Market research costs might be a bit higher, since they still need to pick a narrow subset out of Intel's offerings. Who finds which distinctions valuable is probably something Apple can't fully discover just by trying out demos on internal folks; they'd have to talk to people. But that shouldn't be that high.

Logistically, they have to carry more product inventory, but a 30% markup on $800-2600 components is a lot of slop-filling profit. In terms of inventory numbers, it is likely several orders of magnitude less than the bands, with the profits higher by a similar order of magnitude.

GPUs are harder and more costly, but Apple doesn't have to do them all in a single year... especially if there are going to be gaps between major Mac Pro product upgrades.
 
I do remember that Xeons are divided into 3 tiers.
With Ivy Bridge, currently in the nMP, the lower tier was 4- and 6-core only, while 8-core was already in the mid tier, which goes up to 10 cores, and finally a 12-core at the high end.
With Haswell, the lower tier already goes to 8 cores, which means the 1680 v3 should be a real 1600 SKU, not a rebadged 2600.

The 1680 v2 had 25MB of cache (the additional 5MB from the two cores turned off). The 1680 v3 has only 20MB, so yes, it is on the same die as the rest of the 1600 lineup.

[Image: Haswell-EP die configurations]

http://www.anandtech.com/show/8423/intel-xeon-e5-version-3-up-to-18-haswell-ep-cores-/4

Some of the backslide on turbo is probably partly coupled to not having enough data to keep it fed; keeping it cool in a tighter space is another factor. But the 1680 v3 is actually listed openly on ark.intel.com; you have to use a Google search to get to the 1680 v2 page. For whatever reason, Intel has stuck with the quirky price (it puts loads more money in their pocket, no doubt). It should be a bit lower than that (-$200 or so). We'll see when v4 arrives; it probably depends on how hard Intel pushes the base/turbo rates up.


I agree with the number of available options, but having too many options doesn't seem the way Apple wants to go.

4-6 options isn't really a lot (and could be split into two major subgroupings). They have also lost a small amount of credibility with the "millions of permutations" that is supposed to be an 'insanely great thing' for the Watch, in contrast with the "you can only choose from a list of 3-4" dogma.


Don't you think that now that PCs are all on M.2, Apple would seem to surrender to a better (more universal) solution that wasn't created by them?

Apple surrendered to the Internet (they're not using AppleTalk all that much anymore). In Steve Jobs' Flash rant, didn't he say that Apple likes standards when appropriate?

M.2 is a bit "too loose". There are multiple form factors, and 2-3 years ago there was really no solid consensus that the PC industry was going to adopt it. The new Samsung 950 Pro M.2 is going to be sold at retail; the previous Samsung high-end PCIe SSDs were not (OEM only). The question of whether the slim stick format closest to what Apple implemented would be adopted broadly is apparently settled: it is being sold at retail by major suppliers.


Wouldn't that be considered sort of a loss?

It is probably cheaper for Apple to buy. They buy standard connectors and standard parts and slap their own firmware on them. Same reason why buying Intel and buying an ARM architecture license is cheaper: share R&D costs with others and get better, more affordable products.

Apple had a bug up their butt over TRIM. But if they're over that... cost-, support-, and procurement-wise, it is better to buy from the industry. If Apple got to the point where they rolled their own high-end controller and were just buying flash chips in bulk (like the iOS devices) to solder onto the board, then standards wouldn't make much of a difference. For now, though, removable storage is good: high amounts of write activity mean the SSD is likely to wear out before the Mac system does.


It is better that Thunderbolt is somewhat widely adopted than the Apple Display Connector, which went nowhere fast.


Not a trend set by them? I find it highly unlikely.

Apple already set the trend before the standard was really dry; almost all Macs are stick-SSD only.
This is one of those things, like EFI, where Apple is more or less waiting for the PC industry to catch up to a trend that is already moving.


They started using PCIe SSDs as a standard solution first and now would have to bend to the newcomers' standards?

Bend? It was fundamentally the same thing electrically. Primarily the physical key notches in the connectors were different, and the dimensions of the board were slightly different. More different-for-difference's-sake than a "bend" in long-term direction.

Skylake was also mentioned as having PCIe 4 some time back, and apparently it got canned, so a few things might still change.

Not at this stage. The design has been frozen. And in retrospect, it's not too surprising that Intel traded PCIe 4 for their own in-house interconnect after spending many, many millions to buy it. Intel bought Aries / TrueScale; Omni-Path is a follow-on to those ( http://www.anandtech.com/show/9561/exploring-intels-omnipath-network-fabric ).

Lots of boards haven't fully taken advantage of PCIe v3. Cranking the speed even higher for PCIe v4 is likely going to be at least as bumpy as the transition from v2 to v3 was; Sandy Bridge was late because of that. The folks strongly demanding PCIe v4 were mainly the InfiniBand and 100GbE folks, and Intel's Omni-Path is a better near-term route that will probably be more cost-effective. AMD barely got to PCIe v3, and who else is in hot pursuit? (Power and SPARC are also busy with other stuff.)

Just doing 8 more PCIe v3 lanes is easier. Some folks can plug in more x16 and x8 cards, and board vendors can solder on more 10GbE, Thunderbolt v3, and sockets for M.2 x4 PCIe v3 SSDs.
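For rough context, here's the per-lane math (line rate times encoding efficiency), which is why extra v3 lanes are the low-drama option:

```python
# Approximate usable bandwidth per PCIe lane, by generation:
# line rate (GT/s) x encoding efficiency / 8 bits per byte.

GENS = {
    "v2": (5.0, 8 / 10),      # 8b/10b encoding
    "v3": (8.0, 128 / 130),   # 128b/130b encoding
    "v4": (16.0, 128 / 130),  # the generation dropped from Skylake
}

for gen, (gts, eff) in GENS.items():
    per_lane = gts * eff / 8  # GB/s per lane
    print(f"PCIe {gen}: ~{per_lane:.2f} GB/s/lane, x8 ~{8 * per_lane:.1f} GB/s")
```

Eight extra v3 lanes are roughly 7.9 GB/s of headroom: plenty for more 10GbE, TB3 controllers, and x4 SSD sockets without the signal-integrity pain of doubling the line rate.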
 
The GPU they SHOULD use.

Note that they are claiming performance isn't neutered.

The gap is smaller, but not minuscule.



[Image: Nvidia GTX 980 notebook performance slide]


http://www.anandtech.com/show/9649/nvidia-gtx-980-in-notebooks

But performance isn't the sole criterion for selection. Cost, willingness to bundle the way Apple wants (in the "Pro" class), and OpenCL adoption/support are also likely roughly equal criteria. Fail on any of those and Nvidia would lose the design bake-off.
 