There is no need for Crossfire/SLI.

Metal uses Mantle as a base. If you want to see the benefits of the Mantle feature set: http://radeon.com/en-us/deus-ex-directx-12-mgpu/
100% scaling of performance.

The whole point of the Mantle feature set was to enable multi-GPU scaling on eGPUs, so you could have perfect scaling over external expansion.

This is, of course, an implementation on AMD GPUs.
Thanks, this is interesting. Pretty sure nobody has implemented this in any macOS game using Metal, though, which makes it absolutely worthless to me personally. It's like everything else with Apple lately: yes, your hardware could do this, but we're not going to let it into macOS (eGPU, Crossfire/SLI, VR, etc.).
 
Metal allows multiple GPUs to be used for GPU acceleration. It's only up to developers to use it.
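For context on what "up to developers" means here: in explicit multi-GPU models (Mantle/DX12 mGPU, or Metal with multiple devices), the application itself divides the work across adapters. Below is a toy, language-agnostic sketch in Python of the simplest split-frame scheme; the device pool and the per-slice "render" are stand-ins, not a real graphics API:

```python
from concurrent.futures import ThreadPoolExecutor

def split_frame(height, num_gpus):
    """Divide a frame's scanlines into one contiguous slice per GPU."""
    base, extra = divmod(height, num_gpus)
    slices, start = [], 0
    for i in range(num_gpus):
        rows = base + (1 if i < extra else 0)
        slices.append(range(start, start + rows))
        start += rows
    return slices

def render_slice(rows):
    """Stand-in for per-GPU work; each row costs one unit of work."""
    return len(rows)

def render_frame(height, num_gpus):
    """Dispatch one slice per GPU in parallel and combine the results."""
    slices = split_frame(height, num_gpus)
    with ThreadPoolExecutor(max_workers=num_gpus) as pool:
        done = list(pool.map(render_slice, slices))
    return sum(done)  # total rows rendered across all GPUs
```

With an even split and no cross-GPU dependencies, each device does 1/N of the work, which is where "100% scaling" claims come from; real workloads pay synchronization and transfer costs on top of this.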
 
Kinda sad that Apple hasn't raised the bar on GPU compute with the Mac Pro. Perhaps they rolled it out too early, with GCN Gen 1. With all the latest advancements in GCN, and Apple's control of the OS, they could possibly drop back to one GPU and avoid multi-GPU scaling entirely. As for the compute/3D split, it could be implemented via virtualization instead, with resources shared dynamically between the 3D vGPU and the compute vGPU.
 
I thought WoW already rolled out Metal support with Legion, didn't it?
It does, but the frame rates are still abysmal with my D700s. I've just about given up on gaming in macOS. I dusted off an old Supermicro box (around the same specs as a MP 5,1) and stuck a GTX 1070 and a Vive in it. Haven't looked back yet.

I'd happily go back to the Mac if they would get off their collective asses and get some official eGPU support going or, you know, just upgrade the damn cylinder with some Nvidia GPUs!
 
http://www.fudzilla.com/news/processors/42010-macs-with-10nm-intel-in-late-2017

Good editorial on what might be going on with upcoming Intel CPUs and Apple.

One more thing: just because Apple might have a partnership with AMD does not mean we will see them turn away from Intel CPUs. Apple and Intel have a partnership too.

I'd call it a good editorial if the thing were actually edited. I had to reread paragraphs of it to parse what the hell they were trying to say.
 

Good?????? Why would adding an LPDDR4 memory controller be gated on moving to a sub-14nm process? I don't buy it at all. The H-class processors that the 15" MBP uses aren't going sub-14nm; Coffee Lake is another iteration at 14nm.

https://www.macrumors.com/2016/09/22/intel-mobile-roadmap-coffee-lake/

Getting LPDDR4 support just means changing the non-core elements. Just as Intel changed the iGPU in Kaby Lake (7th gen) without a process shrink, all they need to do is tweak the memory subsystem for LPDDR4 support.

The problem here is timing more so than process size. LPDDR4 has been a standard since 2014. The reason it would be late to Coffee Lake is far more that Coffee Lake didn't start until later (after Intel knew that Cannon Lake was way off schedule in terms of yield; yes, the process shrink is a major factor in the schedule slip).

Intel restricting Cannon Lake to smaller dies is pretty indicative that the lateness slide isn't going to be much bigger than what Intel has already projected. They are already triaging the situation.

Apple is highly unlikely to stuff 32GB into the 13" MBP (or MacBooks) before putting that capacity into the 15" MBP models. There is no Cannon Lake H class on the chart. Kuo was off in the weeds on this one, and this editorial just drives further out into the swamp from there.

I suspect that Coffee Lake not showing up until 2018 might be Intel being too conservative, if these are just straightforward non-core updates and incremental refinements/optimizations. If AMD executes, I wouldn't be surprised if this were pulled forward. If AMD shoots themselves in the foot (again), then Intel gooses higher profits on higher 14nm yields a bit longer in the >30W range. They can offset lower-than-desired yields in the 10nm space for the smaller dies that way. With the right profit mix they can make 10nm work, while others jump straight to 7nm because they have nothing substantive to blend with.

One more thing: just because Apple might have a partnership with AMD does not mean we will see them turn away from Intel CPUs. Apple and Intel have a partnership too.

Depends on what mobile solutions AMD can produce in the second half of 2017. If they are executing better than Intel, then Intel could be out.

The longer it takes AMD to get their 14nm Zen solutions out to market, the longer it is going to take the 7nm stuff to show up. Right now Intel is the substantially lower-risk option for late-2017 Macs that are having their designs finalized now.
 
I don't see Apple dismissing the dual GPU anytime soon; it was a hard bet and they're not going back on it.

One thing is clear: the 16GB limitation will be here for a while. And honestly, only a few people (pros, essentially) need more, and that's a solid argument for Apple. The heavy stuff you'd rather do on a workstation than on a laptop anyway. On the other hand, Apple is not offering updated workstations (yet).
This whole race to smaller process nodes is becoming ridiculous.
 
Any supply chain gurus here? I wonder how much it would have affected the MSRP if Apple had gone ahead and bumped the RAM to 32GB? Also, how many people would find the decreased battery life worth the tradeoff for increased RAM?

No doubt Apple knows their customers better than we do, but I really really wonder how many folks would be in an uproar over what - an hour of decreased battery life? Two hours?
 

If it were a regression from previous generations? Way more people would be upset about that than the 16GB limit.
 
Any supply chain gurus here? I wonder how much it would have affected the MSRP if Apple had gone ahead and bumped the RAM to 32GB? Also, how many people would find the decreased battery life worth the tradeoff for increased RAM?

There is no room for 32GB of RAM on the logic board. There are only four package spots for RAM chips, and getting 32GB into four packages would be expensive. The alternative is to enlarge the logic board, but that means more, smaller batteries.

The 32GB laptops you hear about take SO-DIMMs; they don't try to solder those RAM packages onto the main logic board. For example:

Crucial 16GB DDR4 SO-DIMM: http://www.crucial.com/usa/en/ct16g4sfd8213

There are four RAM packages on that SO-DIMM. You would need two of those to get to 32GB, which makes eight packages, and there isn't room for eight packages. Search all you want: there are no 32GB SO-DIMMs out right now. That isn't a supply chain issue; that is a talking-about-future-products issue.

So the supply chain you're talking about is not just some different version of the chips but an additional MacBook product: different logic board, different case, different batteries, different everything. The changes would not be localized to some optional soldered-on packages.
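The package arithmetic above can be checked directly. Assuming (as the post does) four package sites on the logic board, and taking 4GB per package as the assumed densest readily available part, a quick sketch:

```python
def packages_needed(total_gb, gb_per_package):
    """How many DRAM packages a given capacity requires (ceiling division)."""
    return -(-total_gb // gb_per_package)

BOARD_SITES = 4     # package sites on the rMBP logic board, per the post
GB_PER_PACKAGE = 4  # assumed per-package density for illustration

# 16GB fits in the four available sites; 32GB would need eight.
fits_16 = packages_needed(16, GB_PER_PACKAGE) <= BOARD_SITES  # True
fits_32 = packages_needed(32, GB_PER_PACKAGE) <= BOARD_SITES  # False
```

Under those assumptions, doubling capacity means doubling package count, which is exactly the board-space problem described.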

Apple not making 1U or 2U severs , xMac , or boxes with slots isn't a supply chain issue. For better or worse there is a fixed number of Mac products they are willing to make. Adding another Mac product is a much bigger hurdle than adding another build-to-order option.



No doubt Apple knows their customers better than we do, but I really really wonder how many folks would be in an uproar over what - an hour of decreased battery life? Two hours?

$500-600 more and lower battery life? Yeah, there would be whiners.
 
Kinda sad that Apple hasn't raised the bar on GPU compute with the Mac Pro. Perhaps they rolled it out too early, with GCN Gen 1.

Apple bet on OpenCL more so than on GPU compute generally. GCN Gen 1 was fine for the circa-2013 version of OpenCL (1.1, ramping up to 1.2). What has been largely AWOL is Apple putting substantive effort into OpenCL 2.0 (and up).

There is nothing past 1.2 here:
https://support.apple.com/en-us/HT202823 "Mac computers that use OpenCL and OpenGL graphics"

That list is calcified on OpenGL support too. Apple changed the bet from OpenCL to Metal, and that has had relatively minimal impact on the Mac Pro. Metal isn't a substitute for OpenCL (there is some overlap in relatively narrow use cases, but they are generally not equivalent). If Apple has stuck OpenCL in a v1.2 deep freeze, then Apple has pretty much walked away from what they bet on.

With all the latest advancements in GCN, and Apple's control of the OS, they could possibly drop back to one GPU and avoid multi-GPU scaling entirely.

I've lost track of Metal's progress. I don't think it covers a second GPU and/or shared-memory computation all that well for a Mac Pro, and I don't think the modern GCN updates tracking OpenCL 2.0+ are going to get deep leverage. With the Metal blinders fully strapped on and a giant jug of TB3 Type-C Kool-Aid, Apple could throw the compute GPU away for a panel full of TB3 Type-C ports.

They'd be even less competitive with other multi-GPU (and/or multi-CPU) workstations than before, but that would help with some non-compute issues (a substantial drop in top-end compute lowers power demands; multiple internal x4 storage devices would have more bandwidth).


As for the compute/3D split, it could be implemented via virtualization instead, with resources shared dynamically between the 3D vGPU and the compute vGPU.

Virtualization may help close up some security loopholes, but it doesn't help performance. Virtualizing resources is only effective when you have substantially underutilized resources; taking high compute loads and collocating them with 3D graphics loads isn't necessarily going to get you better performance.
 
That's the point. No sign of OpenCL 2.0 at all.



The point is to improve utilisation. With two GPUs in the Mac Pro, you are forced to split work across them, and applications must be designed for multi-GPU scaling to take advantage of both. Within that power budget, they could have fit just one GPU.

What I am talking about is virtualising a strong GPU (like GP100, or AMD's Fiji equivalent on 16/14nm) into one GFX vGPU and one compute vGPU, kinda like how the trash can splits the work in macOS. Since these are just virtualised, the underlying resources can be dynamically partitioned between them. So if one app is hammering the GFX vGPU and nothing else is doing compute, the GFX vGPU can steal all the resources from the compute vGPU for raster graphics, instead of one graphics card in the trash can blowing its fans while the other sits idle.

Of course, there are also alternative ways to do finer-grained QoS. The point is just that the latest GPUs have the technology in place to avoid multi-GPU unless absolutely necessary, in the context of multitasking.
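The dynamic-partitioning idea above can be sketched as a toy arbiter: a fixed pool of compute units (CUs) is reassigned between a graphics vGPU and a compute vGPU according to demand, so an idle partition donates its resources instead of sitting on them. All names and the allocation policy here are hypothetical, purely to illustrate the scheme:

```python
class VGpuArbiter:
    """Toy arbiter splitting a fixed pool of CUs between two vGPUs."""
    def __init__(self, total_cus, reserve=4):
        self.total_cus = total_cus
        self.reserve = reserve  # minimum CUs each vGPU always keeps

    def allocate(self, gfx_demand, compute_demand):
        """Split CUs proportionally to demand, honoring each reserve."""
        demand = gfx_demand + compute_demand
        if demand == 0:
            half = self.total_cus // 2
            return half, self.total_cus - half
        gfx = round(self.total_cus * gfx_demand / demand)
        gfx = max(self.reserve, min(gfx, self.total_cus - self.reserve))
        return gfx, self.total_cus - gfx

arbiter = VGpuArbiter(total_cus=64)
# One app hammering graphics, nothing running compute:
gfx, comp = arbiter.allocate(gfx_demand=1.0, compute_demand=0.0)
# graphics gets everything except the compute reserve: (60, 4)
```

Contrast this with the fixed two-GPU split in the trash can, where the "allocation" is frozen at 50/50 regardless of what the workload actually needs.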
 
The rMBP is using LPDDR3 and you can't get more than 16GB.
Going with a non-LP solution would increase power draw dramatically and require extra room.
I'm still pi$$ed at Apple for not releasing a new rTBD3. Most of the work is done, with LG. LG put out their models, but those just don't have the look. Why can't Apple do what they do best and design their own models to go with their hardware, read: the nnMP? And rMBPs, of course.
Maybe they'll just wait till it becomes mainstream and drive down costs but that isn't their MO.
It would almost just be a matter of putting the hardware inside a different case, a gorgeous one.
A 10-bit 5K 27" panel, USB-C driven, would be just what the doctor ordered :)

Just realized they're offering special prices on LG monitors, up to 25% off. Great. Seems viable to have Apple monitors priced in line with the last generation. 5K for under $1K?!
And all the adapters got their prices slashed too.
 

At first I thought it was April 1st... Apple reducing prices? :)

Then I read more carefully, and it turns out it's for two months only... OK, so business as usual; this was their max... two months... for the love of the pros...
 
Yes, it's limited in time but still. They're feeling generous :)
Maybe it will stick, even after the year's end.
And when the nnMP comes out, an rTBD will follow to make a complete set.
Wishful thinking I guess, they pretty much said they're out of the monitor business.
 
I have to say that although I had no intention of buying an Apple monitor around this time (I prefer the matte displays from NEC or EIZO), I really would like to see one of those well-designed Apple displays again, built with current technology. IMHO these LG offerings are not so tempting...
 
Exactly!

If Apple was working so closely with LG, couldn't they spare one of Ive's team to help make a more aesthetically pleasing, Apple-complementing enclosure?
 

Wall mount that thing... not that this is even an option for the majority of uses, but if it is an option for you, consider doing it that way, as it will probably look pretty OK then (assuming it's mountable in the first place).
 
A wall mount does not help with the ugliest aspect of the monitor, the top bezel; that top bezel alone is the reason I could not have this monitor sitting in front of my face ;)
 