
xsmi123

I didn't realize that people missed this announcement. I believe these will be the cards used in the Mac Pro updates when we get them.



 
Wowowww this looks amazing.

$10,000 for the development kit alone, though! It'll be released in 2017 at the earliest. So unfortunately it probably won't be coming to a Mac Pro near you – not in the next couple of years, anyway :(
 
As for the SSG: I think to make it work you need a coherent fabric connecting the SSD to the GPU and CPU.

I wonder if it is possible to connect both GPUs in the Mac Pro with a coherent fabric, and the SSD in the same way, to create one huge computing unit.
 
I wonder if it is possible to connect both GPUs in the Mac Pro with a coherent fabric, and the SSD in the same way, to create one huge computing unit.
It would almost certainly be doable.

The questions are whether it can be done within the space/power/thermal constraints of the cylinder, and whether it would be very expensive with few applications that could exploit it.
 
It would almost certainly be doable.

The questions are whether it can be done within the space/power/thermal constraints of the cylinder, and whether it would be very expensive with few applications that could exploit it.
One thing: a low-level API called Metal. There you are – fixed your problem.
 
For example?
It's the law.

Which is what tuxon86 said, but explained using precise language like:

[Image: amdahl.png – Amdahl's Law]
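
For reference, what the image shows is Amdahl's Law: if a fraction p of the work can be parallelised across n processors, the best possible speedup is

S(n) = 1 / ((1 - p) + p/n)

so even with p = 0.9 and an unlimited number of processors, the speedup tops out at 1 / (1 - 0.9) = 10x. The serial part of the workload sets a hard ceiling no matter how much parallel hardware you add.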
 
I wouldn't want to see it in a cylinder. I, for one, am hoping that No One was telling the truth about the test mule he saw with four PCI slots and cards the same color as the case.
 
This card and the Mac Pro's cards intersect in one interesting way: one of the Mac Pro's graphics cards already holds up to 1 TB of SSD (and the other reserves space for one). The Pro's cards have actually been working towards this sort of solution.
 
This card and the Mac Pro's cards intersect in one interesting way: one of the Mac Pro's graphics cards already holds up to 1 TB of SSD (and the other reserves space for one). The Pro's cards have actually been working towards this sort of solution.


How? This is the first I've heard of this.
 
It's the law.

Which is what tuxon86 said, but explained using precise language like:

Yep, I forgot about that, thanks for reminding me.

However, how big is the market in this specific scenario? Because I think the solution we are discussing is for the 99% of users who want a powerful Mac, have a need for it, and have the money for it. Apple is going all in on parallel computing, so I do not think Amdahl's Law applies accurately in this particular case.
 
This card and the Mac Pro's cards intersect in one interesting way: one of the Mac Pro's graphics cards already holds up to 1 TB of SSD (and the other reserves space for one). The Pro's cards have actually been working towards this sort of solution.

That's just where the nMP's hard drive is, as it's probably the only place they could find room for it. Not the same as the AMD tech.
 
I didn't miss it, I just don't think it's relevant to anything in the Mac lineup, but we'll see.

So you feel the nMP will continue... I don't. At least, I hope not.

Or do you feel that the D series is not a true FirePro card?
 
That's just where the nMP's hard drive is, as it's probably the only place they could find room for it. Not the same as the AMD tech.

No, not the same, of course. But it's there. And the other card has vestigial support for a second connector. I'm just pointing out the curious "coincidence."

Personally, I don't think that it's entirely a mere accident of fate (e.g., couldn't fit otherwise). On the other hand, SSD on the graphics cards is probably a little premature for mainstream use.
 
Yep, I forgot about that, thanks for reminding me.

However, how big is the market in this specific scenario? Because I think the solution we are discussing is for the 99% of users who want a powerful Mac, have a need for it, and have the money for it. Apple is going all in on parallel computing, so I do not think Amdahl's Law applies accurately in this particular case.

Parallel computing isn't a new thing. The thing is, only relatively few tasks can be effectively parallelized. Graphics can be, and has been for a long time, since it can be split into distinct, unrelated rendering tasks. But for most other tasks we use computers for, the overhead of making sure one task doesn't depend on the result of another makes it not worth the effort.
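
A toy sketch of that difference in Swift (just to illustrate the idea; the numbers and the "tile" workload are made up):

import Dispatch

let n = 1_000
var tiles = [Double](repeating: 0, count: n)

// Embarrassingly parallel: each "tile" is independent of the others, so the
// work spreads across every core with no coordination between iterations.
tiles.withUnsafeMutableBufferPointer { buffer in
    DispatchQueue.concurrentPerform(iterations: n) { i in
        buffer[i] = Double(i).squareRoot()   // stands in for "render tile i"
    }
}

// Inherently serial: every step depends on the previous result, so extra
// cores cannot help no matter how many you throw at it.
var state = 1.0
for _ in 0..<n {
    state = (state * 1.000001).squareRoot()
}

print(tiles[42], state)

The first loop is the kind of work a GPU (or many CPU cores) chews through easily; the second is the part Amdahl's Law says you're stuck with.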
 
In the future, more and more specialized tasks will be handed off to co-processors. Currently, many tasks need a lot of single-threaded, non-parallel CPU time, but at the same time the process needs access to unified memory to talk to different devices. Today that costs expensive CPU time just to move data from here to there.

The co-processor future is already happening with Apple's Ax SoC series; there are the M9, DSP, ISP, etc., but they are difficult to benchmark, and comparing Apple's ARM chips to Intel parts is not fair. Most likely an A9X could run Logic Pro X at the same speed as an Intel desktop CPU, because a DSP could take most of the CPU load.

For Macs, Intel is the roadblock preventing them from being fully HSA compliant, so co-processors create a lot of overhead on the CPU, which limits their use. The main problem is Intel's closed memory architecture; it cannot (or does not want to) share memory with co-processors. There is no support for a universal heterogeneous unified memory access architecture.

So, it could be that Apple wants AMD to make a good x86 CPU for them to replace Intel. To fund this, they've used AMD GPUs exclusively for years now. Zen or a custom chip could give Apple the key to being different in the personal computing space: the first fully HSA compliant computer. Maybe this is why we haven't seen a lot of updates recently? Because Apple wants to make a jump?

The second technology blocking Apple's way to an HSA architecture (where they'd hold the keys to the system) was CUDA, so it had to go too... sure, the latest CUDA offers similar unified memory features, but it is a closed system. OpenCL 2.0 supports shared virtual memory, which is a key component of HSA. So, no proper OpenCL 2.0 support until macOS and Mac hardware are HSA ready.
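
As a rough sketch of the shared-memory idea (this is just Metal's shared storage mode as it exists today, not HSA itself; error handling omitted and the compute kernel left out):

import Metal

// One allocation, visible to both the CPU and the GPU, with no explicit
// upload/download copy in between. Conceptually the same idea as
// OpenCL 2.0's shared virtual memory.
guard let device = MTLCreateSystemDefaultDevice(),
      let buffer = device.makeBuffer(length: 1024 * MemoryLayout<Float>.stride,
                                     options: .storageModeShared) else {
    fatalError("No Metal device available")
}

// The CPU writes straight into the buffer's contents...
let values = buffer.contents().bindMemory(to: Float.self, capacity: 1024)
for i in 0..<1024 { values[i] = Float(i) }

// ...and a compute kernel dispatched on `device` would read and write the
// very same memory. Full HSA would extend this to pointer-level sharing
// across the CPU, GPU, and other co-processors.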

Well, that's my theory. Got a better one?
 