Status
Not open for further replies.

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
So you honestly believe 7,1 GPUs will be pin compatible and just bolt into a 6,1?

That is what is being discussed, not whether older displays work on newer cards.

Keep running with those goalposts.

Pin compatible? Yes, definitely. I don't think Apple is re-architecting the slots. They're just PCI-E slots. Why would the pins change?

As you've pointed out, Apple doesn't put the ROMs onboard the cards. That's obviously a solvable problem. I'm not going to advocate one way or the other on whether they'll do that, other than to point out that there is nothing that would stop them from doing so for an upgrade.
 

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
And I humbly disagree and say that my experience with electronics tells me that those pins will need to be re-assigned due to all of the changing tech.

Layout can stay the same, but what they are connected to will be different.

Are you literally predicting that 7,1 GPUs will just bolt into a 6,1 for a quick update?

Why would Apple ever do that?
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
OK, let's make this as simple as possible.

What is coming out of the 7970s in a 6,1?

Older versions of HDMI and DP than any new card would use, right?

Why would the rest of the circuitry in a 6,1 know how to deal with additional bandwidth, with signals that didn't exist when the firmware and other components were designed and made?

(Answer: it wouldn't!) Just as the USB and TB ports are different specs, so would be the video coming out of the cards.

Are you really hypothesizing that 7,1 GPUs are going to be bolt-in replacements for the 6,1? All I am saying is that they won't be. There is literally a 0% chance of it. Reading anything else into that is up to you.

Your first argument was that this should be impossible. It isn't: it is technically feasible for Apple to keep the GPU form factor, including the pinout, across the nMP 6,1 and 7,1. Whether Apple will actually offer GPU upgrades is another question entirely; not impossible, but unlikely.

Neither would the new D?10 GPUs hit any bandwidth bottleneck in the nMP 6,1, nor would the older Dx00s be incompatible with the 7,1.

Apple has nMP mules, of course, and they are surely built on ASRock Rack server/workstation motherboards, since those are very close in chipset and also have PXE. Apple's engineers shouldn't have the issues hackintosh builders have running OS X on non-Apple hardware, since they control the EFI loader; they should have an OS X UEFI loader compatible with PC hardware.

But assume they just have the C612 mainboard and E5 v4 CPU and only need the GPUs: why not test the new mainboard in a loaded thermal core with the existing GPUs?

Maybe the new GPUs won't be retro-compatible with the 6,1 for the general public, but Apple could do this in the GPU and motherboard EFI rather than over-complicating the nMP with an all-new GPU card, since the current form factor is feasible and allows earlier testing of the new hardware on older GPUs until AMD delivers the newer ones.

I read your post, and between the lines your feelings are obvious. I'll reserve further comment; I don't want this thread to be closed over the usual suspects' behavior.
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
And I humbly disagree and say that my experience with electronics tells me that those pins will need to be re-assigned due to all of the changing tech.

Might be good to fill in the blanks around "why". "I'm right because I don't want to be wrong!" doesn't work here.

Newer versions of DisplayPort are clock changes, not pin changes. Same thing with newer versions of PCIe. Why would the pins have to change? None of the underlying technology has pin changes, it's all clock changes. The Thunderbolt ports communicate with a Thunderbolt controller, but the Thunderbolt controller is still taking a DP input with the same number of pins from the card. The number of pins on a DP output isn't changing.
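To put rough numbers on the clock-vs-pins point, here is a small sketch using the publicly documented line rates (illustrative only; nothing here is a confirmed Apple design detail). Each DisplayPort and PCIe revision raises the per-lane rate, while the lane count, and hence the pin count, stays the same:

```python
# Published per-lane line rates: each new revision changes clock/encoding,
# not the number of lanes (pins) in the link.

DP_LANES = 4  # the DP main link is 4 lanes in every revision listed here
DP_LANE_RATE_GBPS = {
    "DP 1.1 (HBR)": 2.7,
    "DP 1.2 (HBR2)": 5.4,
    "DP 1.3 (HBR3)": 8.1,
}

PCIE_RATE_GTPS = {
    "PCIe 1.0": 2.5,
    "PCIe 2.0": 5.0,
    "PCIe 3.0": 8.0,
}

for name, rate in DP_LANE_RATE_GBPS.items():
    print(f"{name}: {DP_LANES} lanes x {rate} Gbit/s per lane")
for name, rate in PCIE_RATE_GTPS.items():
    print(f"{name}: {rate} GT/s per lane, same pinout")
```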

The only change I could see happening is that the D700 replacement wouldn't be an upgrade option due to power requirements. If the 7,1 gets a bigger power supply, and the D700 replacement is Vega, only the D300 and D500 options might be available. The only constraint I see is power.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
And I humbly disagree and say that my experience with electronics tells me that those pins will need to be re-assigned due to all of the changing tech.

Layout can stay the same, but what they are connected to will be different.

Are you literally predicting that 7,1 GPUs will just bolt into a 6,1 for a quick update?

Why would Apple ever do that?

Which tech?

nMP 6,1 PCIe3 ==> to ==> nMP 7,1 PCIe3

nMP 6,1 DP1.2, 6 channels/GPU, only 1 in use ==> to ==> nMP 7,1 DP1.2, 6 channels/GPU, both in use

USB has nothing to do with the GPU FF.

TB2/TB3 have nothing to do with the GPU FF; both specifications require DP1.2 signals.
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
Which tech?

nMP 6,1 PCIe3 ==> to ==> nMP 7,1 PCIe3

nMP 6,1 DP1.2, 6 channels/GPU, only 1 in use ==> to ==> nMP 7,1 DP1.2, 6 channels/GPU, both in use

USB has nothing to do with the GPU FF.

TB2/TB3 have nothing to do with the GPU FF; both specifications require DP1.2 signals.

I give it a few posts till he simply declares he doesn't have time to explain this to us and takes off...
 

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
I respectfully think you guys are both full of it.

No way in high holy heck the newer cards will work.

Pretending not to understand how electronics work isn't going to help anybody.

The only people who could know for sure that they would work would be Apple and/or AMD engineers.

If you'd like to share this info in that light, go ahead.

I mean, at the very least, don't you think the NVME drive is going to need more throughput? It's already constrained for bandwidth, extra lanes = extra pins = new GPU won't be same pins.
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
Pretending not to understand how electronics work isn't going to help anybody.

I know, that's what we've been telling you.
I mean, at the very least, don't you think the NVME drive is going to need more throughput? It's already constrained for bandwidth, extra lanes = extra pins = new GPU won't be same pins.

The NVMe drive already has 4 lanes. Unless there is some magical 8-lane drive that you think is coming... If the lanes aren't changing, the pins don't change.
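Back-of-the-envelope on why 4 lanes is plenty (the link math comes straight from the PCIe 3.0 spec; the SSD speed is a rough 2016-era sequential-read figure, not a measurement):

```python
# How much bandwidth does a x4 PCIe 3.0 link give an NVMe SSD?

GT_PER_S = 8.0          # PCIe 3.0 transfer rate per lane
ENCODING = 128 / 130    # PCIe 3.0 line encoding (128b/130b)
LANES = 4

link_gbps = GT_PER_S * ENCODING * LANES   # ~31.5 Gbit/s usable
link_gb_per_s = link_gbps / 8             # ~3.94 GB/s

fast_2016_ssd_gb_per_s = 2.5              # rough high-end figure of the day

print(f"x4 PCIe 3.0 link: ~{link_gb_per_s:.2f} GB/s")
print(f"headroom over a fast SSD: ~{link_gb_per_s - fast_2016_ssd_gb_per_s:.2f} GB/s")
```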
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
I respectfully think you guys are both full of it.

No way in high holy heck the newer cards will work.

Pretending not to understand how electronics work isn't going to help anybody.

The only people who could know for sure that they would work would be Apple and/or AMD engineers.

If you'd like to share this info in that light, go ahead.

I mean, at the very least, don't you think the NVME drive is going to need more throughput? It's already constrained for bandwidth, extra lanes = extra pins = new GPU won't be same pins.

Do you know what a PXE switch does? Did you know Supermicro drives 8 GPUs on a node from a single Xeon E5 v3/v4 using a PXE switch (and even more are possible)?

The nMP 7,1 will surely use a PXE switch to share the 8 remaining CPU lanes among 3 Alpine Ridge TB3 controllers and one NVMe SSD, each requiring 4 PCIe 3.0 lanes; dual 10GbE LAN can be driven by the PCH's 8 PCIe 2.0 lanes, and the WiFi/BT module can sit on an internal USB 3 port, as many manufacturers do.
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
The nMP 7,1 will surely use a PXE switch to share the 8 remaining CPU lanes among 3 Falcon Ridge TB3 controllers and one NVMe SSD, each requiring 4 PCIe 3.0 lanes; dual 10GbE LAN can be driven by the PCH's 8 PCIe 2.0 lanes, and the WiFi/BT module can sit on an internal USB 3 port, as many manufacturers do.

You clearly sound like someone who doesn't know anything about electronics.
 

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
I'm going to guess that neither one of you has tried getting other GPUs working on 6,1.

I have.

The number of PCIe switches is dizzying. There is a reason Netkas & I are the only ones on the planet to get an eGPU working on a 6,1 in Windows.

The reason STARTS with us not being a couple of armchair experts postulating about things we have never actually done.

Negotiating the PCIe switches is a nightmare, and there is no reason to believe they will stay the same.

And I'm sorry you don't understand about NVME.

It DOESN'T HAVE AN NVME DRIVE YET.

So, (follow along here) to ADD an NVMe drive, we need to ADD some PCIe lanes. The PCIe SSD from the 6,1 runs slower there than it does in a 5,1. It's already out of bandwidth.

See how that works?
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
So, (follow along here) to ADD an NVMe drive, we need to ADD some PCIe lanes. The PCIe SSD from the 6,1 runs slower there than it does in a 5,1. It's already out of bandwidth.

There are so many things wrong here, but let's just start with the obvious one...

How many PCIe lanes does an NVME drive need?
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
I'm going to guess that neither one of you has tried getting other GPUs working on 6,1.

I have.

The number of PCIe switches is dizzying. There is a reason Netkas & I are the only ones on the planet to get an eGPU working on a 6,1 in Windows.

The reason STARTS with us not being a couple of armchair experts postulating about things we have never actually done.

Negotiating the PCIe switches is a nightmare, and there is no reason to believe they will stay the same.

And I'm sorry you don't understand about NVME.

It DOESN'T HAVE AN NVME DRIVE YET.

So, (follow along here) to ADD an NVMe drive, we need to ADD some PCIe lanes. The PCIe SSD from the 6,1 runs slower there than it does in a 5,1. It's already out of bandwidth.

See how that works?
Look at this baby: http://www.asrock.com.tw/mb/Intel/X99 WS-E10G/ I present you, extra-officially, the closest cousin to the nMP 7,1, but on the X99 chipset instead of C612 (Apple may indeed use X99). It uses a PLX 8747 to DRIVE not 3 but 7 GPUs at 16 PCIe 3.0 lanes each (up to 4 in SLI), and you can still connect 3 Xeon Phi MIC cards to it.
 

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
Just the same way that your cMP works with a GTX 1080: same PCIe 3.0, same DP 1.2 signals.

Further, the nMP 7,1 doesn't need a PXE switch to drive the GPUs, only to drive the TB3 and the NVMe.

You are factually in error.

Pretty sure you know it.

But when 7,1 comes out, I'll be sure to have everyone come see this thread.

The whole reason a 1080 can deal with older displays is that it conforms to standards.

Why would internal Apple circuitry have any need for standards compliance?

Do you know why AGP x8 cards had to be taped to work in AGP x4 Macs?

Because Apple reassigned pins from the official standard and ran switch points and DC voltage through AGP pins not used in x4 but used in x8. They have no reason to include all of the standards-compliant backwards compatibility in internally designed custom boards. Why would they? Unless their goal was to be helpful and provide upgrades. Which doesn't sound AT ALL like them.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
You are factually in error.

Pretty sure you know it.

But when 7,1 comes out, I'll be sure to have everyone come see this thread.

Sorry, not PXE, it's PEX, the PCIe switch... Thanks, sir.

Do you know why AGP x8 cards had to be taped to work in AGP x4 Macs?

Because Apple reassigned pins from the official standard and ran switch points and DC voltage through AGP pins not used in x4 but used in x8. They have no reason to include all of the standards-compliant backwards compatibility in internally designed custom boards. Why would they? Unless their goal was to be helpful and provide upgrades. Which doesn't sound AT ALL like them.

So what happened at Apple later, when they launched your beloved cMP, which (illogically, by your reasoning) is compatible with commodity GPUs...

Or is your cMP not an Apple product?

I'll keep this on record too.

I'll concede you one thing: even if the nMP 7,1 GPUs are compatible with the 6,1, it would be easier to find a pink/orange unicorn than an upgraded nMP 6,1 with 7,1 GPUs. First, they won't be easy to find (unless Apple lets NVIDIA and AMD produce them, which is unlikely given the very low volume); second, they'd be prohibitively expensive; and third, most people with the income to buy an nMP aren't worried about upgrades and would rather sell their old 6,1 and buy an all-new 7,1 than get their hands dirty.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
(image: Snagglepuss meme)
 

Stacc

macrumors 6502a
Jun 22, 2005
888
353
It's very unlikely that the GPUs in the new Mac Pro will be pin-compatible with the 6,1. The connectors are some combination of PCIe, DisplayPort 1.2, and a Crossfire bridge. In the new Mac Pro there won't be a Crossfire bridge, the SSD will likely require more PCIe lanes, and DisplayPort will be upgraded to 1.3. There may also be a decrease in PCIe lanes on the GPUs; we don't know at this point. You also have to worry about different thermal profiles if the GPUs require more power and generate more heat.

It seems silly to argue about this. The machine is clearly not designed to be upgradeable, there is very little chance that it's possible, and even less chance Apple would sell upgraded GPUs.

And while the Armchair Experts sit around hypothesizing and postulating about adding GPUs to the 6,1 nMP, some of us actually do something about it.

http://www.3dmark.com/fs/8611294

Gotta love that new GTX 1080. Even smushed down to 4 PCIe 2.0 lanes, it still leaves dual D700s for dead.

http://www.3dmark.com/3dm/2038247

It is interesting to see the results from GPUs in the Razer Core. FYI, it's 4x PCIe 3.0 lanes. I hope Apple embraces GPUs over Thunderbolt across its lineup.
 

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
It's very unlikely that the GPUs in the new Mac Pro will be pin-compatible with the 6,1. The connectors are some combination of PCIe, DisplayPort 1.2, and a Crossfire bridge. In the new Mac Pro there won't be a Crossfire bridge, the SSD will likely require more PCIe lanes, and DisplayPort will be upgraded to 1.3. There may also be a decrease in PCIe lanes on the GPUs; we don't know at this point. You also have to worry about different thermal profiles if the GPUs require more power and generate more heat.

It seems silly to argue about this. The machine is clearly not designed to be upgradeable, there is very little chance that it's possible, and even less chance Apple would sell upgraded GPUs.



It is interesting to see the results from GPUs in the Razer core. FYI its 4x PCIe 3.0 lanes. I hope Apple embraces GPUs over Thunderbolt across its lineup.

Thank You ! A voice of reason.

Not sure why they were putting up such a vociferous objection to obvious facts. I think goMac realized that a newer PCIe 3.0 NVMe drive needs more lanes than the PCIe 2.0 AHCI drive in the 6,1. He couldn't explain that away, so off he went. I hope he has the cojones to come back and admit he was wrong, but I won't hold my breath.

I have a Razer Core here, but nothing to plug its TB3 into. (Different socket & tech than TB2, believe it or not.)
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
In the new mac pro there won't be a crossfire bridge, the SSD will likely require more PCIe lanes

Nobody has an SSD that requires more than 4 lanes. No SSD is anywhere near fast enough to justify more than that.

and displayport will be upgraded to 1.3.

Which is a faster clock but has the same number of pins. Pins don't change.

There may also be a decrease in PCIe lanes on the GPUs

Which would make no sense, but sure.

You also have to worry about different thermal profiles if the GPUs require more power and generate more heat.

With Polaris's TDP, no. But yes, I already mentioned power for the top end. That's very unlikely on the low and mid end choices.

It seems silly to argue about this, the machine is clearly not designed to be upgradeable and there is very little chance that its possible and even less chance Apple would sell upgraded GPUs.

I don't think that's the case at all.

Thank You ! A voice of reason.

No one has come up with anything that requires a pin change.

Not sure why they were putting up such a vociferous objection to obvious facts. I think GoMac realized that newer PCIE 3.0 NVME needs more lanes than the PCIE 2.0 AHCI drive in 6,1. Couldn't explain that away so off he went. I hope he has the cojones to come back and admit he was wrong but won't hold my breath.

Ok, let's break down the other end of this argument...

Besides NVMe still only using 4 lanes, we're not talking about changing the GPU. And because NVMe runs over PCIe, the connector doesn't have to change.

You don't even need to touch NVMe. Just plug the old SSD into the new card. NVMe drives use the same PCIe pins.

It's just a PCIe SSD. This whole NVMe nonsense makes about as much sense as saying an Nvidia card and an AMD card require two different kinds of PCIe slots. Whether it is NVMe or not doesn't matter; the new cards won't care. The NVMe side of things is handled by the CPU and chipset, not by the GPU card.

PCIE 3.0 vs 2.0 is also irrelevant because it doesn't change the number of pins, and 3.0 is backwards compatible with 2.0. You know that.
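A quick worked example of that last point, using the published PCIe transfer rates and line encodings (note the pin count never enters the calculation):

```python
# Per-lane throughput for PCIe 2.0 vs 3.0. The generational jump is all
# clock and encoding; the physical lane/pin count is untouched.

def lane_throughput_mb_s(gt_per_s, encoding):
    """Usable throughput of one lane in MB/s."""
    return gt_per_s * encoding * 1000 / 8

pcie2 = lane_throughput_mb_s(5.0, 8 / 10)     # 8b/10b encoding
pcie3 = lane_throughput_mb_s(8.0, 128 / 130)  # 128b/130b encoding

print(f"PCIe 2.0: {pcie2:.0f} MB/s per lane")  # 500 MB/s
print(f"PCIe 3.0: {pcie3:.0f} MB/s per lane")  # ~985 MB/s
```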
 

Stacc

macrumors 6502a
Jun 22, 2005
888
353
I have a Razer Core here, but nothing to plug TB3 into. (Different socket & tech than TB2, believe it or not)

Could you get a TB3 to TB2 adapter and see if you have any success? I don't know if any of those are actually available yet though.

No one has come up with anything that requires a pin change.

There will be no Crossfire bridge in the connector. Thus, fewer pins.

Which would make no sense, but sure.

Something has to give somewhere. Adding up the things we expect in the Mac Pro:

16x GPU
16x GPU
4x SSD
4x each for 3 Thunderbolt 3 controllers

We are sitting at 48 lanes with 40 available on the CPU. Either you drop some Thunderbolt controllers or drop one of the GPUs to x8. I suppose the other option would be to pool the bandwidth of various components behind a PCIe switch.
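The lane budget above can be sketched out like this (an assumed configuration for the sake of the argument, not a confirmed 7,1 spec):

```python
# Lane budget: a 40-lane Xeon E5 vs. the devices listed above.
# This is the speculated configuration from the thread, not a known design.

CPU_LANES = 40

devices = {
    "GPU 1": 16,
    "GPU 2": 16,
    "NVMe SSD": 4,
    "TB3 controllers (3 x 4)": 3 * 4,
}

needed = sum(devices.values())
print(f"needed {needed}, available {CPU_LANES}, short by {needed - CPU_LANES}")
# Ways out: drop one GPU to x8, drop a TB3 controller, or put devices
# behind a PCIe switch that oversubscribes its upstream link.
```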
 

tuxon86

macrumors 65816
May 22, 2012
1,321
477
I thought it was the other way around.

Shouldn't power coders get the power workstations?

Users get just enough to do their work and not be on the cutting edge?
Nope
what, exactly, are these engineers waiting on?
what are their tasks that are causing such holdups?
what software, exactly, is being used?

here's the thing.. in my experience with the type of software i think you're talking about, your argument holds very little weight and is actually a little backwards in the conclusion.

because in just about every scenario (like high 90s percentage) with an engineer/designer sitting at their terminal.. if this person was using some crazy awesome computer.. a 44core HP Z for example.. and they came to you with "hey, my computer is lagging and i'm waiting on such&such task to finish"... to speed them up, you could put an imac in front of them.. and they'd be like 'oh, thanks.. that's an improvement'.. the faster imac will make their work go smoother..

(the only way for them to end up saying "holy crap.. this is an incredible performance enhancement.. night&day difference!" would be through changes in the software)

with the type of software i think you're talking about, they can do maybe 1200 different things.. of which, about 4 of those things would show better performance on the 22core cpus.

....
and look, i'm asking questions for clarity.. i'm not telling you you're wrong.. when i see what you write towards me as a means to point out my limited perception, your example usages are completely opposite of what i find to be true..
if what you're saying is true then yes, my perception is limited.. so i'm asking for you to clarify what exact scenarios- software and tasks.. are these engineers waiting around in? and further, what hardware solves the issue?
Oh for **** sake, try to read and understand what people are talking about instead of going off again on one of your ****ing nonsensical tangents.

Engineers don't work on a single thing at a time. While something is processing, they alt-tab to another project and work on it for a while, then move on to yet another project... They aren't sitting on their asses waiting for a skateboard ramp to render like you do! We pay them top dollar to be productive. Is that ****ing clear enough, or do I have to write it in crayon for it to finally sink in?
 