Another one who has a hard time typing "Whoops, I was wrong."
Besides, NVMe still only uses 4 lanes, and we're not talking about changing the GPU. Because NVMe is just a protocol that runs over PCIe, the connector doesn't have to change.

You don't even need to touch NVMe. Just plug the old SSD into the new card. An NVMe SSD and a plain PCIe (AHCI) SSD use the same PCIe pins.

It's just a PCIe SSD. This whole NVMe nonsense makes about as much sense as saying Nvidia and AMD cards require two different kinds of PCIe slots. Whether it's NVMe or not doesn't matter; the new cards won't care. NVMe support comes from the CPU's chipset, not from the card.

PCIe 3.0 vs. 2.0 is also irrelevant because it doesn't change the number of pins, and 3.0 is backwards compatible with 2.0. You know that.
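To put the layering point plainly, here's a toy illustration (not real driver code): NVMe and AHCI are command protocols that both ride the same PCIe transport, so the physical connector doesn't depend on which one the drive speaks.

```python
# Toy illustration of protocol layering: NVMe and AHCI are command sets
# that both ride an unmodified PCIe link, so the pins/connector are the
# same regardless of which protocol the attached SSD speaks.
pcie_link = {"connector": "same x4 edge pins", "lanes": 4}

ahci_ssd = {"protocol": "AHCI", "transport": pcie_link}  # 6,1-style SSD
nvme_ssd = {"protocol": "NVMe", "transport": pcie_link}  # hypothetical NVMe SSD

assert ahci_ssd["transport"] is nvme_ssd["transport"]    # identical wiring
print("different protocol, same physical link")
```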

I am sorry that you have chosen to be intellectually dishonest.

But I will have a hearty laugh when 7,1 comes out and your predictions are proven to be balderdash.

See if you can identify the following quotes:

"There are just no PCIe lanes left. There are barely enough for the Thunderbolt connections that are there, and the SSD doesn't even get dedicated lanes."

"The PCH also has another 8 PCIe 2.0 lanes"

"The current Mac Pro SSD, which is near the top end of SSD speeds, uses two PCIe lanes and I don't think quite maxes them out."

"All Mac Pros ship with a PCIe x4 SSD, and those four lanes also come off the PCH."

"When you look at the new Mac Pro, you find that the PCI lanes are pretty well maxed out. Even if Apple wanted, there really isn't enough bandwidth to add PCI-E slots. IIRC, even the SSD shares bandwidth with the GPU, which is probably why their isn't a second SSD."

So, take the above quotes and figure out how an EXTRA x4 of PCIe 3.0 lanes magically gets reassigned with the 7,1 GPU. And how that remains pin-compatible.

Would be the honorable thing to just admit you made an error.
 
With the limited SATA bus on a 5,1, to feed anything at any decent speed you're already looking at jacking SSDs directly into the PCIe bus to avoid the ancient I/O backplane.
Duh, that's old news. Serious oMP users have been doing this for years.
 
Engineers don't work on a single thing at a time. While something is processing, they alt-tab to another project and work on it for a while, then move on to yet another project... They aren't sitting on their asses waiting for a skateboard ramp to render like you do! We pay them top dollar to be productive. Is that ****ing clear enough, or do I have to write it in crayon for it to finally sink in?

well you dodged the hell out of those questions.. nice work!!

(you probably should have just not answered anything instead of coming back at me with this type of baloney.. you could have saved a bit more face by just keeping quiet)
 
Could you get a TB3 to TB2 adapter and see if you have any success? I don't know if any of those are actually available yet though.



There will be no CrossFire bridge in the adapter. Thus, fewer pins.



Something has to give somewhere. Adding up the things we expect in the Mac Pro:

x16 GPU
x16 GPU
x4 SSD
x4 each for 3 Thunderbolt 3 controllers (x12)

That puts us at 48 lanes, with only 40 available on the CPU. Either you drop some Thunderbolt controllers or drop one of the GPUs to x8. I suppose the other option would be to pool the bandwidth of various components behind a PCIe switch.
x16 switched to x16/x16 for the GPUs
12 lanes for 3 TB3 links
x4 storage
x4 storage
= 36
So, for the last x4: a 4th TB3 link? Dual 10GbE? A 3rd storage slot?
USB 3.1 + Wi-Fi + GigE + GigE hang off the PCH.
Or x20 switched to x16/x16 for the GPUs?

or they can add a 2nd CPU to get

x16 video
x16 video
24 lanes for 6 TB3 links
16 lanes for 4 storage ports
12 left over for other stuff (CPU 2's PCH link can be used as an x4 PCIe link)
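A quick sanity check on both tallies above (a minimal sketch; every lane count here is this thread's speculation, not a confirmed spec):

```python
# Back-of-the-envelope PCIe lane tallies for the configurations above.
# All device lane counts are the thread's guesses, not Apple specs.
def tally(name, devices, budget):
    used = sum(devices.values())
    print(f"{name}: {used}/{budget} lanes ({budget - used:+d} spare)")

tally("single 40-lane CPU", {
    "GPU 1": 16, "GPU 2": 16, "SSD": 4, "3x TB3": 12,
}, budget=40)   # -> 48/40 (-8): overcommitted, something has to give

tally("dual CPU", {
    "GPU 1": 16, "GPU 2": 16, "6x TB3": 24, "4x storage": 16,
}, budget=80)   # -> 72/80 (+8 spare, +4 more if CPU 2's DMI link is reused)
```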
 
So, take the above quotes and figure out how an EXTRA x4 of PCIe 3.0 lanes magically gets reassigned with the 7,1 GPU. And how that remains pin-compatible.

The 6,1 has a PCIe 2.0 SSD. Why would you need to assign extra lanes? You can mount a 6,1 SSD on a new GPU and there are no lane changes. You're still running 2.0 lanes.

The 6,1 doesn't support NVMe because that's determined by the CPU chipset. No one here is suggesting that replacing the GPU will force an NVMe upgrade. You'd keep the old SSD on the board. No PCIe 3.0 rollover required for the SSD. Same number of PCIe lanes. Same throughput. No change in SSD bandwidth. Apple is very likely going to reuse the same connector for NVMe, just like the M.2 connector is being reused for NVMe on the PC side. And because the connector doesn't care what's installed (it's just passing traffic), a new GPU would still be electrically compatible with an existing 6,1 SSD. No one is seriously suggesting that if Apple offered an upgrade option, they'd force you to swap SSDs at the same time.

I'll go out on a limb, assume you're not being purposely obtuse, and take you to mean: what if BOTH GPUs had slots for an SSD, instead of just one as in the 6,1? That could be an issue, but I don't know if anyone has gathered enough data on it. Have you proven that there are dedicated lanes on one card for the SSD that are not present on the other card? That would surprise me, considering both cards have the same number of pins. It's also very likely that Apple is using a PCIe switch. I believe there is anecdotal evidence out there that the SSD and GPU compete for bandwidth, which implies that there aren't dedicated pins for each and it's just switching.
 
hmm.. when i 'readily admit' my file sizes, i'm not 'admitting' anything.. i'm bragging.

since when did file size become indicative of file content?

i can tell you i have a 5MB file.. and based off that alone, you have very little to absolute zero clue as to what the content is, how much human effort or skill went into creating the content, how much time or resources the computer spent processing the data.. nothing.

so you can crank out big files by 9:10am? you've worked 10 minutes.. congratulations?

So what exactly is your premise? I don't see any coherent argument here.

You apparently like the nMP but you don't own one, nor have you given any indication that you need one. So why are you championing the nMP?

No, I have no idea what you're doing, it may be brilliant work, but your hardware requirements seem rather lilliputian by Workstation standards, even those from last decade. Maybe you're knocking out socially relevant animated GIFs. If your hardware needs are so meager compared to ours, how is that a problem for us? Do you really think we're just being inefficient? Making our files just a little too big for your discriminating tastes?

You apparently also think workstation owners would be happier with an iMac. Why? Because it's thinner? Running 5K at 100°C is more macho? Think they might like glossy, inaccurate displays that can't be adjusted for height? I think most people using 44-core machines have good reasons for using them. Since Apple has nothing even close to that range, it wouldn't seem like OS X is an option for those people.

And, do you really think: A) it takes 10 minutes to take 2 sips of coffee? B) I crank out one file and take a break?

Is this some New York City Workflow or something?
 
I'm on my second cup of coffee within ten minutes of waking up but I didn't want to make the assumption you're as fiendish about the stuff as I.

I'll answer your other question tomorrow or so.
 
The 6,1 has a PCIe 2.0 SSD. Why would you need to assign extra lanes? You can mount a 6,1 SSD on a new GPU and there are no lane changes. You're still running 2.0 lanes.

The 6,1 doesn't support NVMe because that's determined by the CPU chipset. No one here is suggesting that replacing the GPU will force an NVMe upgrade. You'd keep the old SSD on the board. No PCIe 3.0 rollover required for the SSD. Same number of PCIe lanes. Same throughput. No change in SSD bandwidth. Apple is very likely going to reuse the same connector for NVMe, just like the M.2 connector is being reused for NVMe on the PC side. And because the connector doesn't care what's installed (it's just passing traffic), a new GPU would still be electrically compatible with an existing 6,1 SSD. No one is seriously suggesting that if Apple offered an upgrade option, they'd force you to swap SSDs at the same time.

I'll go out on a limb, assume you're not being purposely obtuse, and take you to mean: what if BOTH GPUs had slots for an SSD, instead of just one as in the 6,1? That could be an issue, but I don't know if anyone has gathered enough data on it. Have you proven that there are dedicated lanes on one card for the SSD that are not present on the other card? That would surprise me, considering both cards have the same number of pins. It's also very likely that Apple is using a PCIe switch. I believe there is anecdotal evidence out there that the SSD and GPU compete for bandwidth, which implies that there aren't dedicated pins for each and it's just switching.


OK, play dumb.

NVMe on my 5,1, but not from the chipset.

You WILL be proven very wrong if the 7,1 sees the light of day.

I still can't decide if you are just playing dumb or if it's genuine.

A 7,1 GPU would have an NVMe drive on it. That would require x4 PCIe 3.0 lanes.

The current GPU has x4 PCIe 2.0 lanes coming off the PCH.

So double the bandwidth is needed, plus x4 3.0 lanes that currently don't exist.

I guess you are counting on magical fairy dust? The leftover PCH 2.0 lanes are currently in use; there's nowhere to get those x4 3.0 lanes.

Please just stop with the act. We both know those 3.0 lanes aren't routed right now, and routing them will require re-routing from elsewhere. Re-routing means old cards won't work.
 
A 7,1 GPU would have an NVMe drive on it. That would require x4 PCIe 3.0 lanes.

You're trying to have this both ways.
- If the GPU has dedicated lanes for the SSD, they'll fall back to PCIe 2.0 once the original SSD is slotted in. No 3.0 required.
- If the GPU doesn't have dedicated lanes, but instead has a switch, none of this matters and the SSD will take as many PCIe lanes as it needs (still at 2.0) without issue.

So pick which way you think this works. Does the GPU card have dedicated pins for the SSD, or not? Either way, you don't need x4 3.0 lanes for an x4 2.0 SSD. A 7,1 GPU doesn't have an NVMe drive on it; it has an M.2-style (Apple-keyed) connector. NVMe doesn't change the socket. A 7,1 GPU would still have the same SSD socket, on the same PCIe bus.

You keep saying NVMe like that changes anything. Nothing on the card physically changes due to NVMe.

Not to mention, NVMe does not require x4 PCIe 3.0 lanes. It would work fine with x4 PCIe 2.0 lanes. You could even install an NVMe PCIe 3.0 drive into a 2.0 slot and it would work fine.
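To make the fallback concrete, here's a minimal sketch of the rule PCIe link training follows (illustrative pseudo-logic, not any particular driver's code): the link settles on the highest generation and widest width both ends support.

```python
# Minimal sketch of PCIe link training's outcome: the link runs at the
# highest generation and widest width BOTH ends support. This is why a
# 2.0 SSD behind a 3.0-capable host simply trains down to 2.0.
def negotiate(host_gen, host_width, dev_gen, dev_width):
    return min(host_gen, dev_gen), min(host_width, dev_width)

# Original 6,1 SSD (gen 2, x4) behind a hypothetical 3.0-capable GPU card:
gen, width = negotiate(host_gen=3, host_width=4, dev_gen=2, dev_width=4)
print(f"link trains to PCIe {gen}.0 x{width}")  # -> PCIe 2.0 x4
```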
A 7,1 GPU would have an NVMe drive on it. That would require x4 PCIe 3.0 lanes.

The current GPU has x4 PCIe 2.0 lanes coming off the PCH.

So double the bandwidth is needed, plus x4 3.0 lanes that currently don't exist.

Now I think you're just playing dumb. So what to any of this? It'll just run at 2.0 bandwidth. Big whoop. 3.0 is backwards compatible with 2.0. Plug it into a PCIe 2.0 Mac Pro 6,1 and it'll run at 2.0. No one is saying it needs to run at 3.0 in a Mac Pro 6,1 except for this made-up requirement you've got. You don't need to double the bandwidth because you don't need to run at 3.0.

Here: for fun, try taking a PCIe 3.0 GPU and putting it into a PCIe 2.0 slot in a machine. Now be amazed when it works despite the slot only having half the bandwidth of a PCIe 3.0 slot. Gasp! Then put it in an x8 slot and watch it work despite only having 1/4 of the bandwidth. Amazing! Now, I don't want to ruin the surprise of what will happen when you take a PCIe 3.0 x16 card and put it in a PCIe 2.0 x4 slot, so I'll let you figure that one out for yourself...
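For anyone following along, the rough per-lane numbers behind that taunt (approximate usable throughput: ~500 MB/s per PCIe 2.0 lane, ~985 MB/s per 3.0 lane):

```python
# Approximate usable throughput per lane: PCIe 2.0 is 5 GT/s with 8b/10b
# encoding (~500 MB/s/lane); PCIe 3.0 is 8 GT/s with 128b/130b (~985 MB/s/lane).
PER_LANE_MBS = {2: 500, 3: 985}

def link_bw(gen, width):
    return PER_LANE_MBS[gen] * width

print(link_bw(3, 16))  # native 3.0 x16 slot:         ~15760 MB/s
print(link_bw(2, 16))  # same card in a 2.0 x16 slot:  ~8000 MB/s (half)
print(link_bw(2, 4))   # same card in a 2.0 x4 slot:   ~2000 MB/s (1/8)
```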
 
I have an NVMe 3.0 drive running in my 5,1.

It runs at PCIe 2.0 speed.

Something tells me that won't fly with 7,1.

Or maybe you're right and they'll leave it running at 2.0 with no speed gains after 3+ years. Wouldn't really surprise me.

But I'll eat (insert something really gross here) if 7,1 GPUs just slide into a 6,1 and work. Even the dyed-in-the-wool "it's the screws" fans have given up on GPU upgrades.

I don't know why you are clinging so desperately to the idea, but have at it. No rule against being wrong.
 
I have an NVMe 3.0 drive running in my 5,1.

It runs at PCIe 2.0 speed.

You should really go tell MacVidCards he was just saying that was impossi... oh right.

Something tells me that won't fly with 7,1.

Or maybe you're right and they'll leave it running at 2.0 with no speed gains after 3+ years. Wouldn't really surprise me.

I'm going to try to explain this to you again.

An upgraded GPU in a 6,1 would have the original SSD from the 6,1 running at PCIe 2.0 speeds. It would only ask for 2.0 speeds. The GPU would only need to provide 4x 2.0 lanes.

The same GPU in a 7,1 would have the NVMe SSD from a 7,1 running at PCIe 3.0 speeds. If Apple did that; who knows, this whole 3.0 NVMe thing is apparently some major sticking point for you. But either way it would work.

Not that hard. Both the 6,1 SSD and the 7,1 SSD can share the same connector, with no NVMe SSDs supported on a 6,1 by Apple.
 
I know you've never tried to get a GPU running on a 6,1.

But I have.

Those PCIe switches aren't the little miracle workers you are trying to paint them to be.

They can't magically negotiate anything to anywhere, as you wish to claim. They also can't make PCIe 3.0 lanes out of nothing. You can take it from someone who got a GTX 1080 running on a 6,1 today, or from someone who does lots of lecturing from an armchair (and has no (0) (ZERO) 6,1 GPU experience that he has mentioned).

The lanes aren't just up for random assignment wherever you like.
 
Those PCIe switches aren't the little miracle workers you are trying to paint them to be.

The PCIe switches aren't negotiating anything. The original 6,1 SSD will only try to pull 4 lanes at 2.0 speeds on a 6,1.

Even if these theoretical, haven't-even-shipped-yet switches were of some horrible design where they couldn't understand PCIe 2.0 vs. 3.0 speeds (which sounds crazy, but let's just go with crazy for now), there's nothing stopping Apple from making a 6,1-specific version, like they did with the GeForce 8800 for the 1,1 and the 3,1.

You can take it from someone who got a GTX 1080 running on a 6,1 today, or from someone who does lots of lecturing from an armchair.

I'd be more impressed if I weren't pretty sure you got that 1080 working over Thunderbolt, which has nothing to do with the internal GPUs...
 
To help educate and inform.

Here we can see how the 6,1 connects resources. Note that the PCIe SSD is hanging off the 600; can it dole out PCIe 3.0 lanes?

If not, it will require re-wiring some things. It probably won't be run through a SATA controller anymore either.

Note how specific items are connected to specific things, not just handed out like Christmas candy.

Took more than a year to get these to work. Always amusing when I get lectured on how it all works.

The images are doubled because it all wouldn't fit in one screen grab, and they're still missing the bottom and top. The same thing on a 2014 Mini is... well... mini.

[Attached: IORegistry screen grabs (1.PNG, 2.PNG)]
 
Even if I don't see half of the posts here because of ignored content, I can imagine what is happening :D.
Anyway, the first GTX 1070 review went up yesterday. It looks like it is not faster than the Titan X, not even faster than the Fury X; it's mostly on par with the GTX 980 Ti and Fury. At some resolutions it is heavily bottlenecked by GDDR5 memory bandwidth.

In other GPU news:
[attached image: 96iXUa8.jpg]

I wonder how this compares to other GPUs.
 
Even if I don't see half of the posts here because of ignored content, I can imagine what is happening :D.
Anyway, the first GTX 1070 review went up yesterday. It looks like it is not faster than the Titan X, not even faster than the Fury X; it's mostly on par with the GTX 980 Ti and Fury. At some resolutions it is heavily bottlenecked by GDDR5 memory bandwidth.

So Nvidia will only have 4 of the top 5 GPU spots.

Hopefully "Mediocre Islands" will finally present a challenge.

http://videocardz.com/60265/nvidia-geforce-gtx-1070-3dmark-firestrike-benchmarks
 
http://cdn.wccftech.com/wp-content/uploads/2016/05/NVIDIA-GeForce-GTX-1070_Performance_3DMark.jpg
3dMark benchmarks.
http://wccftech.com/nvidia-geforce-gtx-1070-titan-x-killer/ Gaming benchmarks - Ashes of Singularity DX12.
http://cdn.wccftech.com/wp-content/...GTX-1070_Performance_Batman-Arkham-Knight.jpg Batman, Arkham Knight.
http://cdn.wccftech.com/wp-content/...GeForce-GTX-1070_Performance_The-Division.jpg Tom Clancys's: The Division.
http://cdn.wccftech.com/wp-content/uploads/2016/05/NVIDIA-GeForce-GTX-1070_Performance_Hitman.jpg Hitman.
All of the games are new. The only place where the GTX 1070 is behind the GTX 1080 is in the older titles.

But there are two very good things about the GTX 1070: it's cheap, and it has 130 W of power consumption at load.
 
http://cdn.wccftech.com/wp-content/uploads/2016/05/NVIDIA-GeForce-GTX-1070_Performance_3DMark.jpg
3dMark benchmarks.
http://wccftech.com/nvidia-geforce-gtx-1070-titan-x-killer/ Gaming benchmarks - Ashes of Singularity DX12.
http://cdn.wccftech.com/wp-content/...GTX-1070_Performance_Batman-Arkham-Knight.jpg Batman, Arkham Knight.
http://cdn.wccftech.com/wp-content/...GeForce-GTX-1070_Performance_The-Division.jpg Tom Clancys's: The Division.
http://cdn.wccftech.com/wp-content/uploads/2016/05/NVIDIA-GeForce-GTX-1070_Performance_Hitman.jpg Hitman.
All of the games are new. The only place where the GTX 1070 is behind the GTX 1080 is in the older titles.

But there are two very good things about the GTX 1070: it's cheap, and it has 130 W of power consumption at load.

First 1070 review, but they pulled the page about 10 minutes after I read it.

http://wccftech.com/nvidia-geforce-gtx-1070-titan-x-killer/

I can go 1070 SLI in my Skylake system and it will be about 70% faster than the 1080 at about the same cost.
 
I know you've never tried to get a GPU running on a 6,1.

But I have.

Those PCIe switches aren't the little miracle workers you are trying to paint them to be.

They can't magically negotiate anything to anywhere, as you wish to claim. They also can't make PCIe 3.0 lanes out of nothing. You can take it from someone who got a GTX 1080 running on a 6,1 today, or from someone who does lots of lecturing from an armchair (and has no (0) (ZERO) 6,1 GPU experience that he has mentioned).

The lanes aren't just up for random assignment wherever you like.

The nMP 6,1 uses a PEX PCIe switch to connect the Falcon Ridge TB2 controllers to 8 PCIe 3.0 CPU lanes (which yields 12x PCIe 2.0), and believe me, those switches work as advertised...
AnandTech said:
http://www.anandtech.com/show/9245/avago-announces-plx-pex9700-series-pcie-switches
...One of the benefits of PCIe switches is that they are designed to be essentially transparent. In the consumer space, I would wager that 99% of the users do not even know if their system has one..
For the nMP 7,1, using a PEX 97xx, Apple can bridge all 40 CPU lanes to the switch and let the switch handle priority for all PCIe devices (as ASRock does on their X99 WS motherboard). The PEX switch then enables up to 8 PCIe x16 ports, each with its own DMA channels (so an NVMe SSD moving data to a Thunderbolt device doesn't need to use CPU resources), and the new PEX 9700 series is also low-latency, with little impact on performance. With 8 PCIe ports, Apple can use 2 for the GPUs, 1 for each TB3 controller, 1 for the NVMe SSD (even an x8 NVMe, though that's unlikely), 1 for wireless (or the 8 PCIe 2.0 lanes from the PCH), and 1 for 10 GbE (or the PCH's PCIe 2.0 lanes).
The GPU doesn't need to change form factor. Maybe Apple will change the SSD keying to avoid old SSDs being swapped with NVMe SSDs, but the new SSD can still route its PCIe 3.0 signals through the same 4 lanes.
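To illustrate the trade-off in that switch-based layout, a minimal sketch (the port names and widths are this post's speculation, not a confirmed 7,1 design):

```python
# Oversubscription math for the speculative PEX-switch layout above:
# every downstream device gets full-width lanes to the switch, but they
# all share the upstream lanes into the CPU.
UPSTREAM_LANES = 40  # all 40 CPU lanes bridged to the switch

downstream = {  # speculative port widths from the post above
    "GPU 1": 16, "GPU 2": 16,
    "TB3 #1": 4, "TB3 #2": 4, "TB3 #3": 4,
    "NVMe SSD": 4, "10GbE": 4, "wireless": 1,
}

total = sum(downstream.values())  # 53 downstream lanes
print(f"{total} downstream vs {UPSTREAM_LANES} upstream "
      f"-> {total / UPSTREAM_LANES:.2f}:1 oversubscription")
# Fine in practice: devices rarely all burst at full bandwidth at once.
```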

Alternatively, Apple could use a PCIe 2.0 x8 NVMe drive, or RAID two PCIe 2.0 x4 NVMe drives, since the PCH's PCIe lanes have less latency. Two PCIe 2.0 NVMe drives in RAID would also add hype to the Mac Pro (while their actual speed is similar to a more convenient single x4 PCIe 3.0 NVMe drive).

There will be no TB2 (host) to TB3 (client) adapter, only TB3 (host) to TB2 (client); if you've got a TB3 cage, you need a TB3 host to use it.
 
The nMP 6,1 uses a PEX PCIe switch to connect the Falcon Ridge TB2 controllers to 8 PCIe 3.0 CPU lanes (which yields 12x PCIe 2.0), and believe me, those switches work as advertised...

For the nMP 7,1, using a PEX 97xx, Apple can bridge all 40 CPU lanes to the switch and let the switch handle priority for all PCIe devices (as ASRock does on their X99 WS motherboard). The PEX switch then enables up to 8 PCIe x16 ports, each with its own DMA channels (so an NVMe SSD moving data to a Thunderbolt device doesn't need to use CPU resources), and the new PEX 9700 series is also low-latency, with little impact on performance. With 8 PCIe ports, Apple can use 2 for the GPUs, 1 for each TB3 controller, 1 for the NVMe SSD (even an x8 NVMe, though that's unlikely), 1 for wireless (or the 8 PCIe 2.0 lanes from the PCH), and 1 for 10 GbE (or the PCH's PCIe 2.0 lanes).
The GPU doesn't need to change form factor. Maybe Apple will change the SSD keying to avoid old SSDs being swapped with NVMe SSDs, but the new SSD can still route its PCIe 3.0 signals through the same 4 lanes.

Alternatively, Apple could use a PCIe 2.0 x8 NVMe drive, or RAID two PCIe 2.0 x4 NVMe drives, since the PCH's PCIe lanes have less latency. Two PCIe 2.0 NVMe drives in RAID would also add hype to the Mac Pro (while their actual speed is similar to a more convenient single x4 PCIe 3.0 NVMe drive).

There will be no TB2 (host) to TB3 (client) adapter, only TB3 (host) to TB2 (client); if you've got a TB3 cage, you need a TB3 host to use it.

So all the parts change, all the interconnects change, and all the outputs change, but somehow the connections and wiring all stay the same?

Ludicrous, at best.
 