Boy, I really wish there was a higher resolution version of that block diagram.
More importantly, since everybody is calling the 2013 MacPro a "nMP", what will we call the nMP rev 2.0?
Simples - "nnMP".
According to Anand, the nMP is actually short 4 PCIe lanes (hence the switch for the TB controllers)...
http://www.anandtech.com/show/7603/mac-pro-review-late-2013/8
...
So there are no free lanes.
Yeah no kidding... Is this official?
The key difference here is that it shows the USB hanging off the PEX switch instead of the PCH (Anand's assumption).
And if this is correct, it shows the SSD attached to the display GPU which blows any theory of interconnect pin constraints out the door.
- USB 3 is still only a single lane, despite the fact that it and the three TB controllers share 8 PCIe 3.0 lanes - the equivalent of 16 2.0 lanes. They could have given USB x4 without impacting TB performance, so it's an odd choice to artificially limit it - maybe a limitation of the USB chipset they selected?
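As a rough back-of-envelope (a sketch using the standard per-lane rates from the PCIe 2.0 and 3.0 specs; nothing here is specific to Apple's implementation), the "8 3.0 lanes = 16 2.0 lanes" equivalence works out like this:

```python
# Usable per-lane PCIe bandwidth, in Gbit/s.
# Gen 2: 5 GT/s with 8b/10b encoding  -> 80% efficiency.
# Gen 3: 8 GT/s with 128b/130b encoding -> ~98.5% efficiency.
GEN2_LANE = 5.0 * (8 / 10)       # 4.0 Gbit/s
GEN3_LANE = 8.0 * (128 / 130)    # ~7.88 Gbit/s

# Eight Gen 3 lanes upstream of the switch...
upstream = 8 * GEN3_LANE         # ~63 Gbit/s
# ...are roughly sixteen Gen 2 lanes' worth, as claimed above.
print(upstream / GEN2_LANE)      # ~15.75 "Gen 2 lane equivalents"
```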
Two points:
- I count 4 unused lanes on that diagram - look for them
- There's nothing wrong with modest oversubscription - of course if you run a "bandwidth virus" you'll see a slowdown, but almost never in real use
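To put the oversubscription point in numbers - a sketch assuming the topology discussed in this thread (three x4 Gen 2 TB controllers plus a x4 Gen 2 USB port, all behind a single x8 Gen 3 uplink):

```python
# Oversubscription ratio: total downstream link capacity vs. the uplink.
# Topology assumed from the diagram discussion, not from a datasheet.
GEN2_LANE = 5.0 * 8 / 10         # 4.0 Gbit/s usable per Gen 2 lane
GEN3_LANE = 8.0 * 128 / 130      # ~7.88 Gbit/s usable per Gen 3 lane

downstream = (3 * 4 + 4) * GEN2_LANE  # 16 Gen 2 lanes = 64 Gbit/s
upstream = 8 * GEN3_LANE              # ~63 Gbit/s

ratio = downstream / upstream    # ~1.02: only a couple percent
print(ratio)                     # oversubscribed, so contention is rare
```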
Thanks for the explanation!
So the huge advantage of building a computer from E-series chips (Xeon) vs. i-series (i7, i5) is that the Xeon has many more "lanes" of bandwidth, which are called PCIe lanes?
Another is that the Xeon can take heat better, so it wouldn't have to throttle down, or would throttle later, compared to the i-series.
And finally, Xeons use ECC RAM, whereas the i-series can't.
Do you think the next nMP will have ECC RAM for the video cards?
Is there such a feature of ECC when writing to storage?
Actually, now that I see this diagram, you're right... they could move the USB to the unused lane on the PCH and use those four PCIe lanes on the PLX to facilitate another SSD without impacting much at all.
More importantly, since everybody is calling the 2013 MacPro a "nMP", what will we call the nMP rev 2.0?
We'll probably go back to the internal model numbers: 5,1, 6,1... 7,1?
It looks like the PCH has PCIe 2.0 rather than the PCIe 3.0 that the PCIe switch has, so the USB controller would suffer even more than it already does.
The curious thing with regard to the USB specifically (assuming the diagram is correct) is that the USB controller is only x1 PCIe, but is connected to a x4 connection on the PCIe switch.
As Deconstruct60 points out above, USB 3 controllers are all x1 v2 PCIe devices.
What's particularly odd, is that they put USB 3 on the PLX and all the PCIe networking on the PCH. It probably would have been easier from a PCB design perspective to put all the I/O off the PLX keeping all related traces local to the I/O board.
So, something that is supposed to have 5 Gbit/s of throughput per port is actually limited to a theoretical 4 Gbit/s aggregate across all ports. Why doesn't that surprise me? USB is such a giant pile of crap for anything serious...
I'm guessing that typical PC motherboards have multiple controllers (like one controller for every 2 ports or something).
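The per-port vs. aggregate arithmetic above is simple enough to check. A sketch, assuming four USB 3.0 ports behind the single x1 PCIe 2.0 controller link described in this thread:

```python
# Nominal USB 3.0 signalling rate per port vs. the usable bandwidth of
# the single x1 PCIe 2.0 link the controller reportedly sits on.
USB3_PORT = 5.0                # Gbit/s per USB 3.0 port (SuperSpeed)
GEN2_X1 = 5.0 * 8 / 10         # 4.0 Gbit/s usable (8b/10b encoding)

PORTS = 4                      # assumed port count for illustration
nominal = PORTS * USB3_PORT    # 20 Gbit/s of nominal port bandwidth...
print(nominal, GEN2_X1)        # ...all funnelled through ~4 Gbit/s
```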
If the PLX switch delivers x4, then you would need another PLX switch to "break" that into four x1 lanes.
It is likely a property of the switch. It is configured as an x8 -> (x4, x4, x4, x4) switch. There probably isn't a mode that gets three x4's and sprinkles the remainder around as individual x1's.
Still odd that Apple wastes one lane on the PCH and three lanes on the PLX, and has that nice empty space on the second GPU where an SSD socket could be put.
It can be configured with x1 output ports - the limit is 6 ports.
...
That means that x8 -> (x4, x4, x4, x2, x1) could be done (leaving one lane on the table). Or x8 -> (x4, x4, x4, x1) leaving three lanes wasted.
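Those configuration options can be enumerated mechanically. A sketch, assuming a 24-lane switch (x8 up, 16 lanes down), the 6-port limit mentioned above (so 5 downstream ports), and port widths of x1/x2/x4 - the 24-lane / 16-downstream split is inferred from the examples here, not taken from a datasheet:

```python
from itertools import combinations_with_replacement

# Enumerate possible downstream port configurations for the switch:
# 16 downstream lanes, at most 5 downstream ports, widths x4/x2/x1.
DOWN_LANES, MAX_PORTS, WIDTHS = 16, 5, (4, 2, 1)

configs = set()
for n in range(1, MAX_PORTS + 1):
    for combo in combinations_with_replacement(WIDTHS, n):
        if sum(combo) <= DOWN_LANES:       # leftover lanes go unused
            configs.add(tuple(sorted(combo, reverse=True)))

print((4, 4, 4, 2, 1) in configs)  # True - one lane left on the table
print((4, 4, 4, 1) in configs)     # True - three lanes wasted
```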
A columnist for Mac Life wrote that PCIe OtB (outside the box) would take over and that TB will fade. He thought it was a mistake that Apple devoted so many outlets to TB. He didn't say it explicitly, but it sounds like he was saying TB will end up like FireWire.
What do you all think of PCIe OtB?
Never heard of it. WTF is PCIe OtB? Isn't that what TB provides? I guess my ignorance says something about it.
It's usually called PCIe external....
http://en.wikipedia.org/wiki/PCI_Express#PCI_Express_External_Cabling