Status
Not open for further replies.
I really don't need 6 TB3 ports, 4 is more than enough for me. I would like to see it have the same 6 ports though and PCIe3 NVMe of course, but that is highly unlikely and not the best solution, performance wise.
We agree. Each TB3 port could even be split into 2 TB2 ports by an external splitter foreseen by the TB3 specification (still vaporware, at least as far as I've seen).

There's another downside to the SSD being on the PCH: not only does it run at PCIe 2 speed (which, as you say, isn't really much of a problem right now), it also has to go through DMI (again, PCIe 2-like performance) concurrently with the other data sources.

DMI is a 5 GT/s bus wired directly to the CPU's main hub (something like the CPU's internal switch), so it's like having 4 extra PCIe lanes connected to a switch (the PCH). Further, the PCH allows data transfers among its own peripherals without loading the DMI bus, via a sort of DMA. It's not that different from what's inside the Falcon Ridge Thunderbolt 3 controller, which is like a PCH with a bridge between its 4 PCIe 3 lanes and the embedded USB 3.1 and 10GbE controllers (video data isn't bridged; it simply uses a bypass channel to route the DisplayPort signals to the USB-C physical connector).

So the C612 PCH and Falcon Ridge are more or less a tie in that respect.

Of course, all of this only matters under concurrent load. Real contention among the PCH-connected TB2 devices, the USB 3 ports and the NVMe SSD is unlikely, and contention inside a TB3 controller (USB, 10GbE and Thunderbolt competing for the same PCIe lanes) is even less likely. It mostly depends on how the user distributes his peripherals.
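To put numbers on the above, here is a back-of-envelope sketch of the usable link bandwidths being argued about; the per-lane rates and encoding overheads are nominal figures I'm assuming, not measurements:

```python
# Approximate usable per-lane PCIe bandwidth in GB/s, after encoding
# overhead (PCIe 2: 8b/10b, PCIe 3: 128b/130b). Nominal figures, assumed.
PCIE_GBPS_PER_LANE = {2: 0.5, 3: 0.985}

def link_bandwidth(gen, lanes):
    """Usable one-direction bandwidth of a PCIe link, in GB/s."""
    return PCIE_GBPS_PER_LANE[gen] * lanes

# DMI 2.0 behaves roughly like a PCIe 2 x4 link; a Thunderbolt 3
# controller sits on a PCIe 3 x4 back end.
dmi2 = link_bandwidth(2, 4)
tb3_backend = link_bandwidth(3, 4)

print(f"DMI 2.0 (x4-equivalent): ~{dmi2:.1f} GB/s")
print(f"TB3 controller back end: ~{tb3_backend:.2f} GB/s")
```

So everything hanging off the C612 PCH (SSD, SATA, USB, GbE) contends for roughly 2 GB/s, which is the crux of the concurrency argument above.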

I also don't ever use a second GbE port but will Apple go back on the specs? I doubt it. I would see a single Intel controller instead though. But that will get you only an extra lane, not much of a gain there. Moving all comm ports to USB (to free the PCH lanes), although possible, doesn't seem the way to go.

As previously mentioned, Thunderbolt 3 has an onboard 10GbE controller, so the nnMP would ship with 3 Ethernet ports: 1 from the PCH and 1 from each Falcon Ridge controller (the latter requiring a media adapter). So effectively the nnMP improves on this spec, as long as you get the USB-C media adapter for 10GbE (a sort of SFP+).

And the reference in OS X to the number of USB ports is somewhat nonsensical, unless I'm not reading it correctly.
It mentions, I believe, 6 HS ports and 4 SS ports. Well, that would be 6 USB 2.0 ports (High Speed) and 4 USB 3.0/3.1 5/10Gbps ports. That doesn't match what's being discussed here, does it? Even the PCH doesn't have that configuration by itself, unless some ports are disabled. X99 supports up to 14 USB ports, max 6 USB 3.0 and 8 USB 2.0 - disable 2 of each and there you go.

You have 4 SS ports (USB 3.1 over USB-C) and 6 USB 3 ports (the HS ones): four of them on the back panel and the rest for WiFi ac and Bluetooth, with all the native USB 2 and SATA ports disabled by firmware.

The leftover ports won't even be used for anything else (GbE, WiFi, BT) unless the active USB 2 ports are already dedicated to that.

The PCH doesn't provide WiFi or BT, and its GbE isn't linked to the USB.

But where are the TB3 ports then? Do they even show up as USB ports at all? If they are those 4 SS ports then where are the PCH USB 3 ports?

Just wondering here..

Falcon Ridge provides its own USB 3.1, 2 ports per header (SS/10Gbps). The USB 3 ports from the PCH are the HS ones (5Gbps) and serve the legacy Type-A connectors on the back panel and the internal BT/WiFi.
 
Another thing that troubles me right now is that even 5K HDR is not supported on DP 1.3, so this will limit the TB3 specs. Only 4K is supported, so we'll have either 5K SDR or 4K HDR, @60Hz that is...

FYI Thunderbolt 3 only supports DP 1.2, and will require MST to drive a 5K display.

Don't cry yet. The nnMP is also expected to have an HDMI 2.0 port, on which you could even drive an 8K display if there's a capable GPU on board, so demanding deployments will likely use that HDMI 2.0 port for HDR@5K rather than DisplayPort, and won't require MST.
 
What is the ultimate conclusion, guys, about the amount of USB/TB3 on Mac Pro?
 
http://www.guru3d.com/news-story/nv...double-precision-for-flagship-pascal-gpu.html

12 TFLOPs of compute power SP, and 4 TFLOPs of DP. 6144 CUDA cores, at 1000 MHz, plus 16/32 GB of HBM2. Wanted to post earlier as a rumor, but it turned out to come out faster than I thought.

Second rumor is that Polaris GPUs' per-core graphics performance will match Maxwell's. So if this is true, we can expect a 2048-core GCN4 GPU to match the gaming performance of a 2048-CUDA-core Maxwell GPU.

Yup, if all this is true, we have an extremely interesting year...
 
Mago, you got me wrong, but let's recap.
DMI at the moment is PCIe 2 x4 and not PCIe 3 x4, in X99/C612. Only Skylake gets DMI with PCIe 3 speed.
The bottleneck could come in case you're using SSD, GbE, USB and whatever else (TB2, if still available on the new nMP) concurrently, struggling for the DMI channel bandwidth; most of the time it will hardly be noticeable for most of us.
I'm aware of the 10GbE controller available with TB3 of course, but I was saying that I don't think Apple will use those and cap any TB3 ports, since those aren't used to the fullest for many people yet and it would hamper the already few available TB3 ports. They won't ditch the 2 GbE ports onboard just yet, IMO, and make them 10GbE - and if both were used those would be 4 ethernet ports. Sorry, you mentioned they'd keep the USB-C ports and we'd have to get adapters, that works for me. But leaving out GbE ports counting on people buying 10GbE adapters?! Nah, they wouldn't!!!
Regarding USB ports, you will see that HS is USB2 and not USB3 as you mention. All USB3 are SS, regardless of being 3.0 or 3.1, and even here 5Gbps or 10Gbps.
What I was saying regarding GbE/WiFi/BT was that if we wanted to make all PCH lanes available, those would have to be moved to some of the USB2 ports available in the chipset (not a great solution in my opinion), instead of disabling them all. This I believe can be done but I wouldn't go that route. Requires additional engineering and different components.
Ye, FR has an internal USB3.1 10G controller, which would make the 4 rumored. But the remaining 6 HS are USB2 and not the PCH USB3 controllers, like I mentioned before. That's why it doesn't add up for me, but I might be missing something.
Still, if you start using 10GbE and USB 3.1 in every TB3 port you end up with no display-capable port, right? And you're supposed to be able to hook up at least a couple of displays, each being 5K would almost be a requirement at this time. And I know TB3 supports only DP 1.2, but MST would do the trick. Do you really see Apple now telling you to use HDMI over DP to get HDR at 5K? That would be like saying TB3/DP is not for display output anymore, a fail.
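Since the HS/SS naming keeps tripping people up in this thread, here is the USB-IF speed-grade table as a small lookup; the figures are the standard nominal rates:

```python
# USB speed grades as named by the USB-IF. The point of the correction
# above: "HS" (High Speed) is USB 2.0, not USB 3.0.
USB_SPEED_GRADES = {
    "LS":  ("Low Speed",   "USB 1.0", 0.0015),  # Gbps
    "FS":  ("Full Speed",  "USB 1.1", 0.012),
    "HS":  ("High Speed",  "USB 2.0", 0.48),
    "SS":  ("SuperSpeed",  "USB 3.0", 5.0),
    "SSP": ("SuperSpeed+", "USB 3.1 Gen 2", 10.0),
}

name, spec, gbps = USB_SPEED_GRADES["HS"]
print(f"HS = {name} ({spec}), {gbps} Gbps")
```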
 
The OS X leak shows 10 USB ports: 4 SS (3.1) meant for USB-C Thunderbolt, and 6 HS (3.0), 4 for the back panel and 2 for internal peripherals (WiFi, BT). That's how I read it.

I wouldn't take the 10 USB ports thing as fact. The rumor stemmed from some digging in OS X system files. I think it was someone on this forum that identified these 10 ports as a combination of usb 3 and usb 2 ports. Some of these would likely be internal to the system for things like bluetooth. So don't read too much into it.

In terms of the PCIe configuration debate. If Apple wanted to convert the 3 thunderbolt 2 controllers into 3 thunderbolt 3 controllers there would not be enough bandwidth if each controller was saturated simultaneously. We don't know what mix of USB 3.1/Thunderbolt 3 and Thunderbolt 2 ports Apple will use. They could keep a TB2 controller or 2 due to the existing ecosystem of TB2 peripherals and the fact that you need an active adapter to go from TB3 to TB2. There is likely another bandwidth requirement with an NVMe SSD (and we can dream that they could use 2 SSDs). I am not going to pretend that I can map out all the lanes and claim that this is the answer but I see a few possible options.

1. Drop one of the GPUs from 16x to 8x. I doubt this would be noticeable even in benchmarks and this would give them enough bandwidth for at least a couple TB3 controllers and the SSD.

2. Keep the same configuration as what is in the mac pro now. While this would not increase the amount of bandwidth available to all the controllers, any individual controller can use a larger share of the available bandwidth. Thus, if you aren't saturating all the TB3 controllers at once you would likely not notice. Additionally, displayport over TB3 would not count against this bandwidth, so you wouldn't tank your performance by adding say a retina display.

3. Maybe someone who is more knowledgeable about this than me can weigh in. Another possibility could be to manage the PCIe lanes "on the fly." If only 1 TB3 device is connected, then each GPU is at 16x. If you have devices plugged into each TB port, then drop one of the GPUs to 8x to give the TB controllers extra bandwidth. Again, this may not be possible, but it would be a nice way of handling the bandwidth problem. This type of solution would be a good reason to buy a company that makes PCIe splitters, so they can develop it for you.
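A quick way to sanity-check options like these is to total the lanes each one consumes against the CPU's 40 PCIe 3 lanes; the device names and per-device lane counts below are illustrative assumptions, not a known configuration:

```python
# Hypothetical lane budgets for a 40-lane Haswell-EP CPU (C612 platform).
CPU_LANES = 40

options = {
    "Option 1: one GPU at x8": {
        "GPU A": 16, "GPU B": 8, "TB3 #1": 4, "TB3 #2": 4, "NVMe SSD": 4,
    },
    "Option 2: both GPUs at x16": {
        "GPU A": 16, "GPU B": 16, "TB3 #1": 4, "NVMe SSD": 4,
    },
}

for label, alloc in options.items():
    used = sum(alloc.values())
    status = "fits" if used <= CPU_LANES else "OVER BUDGET"
    print(f"{label}: {used}/{CPU_LANES} lanes ({status})")
```

Dropping one GPU to x8 leaves spare lanes for a second TB3 controller; keeping both at x16 uses the budget exactly and caps you at one extra x4 device.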
 
JM, you get (almost) full bandwidth, depending on what you hook up at each port and the requirements of the connected device.
There are no miracles, and we keep wanting more and more. Even come Skylake, and assuming the nMP gets Purley with the full 48 PCIe 3 lanes (which I doubt), by then we'll need an extra 8 or 16 lanes for something else. Wanna bet?
We'll need a couple of 10GbE ports, half a dozen SSDs, more TB ports. Maybe when the Xeon gets about 100 lanes it will be enough, or not!! :)
Stacc, I don't think they'll cut either of the GPUs to x8; the GPUs are a very important part of the nMP and they wouldn't cut there, even if there's almost no penalty.
I believe a dynamic balancing would be possible but at least an additional switch would be required and some firmware tweaks, don't think Apple will do that though.
 
I wouldn't take the 10 USB ports thing as fact. The rumor stemmed from some digging in OS X system files. I think it was someone on this forum that identified these 10 ports as a combination of usb 3 and usb 2 ports. Some of these would likely be internal to the system for things like bluetooth. So don't read too much into it..

This was an old discussion. FYI, SS means SuperSpeed USB, i.e. USB 3.1, the kind of USB that comes from the Falcon Ridge Thunderbolt 3 controller (or header, if you prefer that naming), while HS means High Speed USB, i.e. USB 3.0 - this is the naming defined by the USB consortium.


In terms of the PCIe configuration debate. If Apple wanted to convert the 3 thunderbolt 2 controllers into 3 thunderbolt 3 controllers there would not be enough bandwidth if each controller was saturated simultaneously. We don't know what mix of USB 3.1/Thunderbolt 3 and Thunderbolt 2 ports Apple will use. They could keep a TB2 controller or 2 due to the existing ecosystem of TB2 peripherals and the fact that you need an active adapter to go from TB3 to TB2. There is likely another bandwidth requirement with an NVMe SSD (and we can dream that they could use 2 SSDs). I am not going to pretend that I can map out all the lanes and claim that this is the answer but I see a few possible options..

Each Thunderbolt controller (or header) shares 4 PCIe lanes among its ports. Thunderbolt 2 uses PCIe 2, Thunderbolt 3 uses PCIe 3. Further, while Thunderbolt 2 only provides PCIe bridging, the Thunderbolt 3 Falcon Ridge chipset also provides USB 3.1 and 10GbE, with all these peripherals bridged to the same 4 PCIe lanes.

1. Drop one of the GPUs from 16x to 8x. I doubt this would be noticeable even in benchmarks and this would give them enough bandwidth for at least a couple TB3 controllers and the SSD..

Thunderbolt is not as critical as GPU bandwidth for workstation tasks; this is absolute nonsense.

2. Keep the same configuration as what is in the mac pro now. While this would not increase the amount of bandwidth available to all the controllers, any individual controller can use a larger share of the available bandwidth. Thus, if you aren't saturating all the TB3 controllers at once you would likely not notice. Additionally, displayport over TB3 would not count against this bandwidth, so you wouldn't tank your performance by adding say a retina display..

The OS X leak at least provides cues that the new nMP has only 4 SS USB 3.1 ports, the kind of USB that comes from Falcon Ridge. That means only 4 TB3 ports, which means 2 Falcon Ridge headers using the remaining 8 PCIe 3 lanes from the CPU.

But there are still 8 PCIe 2 lanes available from the C612 PCH. Apple could take 4 for the NVMe SSD (the current nMP uses 2 lanes) and 4 for a Thunderbolt 2 header, giving 6 Thunderbolt ports in total (4 gen 3 and 2 gen 2).

That doesn't leave free PCIe 2 lanes for WiFi and Bluetooth, but those peripherals are usually attached internally to USB 3 ports by almost every other manufacturer. The only big loss is the 2nd GbE port; users could then opt to plug a 10GbE media interface into a free Thunderbolt 3 port, which should actually be much cheaper than adding a 10GbE PCIe card or Thunderbolt 2 adapter.
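The allocation sketched above can be tallied the same way; all of these assignments are this post's speculation, not confirmed specs:

```python
# Speculative nnMP lane map: 40 PCIe 3 lanes on the CPU, 8 PCIe 2 on the PCH.
cpu_alloc = {"GPU A": 16, "GPU B": 16,
             "Falcon Ridge #1": 4, "Falcon Ridge #2": 4}   # PCIe 3
pch_alloc = {"NVMe SSD": 4, "Thunderbolt 2 header": 4}      # PCIe 2

assert sum(cpu_alloc.values()) == 40  # exactly fills the CPU lanes
assert sum(pch_alloc.values()) == 8   # exactly fills the PCH lanes

tb3_ports = 2 * sum(1 for name in cpu_alloc if "Falcon Ridge" in name)
tb2_ports = 2  # one TB2 header drives 2 ports
print(f"Total Thunderbolt ports: {tb3_ports} TB3 + {tb2_ports} TB2 = "
      f"{tb3_ports + tb2_ports}")
```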

3. Maybe someone who is more knowledgeable about this than me can weigh in. Another possibility could be to manage the PCIe lanes "on the fly." If only 1 TB3 device is connected, then each GPU is at 16x. If you have devices plugged into each TB port, then drop one of the GPUs to 8x to give the TB controllers extra bandwidth. Again, this may not be possible, but it would be a nice way of handling the bandwidth problem. This type of solution would be a good reason to buy a company that makes PCIe splitters, so they can develop it for you.

Unlikely. As an extreme solution (and one I wouldn't follow), Apple could use a PCIe switch to share those 8 PCIe 3 lanes from the CPU among Thunderbolt, NVMe and 10GbE, but this is naive, since said NVMe would perform better on the PCH's free PCIe 2 lanes than sharing bandwidth with Thunderbolt 3 peripherals (which tend to carry a high load).

Even less likely (and nobody has tried it yet) is to put a switch behind all 40 PCIe 3 lanes; it makes no sense.
 
Mago, your urge to type has outstripped your knowledge, again.

6,1 has 4 PCIE lanes on the PCIE SSD already. Might want to try doing fact checking before typing out nonsense.
 
I guess Intel didn't consider it at the time, maybe because there were no DP1.3 cards yet.
I believe it's only a matter of validation.
Maybe they'll make a spec update sometime in the future, don't hold your breath though.

Mago, you keep saying that High Speed USB is USB 3.0 but in fact it's USB 2.0.
Correct, the SSD on the nMP is a 4-lane device. It had to be, to be that fast.
 
Thunderbolt is not as critical as GPU bandwidth for workstation tasks; this is absolute nonsense.
Not to be a jerk, but show me the data. I have seen many results that show dropping PCIe from 16x to 8x has a negligible impact on gaming performance. What is different about compute workloads that change this?
 
Not to be a jerk, but show me the data. I have seen many results that show dropping PCIe from 16x to 8x has a negligible impact on gaming performance. What is different about compute workloads that change this?

It would have an impact only if the process isn't optimized and there's a vast amount of wasted memory transfers between the main system and the GPUs. If the process is optimized and the units to be processed by the GPU minimize the number of transfers on the bus, then there isn't much of a difference between x16 and x8.
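A toy estimate shows why the x16 vs x8 penalty depends on transfer volume; the batch size and link rates below are assumptions for illustration:

```python
# Host-to-GPU transfer time over PCIe 3 x16 vs x8, using nominal usable
# bandwidths (~0.985 GB/s per lane). Real transfers achieve less.
GBPS_X16 = 15.75
GBPS_X8 = 7.88

def transfer_ms(size_gb, link_gbps):
    """Milliseconds to move size_gb over a link of link_gbps GB/s."""
    return size_gb / link_gbps * 1000

batch_gb = 0.5  # data shipped to the GPU per work batch (assumed)
t16 = transfer_ms(batch_gb, GBPS_X16)
t8 = transfer_ms(batch_gb, GBPS_X8)
print(f"x16: {t16:.1f} ms  x8: {t8:.1f} ms  extra: {t8 - t16:.1f} ms")
```

If the GPU then crunches that batch for whole seconds, the extra ~30 ms at x8 disappears into the noise, which matches the "optimized transfers" point above.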
 
The question, tuxon, is how all of what you have written will look in a Metal environment.
 
Mago, you got me wrong, but let's recap.
DMI at the moment is PCIe 2 x4 and not PCIe 3 x4, in X99/C612. Only Skylake gets DMI with PCIe 3 speed.

DMI is a dedicated bus that links the PCH to the CPU; it is neither PCIe 2 nor PCIe 3. The PCIe signals are generated by the PCH (8 PCIe 2 lanes in the C612 case, 4-8 PCIe 3 in the Skylake case). https://en.wikipedia.org/wiki/Direct_Media_Interface

The bottleneck could come in case you're using SSD, GbE, USB and whatever else (TB2, if still available on the new nMP) concurrently, struggling for the DMI channel bandwidth; most of the time it will hardly be noticeable for most of us.
It depends on the nature of the peripherals. If you combine an SSD with Ethernet and some TB capture device, truly concurrent access is very unlikely, since most of the time one device is accessed in order to write to another, and the CPU/DMA must read from one before writing to the other.
I'm aware of the 10GbE controller available with TB3 of course, but I was saying that I don't think Apple will use those and cap any TB3 ports, since those aren't used to the fullest for many people yet and it would hamper the already few available TB3 ports.
Are you talking about the same company that launched a Retina MacBook with just a single USB-C port for everything?

First, 10GbE is not an option; it's part of the TB3 specification, and media adapters are foreseen by Intel. Those devices will let you connect any kind of 10GbE cable and still leave you a free USB 3.1, DP 1.2, TB2 or TB3 port for daisy-chaining.

Further, this feature is promoted as a way to link 2 systems without actually purchasing 10GbE infrastructure: you can just connect 2 TB3 Macs with a TB3 cable and automatically link them as host and server, a sort of wired WiFi Direct on steroids.
They won't ditch the 2 GbE ports onboard just yet, IMO, and make them 10GbE - and if both were used those would be 4 ethernet ports. Sorry, you mentioned they'd keep the USB-C ports and we'd have to get adapters, that works for me.
The problem with 10GbE is that in professional deployments you're unlikely to use an RJ45 jack; media adapters are usually used instead (SFP+ being the most popular), since 10GbE cabling is very tricky: depending on the topology you may use copper or optical fiber, and a pro system likely requires this media flexibility. Besides, there are few adapters offering 10GbE/GbE/100Mb link speeds, and that isn't really the 10GbE standard anyway.

But leaving out GbE ports counting on people buying 10GbE adapters?! Nah, they wouldn't!!!
Most people won't buy media adapters (not the same as a 10GbE adapter; MAs are much cheaper) but will simply use a TB3 cable. Anyway, those 10GbE interfaces are part of TB3; I doubt Apple will duplicate this at the expense of an optimal system. The best option for everyone is to ditch the 2nd port, and those hungry for speed can buy a relatively cheap 10GbE adapter (worth mentioning: on 10GbE you don't need tricks like link aggregation, you just get full speed).
Regarding USB ports, you will see that HS is USB2 and not USB3 as you mention. All USB3 are SS, regardless of being 3.0 or 3.1, and even here 5Gbps or 10Gbps.
More or less: the leaked OS X file reads SSP01...04, and SuperSpeed+ is the moniker for USB 3.1.

There is a bit of confusion about those "HS" ports: while HS mostly refers to USB 2, on systems mixing USB 3.1 and USB 3.0 it could be used by the OS for differentiation rather than strictly meaning USB 3 vs USB 2. It wouldn't be the first OS reference like this.
What I was saying regarding GbE/WiFi/BT was that if we wanted to make all PCH lanes available, those would have to be moved to some of the USB2 ports available in the chipset (not a great solution in my opinion), instead of disabling them all. This I believe can be done but I wouldn't go that route. Requires additional engineering and different components.
Ye, FR has an internal USB3.1 10G controller, which would make the 4 rumored. But the remaining 6 HS are USB2 and not the PCH USB3 controllers, like I mentioned before. That's why it doesn't add up for me, but I might be missing something.
I don't understand you. Using the GbE from the PCH doesn't occupy any PCIe 2 lanes, and neither do the SATA or USB 3 ports on the PCH; that they all share the DMI link is another question. USB 2 doesn't have the bandwidth for WiFi ac. Most motherboards aren't filled with all the ports available on the PCH; those ports are simply disabled in the BIOS or UEFI firmware, no engineering required.

The HS/USB2 issue may be an error or a mask. Independent of whether the leak is valid or not, the fact is that the C612 PCH has 6 USB 3 ports and each Falcon Ridge has 2 USB 3.1 ports. The combination I imagine feasible (which doesn't mean I'm ruling it the only way) is reasonable and optimal: it avoids PCIe 3 bridges and still provides a couple of TB2 ports.

By the time the nnMP arrives we're unlikely to see NVMe faster than 2.5 GB/s, and USB 3.1 Type-A/C peripherals aren't very common either, so having 2 fewer TB3/USB-C ports will hurt less than the performance lost to PCIe bridges (especially if you hook GPUs to the TB3 ports).

Still, if you start using 10GbE and USB 3.1 in every TB3 port you end up with no display-capable port, right? And you're supposed to be able to hook up at least a couple of displays, each being 5K would almost be a requirement at this time. And I know TB3 supports only DP 1.2, but MST would do the trick. Do you really see Apple now telling you to use HDMI over DP to get HDR at 5K? That would be like saying TB3/DP is not for display output anymore, a fail.

I think most people will use TB3 to link multiple 4K screens rather than 5K, and those with 5K will prefer HDMI; at least that's the path I would follow.

Actually, HDR is a matter for TV/video content creation, not for production displays. HDR video requires content encoded for HDR; it's more a capture issue than a display issue, whatever some marketing tries to sell. You can watch HDR video on any 10-bit 4K/5K-capable monitor over an uncompressed interface such as DP 1.2 SST/MST or HDMI, as long as it can handle 10-bit color depth. So I don't care much for HDR (look up what HDR is and how it's produced).

But high frame rate is another question. Also, some people think a video link without MST delivers a better image, uses less power, etc. We have a 2014 5K iMac and we don't see any issue with that display. I've also connected a Dell 5K display via MST to my nMP, and the only issue was on initial setup; nothing very different from the iMac's display.

So, assuming you're smart about it, with 4 TB3 ports you can connect a 4K display and a 10GbE media adapter to port 1 and still have port 2 on the same header available for Thunderbolt PCIe data, such as an external capture device or an SSD; you can do the same with USB-C devices (more likely for storage). Then you can hook 2 4K displays (or 1 5K) to port 3 and an external GPU to port 4. As for the two TB2 ports, you may not be able to use them for DP 1.2 signals if you use the HDMI 2.0 port (where you could plug a 5K display at 120Hz refresh); they're more likely for legacy TB2 storage or capture devices, without interfering with the video output on HDMI 2.0.

The fact is, very few people using an nMP as a workstation actually plug in an Ethernet cable, and I'd bet that 90% or more don't use more than 1 Ethernet port, so losing 1 Ethernet port is no game changer when 10GbE is already available (and cheaper). While more expensive, 10GbE is not only 5 times faster than 2 aggregated GbE links, it also frees you from configuring tricky link aggregation: writing data to a server over 10GbE is like having an eSATA HDD plugged in, or a TB enclosure directly connected to the nMP.
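The "full speed without link aggregation" point can be made concrete; nominal line rates, protocol overhead ignored:

```python
# One 10GbE link vs two aggregated GbE links (nominal rates, no overhead).
ten_gbe = 10.0          # Gbps
dual_gbe_lag = 2 * 1.0  # Gbps, best case spread across many flows

print(f"Aggregate: {ten_gbe / dual_gbe_lag:.0f}x the bandwidth of 2x GbE")

# Link aggregation typically hashes each flow onto ONE member link, so a
# single large file copy still tops out at 1 Gbps on the LAG pair:
single_flow_lag = 1.0
print(f"Single flow: {ten_gbe / single_flow_lag:.0f}x faster on 10GbE")
```

So for the common case of one big transfer to a server, the real gap is 10x, not 5x, on top of skipping the LAG configuration entirely.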
 
Apple shipped a 'Pro' last year, the iPad Pro - Like it or not, it's the trash can replacement.

I was in an Apple store a while ago, There was a huge pile of ATV4s piled next to the trash can - So I tossed a few in.

Darn - wish I snapped a photo...
 
Like it or not, DMI is PCIe-like, pretty much based on it, with a different protocol probably. But OK, let's just leave it.
I agree with you on Ethernet, I haven't used a cable lately.
But GbE does use a PCIe lane on the PCH. I also thought it didn't, but it does.
Let's see how the USB issue comes out.
 
Like it or not, DMI is PCIe-like, pretty much based on it, with a different protocol probably. But OK, let's just leave it.
I agree with you on Ethernet, I haven't used a cable lately.
But GbE does use a PCIe lane on the PCH. I also thought it didn't, but it does.
Let's see how the USB issue comes out.
You should see DMI as a sort of raw PCIe link to the PCH, which is more like a PCIe switch with some built-in peripherals, and which leaves 8 PCIe lanes free.
 
Apple will surprise everyone (and please some, disappoint some).

I think that the nnMP may surprise everyone. Single mid-range GPU (not upgradeable at CTO or aftermarket). Single and Dual CPU. Single NVMe SSD up to 4 TB.

Single CPU model has six T-Bolt 3 controllers (not six ports on three controllers), routing 24 PCIe 3.0 lanes outside the chassis. 256 GiB RAM.

The proprietary (and probably quite expensive) Apple eGPU chassis can support a PCIe x16 GPU and a PCIe x8 GPU, or three x8 GPUs. The cheaper "FCPX accelerator" chassis can support one. That's why the base system has a soldered GTX 750 ti or equivalent that will handle the GUI eye candy on a 4K screen.

Dual CPU model (probably somewhat larger) will have up to three internal NVMe SSDs (12 TB). One TiB RAM. Fourteen T-Bolt 3 controllers, with three by x16 support (or six by x8).

If the future is "out of the box expansion" - this is obvious. The last ten pages have been focussed on how to stretch the number of PCIe lanes. Go dual socket, and you get about twice as many lanes.

It all depends on whether "Phil My Ass" wants to innovate, or sit on his ass.
 