the software doesn't save history states with the file, so if I make some progress in one direction, the image gets saved at some point, and then the app crashes.. upon re-opening, i no longer have the ability to step back through the history, or sample from an earlier state.
Not to hijack the thread, just an FYI: Serif's Affinity Photo and Designer can save history states with the file. They're not as feature-rich as Photoshop and Illustrator (they're v1 software), but worth keeping an eye on.
 
At work I've seen far more instability caused by failing HDDs than by failing RAM. Across around 30k desktops, laptops and workstations, we only get about a dozen bad DIMMs reported per year, but hundreds of spinners going bad per year.
See https://zzzmaestro.wordpress.com/2010/06/25/memory-errors-and-cosmic-rays/ - it doesn't have anything to do with "failing" RAM in many cases.

The more RAM you have, the more exposed you are. Our smallest servers are 64 GiB, most are in the 256 GiB to 512 GiB range, and a dozen or two are in the 768 GiB to 1.5 TiB range.

One fun thing with servers is that my ProLiants have an LED error map on the front, as well as iLO management console displays. A single bit error lights the LED corresponding to a DIMM, and iLO shows the DIMM and error count.

I see two different patterns in these errors.

Very rarely, a DIMM will report millions of errors in a short time. (Actually, the error reports are rate-throttled so a few thousand in a short period will disable error reporting for that DIMM to avoid killing the system with the load of error logging.) This means that a package on a DIMM has failed, or a bank inside a package has failed (or sometimes just that the DIMM needs to be reseated). The system continues to run normally, although there's some small hit to memory references that have to be corrected.

Action: contact the next-day support group. Usually they'll overnight a replacement DIMM so that you can schedule a short downtime to swap it in.

The other pattern is a single ECC correction on a system that hasn't reported an error in months or years. These happen anywhere from maybe two per month down to one every two or three months. Real, but rare and random.

Action: Laugh, raise your fist to the sky and shout "Damn you, cosmic rays".
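A tiny sketch to make that distinction concrete (purely illustrative thresholds; this is not how iLO or any HPE tooling actually classifies errors):

```swift
import Foundation

// Purely illustrative sketch of the two patterns described above: a burst of
// corrections pointing at a failed package/bank on a DIMM, versus a one-off
// correction from a random bit flip.
struct ECCEvent {
    let dimmSlot: String
    let timestamp: Date
}

enum DIMMDiagnosis {
    case suspectHardware   // thousands of corrections in a short window: reseat or replace
    case randomUpset       // isolated correction on an otherwise quiet DIMM: blame cosmic rays
}

// Hypothetical thresholds, chosen only to make the two patterns obvious.
func diagnose(_ events: [ECCEvent],
              window: TimeInterval = 3600,
              burstThreshold: Int = 1000) -> DIMMDiagnosis {
    guard let latest = events.map(\.timestamp).max() else { return .randomUpset }
    let recent = events.filter { latest.timeIntervalSince($0.timestamp) <= window }
    return recent.count >= burstThreshold ? .suspectHardware : .randomUpset
}
```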

But - my home PC has an E5-1650 v2 processor (same CPU and chipset as the hex-core MP6,1) and 128 GiB of ECC RAM.
 
Imagine multiple servers running off the same physical hardware using the elastic sky (VMware ESXi), and a memory bit goes wonky (cosmic rays, yeah). It may now be a year before that physical system is rebooted. What kind of fun would that undetectable memory error cause? For machines that don't get shut down or rebooted often, ECC memory is useful, or certainly more useful than non-ECC memory. If a machine gets powered down and up often, there is less need for ECC memory.
 
Guys... I asked this one guy over at appleinsider.com whether he believes the new Mac Pro will come out.

He said:

I do. There are months of development time, but I think if they were going to use the Broadwell Xeon E5v4 family, Apple would have done so by now. Intel finished the rollout in June, but many of them were available in March. And while the controllers for Thunderbolt 3 are backward compatible to Broadwell, it is really Skylake that is designed for USB 3.1 and Thunderbolt 3.

There is a leaked set of Intel presentation slides from May 2015 that gives some information on this. The Skylake Xeon architecture will feature a new platform (and socket) codenamed Purley, with significant advantages over what has come before. Just search for Skylake and Purley and you'll find various takes from the investor, server, and gaming angles, mostly repeating the same things from different points of view.

Now, whether the Purley platform would make it into the Mac Pro is another question, because if you look at the "1S Workstation" (1S = single-socket, like the current Mac Pro) line across the bottom of the "Purley RoadMap Positioning" slide, you'll find something called "Basin Falls 1S Workstation Platform" that appears to be sort of a hybrid approach. I won't try to speculate -- I'm just pointing out that there are two directions they can go in here. Apple does use a "2S" processor in the highest-end configuration of the current Mac Pro, so a Mac Pro with Skylake architecture on the Purley platform is not out of the question. Nor is a new, dual-socket Mac Pro design. [Okay, now I'm speculating.] But if I were a betting man (I'm not), I think I would be cautious and put my money on the 1S Workstation family. After all, that's really what the Mac Pro is -- the core of a workstation -- it's not a server.

The most important thing to keep in mind is that the Mac Pro is a very forward-looking design. Apple isn't in a hurry here. They are already way out ahead. Some might say too far. Regardless, these new processor architectures and platforms require new approaches to managing the increasing heat -- that is what the Mac Pro is all about. In addition, USB 3.1 and Thunderbolt 3 (and beyond) mean the "workstation" is changing, moving toward the more flexible, modular world exemplified by the current Mac Pro.

Best case is a pre-production intro at WWDC, with availability later in the year. Much like 2013. More likely everything gets pushed back, but it will still happen next year.


So...how do you guys respond to this? delusional?
 
sure, some of my software is capable of auto saving.
however, i don't always want it to.. photoshop is actually a perfect example of this.
the software doesn't save history states with the file, so if I make some progress in one direction, the image gets saved at some point, and then the app crashes.. upon re-opening, i no longer have the ability to step back through the history, or sample from an earlier state.
obviously, having the work saved incrementally in versions would alleviate this to an extent, but it isn't realistic to constantly be writing the file to disk every minute or less. especially when you're talking multi-GB files.

i ran into this a lot (and admittedly, still do) with photoshop. what i started doing was, when i get to a point where i'm unsure how to proceed or which direction to take an image, i'll duplicate all the layers into a new group layer, thus creating a "save state" within the file. it takes advantage of the autosave while still giving me the option of going back without multiple "save as's". a little cleaner, IMO, anyways. i still forget, or get impatient, at times and don't do it but, yes, i get huge files that way. and, yes, i like a lot of ram and fast scratch disks, too. still not as good as if Adobe would save history states in the file, but i bet they leave that out to help mitigate file sizes.
 
Guys... I asked this one guy over at appleinsider.com whether he believes the new Mac Pro will come out. He said: "I do. There are months of development time, but I think if they were going to use the Broadwell Xeon E5v4 family, Apple would have done so by now..." So... how do you guys respond to this? delusional?
No more nor less so than any other speculative post. Not enough to keep me from evaluating a platform switch, though. Especially since it looks as though Apple has intentionally cocked up their TB3 implementation.

[edit] Just read the articles. Some tantalizing new features, but will wait and see (while evaluating platform switch).
 
I happen to find your notes from the Apple Insider guy pretty logical. Particularly the statement: "If they were going to use the Broadwell Xeon E5v4 family, Apple would have done so by now."

Assuming they are committed to the new form factor, there are some things we're just not going to get. That said, with a slight increase in PSU capacity and some improved cooling, the 7,1 could be the heart of a modular workstation I could embrace. Here are a few things I'd be looking for on the spec sheet if it happens.

1) CPU support for plenty of PCIe lanes, hopefully on package, to leverage USB-C/TB3 I/O.

2) Support for at least 128GB of DDR4 RAM (2,400MHz or faster, please); 512GB would be sweet

3) Since the internal GPUs are a custom form factor with TDP challenges, and since AMD appears to be the only vendor Apple will work with, it's gotta be Vega with HBM2.

4) Even if they have to live in an external chassis, Nvidia cards with lots of CUDA cores need robust support - starting with well-written native drivers.

I understand there are advantages to a big tower with everything inside, but there are also advantages to being modular. Once you go modular, don't skimp on the things that can only be handled by the MP itself: RAM, I/O, CPU.
 
sure, some of my software is capable of auto saving.
well, yeah.. almost all of the programs where autosave makes sense do have it.. but i was talking about macOS autosave, which is something third-party devs can hook into if they want.

macOS autosave is, in my experience, much better than per-application autosaving.. it's way more optimized and virtually invisible.
most applications autosave on a time interval.. if you're set to every 20 minutes, then right when the 20 minutes are up it starts saving regardless of what you may be doing at the time.. if you're in the middle of something, you'll often feel the sluggishness kick in while it does its thing.

apple autosave goes every 5 minutes.. unless you're doing something, in which case it waits for a pause in your activity before it saves.. and the saves happen very fast since it only adds the info accumulated since the previous autosave to the autosave file.. as in, it doesn't create a new autosave file each time -- or rather, it doesn't overwrite the file the way (all) other autosaves do.. it just adds the new data to the existing larger file.


however, i don't always want it to.. photoshop is actually a perfect example of this.
the software doesn't save history states with the file, so if I make some progress in one direction, the image gets saved at some point, and then the app crashes.. upon re-opening, i no longer have the ability to step back through the history, or sample from an earlier state.
obviously, having the work saved incrementally in versions would alleviate this to an extent, but it isn't realistic to constantly be writing the file to disk every minute or less. especially when you're talking multi-GB files.
hmm.. actually, versions is exactly what is needed here.. like, apple made versions with exactly these types of scenarios in mind.

a new version is created every hour.. or, if you cmd-S, it will save a snapshot at that point (ie- snapshot your file whenever you want; otherwise, once an hour).. and versions works like autosave with regard to what exactly is being saved each time..
but if i'm working on a 10MB file then save a version, my file isn't now 20MB.. it only adds to the file the new data which was created since last time.. it doesn't just store a bunch of duplicated files inside of one as i think you're imagining.

most of the time, the autosaves or version saves happen in fractions of a second.. there isn't constant disk writing going on and you don't actually notice any of this stuff is even happening.. until your program decides to crash.. then you'll notice it, gladly.

regarding history, you can browse versions in a similar interface to Time Machine.. certain tools work in this browser without needing to restore a previous version.. as in, you can select an object that was in your file last week, cmd-C, then paste the object into the current version without opening any other files/versions.. so there's just the current version open and you can dig through the history to bring stuff back, or revert to a previous point in the file (upon which you can still go back to the more recent version if desired.. going back in the file doesn't mean you lose what you've done after that point).

===
but anyway, i guess none of this matters if your developers haven't tied in to this capability of macOS.
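for what it's worth, the hook on the developer side is small.. a minimal sketch of a document-based Mac app opting into autosave and Versions (an NSDocument subclass with stubbed-out serialization; a real app obviously has more going on):

```swift
import Cocoa

// minimal NSDocument subclass.. overriding `autosavesInPlace` to return true
// is what opts the app into macOS autosave and the Versions browser -- the
// system then handles the incremental saves and the history UI described above.
class SketchDocument: NSDocument {

    var contents = Data()

    override class var autosavesInPlace: Bool {
        return true
    }

    // stub serialization -- a real app would encode/decode its model here
    override func data(ofType typeName: String) throws -> Data {
        return contents
    }

    override func read(from data: Data, ofType typeName: String) throws {
        contents = data
    }
}
```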
 
Guys... I asked this one guy over at appleinsider.com whether he believes the new Mac Pro will come out. He said: "I do. There are months of development time, but I think if they were going to use the Broadwell Xeon E5v4 family, Apple would have done so by now..." So... how do you guys respond to this? delusional?
Optimistically :)

There are so many more knowledgeable members around who can dig into and understand roadmaps and such. I would hope they (Apple) tease something by the spring (April at the latest); if they wait till WWDC, that could really be too long, and I could see a lot of people heading away from the Mac platform at that point. There is a lot of disdain and worry from recent stories in the Apple sphere, from odd position dissolves, fancy fashion status-symbol books, and vaporware ear pods... The Apple we know seems to be quickly rotting :( Without any reassurance or change in the trends, I'm seeing more and more people start venturing out and looking at other alternatives, as we all have seen, or even looking ourselves, even if just for research purposes.
 
Optimistically :) I would hope they (Apple) tease something by the spring (April at the latest); if they wait till WWDC, that could really be too long... The Apple we know seems to be quickly rotting :(
Maybe that person was overly optimistic. Yeah, Apple is known for fashion statements.
 
Guys... I asked this one guy over at appleinsider.com whether he believes the new Mac Pro will come out. He said: "I do. There are months of development time, but I think if they were going to use the Broadwell Xeon E5v4 family, Apple would have done so by now..." So... how do you guys respond to this? delusional?

Well, isn't that lovely! Well, no, more like plain ********, as in the same ******** that Apple has been pushing for years: conducting your marketing with rumors to get the aficionados excited about the next 'secret project'. Trouble is, for this type of marketing you need to deliver, like Jobs did, constantly, so at least we know there are updates; the only mystery should be a redesign, not the very existence of the product. From a business point of view, 3 years without updates or a clear roadmap for your main pro product means 'guys, let's leave the ship'. I already jumped to Dell a year ago, after the nMP arrived without dual CPUs, at a way higher price point, and with no regular annual updates. If you want to base your business and paying your bills on rumors, that's fine. This is not how I conduct my business, though. I need a clear roadmap and assurance that I will get the products I need, when I need them, on a regular basis. When you fail to provide that, I will direct my business elsewhere. If you say you're committed to this market (after a 3-year delay!) and then leave it again for 3 years with no updates or news, that's your sign to run away.....
 
See https://zzzmaestro.wordpress.com/2010/06/25/memory-errors-and-cosmic-rays/ - it doesn't have anything to do with "failing" RAM in many cases. The more RAM you have, the more exposed you are... But - my home PC has an E5-1650 v2 processor (same CPU and chipset as the hex-core MP6,1) and 128 GiB of ECC RAM.

I'm not disagreeing with you. I understand and know all that. I'm just saying that where I am, managing desktops, workstations and laptops in an engineering industry, we are far more impacted by failing HDDs than by bit flips.
I know they do happen, but in my field they don't matter much.

Now, for our research center and our data center we DO use ECC RAM, and in the case of the research center, their servers and mainframes are in a shielded room, since we work on power transmission and generation, where EMPs are a thing...
 
Absolutely not delusional, pat500000.

https://semiaccurate.com/2016/11/17/intel-preferentially-offers-two-customers-skylake-xeon-cpus/

Lets say this straight up front, SemiAccurate thinks Intel’s latest move is going to cause them irreparable damage in public image, customer relations, and long term sales. What are they doing? Several trusted sources say that later today, likely at Supercomputing 16, the company will announce they have pulled in Purley aka Skylake-EP Xeons, to this year and will sell them to two key customers. So far this sounds like good news, next generation cash-cow server CPUs early is a positive thing.
Early next year for Mac Pro?
 
Another self-proclaimed Apple 'insider', who seems to be the same DarkNetGuy, says:

nMP 7,1 e17:
  • Xeon E5 v4, up to 22 cores on BTO, chipset C612
  • GPUs: 2x D510 (AMD WX5100), 2x D710 (WX7100), 2x D910 (WX9100 VEGA)
  • 6x Thunderbolt 3, none at half speed, no USB 3.
  • 1x NVMe SSD, not soldered.
  • 4x DDR4 ECC RAM
  • Dual multi-speed GbE (1/2.5/5/10 Gb), Intel X550-BT2
That's all
one more thing:
  • Next iMac 21 on a 4-core AMD Zen APU; iMac 27 still on Intel CPU / AMD GPU.
Decoding the purported nMP e17 configuration, it seems Apple opted either to use a multiplexer or to link GPU2 to 8 PCIe 3.0 lanes (12 PCIe 3.0 lanes used by the TB3 headers + 4 used by the SSD). The 10GbE is obviously connected to the PCH's PCIe 2.0 x8; no details on BT/WiFi, but it's likely to be hung off the PCH's USB 3.
 
Best case is a pre-production intro at WWDC, with availability later in the year. Much like 2013. More likely everything gets pushed back, but it will still happen next year.
So, the best-case scenario is about a year from now (or possibly longer). That makes it a bit tough for those of us on 7-year-old computers (or older). At this point, everyone is taking a best-guess approach with no real knowledge of Apple's plans. We could potentially wait all that time for something we wouldn't even like, due to some design epiphany.
 
Another self-proclaimed Apple 'insider', who seems to be the same DarkNetGuy, says: nMP 7,1 e17 with a Xeon E5 v4 (up to 22 cores on BTO), 2x GPUs, 6x Thunderbolt 3, 1x NVMe SSD, dual multi-speed GbE... Decoding the purported nMP e17 configuration, it seems Apple opted either to use a multiplexer or to link GPU2 to 8 PCIe 3.0 lanes (12 PCIe 3.0 lanes used by the TB3 headers + 4 used by the SSD).
Apple can stack the video cards on a switch fed by an x16 link, with x16 to each one, as well.

To get 6 full-speed TB3 ports you need 24 lanes, and that will take x16 + x8 links. But then networking and storage will both have to sit on the PCH, behind the DMI (PCIe 3.0 x4) link.

To have 3 TB buses over 6 ports, it's 12 lanes for that, plus 4 for each storage card.

So that's 16 lanes with 1 storage card, 20 with 2 cards. You can put 10GbE on the leftover lanes, plus a few for USB 3.1, and put BT/WiFi on the PCH's PCIe.
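Roughly, the lane budget being juggled in the last couple of posts looks like this (a back-of-the-envelope sketch assuming a single 40-lane E5 v4 and x4 per Alpine Ridge controller; illustrative numbers, not a leaked spec):

```swift
// Back-of-the-envelope PCIe 3.0 lane budget for the rumored single-socket box.
// A Broadwell-EP Xeon E5 v4 exposes 40 CPU lanes; 10GbE, USB and BT/WiFi can
// hang off the PCH behind the DMI link instead.
let cpuLanes = 40

let gpuLanes  = 2 * 16   // two GPUs at x16 each (or x8 each behind a switch)
let tb3Lanes  = 3 * 4    // three Alpine Ridge controllers, x4 each, 2 ports per controller
let nvmeLanes = 1 * 4    // one NVMe SSD at x4

let used = gpuLanes + tb3Lanes + nvmeLanes
print("used \(used) of \(cpuLanes) lanes, leftover \(cpuLanes - used)")
// 48 > 40: over-subscribed, which is exactly why the posts above talk about
// a switch/multiplexer or dropping one GPU to x8.
```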
 
So, the best-case scenario is about a year from now (or possibly longer). That makes it a bit tough for those of us on 7-year-old computers (or older). At this point, everyone is taking a best-guess approach with no real knowledge of Apple's plans. We could potentially wait all that time for something we wouldn't even like, due to some design epiphany.
Waiting while we wither away..........
 
Well, isn't that lovely! Well, no, more like plain ********, as in the same ******** that Apple has been pushing for years... If you say you're committed to this market (after a 3-year delay!) and then leave it again for 3 years with no updates or news, that's your sign to run away.....
Maybe if we can create a tower Mac Pro rumor........Apple might follow it.
 
If the rumors about a 225W Vega 10 with 64 CUs are true, then sticking it into a 125W thermal envelope will still provide 10 TFLOPs of compute power on a single GPU.
Thanks, koyoot. If they get this to market, I think I could live with that level of performance - it's competitive with Nvidia's current crop. Of course the 1080Ti is about to launch... :D
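For reference, the 10 TFLOPs figure is just the usual GCN peak-FP32 arithmetic (CUs × 64 stream processors × 2 FLOPs per clock × clock speed); the clock below is an assumed value chosen to land on 10, not a confirmed spec:

```swift
import Foundation

// Peak FP32 throughput for a GCN-style GPU.
let computeUnits  = 64
let spsPerCU      = 64     // stream processors per compute unit
let flopsPerClock = 2      // one fused multiply-add per SP per clock
let clockGHz      = 1.22   // assumed clock, purely illustrative

let teraflops = Double(computeUnits * spsPerCU * flopsPerClock) * clockGHz / 1000.0
print(String(format: "%.1f TFLOPs", teraflops))   // ≈ 10.0
```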
 
Maybe if we can create a tower Mac Pro rumor........Apple might follow it.

At this point it would be difficult for me to return to Apple. I've invested my money into Windows now, and I also have the next big investment almost ready (a brand new dual 16-core Dell T7910) to complement my existing one. So Apple can do whatever they want, but at this point I purely and simply don't trust them any longer. The only reason I still hang around here from time to time is because I've been with Apple since System 7 and I still have one Mac for my old files. But otherwise Apple is pretty much a finished story for me. Anyone thinking Apple is going to continue to make a pro desktop is delusional. On the 27th of October I posted a picture of Apple's Mac page: two laptops and one iMac. No Mini, no Mac Pro, so at this point they are pretty much out of the pro market. The pros voted with their wallets (me included), and Apple has already lost the vast majority of the creatives that took 20 years to get.
 
At this point it would be difficult for me to return to Apple. I've invested my money into Windows now... The pros voted with their wallets (me included), and Apple has already lost the vast majority of the creatives that took 20 years to get.
I hear you.
Edit: that's why I went to the Z series. If it still comes out... it would probably be for personal use or storage... like I have seen some people do.
 
Apple can stack the video cards on a switch fed by an x16 link, with x16 to each one, as well.

That's what we call a multiplexer, but it only shares bandwidth among the cards and adds latency and cost. If you are an integrator, you're better off plugging those GPUs in at x8 each; they'll operate at about the same speed but with less latency and cost. I see switches more as a point of expansion flexibility.

To get 6 full-speed TB3 ports you need 24 lanes, and that will take x16 + x8 links.

Each Alpine Ridge TB3 controller (header) uses 4 PCIe 3.0 lanes to feed 2 TB3 ports, and while those lanes are multiplexed (or switched, in your words), that's how 'full speed' TB3 is advertised; in practice very few devices will ever see that speed.
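To put rough numbers on that (standard link rates, nothing Mac Pro specific): the controller's PCIe 3.0 x4 uplink tops out below what even a single port advertises, so two ports on one Alpine Ridge can never both move 40 Gbit/s of PCIe traffic anyway:

```swift
import Foundation

// PCIe 3.0: 8 GT/s per lane with 128b/130b encoding ≈ 0.985 GB/s usable per lane.
let pcie3PerLaneGBps = 8.0 * (128.0 / 130.0) / 8.0
let x4UplinkGbps = pcie3PerLaneGBps * 4 * 8          // ≈ 31.5 Gbit/s, shared by both ports

let tb3AdvertisedGbps = 40.0                         // per-port marketing figure (includes DisplayPort traffic)

print(String(format: "x4 uplink ≈ %.1f Gbit/s vs. 2 × %.0f Gbit/s advertised",
             x4UplinkGbps, tb3AdvertisedGbps))
```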
 