Being new here, I'm trying to learn as much as I can. There's great information scattered among many threads. I sent some questions to the blog staff asking about new sub-categories (complaints, hackintosh, upgrade problems, new Mac Pro, etc.) to make topics easier to find and follow, but they seem unwilling to change anything. Professional-use categories would also be nice. The only alternative I see is a private forum, which has a fee. Anybody else have ideas on this?

I think this forum is great. Its name is MacRumors, not Professional Mac; the name says it all. Everyone is allowed to speak their mind, which can be irritating at times, but that's on me. The forum has to be doing something right: just note the volume of traffic here on any given day. Nope, leave the forum as is.
 

Sure, we both know such systems exist. My remark simply stated that less than 10% of workstation or desktop operators own one like that. And I can probably safely revise the statement now and claim that less than 1% own one like the one you posted here. Maybe even less than 0.01%. :D

What you're listing here are excellent cluster nodes for a render server. ;)

----------

This is totally not true. TB devices still need drivers. Because TB really is just PCIe, Thunderbolt devices probably need the exact same PCIe drivers.

Are you sure? I don't know for sure, thus I ask. I assumed it would be more akin to USB-type devices.
 
WARNING: Stop here if you don't yearn for system expansion and growth.
Here's where a dual-CPU Nehalem/Westmere LGA 1366 (PCIe 2: ~$3,700 w/o CPUs, RAM and storage) [ TYAN FT72-B7015 - http://www.superbiiz.com/detail.php...3c789f87a210&gclid=CIyixZePlLgCFbNj7AodzxUARA ] or Sandy Bridge/Ivy Bridge 2011 (PCIe 3: ~$5,200 w/o CPUs, RAM and storage) [ TYAN B7059F77AV6R - http://www.tyan.com/product_SKU_spec.aspx?ProductType=BB&pid=512&SKU=600000346 ] barebones server has distinct advantages. It's my xMac, xLinux and xWindows platform for system expansion and growth. For instance, the LGA 1366 has ten PCIe 2 x16 slots (two of which have x4 signals, while the other eight can cradle eight double-wide PCIe cards at x16 speed), two PCIe x1 slots and one 32-bit PCI slot.

I think I have a pretty extreme case of workstation envy at this point.
 
There really isn't an argument. Apple, and anyone with eyes, knows this is Apple's attempt to gracefully exit the serious market and move into the Mickey Mouse / "I'm a director because I make wedding and graduation videos" market.

The serious folk need power with no compromises or middleman adapters. They want software that doesn't come with excuses or silly limits. "What do you mean you need multicam support? Just shoot with fewer cameras; see, we SAVED you money!!!"

The wedding video guys are satisfied with a few sparkle-dazzle spinning-star transitions and... a computer in a can.



Yes, it would seem most of Apple's shills have been posting all sorts of glowing comments.

Hmmm....YOU make money selling video cards for Mac Pros....but people who like the new version are "shills".:rolleyes:
 
Sure, we both know such systems exist. My remark simply stated that less than 10% of workstation or desktop operators own one like that. And I can probably safely revise the statement now and claim that less than 1% own one like the one you posted here. Maybe even less than 0.01%. :D

What you're listing here are excellent cluster nodes for a render server. ;)

TYANs are excellent as rendering nodes of all kinds, such as for digital content creation in animation and video, and they're excellent for those whose work requires multiple PCIe cards, such as in music, where several audio-related PCIe cards are needed. Cubix, Magma and other external chassis manufacturers target, e.g., the following uses for expanded PCIe space: "running GPU-accelerated applications ranging from scientific research and CAE modeling/simulation software, to the latest high-performance digital content creation, linear editing, and digital intermediate solutions." [ http://www.cubix.com/gpu-xpander-rackmount ]. GPGPU CUDA-based solutions are many: Computational Chemistry and Biology, Numerical Analytics, Physics, Weather and Climate Forecasting, Defense and Intelligence, Computational Finance, Computer Aided Design, Computational Fluid Dynamics, Computational Structural Mechanics, Electronic Design Automation, Animation, Modeling and Rendering, Color Correction and Grain Management, Compositing, Finishing and Effects Editing, Encoding, Digital Distribution, On-Air Graphics, On Set Review and Stereo Tools, Simulation, and Weather Graphics [ http://www.nvidia.com/docs/IO/123576/nv-applications-catalog-lowres.pdf ].

So I'm going out on a limb and projecting that TYAN's total potential market would be at least 0.02% of workstation or desktop operators, especially when you compare the total cost of a TYAN system with the total cost of TYAN's competitors' multi-box solutions. Total cost for mine, without the eight Titans, was about the price of a 2012 Mac Pro.
 
I think I have a pretty extreme case of workstation envy at this point.

The next time you're in the market for a kick-booty system, check out what TYAN, Supermicro and Gigabyte are offering in barebones (xWhatever-you-want-to-call-it) systems to quell that envy. They sell the lazy person's DIY barebones systems, where you only need enough tech savvy to insert your own CPUs, video card(s), RAM and storage. I know you're up to that great challenge.
 
I'd like to lift my Mac Pro out of the dust and onto the desk, especially now that it's no longer going to be too bulky for that. PCs I'll continue keeping under the desk. :)

I admit that the new MP design surprised me in this rectangular world. But I admire it and I want it. And more: a DIY MP would not be a "revolution for the user".
 
Hmmm....YOU make money selling video cards for Mac Pros....but people who like the new version are "shills".:rolleyes:

I make money building folks small servers and setting up networks in their homes. If that dried up tomorrow I'd spend more time working on my Jeep and bicycles. It's much the same for MacVidCards: he has a normal job and sells cards on the side.

He, like many of us, has a philosophical disagreement with the direction of the MP.
 
Ummm, I don't think I agree. First, if the new basic config indeed includes two video cards, then that's it for like 90% of the motherboards currently available: once two GPU cards are installed, there are no more slots available. Second, this increases system heat, which elevates noise levels and increases dust, which in turn decreases lifespans. Also, these cards depend on custom drivers, which TB/TB2 devices do not. With every OS upgrade, card users must fear for the functionality or obsolescence of their installed cards; this has been true since IBM came out with the PC XT models, and it's still true today on the MP5,1. Every OS release adds and removes compatibility for various specific PCIe cards.

You do know there are LGA2011 boards with 6 PCIe 3.0 slots (four x16, one x8, and one x1), available off Newegg of all places right now, right?

If you're saying we need to put all our drives externally to reduce heat, I don't buy it. That's what fans are for. With low-power drives, I'm not sure how much the extra heat could really contribute.

As far as drivers for PCIe, I have to agree with goMac. This has nothing to do with the differences between PCIe and TB. If TB can theoretically function with its devices sharing the same drivers for each new piece of hardware (by the way, evidence?), so can PCIe.

This is totally not true. TB devices still need drivers. Because TB really is just PCIe, Thunderbolt devices probably need the exact same PCIe drivers.

So all I see is a potential for expansion and growth. I don't really see it as being "good enough for now but not up to snuff compared to internal PCIe connections" as you suggest.

I'm clearly just talking theoretically here (aren't we all?), but even the Mac Pro with six TB2 ports will not have as much drive throughput as two or three 4-port SAS cards plus dual GPUs--a config that, again, I can buy off Newegg this afternoon. That may be overkill for anything 99% of people are doing, but the same was said about a lot of the technology we use commonly today.

Again, TB is clearly a great technology, but as far as TB taking the place of PCIe or even becoming a standard on PCs, I'm not seeing the need (except maybe on laptops?).
 
I'm clearly just talking theoretically here (aren't we all?), but even the Mac Pro with six TB2 ports will not have as much drive throughput as two or three 4-port SAS cards plus dual GPUs--a config that, again, I can buy off Newegg this afternoon. That may be overkill for anything 99% of people are doing, but the same was said about a lot of the technology we use commonly today.

Again, TB is clearly a great technology, but as far as TB taking the place of PCIe or even becoming a standard on PCs, I'm not seeing the need (except maybe on laptops?).

Stop talking about theory and start talking about practice, because your theory does not apply in practice. For example, what storage needs do you have that the combined throughput of 6 Thunderbolt ports can't handle?

What storage solutions exist that can provide that throughput (12GB/s) to a single client, in practice?
 
You do know there are LGA2011 boards with 6 PCIe 3.0 slots (four x16, one x8, and one x1), available off Newegg of all places right now, right?

Dual Xeon E5 ones, sure. There is no dual-Xeon Mac Pro going forward anymore. Whether the new one went tube or Thunderbolt is likely highly decoupled from whether duals are still around or not; it has far more to do with how many folks were expected to buy duals. It is apples-to-oranges to inject duals and their 80 PCIe lanes into the mix.

Restricted to 40 lanes and avoiding PCIe switches (of which Thunderbolt is a variant), it has the same max-bandwidth issues as the MP 2013.


As far as drivers for PCIe, I have to agree with goMac. This has nothing to do with the differences between PCIe and TB.

Thunderbolt pragmatically needs more complete PCIe drivers. The vast majority of PCIe drivers lack "hot-plug/hot-unplug" support; it is an optional, and often conveniently ignored, part of the PCIe standard. So do they require a 100% rewritten driver? No. Do vendors have to do more software development work for their systems? Yes.

That was one of the farces that Thunderbolt perpetrated: "It is just PCIe, no software needed." Technically true only if folks had written robust, fully comprehensive drivers in the first place. They generally don't, so software is needed.
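To make the hot-plug gap concrete, here's a toy Python sketch; the classes are hypothetical illustrations of the driver gap being described, not any real driver API:

Code:
class FakeDevice:
    def read(self):
        return b"data"

class PcieDriver:
    # A driver written for a slotted workstation: the card is
    # assumed present from boot to shutdown.
    def probe(self, device):
        self.device = device

    def handle_io(self):
        return self.device.read()   # blows up if the device vanished

class HotplugAwareDriver(PcieDriver):
    # The extra work Thunderbolt pragmatically demands: handle the
    # optional PCIe hot-unplug case, because now it's routine.
    def on_surprise_removal(self):
        self.device = None

    def handle_io(self):
        if self.device is None:
            raise IOError("device unplugged")  # fail gracefully
        return self.device.read()

drv = HotplugAwareDriver()
drv.probe(FakeDevice())
drv.on_surprise_removal()           # user yanks the cable mid-session
try:
    drv.handle_io()
except IOError as err:
    print(err)                      # "device unplugged" instead of a crash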


I'm clearly just talking theoretically here (aren't we all?), but even the Mac Pro with six TB2 ports will not have as much drive throughput as two or three 4-port SAS cards plus dual GPUs--a config that, again,

You're not going to get that configuration in a single Xeon E5, "tube" or not. If the SAS cards run x8 you'll be throttled. Even at just x4 you won't get three.


, but as far as TB taking the place of PCIe or even becoming a standard on PCs, I'm not seeing the need (except maybe on laptops?).

Thunderbolt doesn't remove PCIe. It just moves PCIe controllers to another box (along with some other things; it is also not merely PCIe).

If we're talking about personal computers (PCs) in general, the parenthetical is a joke. Generally speaking, PCs are laptops at this point. The notion that a personal computer is a box with slots is a last-century notion.

You can re-couch the discussion into the context of high-end PC workstations and get "box with slots" back into a dominant position, but that is a 'fork' from the general PC market at this point.


.....For example, what storage needs do you have that the combined throughput of 6 Thunderbolt ports can't handle?

What storage solutions exist that can provide that throughput (12GB/s) to a single client, in practice?

The 12GB/s number is fundamentally flawed. Thunderbolt port bandwidth is not additive. Thunderbolt is a switch; you cannot just add a switch's ports together to get a bandwidth number. That is a deeply and grossly flawed misunderstanding of what a switch does.

Theoretically you might get 3 (the number of controllers) times x4 PCIe v2 bandwidth out of them: 6GB/s. In reality it's probably closer to 3 × x3 PCIe, which is closer to 4GB/s.

Largely the same general issue with the current Mac Pro for which the two x4 slots are switched.
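For what it's worth, the arithmetic behind those figures, assuming three controllers each on a x4 PCIe 2.0 uplink at roughly 500MB/s per usable lane (my assumption; Apple hasn't published the topology):

Code:
controllers = 3
gb_per_pcie2_lane = 0.5             # ~500MB/s usable per PCIe 2.0 lane

theoretical = controllers * 4 * gb_per_pcie2_lane   # 3 controllers x x4
realistic = controllers * 3 * gb_per_pcie2_lane     # ~x3 effective after overhead
print(theoretical, realistic)                       # 6.0 4.5 (GB/s)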
 
The 12GB/s number is fundamentally flawed. Thunderbolt port bandwidth is not additive.

Of course it is; the guy is talking about using several multi-port SAS cards in a regular workstation chassis. It's extreme and theoretical, which is why I replied with another theoretical example. If you attach 6 storage arrays, one to each port for example, and use that as one large array, you would get the combined throughput of all 6 ports.
 
Of course it is; the guy is talking about using several multi-port SAS cards in a regular workstation chassis. It's extreme and theoretical, which is why I replied with another theoretical example. If you attach 6 storage arrays, one to each port for example, and use that as one large array, you would get the combined throughput of all 6 ports.

From what I understand, he is saying you can't add Thunderbolt ports together into a single array without loss of bandwidth. This is beyond my scope, so I don't know if it is true, but what I do know is that LGA2011 supports 40 lanes of PCIe 3.0 per processor, for 40GB/s possible via PCIe. Subtracting 16GB/s (8GB/s each) for dual video cards means the new Mac Pro is vastly underutilizing the platform. That's fine; it's still more capable than my Haswell, but I'm not the one saying that this is "just as good" as a workstation board with 5 PCIe 3.0 slots, each of them capable of 4-8 times as much bandwidth as Thunderbolt 2. I'm also not the one who needs a rig like that.
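Rough numbers behind that, assuming ~1GB/s per PCIe 3.0 lane (ignoring encoding overhead) and taking the 8GB/s-per-GPU figure as given:

Code:
gb_per_pcie3_lane = 1.0             # ~1GB/s per PCIe 3.0 lane
cpu_lanes = 40                      # one LGA2011 Xeon E5
gpu_lanes = 2 * 8                   # dual video cards at 8GB/s each
print(cpu_lanes * gb_per_pcie3_lane)                 # 40.0 GB/s from the CPU
print((cpu_lanes - gpu_lanes) * gb_per_pcie3_lane)   # 24.0 GB/s left after GPUs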
 
From what I understand, he is saying you can't add Thunderbolt ports together into a single array without loss of bandwidth.

The addition of all ports (disks) is done in software. If you are using ZFS (for example), you can add any number of disks from various sources into one storage pool and configure it the way you want. If what he says is true, then the consequence is that you cannot use all 6 ports individually at full bandwidth either, which doesn't make sense.

I'm also not the one who needs a rig like that.

What I'm trying to say is that a rig like that, for example a massive SAN full of SSDs, would go through a switch and be shared by several users. Even though it's theoretically possible to spend $40-50k on that and use it as your private disk, I don't think anyone does.
 
Dual Xeon E5 ones, sure. There is no dual-Xeon Mac Pro going forward anymore. Whether the new one went tube or Thunderbolt is likely highly decoupled from whether duals are still around or not; it has far more to do with how many folks were expected to buy duals. It is apples-to-oranges to inject duals and their 80 PCIe lanes into the mix.

The board I was talking about was single-processor. I'm not saying all the ports on that board can run at full bandwidth all at once, but even if you ran them all at x8 (5 × 8 = 40 lanes), that's still far more bandwidth than the Mac Pro.

Thunderbolt pragmatically needs more complete PCIe drivers. The vast majority of PCIe drivers lack "hot-plug/hot-unplug" support; it is an optional, and often conveniently ignored, part of the PCIe standard. So do they require a 100% rewritten driver? No. Do vendors have to do more software development work for their systems? Yes.

That was one of the farces that Thunderbolt perpetrated: "It is just PCIe, no software needed." Technically true only if folks had written robust, fully comprehensive drivers in the first place. They generally don't, so software is needed.

So you're agreeing with goMac and me: as far as drivers go, there's nothing specific to TB that would make it more plug-and-play than PCIe, nor anything specific to PCIe that causes manufacturers to lack standardized drivers.


You're not going to get that configuration in a single Xeon E5, "tube" or not. If the SAS cards run x8 you'll be throttled. Even at just x4 you won't get three.

8x PCIe 3.0 = 8GB/s. Divided 4 ways, that's 2GB/s per SAS port. Isn't that plenty?
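Sanity-checking that division, again at ~1GB/s per PCIe 3.0 lane:

Code:
slot_gbs = 8 * 1.0                  # an x8 PCIe 3.0 slot
print(slot_gbs / 4)                 # 2.0 GB/s per port on a 4-port SAS card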

Thunderbolt doesn't remove PCI-e. It just moves PCI-e controllers to another box (along with some other things. It is also not just merely PCI-e ).

It doesn't need to move anything. PCIe and Thunderbolt can coexist; the only problem is with running the video stream for Thunderbolt displays (which is optional). Can't we all just get along?

The 12GB/s number is fundamentally flawed. Thunderbolt port bandwidth is not additive. Thunderbolt is a switch; you cannot just add a switch's ports together to get a bandwidth number. That is a deeply and grossly flawed misunderstanding of what a switch does.

Theoretically you might get 3 (the number of controllers) times x4 PCIe v2 bandwidth out of them: 6GB/s. In reality it's probably closer to 3 × x3 PCIe, which is closer to 4GB/s.

Largely the same general issue with the current Mac Pro for which the two x4 slots are switched.

So you're saying that Thunderbolt buses can be combined, but that there is a loss due to switching?
 
It's extreme and theoretical, which is why I replied with another theoretical example.

So if someone blows smoke, the response is to blow more smoke? That is exactly why many of these threads turn into disinformation propaganda campaigns.

Theoretical and not even remotely possible are two different states.

If you attach 6 storage arrays, one to each port for example, and use that as one large array, you would get the combined throughput of all 6 ports.

That isn't even a theory based on the technology. That is just hand-waving junk not grounded in anything.

A two-port Thunderbolt system has matching bandwidth on each port so that data coming in on one side can go right back out on the other. You are double counting only if you count what goes in and what comes out as different data. It is the same data, so it counts once.

If we're talking about the special case in which the TB controller is a host and all the PCIe data is inbound from one or two ports, then the switch's bisection bandwidth is limited to x4 PCIe v2 lanes (i.e., the "on/off ramp" from the TB backbone network is limited to this top end).


If you are imagining a different Mac Pro that uses 6 TB controllers for six ports, then it has the same issue as his examples: a single E5 Xeon doesn't have enough lanes to support that. The current Mac Pro has to resort to switches with the 3 it has; 3 more would only further dilute the internal PCIe bandwidth. It also would still be the case that counting ports is not the material way of arriving at bisection bandwidth.

The bandwidth of Thunderbolt ports is not additive. Claiming that it is doesn't move any discussion of the Mac Pro 2013 forward at all. It just keeps it in the FUD swamp.
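If it helps, here's a minimal model of the point, with the x4 PCIe 2.0 uplink per controller assumed from the earlier posts rather than any published spec:

Code:
def controller_throughput(port_demands_gbs, uplink_gbs=2.0):
    # Every port on this controller funnels through one uplink
    # (assumed x4 PCIe 2.0, ~2GB/s); excess demand is throttled.
    return min(sum(port_demands_gbs), uplink_gbs)

# Six ports across three controllers, each port asking for 2GB/s:
total = 3 * controller_throughput([2.0, 2.0])
print(total)        # 6.0 GB/s ceiling -- not the "additive" 6 x 2GB/s = 12GB/s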
 
Stop talking about theory and start talking about practice, because your theory does not apply in practice. For example, what storage needs do you have that the combined throughput of 6 Thunderbolt ports can't handle?

What storage solutions exist that can provide that throughput (12GB/s) to a single client, in practice?

If you want to talk about practice, here's "in practice": PCIe is standard on nearly all desktop PCs and supports higher bandwidth than Thunderbolt. Need more PCIe slots than are commonly available? There are plenty of motherboards with tons of slots. Thunderbolt in the non-Mac world is unnecessary because of this, and because externalizing these components is a strange idea that is inefficient in terms of space and economy: it takes less space, and requires no additional power supply purchase, to put a PCIe card into an existing slot. The marginal cost of a Molex plug, a few extra watts from the PSU, and 2.5 cubic inches of space inside a PC case is nothing compared to externalizing the same solution.

On top of Thunderbolt solutions being expensive and hard to find, PCIe solutions are ubiquitous and have no particular disadvantage (apart from not working with Mac laptops). These things will likely change in time to some extent, but it will always be more efficient to have the raw PCIe card than the same circuitry hooked to a Thunderbolt controller, placed into a box, and bundled with an external power supply. This is a ridiculous solution to a problem nobody has but laptop users and, soon, new Mac Pro users.
 
That isn't even a theory based on the technology. That is just hand-waving junk not grounded in anything.

It can be one large array that presents itself as several smaller ones; it can be many ports coming from a switch through a mini-SAS to Thunderbolt adapter. If you have ever used ZFS you would know that it's possible; you can even do it with several USB drives.

A two-port Thunderbolt system has matching bandwidth on each port so that data coming in on one side can go right back out on the other. You are double counting only if you count what goes in and what comes out as different data. It is the same data, so it counts once.

The Mac Pro has 6 ports that you can add 6 sources to.

If we're talking about the special case in which the TB controller is a host and all the PCIe data is inbound from one or two ports, then the switch's bisection bandwidth is limited to x4 PCIe v2 lanes (i.e., the "on/off ramp" from the TB backbone network is limited to this top end).

If there are only x4 lanes on the new controllers, that would mean the 4K video example from WWDC only works on a one-per-Thunderbolt-port-pair basis.

If you are imagining a different Mac Pro that uses 6 TB controllers for six ports, then it has the same issue as his examples: a single E5 Xeon doesn't have enough lanes to support that. The current Mac Pro has to resort to switches with the 3 it has.

The bandwidth of Thunderbolt ports is not additive. Claiming that it is doesn't move any discussion of the Mac Pro 2013 forward at all. It just keeps it in the FUD swamp.

I don't make up anything; it's based on the same (very) limited information we have at this point: an Intel press release and some tech blog previews. Which is why your very assertive answers are so telling.

Let's leave the combined bandwidth per controller out of the discussion and say that the available bandwidth of all ports can be used simultaneously.
 
The board I was talking about was single-processor. I'm not saying all the ports on that board can run at full bandwidth all at once, but even if you ran them all at x8 (5 × 8 = 40 lanes), that's still far more bandwidth than the Mac Pro.

Since the Mac Pro uses the exact same chipset, and the Xeon E5 (v1 and v2) have the same PCIe lane bandwidth, it is not far more. It is the same collective bandwidth, since it is the same implementation.

It is also a bit of a fraud to quote physical slot sizes, as opposed to electrical slot sizes, in the middle of a bandwidth discussion. Card pins that are connected to nothing don't have any bandwidth. So chest-thumping over seating four x16 cards that are only hooked up at x8 electrically is silly in the context of turning around and poo-pooing Thunderbolt because it throttles bandwidth.



So you're agreeing with goMac and me: as far as drivers go, there's nothing specific to TB that would make it more plug-and-play than PCIe, nor anything specific to PCIe that causes manufacturers to lack standardized drivers.

Pragmatically, yes, there is something different. As typically implemented in most workstations, the cards are not hot-plug capable; that is typically only implemented and supported on big-iron 24/7/365 servers, so card vendors do not write the hot-plug additions. If hot-plug support is not commonly in the drivers, it is missing, hence different. Primarily what TB brings to the table is a hot-plug requirement. So yes, it is a driver of new software features.


8x PCIe 3.0 = 8GB/s. Divided 4 ways, that's 2GB/s per SAS port. Isn't that plenty?

Funny how most of those were x16 slots before. Frankly, TB speeds are plenty for most situations.


It doesn't need to move anything. PCIe and Thunderbolt can coexist;

Then why did you claim that TB replaced PCIe? It doesn't replace it at all; Thunderbolt's job is to transport PCIe data. "Can coexist" isn't even the question: if there is no PCIe data, there is no purpose for Thunderbolt. A system with a purely DisplayPort data stream doesn't need Thunderbolt at all.


So you're saying that Thunderbolt buses can be combined, but that there is a loss due to switching?

No. (There is overhead, but the rest of that is muddled.)

----------

The Mac Pro has 6 ports that you can add 6 sources to.

Adding sources does not increase the bandwidth of the target/sink. Bandwidth can only be measured by taking both source and sink into account; pointing at just one end does nothing substantive.



If there are only x4 lanes on the new controllers, that would mean the 4K video example from WWDC only works on a one-per-Thunderbolt-port-pair basis.

The video on/off ramps onto the Thunderbolt backbone network are entirely decoupled from the PCIe ones. 4K has nothing to do with the bandwidth restrictions in your example. This is just misdirection.





I don't make up anything; it's based on the same (very) limited information we have at this point: an Intel press release and some tech blog previews.

There is not a single Intel press release, spokesman, or demo that even remotely implies that Thunderbolt port bandwidth is additive. None. That is just smoke you are making up. Intel is very consistent in saying that Thunderbolt is 10Gb/s, or in v2's case 20Gb/s. Not "per port"; just that Thunderbolt is.

The only "per port" aspect Intel talks about is the number of devices; NOT bandwidth. More devices does NOT mean more bandwidth on the backbone interconnect network.
 
Adding sources does not increase the bandwidth of the target/sink. Bandwidth can only be measured by taking both source and sink into account; pointing at just one end does nothing substantive.

No, and I never said that it did. But each individual port has some bandwidth, which means that n ports have n times that bandwidth, in total.


The video on/off ramps onto the Thunderbolt backbone network are entirely decoupled from the PCIe ones. 4K has nothing to do with the bandwidth restrictions in your example. This is just misdirection.

In Thunderbolt 2, two channels are combined to give one 20Gb/s link instead of 2 x 10Gb/s. If a controller only has x4 PCIe lanes (two in each direction), then how can it support two ports per chip?


There is not a single Intel press release, spokesman, or demo that even remotely implies that Thunderbolt port bandwidth is additive. None. That is just smoke you are making up.

Is USB additive on a protocol level? No, but that doesn't mean you cannot combine several sources in software to get the combined bandwidth of many sources. It has nothing to do with Intel, protocols, or their press releases. I was referring to the new Falcon Ridge controllers.

Intel is very consistent in saying that Thunderbolt is 10Gb/s, or in v2's case 20Gb/s. Not "per port"; just that Thunderbolt is.

OK, so in Thunderbolt v1 a two-port controller had x4 lanes, meaning each port got one lane in each direction. Now Thunderbolt v2 combines these lanes to reach 20Gb/s, so how is that going to work out if that needs to be shared between two ports?

The only "per port" aspect Intel talks about is the number of devices; NOT bandwidth. More devices does NOT mean more bandwidth on the backbone interconnect network.

Of course not, which is why your talk about the number of lanes to the Falcon Ridge controllers is not from Intel.
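For reference, the channel math this exchange keeps circling, using the commonly cited figures (four 10Gb/s channels per TB1 cable, two per direction, bonded pairwise in TB2; the x4 PCIe 2.0 back end is an assumption from earlier in the thread, not an Intel spec):

Code:
tb2_link_gbit = 2 * 10              # two 10Gb/s channels bonded per direction
tb2_link_gbyte = tb2_link_gbit / 8  # 2.5 GB/s on the wire
uplink_gbyte = 4 * 0.5              # assumed x4 PCIe 2.0 back end: 2.0 GB/s
print(tb2_link_gbyte, uplink_gbyte)
# One bonded port can already request more PCIe data (2.5) than the
# uplink feeds (2.0) -- the crux of the two-ports-per-chip question.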
 
I make money building folks small servers and setting up networks in their homes. If that dried up tomorrow I'd spend more time working on my Jeep and bicycles. It's much the same for MacVidCards: he has a normal job and sells cards on the side.

He, like many of us, has a philosophical disagreement with the direction of the MP.

Well, if it's just a philosophical disagreement, then not insulting people who disagree with you by calling them shills would be a better way to go.
 
deconstruct60:

Let's make an example. I add two PCIe disks to the computer, each on an individual TB2 port. To limit the discussion to whether it's possible to use bandwidth from two ports, let's say we make sure that each port is on a separate controller.

I start up zpool and can see the two devices as disks in /dev. I then add them to a zpool, configure them as a striped RAID, create a filesystem from my pool, and mount the new volume, which shows up on the desktop. That volume will consist of two devices in a striped RAID, each giving the available throughput of the port it's connected to.
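And a rough estimate of what that striped pool should deliver, with hypothetical per-device rates standing in for real benchmarks:

Code:
def stripe_throughput(member_rates_gbs):
    # A striped (RAID 0) pool reads all members in parallel, so it
    # runs at N times the slowest member -- provided no two members
    # share a controller uplink, as in the setup above.
    return len(member_rates_gbs) * min(member_rates_gbs)

two_ssds = [1.2, 1.2]               # assumed per-device rates over TB2
print(stripe_throughput(two_ssds))  # ~2.4 GB/s across the two ports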
 
That'll totally work too. But leave the PSU in place. Without the MB, backplanes, and so forth, pretty much any of the cable-type interfaces or octopus-type cable interfaces will probably fit.

So, I'm grooving to this as well. IF Apple had been as smart as they pretend to be, they would have designed the new Mac Pro as a box-shaped module to retrofit the legacy towers (and interface with internal storage/cards to fill up the remaining space in the old Mac Pro cases). It would therefore rack-mount (pro studios use racks, right?), and Apple could pat itself on the back for offering an ingenious and environmentally sustainable computing upgrade path.
As is, the cylindrical Mac Pro is just a cup-holder-friendly CPU for hipsters to edit 4K in their cars with. :D It doesn't play nice with edit-suite space or peripherals, unless Apple plans to release a donut-shaped external storage unit for the towers to 'stick into'.
 