
Would you have preferred (of only 2 options):

  • The 5,1, but with Ivy Bridge (2 processors), USB3, SATA3, PCIe 2.0, and TB1 — Votes: 218 (61.9%)
  • The New Mac Pro as it is — Votes: 134 (38.1%)

  Total voters: 352
But the extra PCIe slots in the current form factor are also woefully inadequate for GPUs, so the "limited" 20 Gbps performance of TB2 is not a good reason to prefer the current form factor. The six TB2 ports in the new machine are really competing with those two x4 PCIe slots in the current Mac Pro. It's hard to argue you don't come out ahead with the new MP in this regard.

While the total throughput of ALL the TB2 ports on the new Mac Pro is higher than that of the current motherboard (not saying much, as it's a generation or two behind), it is NOT true that TB2 is any faster than the DUAL x16 PCIe 2.0 slots on the current Mac Pro, which clock in at about 8 GB/s each -- Thunderbolt 2 is roughly 1/4 the speed. The remaining two PCIe 2.0 x4 slots are roughly equal in speed. And again, this assumes Thunderbolt 2 won't have any latency issues or other bottlenecks (as it has not been benchmarked publicly, it's hard to say).

http://support.apple.com/kb/ht2838
The Mac Pro (Early 2009), Mac Pro (Mid 2010), and Mac Pro (Mid 2012) computers implement PCI Express revision 2.0 for all four slots. Slots 1 and 2 are x16 slots, and slots 3 and 4 are x4 slots.

http://en.wikipedia.org/wiki/PCI_Express

So, a 16-lane slot (each direction):
v1.x: 4 GB/s (40 GT/s)
v2.x: 8 GB/s (80 GT/s)
v3.0: 15.75 GB/s (128 GT/s)
v4.0: 31.51 GB/s (256 GT/s)
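
To make the comparison concrete, here's a rough back-of-the-envelope sketch (my own arithmetic, not from the thread; PCIe 1.x/2.x use 8b/10b encoding, 3.0/4.0 use 128b/130b, and the Thunderbolt figures are nominal link rates that ignore protocol overhead):

```python
# Rough per-direction bandwidth of a PCIe slot vs. Thunderbolt's nominal rates.
# PCIe 1.x/2.x use 8b/10b encoding (80% efficient); 3.0/4.0 use 128b/130b.

PCIE = {  # version: (GT/s per lane, encoding efficiency)
    "v1.x": (2.5, 8 / 10),
    "v2.x": (5.0, 8 / 10),
    "v3.0": (8.0, 128 / 130),
    "v4.0": (16.0, 128 / 130),
}

def slot_gb_per_s(version, lanes):
    gt, eff = PCIE[version]
    return gt * eff * lanes / 8  # gigatransfers/s -> usable GB/s

for v in PCIE:
    print(f"PCIe {v} x16: {slot_gb_per_s(v, 16):5.2f} GB/s per direction")

print(f"TB1 (10 Gbps): {10 / 8:.2f} GB/s   TB2 (20 Gbps): {20 / 8:.2f} GB/s")
# An x16 PCIe 2.0 slot (~8 GB/s) is several times TB2 (~2.5 GB/s),
# while an x4 PCIe 2.0 slot (~2 GB/s) is roughly in TB2's ballpark.
```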

EDIT: Whoops, I see what you're saying: since the new MP has all the GPU power you could ever want, the TB2 ports don't need to compete with the current x16 slots; they're competing with the remaining x4 slots... Interesting point of view. Any reason why a motherboard produced today, as opposed to 3 years ago, couldn't make that upgrade? Perhaps Intel could just release a motherboard with PCIe and add TB2 ports? Maybe we could have a four-slot PCIe 3.0 x8 configuration? The limitations of the current Mac Pro have more to do with the fact that it's 3 (4?) years old, NOT with something inherently wrong with PCIe slots.



Edit 2: ASUS just released a consumer board for less than $300, available today (not 6 months[ish] from now), which has THREE PCIe 3.0 x16 slots with a whopping 15.75 GB/s each -- more than six times the bandwidth of Thunderbolt 2, and there are three of them. Oh, plus it has three PCIe 3.0 x1 slots, each with bandwidth roughly equal to Thunderbolt 1... and surround sound, because that was so much to ask :D
 
So far...

70.24% of votes for the OLD design and only 29.76% for the trash can LOL :D:p

And how many of those people currently own Mac Pros or will be buying the new one? The opinions of people who can't afford and/or don't need a Mac Pro simply don't count. At all.

(I didn't vote because I'm one of those people)
 
But the extra PCIe slots in the current form factor are also woefully inadequate for GPUs, so the "limited" 20 Gbps performance of TB2 is not a good reason to prefer the current form factor. The six TB2 ports in the new machine are really competing with those two x4 PCIe slots in the current Mac Pro. It's hard to argue you don't come out ahead with the new MP in this regard.

Isn't form factor about the shape and size of the case, and its internal expandability?

If Apple had stuck to the original design, or a slightly modified version, we would be looking at PCIe 3.0 slots, SATA III for the internal drive bays, optional PCIe drives, and USB 3.0, and the same proprietary GPUs and TB 2.0 would have fit in just fine. Some kind soul might even have kept FW available; otherwise you could just plug in a PCIe card.
Also, optional dual CPUs with more memory slots, for cheaper RAM in most configurations.

Wouldn't everyone come out ahead with that design?

As for TB - it's news to me that it requires specific GPUs; goes to show how little I know. Is there any particular reason why TB can't just be dumped? Do we need a connector that drives both displays and other externals at the same time?
 
Isn't form factor about the shape and size of the case, and its internal expandability?

If Apple had stuck to the original design, or a slightly modified version, we would be looking at PCIe 3.0 slots, SATA III for the internal drive bays, optional PCIe drives, and USB 3.0, and the same proprietary GPUs and TB 2.0 would have fit in just fine. Some kind soul might even have kept FW available; otherwise you could just plug in a PCIe card.
Also, optional dual CPUs with more memory slots, for cheaper RAM in most configurations.

This. VirtualRain is assuming Apple's creating something amazing here simply because the old motherboard is using three-year-old technology, and he assumes there's no option but "six TB2 ports" and just 4 PCIe 2.0 slots (x16, x16, x4, x4).

As for TB - it's news to me that it requires specific GPUs; goes to show how little I know.

It's news to you because it isn't true.

http://youtu.be/O1t7Rc9qFgI
 
This. VirtualRain is assuming Apple's creating something amazing here simply because the old motherboard is using three-year-old technology, and he assumes there's no option but "six TB2 ports" and just 4 PCIe 2.0 slots (x16, x16, x4, x4).



It's news to you because it isn't true.

http://youtu.be/O1t7Rc9qFgI

I don't really understand how that works. There is no connection from the discrete graphics card to the motherboard other than the PCIe bus... so how does the output of the discrete card get to the TB controller? It sounds like some kludge that requires an on-die GPU plus some technology from LucidLogix.
 
I don't really understand how that works. There is no connection from the discrete graphics card to the motherboard other than the PCIe bus... so how does the output of the discrete card get to the TB controller? It sounds like some kludge that requires an on-die GPU plus some technology from LucidLogix.

Looks like they're just smarter than the Apple engineers.
 
  • Is there? I've been looking for info on that but haven't found any. In fact what I have found claims it's about the same. Got a source?
  • Why? What??? What would make you think that? And what exactly do you mean? Can you elaborate a little? Thanks!

Even at the speed of light, there would be a 10ns delay over a 3m wire.

It'll still work, though - there are some examples of using external GPUs over Thunderbolt with mild success, and I can point to evidence of why that is - basically, regular Thunderbolt is about as good as a PCIe x2 link, and a few gens back that was still good for 60-80% of max performance on a decent GPU. Sure, that's a lot better than Intel's IGPs were back then, but this comparison was done with a MacBook Air, not a Mac Pro with high-end GPUs.

What I wonder is what kind of latency a high-end GPU like a GTX Titan or a GTX 780 will function with.

I don't really understand how that works. There is no connection from the discrete graphics card to the motherboard other than the PCIe bus... so how does the output of the discrete card get to the TB controller? It sounds like some kludge that requires an on-die GPU plus some technology from LucidLogix.

*facepalm*

Memory sharing.
 
Even at the speed of light, there would be a 10ns delay over a 3m wire.

Well, you got your math a little wrong there. The speed of light = 0.299792458 meters per nanosecond. So we can round that off, if you like, to 0.3m per nanosecond, and thus a 3 meter length would be 0.9ns and not 10ns. 10ns would be like 32 meters, or about half way from my house to the nearest convenience store. ;)

In a vacuum the speed of electricity through copper is 100% the speed of light. Through copper data cables like TB uses, the propagation speed in open air at sea level is about 95% the speed of light. And through heavily insulated copper wire, like the coax your RF TV antenna uses, it's about 70% the speed of light. So for TB we can assume it's about 90% to 95% the speed of light, and thus pretty close to about 1ns per 3 meters.

And let's be clear, a nanosecond (ns) is one billionth of a second (10⁻⁹ s, or 1/1,000,000,000 s). One nanosecond is to one second as one second is to 31.7 years. ;)


It'll still work, though - there are some examples of using external GPUs over Thunderbolt with mild success, and I can point to evidence of why that is - basically, regular Thunderbolt is about as good as a PCIe x2 link, and a few gens back that was still good for 60-80% of max performance on a decent GPU. Sure, that's a lot better than Intel's IGPs were back then, but this comparison was done with a MacBook Air, not a Mac Pro with high-end GPUs.

Yeah, I dunno. I haven't profiled anything to do with PCI bandwidth, burst, or bus frequency myself, and I've never read anything other than anecdotally based speculation, so it's unclear to me. My education as a computer scientist was completed in the '80s and I haven't really kept up to date with these kinds of intricate details, so it's hard for me to speculate without actually doing some work on it.

I am of the opinion that bandwidth, specifically, is nowhere near as important as most folks would like to think. Bus frequency is likely far more critical to performance on a workstation-grade system. So if the frequency of TB2 is the same as, or close to, that of PCIe v3, then theoretically we should get close to the same performance from almost all v3-based GPUs, assuming the game code or whatever was written intelligently. The PCIe v2 TB1 benchmarks that Tom's Hardware published bear this out: compute times were nearly identical, and almost all applications were within 10 or 15% of direct PCIe speeds. Only one game (probably poorly written brute-force code!) showed more than a 30% performance reduction.


What I wonder is what kind of latency a high-end GPU like a GTX Titan or a GTX 780 will function with.

Probably similar, but I dunno. For compute operations I guess it will be about the same as if it were directly connected via a PCIe card-edge connector. Even if it's 20% less, that's still a huge computational benefit. To be clear, I have to add that I just don't see any rational case for installing more than two GPUs in a workstation. If you're trying to build a GPU compute cluster, then WTH are you doing messing around with workstation-grade machines like the Mac Pro (any revision) anyway?

These are basically the abilities the MacPro6,1 is targeting, and also basically what you could expect from a MP5,1 as well (at least concerning CGI):

 
Yea, I got kinda lazy with that 10ns figure:
http://www.wolframalpha.com/input/?i=3m/speed+of+light

Oh well. In my digital design class, wire delay was brought up as a huge problem - one that should be avoided whenever possible, but that was indeed on a nano scale. Granted, optics and wires are used in supercomputing rigs all the time and aren't much of a problem there... so I assume those delays are really trivial for a 3m Thunderbolt-connected GPU of any kind.

Whatever, I suppose.
 
Looks like they're just smarter than the Apple engineers.

*facepalm*

Memory sharing.

I realize emotions are running high here, but what warranted these kinds of responses?

Since at least part of this thread is about having TB in the same old form factor, this topic of the feasibility of TB with discrete off-the-shelf graphics is not only relevant, it may actually be the key to answering why Apple went the direction they did.

I'm not the only one who was under the impression you needed integrated or custom GPUs or some kind of DisplayPort cable kludge to achieve TB. And the Newegg video posted above doesn't really do much to convince me otherwise, but perhaps I don't understand it. As I said, it appears to be some kind of software GPU virtualization combined with an on-die GPU that allows this to work. If that's true, the latter is a non-starter in a Xeon-based workstation. And then we're still at the point where, no matter how badly people would like TB with off-the-shelf GPUs in a Xeon-based system, it just doesn't seem possible. But again, I'm no expert and would love to understand more if I'm off base here.
 
I realize emotions are running high here, but what warranted these kinds of responses?

Since at least part of this thread is about having TB in the same old form factor, this topic of the feasibility of TB with discrete off-the-shelf graphics is not only relevant, it may actually be the key to answering why Apple went the direction they did.

I'm not the only one who was under the impression you needed integrated or custom GPUs or some kind of DisplayPort cable kludge to achieve TB. And the Newegg video posted above doesn't really do much to convince me otherwise, but perhaps I don't understand it. As I said, it appears to be some kind of software GPU virtualization combined with an on-die GPU that allows this to work. If that's true, the latter is a non-starter in a Xeon-based workstation. And then we're still at the point where, no matter how badly people would like TB with off-the-shelf GPUs in a Xeon-based system, it just doesn't seem possible. But again, I'm no expert and would love to understand more if I'm off base here.


Watch the Newegg video again.

It shows that there is a way to add TB such that you can still use PCIe GPUs.

So if ASUS could do this a year ago, why is it that the new Mac Pro doesn't have this tech?
 
Yea, I got kinda lazy with that 10ns figure:
http://www.wolframalpha.com/input/?i=3m/speed+of+light

Oh well. In my digital design class, wire delay was brought up as a huge problem - one that should be avoided whenever possible, but that was indeed on a nano scale. Granted, optics and wires are used in supercomputing rigs all the time and aren't much of a problem there... so I assume those delays are really trivial for a 3m Thunderbolt-connected GPU of any kind.

Whatever, I suppose.

Yup, always try to keep things as short as possible. But it's not really a "huge problem" until you examine larger-scale systems. When designing something like a datacenter cluster, if you start placing components 4 or 5 meters apart, then over 100 or more components there are likely going to be "huge problems" - most likely due to the effects of voltage degradation plus line noise on signal integrity, rather than frequency problems (latency, etc.).

3m is a terribly long cable IMO. 1m or 1.5m is likely more appropriate. With a 1.5 meter cable you will potentially be limited to about 2 billion data transactions (starts and stops - not stream data) across the cable per second. And this would be significant if TB2 were transacting a billion or two times per second. I'm going to take a wild stab and say there are fewer than a few million transactions/s even in the very busiest sessions. If that's true, then yeah, no problem at all from a cable 3m or shorter. This transaction frequency, in conjunction with interface buffering, voltage, and a few other things, is how the cable length for a given interface is determined.

Wiki says that the maximum cable length for copper-based TB/TB2 is 3 meters, and apparently there is an optical version of Thunderbolt cabling with longer maximums. Optical doesn't induce the voltage drop per meter and other noise that copper does, so it can be longer - approaching the transactional limitations. Wiki says that optical TB cables are "available" in lengths up to 30 meters (since 1/2013) and signal integrity can be maintained up to 100 meters. These cables work with the TB found in Macs but currently don't supply power over the cable like the copper version does.
 
Edit 2: ASUS just released a consumer board for less than $300, available today (not 6 months[ish] from now), which has THREE PCIe 3.0 x16 slots with a whopping 15.75 GB/s each -- more than six times the bandwidth of Thunderbolt 2, and there are three of them. Oh, plus it has three PCIe 3.0 x1 slots, each with bandwidth roughly equal to Thunderbolt 1... and surround sound, because that was so much to ask :D

I believe Haswell on socket 1150 (which that board is designed for) has only 16 lanes of PCIe 3.0. I'm guessing ASUS is using a switch to share them across 3 slots, but there are not 48 lanes of PCIe 3.0 to be had there... only 16... So three GPUs on that board are going to be fighting over the same bus bandwidth, each getting a fraction of it.

I think the new Ivy Bridge-E socket 2011 systems, such as what's in the new Mac Pro, will offer a full 40 lanes of PCIe 3.0... 16 for each of the two GPUs and then 8 for other I/O such as TB. There is a discussion elsewhere in these recent threads on how the PCIe lanes are likely being allocated in the new Mac Pro.

Now, keeping this on topic: those 40 lanes could be utilized in the old form factor to offer 3 GPUs (with 16, 16, and 8 lanes), but the chassis would have to increase in size and power capability (PEG connectors) to accommodate three double-width GPU cards.
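
As a rough illustration of why the lane count matters, here's a sketch under my own assumptions (a worst-case even split; real PCIe switches let one GPU burst to full upstream bandwidth when the others are idle):

```python
# Worst-case per-GPU bandwidth: 3 GPUs sharing 16 upstream PCIe 3.0 lanes
# through a switch (LGA1150-style) vs. native x16/x16/x8 from a 40-lane CPU.

GB_PER_LANE_V3 = 8.0 * (128 / 130) / 8  # ~0.985 GB/s per PCIe 3.0 lane

def shared(upstream_lanes, gpus):
    # All GPUs streaming at once, upstream bandwidth split evenly.
    return upstream_lanes * GB_PER_LANE_V3 / gpus

print(f"16 shared lanes, 3 GPUs: {shared(16, 3):.2f} GB/s each")  # ~5.25
print(f"Native x16: {16 * GB_PER_LANE_V3:.2f} GB/s, "
      f"native x8: {8 * GB_PER_LANE_V3:.2f} GB/s")                # ~15.75 / ~7.88
```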

----------

Watch the Newegg video again.

It shows that there is a way to add TB such that you can still use PCIe GPUs.

So if ASUS could do this a year ago, why is it that the new Mac Pro doesn't have this tech?

Yeah, I watched it a few times, but it's still not clear how it works, and thus what is required. Do you know what is going on there? Can you explain how they are getting video output from the discrete GPU to the input of the TB port? Does the on-die GPU play a role? If so, this solution is a non-starter in a Xeon system, which lacks an integrated GPU.

EDIT: never mind, I found an article that explains it. As I suspected, the reason you won't find TB working with off-the-shelf GPUs in any Xeon-based system is...

The first, to reiterate, is that Virtu requires a system with an integrated GPU (iGPU, such as a mainstream Intel Sandy/Ivy Bridge processor or an AMD APU) as well as discrete graphics (dGPU, AMD or NVIDIA). Without this setup, Virtu will not do anything.

Given Apple's commitment to TB, custom GPUs in a Xeon-based system were really the only option.
 
The ican isaw is an isore.

Haha! All things subjective, of course, but I couldn't agree more. I'm quite puzzled as to whether Mister Ive had anything to do with the shaping of this new model; it's such an ugly departure from the norm.

Looks like something that belongs on my Dyson vac!
 
My opinion...

There was really nothing wrong with the current Mac Pro form factor. In fact, I would have liked to see all the same features the iCan has put into the current Mac Pro. Not only would we have been able to expand internally, but there would also be no need for external-only expansion.

Already, one can use dual GPUs in the current Mac Pro.
 
Where's the "I don't know until I see detailed specifications and prices" option?

I would have been happy with an upgraded "classic" Mac Pro, but until we know more about the new one, it would be rather rash to dismiss it completely.
 
I'm dismissing it completely not because of its design and/or expansion, but because Thunderbolt is too stratospheric in price. Unless Thunderbolt becomes widely used and prices come down, the iCan will be too distant for me to even get, and USB 3.0 sucks in terms of hard drive enclosures... the SATA II inside my Mac Pro 2010 is a lot faster than the paltry speed of USB 3.0.

My reason for dismissal: THUNDERBOLT TOO EXPENSIVE.


Where's the "I don't know until I see detailed specifications and prices" option?

I would have been happy with an upgraded "classic" Mac Pro, but until we know more about the new one, it would be rather rash to dismiss it completely.
 
Where's the "I don't know until I see detailed specifications and prices" option?

I would have been happy with an upgraded "classic" Mac Pro, but until we know more about the new one, it would be rather rash to dismiss it completely.

Price doesn't matter if it doesn't do what I want and need. It could be $0.50 and I'm still not buying.
 
I think it's too early to assume the Mac Pro form factor we have today is dead. It's a safe assumption, but not a sure bet. It's certainly possible to iterate it to support the latest CPUs. But even if they offered this, I'd still choose the new solution for the PCIe SSD, compact chassis, and TB2.

Oh! The current design, just anodized black to match the new style! With 8 PCIe slots and a big PSU (1200-1500W), plus support for SLI and CrossFire! That would be a nice step up! No more external power supplies for the Titan(s)!
 
I believe Haswell on socket 1150 (which that board is designed for) has only 16 lanes of PCIe 3.0. I'm guessing ASUS is using a switch to share them across 3 slots, but there are not 48 lanes of PCIe 3.0 to be had there... only 16... So three GPUs on that board are going to be fighting over the same bus bandwidth, each getting a fraction of it.

I think the new Ivy Bridge-E socket 2011 systems, such as what's in the new Mac Pro, will offer a full 40 lanes of PCIe 3.0... 16 for each of the two GPUs and then 8 for other I/O such as TB. There is a discussion elsewhere in these recent threads on how the PCIe lanes are likely being allocated in the new Mac Pro.

You're right, the current gen has to do a lot of bandwidth sharing, but the new MP is using a processor that isn't out yet. What is your reason to think there won't be Ivy Bridge-E motherboards on the PC side that take advantage of all 40 lanes?

Now, keeping this on topic: those 40 lanes could be utilized in the old form factor to offer 3 GPUs (with 16, 16, and 8 lanes), but the chassis would have to increase in size and power capability (PEG connectors) to accommodate three double-width GPU cards.

What about having a motherboard that could just share all the lanes among TB and PCIe ports? Since I for one wouldn't use TB, and my GPU (say, a 7970) shows no benefit above 8 GB/s (PCIe 3.0 x8), I could have the following config:

PCIe 3.0:
x8 -> 7970
x8 -> 7970
x8 -> SAS card
x4 -> SSD (up to twice as fast as ANY TB2 option)
On-board:
x4 -> five SATA III 6 Gb/s ports
x8 -> four TB2 ports

All this done without sharing lanes (unlike the new Mac Pro, which shares 8 GB/s over six TB2 ports (12 GB/s)).
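
A quick tally of that hypothetical layout against a 40-lane budget (a sketch of the poster's numbers, not a real board; in practice the on-board devices would likely hang off the chipset rather than CPU lanes, which this glosses over):

```python
# Check the proposed split fits a 40-lane PCIe 3.0 budget, and show
# roughly what each device would get (hypothetical config from above).
GB_PER_LANE_V3 = 8.0 * (128 / 130) / 8  # ~0.985 GB/s per lane

config = {
    "7970 #1": 8,
    "7970 #2": 8,
    "SAS card": 8,
    "PCIe SSD": 4,
    "on-board SATA III": 4,
    "TB2 ports": 8,
}

used = sum(config.values())
print(f"lanes used: {used}/40")  # 40/40 -> fits exactly
for device, lanes in config.items():
    print(f"  x{lanes:<2} {device}: {lanes * GB_PER_LANE_V3:.1f} GB/s")
```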

Having things stuck in the current config deprives us of options.
 
Well, you got your math a little wrong there. The speed of light = 0.299792458 meters per nanosecond. So we can round that off, if you like, to 0.3m per nanosecond, and thus a 3 meter length would be 0.9ns and not 10ns. 10ns would be like 32 meters, or about half way from my house to the nearest convenience store. ;)

Quick correction - your math is off. You need to multiply by 10, not just 3. You want to go from 0.3 meters to 3.0 meters.
0.3 meters per nanosecond.
Times ten...
3 meters per 10 nanoseconds.
Remember, the point was about 3 meter cables.
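
For anyone following along, the delay math fits in a tiny sketch (my illustration; the 0.95 velocity factor for a copper TB cable is an assumption taken from the discussion above):

```python
# Propagation delay over a cable: length / (velocity_factor * c).
C_M_PER_NS = 0.299792458  # speed of light in vacuum, meters per nanosecond

def delay_ns(length_m, velocity_factor=1.0):
    return length_m / (C_M_PER_NS * velocity_factor)

print(f"3 m at c:      {delay_ns(3):.1f} ns")        # ~10.0 ns
print(f"3 m at 0.95 c: {delay_ns(3, 0.95):.1f} ns")  # ~10.5 ns
```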
 
I'd rather actually wait until it's released before jumping to conclusions.

Conclusions about what?

It will have 40 PCIe lanes

It lacks internal storage options

It will not have more than 1 processor.

It will have custom graphics cards

The only things we really don't know are the graphics card options, core count options, and price. For most of those replying in the negative in this thread, those don't matter, because the machine lacks one of the things above.
 
I've given this lots of thought, and I won't be the owner of the new Mac Pro, because IMHO it's not really a Mac Pro at all. Since the days of the Macintosh II, one of the beauties of the expandable Macintosh was just that: its expandability. BTO options really meant nothing to me, since I could screw around with the innards whenever I wanted to and could make it all I wanted it to be, without a lot of external peripheral enclosures.

I currently have a 3,1 (2008) Mac Pro, and my decision now is whether to upgrade to a 5,1 or not. My 3,1 has served me well and I'm reluctant to give it up. I'm on my fifth video card (a MacVidCards-modified Gigabyte three-fan GTX 570). All bays are filled with HDDs, and I also have two external HDDs. I have two Pioneer DVD writers and 12GB of RAM. I could replace one of the HDDs with an SSD, increase the RAM, and probably still be very happy with my current machine.

So, the answer to your question is a resounding YES!!!! A current-configuration Mac Pro upgraded to contain all the new technologies developed and available since 2010 would certainly have been on my shopping list. The new one is not.

Lou
 
An Echo?

No, they are not identical; I get that. Nevertheless, I believe it will win awards for design and maybe even get into the Museum of Modern Art. However, just like the Cube, I think it will eventually be a rare piece of memorabilia rather than the powerhouse that is the current Mac Pro. The whole convective-cooling philosophy, with everything stacked inside a package that subjugates utility to design, and the idea that people will not only buy it because it looks good but love it and show it off; that is where I hear the echo. I think it is beautiful, but if not useless, decidedly not a Mac Pro.

I hope Apple comes to its senses soon, before they are back to being traded as a penny stock. While I am at it: the new OS has all the flaws that graphic artists wanted gone way back when, and I do not believe for a second that Steve Jobs would have approved these projects as they came to be; I think this is all Jonathan Ive.

I bought my Mac Pro because it is infinitely upgradable. I love the daughterboard holding the processors and the way having them horizontal improves the cooling. I love that I can add ever-hotter video cards and storage by the bushel (four 4TB HDs plus two SSDs in a RAID 0, and eventually a PCIe SSD, and I still have the optical drive - soon to be a DL Blu-ray - to boot). Now, for the same price or greater, they want me to buy something that can't be upgraded? NO. I will drool over its design and marvel at the speed of 12 cores on one chip coupled with two hot video cards, but I will do it all knowing that I will put 6-core processors (just not today) and an uber-hot video card in my solid aluminum, insanely great, take-no-prisoners, eats-all-in-ones-for-breakfast and defecates-Wintel-gamer-boxes REAL Mac Pro.
 

Attachments: cube G3.jpg, macpro2013_02.jpg