Oh please, get off your internal expansion high horse. You know darn well all you used those slots for in the Mac Pro was a fiber NIC to your external SAN that's managed by IT (who also bought you the machine, configured it, and told you it'll be more than enough for what you need - and we're right) and a three year old video card. From an IT perspective, the new Mac Pro is EXACTLY what you lot need: compact, powerful, and with MAXIMUM EXTERNAL EXPANSION. Your SAN isn't in your current Mac Pro, you connect to it using Fiber (or, if you're stupid rich, 10GigE) over an ADAPTER. Your high speed video encoders? EXTERNAL. Broadcast controllers? Feed mixers? Broadcast switches? Sound boards? Scratch disk?

EXTERNAL.

My Data Center has an entire rack devoted to your EXTERNAL gear, yet you're crying that you'll have a Thunderbolt to LC Fiber adapter sticking out of your shiny new cylinder? Geez, what a crying shame that is.

"But we can't upgrade our video cards!" First off, if you can't do something with twin FirePro video cards, then you're already looking at a distributed rendering or GPU "Cloud" solution anyway. Second of all, do you not see how few official video cards are actually supported in the Mac Pro? You get one, maybe two per year, at an inflated cost. From my IT perspective, this new design is a win-win: you get twice the power in a workstation, and I'm not shelling out $5,000 for new FirePro or Quaddro cards every time AMD or nVidia finally decide to bless you with an upgrade card for your older Mac Pros.

"But it's not a workstation, it's meant for amateurs!" No, a cheap PC is for amateurs or frugal people. You get everything you need for serious work here: a Xeon E5 processor with up to twelve cores, twin FirePro cards, a speedy boot drive, and plenty of memory, plus 20Gbps Thunderbolt 2 ports. If you seriously need more horsepower, go fuss at your IT guy to build you a VMWare cluster of linux machines or something for the additional horsepower you need. Scream at your vendors to support TB2 on their equipment and software natively and at full bandwidth. Apple has given you the crutch of FW800 for years after it died, and it's finally time you grow up and join the modern era of USB3 and Thunderbolt.



I'm not sure what you're getting at here. From my perspective, the new Xeon processors have given us a lot more performance compared to the last generation in our VMware deployments. I would assume going from the current Westmere Xeon Mac Pros to the nMP's Xeon E5 CPUs will result in similar performance increases to what we've seen.

Whatever floats your boat, man... But in the end, we are the ones who will cast our votes with our money. As it goes, I'll be spending mine elsewhere, but hey, that's just me. I wouldn't think for one second of telling you what is good for your business... But I'm telling you that it isn't good for mine. It's going to be overpriced and either underpowered or overkill. At least with the old model I could scale up, as many did, by swapping the CPU and GPU myself, which you won't be able to do with the macCan.

As for the FireGL, yeah, it's a good GPU, except if you need CUDA... With the old model I can use either AMD or Nvidia, as some have done here. Also, working with humongous files and datasets over the network, even over 10GigE or fiber, is a pain. When dealing with topological maps and reference datasets that weigh 200GB and up, you have to do it locally. Hence the need for local internal storage, or at least a faster-than-TB RAID connection.

As for vendors supporting TB2... Why should they? They make ********* of money selling USB2 and USB3 peripherals to the masses, and also make $$$ selling SAS RAID drive arrays that leave TB1 & 2 far behind, eating dust.

TB is the new NuBus, EISA, VLB rolled into one.
 
Sandy Bridge brought some pretty nice performance improvements, so it's pretty understandable why people were disappointed in Apple for missing that boat.

From rough guesses, though, the speed makes sense. One 12-core Ivy Bridge vs. two 6-core Westmeres at high clock rates is going to be close. Exactly how close will depend on usage, plus you have the GPU issue, which should be a large advantage for the new Mac Pro.

The new upgrade will jump ahead two CPU generations, so the difference should be much more significant core for core than going from Westmere to Sandy Bridge because now the jump is from Westmere to Ivy Bridge.
 

But it has to make up for a decent GHz disadvantage due to having so many cores on one die. The E5-2697 is supposedly clocked at 2.7GHz, while the 5,1 had 12 cores at 3.06. So, the clock-for-clock improvements from Westmere to Ivy Bridge need to make up for a ~13% GHz disadvantage. That pretty much eats away most of the architecture improvements. Maybe you're left with a 10% increase....
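A quick back-of-envelope sketch of that tradeoff, using the rough figures from the discussion (the ~13% clock deficit plus assumed ~10% and ~7.5% clock-for-clock gains for Sandy and Ivy Bridge — estimates, not measurements):

```python
# Rough estimate only: clock deficit vs. assumed architecture gains.
westmere_clock = 3.06  # GHz, 12 cores across two sockets in the 5,1
ivy_clock = 2.7        # GHz, rumored 12-core E5-2697 v2

clock_deficit = westmere_clock / ivy_clock - 1   # old parts clock ~13% higher
ipc_gain = 1.10 * 1.075                          # assumed SNB then IVB clock-for-clock gains
net_per_core = ivy_clock / westmere_clock * ipc_gain - 1

print(f"clock deficit: {clock_deficit:.1%}")     # ~13.3%
print(f"net per-core gain: {net_per_core:.1%}")  # only a few percent left over
```

Which is roughly the "maybe 10%, maybe less" territory being argued about here.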
 

I guess I'm left confused by your earlier statements that seemingly contradicted each other. If Sandy Bridge "brought some pretty nice performance improvements", Ivy Bridge should bring even nicer improvements compared to Westmere. And it doesn't square with what was said at WWDC: "up to 2x faster".
 
I originally really liked this machine for that very reason, but then I thought about it more and it now has me confused. Why do you need an excellent, powerful client machine? I'd like some local horsepower, but if I have easy access to a cluster/high-RAM machines, or what have you, why do I want to pay multiple thousands of dollars for "just" a client machine, even if it's a pretty damn good client?

This is why the bottom end of the spectrum is going to be so key. It could be a great client machine with reasonable horsepower, but I'm not sure many people or corporations will want to buy $2500, or more, machines that are ultimately just clients. A Mac mini also makes a great client for about 1/3 of that cost. And at the bottom end, you're going to be comparing 4 mainstream Haswell cores to 4 Ivy Bridge E5 cores, where the difference in CPU performance will not be huge. So for small-to-medium jobs, the Mac mini is going to keep up with the Mac Pro pretty well, and then the big jobs go to the cluster anyway. So...huh?

I think that's my thing - I like having enough workstation power that I don't have to go to the cluster, but I don't actually have a good handle on how much I'm willing to pay for that. I think that's my biggest concern with the Mac Pro - if I am buying a "powerful client" machine, I don't want to be paying for dual FirePros. Sure, they might have a lower end option, but they haven't discussed that yet, and I'm not going to be sorry for discussing Apple's products as they have portrayed them.

  • The mountain of cables doesn't make sense to me. Whether it's a setup like the one sirio76 linked to or my own system, cables are just a part of the deal. The MP6,1 will not have any more or less than any other system. Maybe a total of one more for the drives, but that's all I'm giving. As soon as you add a third GPU, the MP6,1 is actually using fewer cables than a MP5,1 would with the same added in - unless you get very creative and tricky about adding an internal PSU - and then you're giving up bays intended for other devices anyway.


  • It will inherently have more for me - four of my hard drives, and an optical drive (yes, I still use them) will need external cases. A PCI card I'm using will need an external case. If I'm doing CUDA work and they stick with bespoke FirePros, a GPU will need an external case. All of those external cases will need cables. Even if all the drives go in a single enclosure, that's between 3 and 4 new cables.

    • I also don't really follow you about the enterprise thing either. External drives have been common from the Commodore 64 and Atari 8-bit machines all the way up to every present computerized device sold today, from entry level to enterprise. Heck, even my LCD television has a port for an external hard drive. And saying that it's more suited as a "client", whatever that means, seems to assume that it's somehow not configurable. Sure, it can be used that way. And the fact that it's a strong self-contained machine out of the box might lend to that vision, but it no more needs a server than any other computer. It probably will need mass storage of some kind, but that can be three 4TB Seagates in a USB3 enclosure just as well as 14 1TB drives in an Xserve or other server. The configurability looks to be very dynamic to me. A company can save money [assumed] by using a bunch of MP6,1s (one on each person's desktop) and a large server share, or an individual can connect up a single four- or five-bay 5.25 enclosure via TB2, USB3, or Ethernet just as easily. Likewise one could easily set up the MP6,1 as a server itself. So I'm not seeing how it fits one description better than the other, or lends itself to enterprise environments more than any other possible configuration.

    Stop reading it as "external cases", because I'm not talking about a place to plug in a hard drive. I'm talking about settings where there is vast, ubiquitous infrastructure resources available. Where everything I could possibly put in a 5,1's case that I can't put in a 6,1 is available over the network anyway.

    The 6,1 looks really good in that configuration. Its somewhat spartan options are actually just trimming down to what you need sitting right at your desk. It looks like a perfect setup for that, and if you're already using infrastructure resources, then you haven't lost anything between the 5,1 and the 6,1. Of course you *can* plug a whole bunch of stuff into the 6,1, but you lose a little bit of its aesthetics, and it's somewhat less convenient than the 5,1. It's not that it can't do it, it just feels...somewhat kludgy, in a way where as a client machine, being quiet, reasonably powerful, and tiny/beautiful makes perfect sense.

    • Yeah, I think it's good for that - so I guess I can see the vision you're seeing too. :) I somehow doubt anyone outside Apple was consulted tho - that wouldn't be typical for sure! Keep in mind that in order to regain almost all of the configurability of the previous MP models, all one needs to add is a single 4-bay 5.25 enclosure. Everything else is the same and upgraded. Where people added cards for eSATA, we have USB3 - no cards needed. Where people added USB3 cards, they are no longer needed as there are 4 dedicated USB3 ports already present. No one I know of installed more than two GPUs, and the MP6,1 now comes standard with two - and with TB2 we can even take advantage of the upgrade and add another 12 or so GPUs to it. Where people were using old-skool audio cards, there are USB2/3 devices with the same and better/more-modern functionality available to use in their place. Given that the new machine is between 1/8th and 1/6th the size of the MP5,1, it would seem to me to fit into more computing environments - not fewer.
Shrug, I dunno, that's just the way I see it.

Even if they didn't consult with them, there's undoubtedly enough people at Apple used to that environment (indeed, I suspect that's the environment *at Apple*) to design for that spec.

I think the objection is not that the current specs of the Mac Pro are a problem (although I do find the need for a new enclosure for the drives somewhat irksome, and I'd rather they be hidden in something designed by Apple), but that there's fairly limited space to do anything *new* with the new Mac Pro that doesn't involve external TB enclosures.

There's a ton you could do with the old design before you needed to change the look of your machine and the layout of your desk. With the new Mac Pro, most steps have an added external component required, which will either require careful cabling (in your case) or a horrific nest of cables (me - I suck at cable management, always have).
 
As for the FireGL, yeah, it's a good GPU, except if you need CUDA... With the old model I can use either AMD or Nvidia, as some have done here. Also, working with humongous files and datasets over the network, even over 10GigE or fiber, is a pain. When dealing with topological maps and reference datasets that weigh 200GB and up, you have to do it locally. Hence the need for local internal storage, or at least a faster-than-TB RAID connection.

Thunderbolt is faster than SATA3 or SATA2. Or the Mac Pro's SATA1. Why would Thunderbolt be unable to handle 200 gigabyte data sets? Thunderbolt 2 is even faster than the internal SSD on the new Mac Pro, and that thing is already a bandwidth monster.
 

Herrrr, no it isn't...

Check here: http://www.barefeats.com/tbolt01.html. Even if TB2 performed twice as fast as TB1, it would still fall short of SAS.

And when talking about geomatic datasets, you have to understand that they are constantly being read and rewritten, simultaneously. If all your drives are connected via a single wire, you'll saturate that link fast. This is demonstrated by the low scores that TB RAID gets compared to SAS.
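The saturation argument can be sketched with rough, assumed numbers (TB1 at ~1000MB/s usable per direction, TB2 at ~2000MB/s, an x8 SAS HBA at ~4000MB/s, and ~140MB/s per streaming HDD — ballpark figures, not benchmarks):

```python
# Ballpark only: many drives funneled through one shared link.
def deliverable(n_drives, per_drive_mbs, link_mbs):
    """Aggregate throughput is capped by whichever is smaller:
    the drives' combined rate or the shared link's bandwidth."""
    return min(n_drives * per_drive_mbs, link_mbs)

demand = 16 * 140  # sixteen HDDs streaming ~140MB/s each = 2240MB/s wanted
for name, link in [("TB1", 1000), ("TB2", 2000), ("x8 SAS HBA", 4000)]:
    print(f"{name}: {deliverable(16, 140, link)} MB/s of {demand} wanted")
```

On these assumed numbers, only the x8 SAS link delivers everything the drives can push; both Thunderbolt generations become the bottleneck.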
 

"3. Thunderbolt RAID 0 with 6Gb/s SSDs appears to run into a bottleneck when you compare it to the SAS RAID 0 with the same 6Gb/s SSDs. I guess the 1000+MB/s theoretical bandwidth is... theoretical."

Interestingly, Anandtech managed to reach 1GB/s with the Pegasus R6 with SSDs. So it doesn't look like the bottleneck was Thunderbolt in the Barefeats test.

Image
 
"3. Thunderbolt RAID 0 with 6Gb/s SSDs appears to run into a bottleneck when you compare it to the SAS RAID 0 with the same 6Gb/s SSDs. I guess the 1000+MB/s theoretical bandwidth is... theoretical."

Interestingly Andandtech managed to reach 1GB/s with the Pegasus R6 with SSDs. So I doesn't look like the bottleneck was Thunderbolt in the Barefeat test.

Image

Apples to oranges, since Anandtech didn't compare it to SAS with the same drives... Anandtech tested the Pegasus RAID only.

Also, the Pegasus costs more.
 

Eh, you mean that Thunderbolt performs worse when it's compared to a SAS RAID? Perhaps it's intimidated.

The point is that their test reached a bottleneck at 730MB/s, and this was not due to Thunderbolt, because if it was, then it would not have been possible for Anandtech to reach 1GB/s.
 

What do you think I was talking about?

The point I'm making is that a TB-only solution sucks compared to what was available in the tower MP. In a tower I was able to add a SAS controller. I won't be able to do that with the iTube.

For the price Apple is going to ask for the new MP, you would expect better performance when doing real data processing. We are regressing instead of going forward.

In any case, and this is my final word for this thread, I'm really sad that Apple has moved their "form over function" approach to the Mac Pro. Hell, it's like trying to sell a gold-plated, gem-studded, $3k Phillips screwdriver to your neighborhood auto mechanic...
 

Well, you said something about apples and oranges.

I was pointing out that the test you linked to seems to have reached some other bottleneck, that's all.

You are now making a different, unrelated point and that's fine.
 
I guess I'm left confused by your earlier statements that seemingly contradicted each other. If Sandy Bridge "brought some pretty nice performance improvements", Ivy Bridge should bring even nicer improvements compared to Westmere. And it doesn't square with what was said at WWDC: "up to 2x faster".

Well, from what I understand, Ivy Bridge comes with a new instruction set, so if your code uses that new instruction set well, it can be much faster, and that's where the "up to 2x faster" comes from. However, most things are 5-10% faster than Sandy, and Sandy is some 10+% faster than Westmere, GHz for GHz.

Now, the improvements moving up from Westmere to Sandy Bridge came in large part from extra cores, 10%+ clock for clock improvements, and increased turbo ranges.

But, when you nerf yourself to a single socket, you give up on much of that because:

1) the added core count is neutralized by the extra socket in the old machine
2) the added core count per die drives GHz down relative to the old machine

So, Apple is fighting an uphill battle to get back up to the same performance levels, tying one hand behind the nMP's back by limiting it to one CPU.

When you step back and think about it, I guess it's pretty impressive, though. One CPU can now do what it took two to do only a few years ago. But that doesn't help too much if you could have two then, but only one now....
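Putting the same rough numbers together (an assumed ~18% combined clock-for-clock gain over Westmere — an estimate, not a benchmark), the "one socket doing the work of two" point looks like this:

```python
# Crude throughput proxy for well-threaded work: cores * GHz * relative IPC.
def aggregate(cores, ghz, relative_ipc):
    return cores * ghz * relative_ipc

old = aggregate(12, 3.06, 1.00)  # 2x 6-core Westmere 5,1, IPC baseline
new = aggregate(12, 2.70, 1.18)  # 1x 12-core Ivy Bridge, assumed ~18% IPC gain
print(f"new/old: {new / old:.2f}")  # right around 1.0 -- parity, from half the sockets
```

Impressive in absolute terms, but as noted above, no help if you could have had two sockets then and only get one now.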
 

Let's go back and see Geekbench results for two CPU generations going from dual 4 core to a single 8 core. There is also a slight clock difference here.

First, it's a dual 4-core X5460 (Harpertown), or Mac Pro 4,1.

http://browser.primatelabs.com/geekbench2/2125096

Then a Sandy Bridge EP single 8 core E5-2690.

http://browser.primatelabs.com/geekbench2/1760074
 

That's a really low Geekbench for the 4,1 8-core. And what's up with a 4,1 coming with Harpertown? It should be Nehalem. Harpertown is 3 generations old, and going to Nehalem made a big change too. So, we should be looking at 2x X5570, which happens to be right up there with 1 E5-2690.
 

Well, turns out it's a Hackintosh; under BIOS it says: MultiBeast.tonymacx86.com

But let's try this then (53xx to 55xx):

http://browser.primatelabs.com/geekbench2/2150852
http://browser.primatelabs.com/geekbench2/2114318
 
Speaking of bottlenecks, I'm sad the new Mac Pro won't be able to run my bottleneck-free x8-lane Areca 1880ix-12 RAID card. Here's a review from March 2011 when they hooked up some SSDs to it. Apparently, they saw 3600MB/sec from it, though one graph shows 3000MB/sec sustained.

I currently run the 1880ix-12 in RAID 6 with a single 8-bay enclosure, but can expand to a second 8-bay for 16 total disks without port multipliers. Even with only eight HDDs (not even SSDs) I get over 800MB/sec sustained in RAID 6. If I swapped in eight SSDs, I'd expect to see at least 2800MB/sec sustained, which is easily shown to be the case by the review linked above.

Areca has newer / improved cards today, but it saddens me to think this card that cost me about $650 is going to cost yet another $700+ just to run at half speed (x4 lanes) in some external enclosure.

I'm hoping that someday soon (maybe by the time the Mac Pro 7,1 comes out) someone will have an external TB enclosure that can take four or even six TB cables for simultaneous data throughput greater than the x4 lanes used by a single TB controller today. As it stands today, I'd have to get a second RAID card, two TB external enclosures, and run them both in their handicapped x4 lane configurations into a software RAID in OS X on top of the hardware RAID of each Areca card just to match what I use on my old 4,1... not ideal at all.

If I could get full x8 speed from a clump of 4-6 TB cables into one external box, that would really make a difference for my business. Otherwise, I'm left with creating a stack of x4-lane external boxes that will most likely be taller than the nMP itself, then using software RAID and praying none of the disks fail between inconvenient, frequent backups, because of a lack of RAID 6 in OS X. It's currently nice to be able to do a single, nightly backup with a two-disk fail-safe in RAID 6.

This is why I'll be skipping the MP 6,1 for now.
 

Makes you teary-eyed about the "good old days", huh?

Imagine, an external box from 2 years ago will run CIRCLES around the one that works on the "new improved" 6,1 that will be out in 4-6 months.

Amazing progress.
 

We never appreciate the good things until we lose them... :D
 
Herrrr, no it isn't...

Check here: http://www.barefeats.com/tbolt01.html. Even if TB2 performed twice as fast as TB1, it would still fall short of SAS.

It isn't really a SAS difference being measured there, but two different kinds of RAID cards. One is an x4 PCIe card (in the Pegasus models) and the other is an x8 card (in an x16 slot). Neither the actual hardware RAID controllers nor their respective caches are normalized. SAS isn't the primary gap.


Apple to orange, since anandtech didn't compared it to sas with the same drives... Anandtech tested the pegasus raid only.

If you vary the cards and card bandwidth, then SAS/SATA has little to do with the bandwidth changes.

Even in the Barefeats benchmarks, in the 4-HDD head-to-head the Tbolt solution comes out in front in writes. With 6 HDDs the Tbolt results are lower in both reads and writes. (Also substantially lower than Anandtech's default config's sequential read... different benchmark, but perhaps indicative that the RAID controller and caches are playing a role here at least as much as the PCIe connectivity.)



Also the pegasus cost more.

Really?

iStorage Pro iT8 6G SAS Expander .... ~$1,650
Highpoint RocketRAID 2744 ..... ~$450

$2,100 driveless

Pegasus R6 12TB ~$1,600-2,200

Stardom ST8-U5 enclosure ~$700 (again driveless) + Hpoint ~$275 = $975

There are two more drive bays in these expanders but it is around the same price point. So it isn't like some folks haven't been paying these kinds of prices already.

----------

Speaking of bottlenecks,.....

I currently run the 1880ix-12 in RAID 6 with a single 8-bay enclosure, but can expand to a second 8-bay for 16 total disks without port multipliers. Even with only eight HDDs (not even SSDs) I get over 800MB/sec sustained in RAID 6. If I swapped in eight SSDs, I'd expect to see at least 2800MB/sec sustained, which is easily shown to be the case by the review linked above.

"If I swapped in eight SSDs" isn't a benchmark.

A more interesting point will be who is actually dumping 6-10 HDDs for all-SSD setups. I'm sure there are a few, but are they the bulk of the Mac Pro market? Let's say those dumping 6-10 1TB HDDs for 500GB SSDs are tossing 3-5TB of storage out of their systems.

There are all sorts of corner cases that don't fit the new Mac Pro. The more telling issue is how many folks are required to be in those corners.
 

You can get a SAS enclosure for less than that, and with more drives than the Peg has... You know it, but it would break your argument.

As to the benchmark, that is exactly the point. You are trying to say that x4 is better than x8 or x16. Unless you are really bad at math and logic, I can't see how you could say that TB is better... It costs more and it performs less. Are you the same guy who said 4 RAM slots are greater than 8?
 
What do you think I was talking about?

You said Thunderbolt wasn't fast enough.

It's been pointed out that a sub-par enclosure seems to be the problem.

Thus the sub-par enclosure vs. Thunderbolt not being fast enough are two different things.
 
For me:

Areca 1880ix-12 - $650
Sans Digital 8-bay box - $399
(miniSAS cables included)

$1050 driveless

8x WD RE-4 2TB HDDs @ $200 ea. - $1600

Total for 12TB RAID 6 system = $2650

Mind you, the Pegasus R6 won't do anywhere near 800+MB/sec in RAID 6. It never saw 700MB/sec in RAID 5, and only had 10TB usable space at that. Their HDDs are 150MB/sec each, while mine are only rated at 138MB/sec each... which I know is about right, since I got 1101MB/sec in a RAID 0... divided by 8 = 137.625 each.
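The per-drive arithmetic in that last sentence checks out:

```python
# Sanity check on the quoted RAID 0 result: aggregate throughput
# divided by drive count should land near the per-drive rating.
raid0_total_mbs = 1101  # MB/s measured across the 8-drive RAID 0
drives = 8
per_drive = raid0_total_mbs / drives
print(per_drive)  # 137.625 -- right at the WD RE-4's ~138 MB/s rating
```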

If I pulled out two of my eight drives, I'd save $400 but suffer similar (or worse) speeds as the Pegasus.

The beauty is that I can add a second 8-bay on the same Areca RAID card, or even swap HDDs for SSDs, and go much faster than anything the Pegasus can ever hope for over Thunderbolt.

----------

"If I swapped in eight SSDs" isn't a benchmark.

A more interesting point will be who is actually dumping 6-10 HDDs for all SSDs set ups. I'm sure there are a few but are they are bulk of the Mac Pro market? Lets say those dumping 6-10 1TB HDDs for 500GB SSDs and tossing 3-5TB of storage out of their systems.

There are all sorts of corner cases that don't fit the new Mac Pro. The more telling issue is how many folks are required to be in those corners.
No, but I linked to a benchmark using my exact card and SSDs, which IS a benchmark, is it not? The answer is yes, it is. And in that very benchmark, you can see 3000MB/sec flowing free and sustained. You won't see that on Thunderbolt without some serious effort and expense far beyond what a single PCIe RAID card can do in a single slot.

Here's the benchmark that you must have missed:

Image
 
You might be able to mount four Pegasus J4 enclosures into four TB2 ports on the nMP, spreading them across the three TB controllers inside, and hope the third controller seeing two TB devices doesn't throttle too badly.

That would cost $1552 driveless for those four J4s today.

Then you can put 16 SSDs inside those J4s, which are said to max at 750MB/sec each, for a total of 3000MB/second, using a software RAID in OS X, if it all goes right. This is all theoretical with today's facts used.

So, 16 fast SSDs are going to run at least $2800, plus $1550 for enclosures and $200 for four TB cables = at least $4550... For the same speed as an old Areca card from a couple years back and half the number of SSDs.
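Tallying those figures (all of them assumptions quoted from the posts above, at mid-2013 prices):

```python
# Hypothetical four-J4 Thunderbolt build, using the thread's own estimates.
ssds = 2800           # 16 fast SSDs, as estimated above
enclosures = 1550     # four Pegasus J4s, driveless
cables = 4 * 50       # four Thunderbolt cables at ~$50 each
total_cost = ssds + enclosures + cables

throughput = 4 * 750  # each J4 said to max out near 750 MB/s

print(total_cost, throughput)  # ~$4550 for ~3000 MB/s, in theory
```

In other words, twice the SSDs and several times the enclosure cost to match what one x8 RAID card was already doing.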

This is where people say, "But Thunderbolt is going to get cheaper someday." That's fine, but we're talking about today, if one were to be given that actual, working Mac Pro 6,1 that Apple has shown off.

It's great that they're working on moving forward, though. Let's just hope they don't find themselves playing with the next NuBus.
 