http://www.youtube.com/watch?v=vLujLtgBJC0
Watch the video: in 2003, someone built a supercomputer out of 1,100 Power Mac G5s. Do you think this would work with the newest Mac Pro?

Several points to consider....

1. System X depended on PCI-X InfiniBand cards for its special high-bandwidth, low-latency interconnect. Comparable interconnects are currently 40 Gbps, double T-Bolt2's peak data rate (a quick back-of-the-envelope comparison follows this list).

2. The interconnect was a switched fabric with 1,100 switch ports. Who is going to build a thousand-port T-Bolt2 switch?

3. System X didn't work at first: too many memory errors. It had to be torn down and rebuilt with Xserve G5s, which had ECC memory.

4. Only 1 of the top 100 supercomputers has AMD GPUs (#59), while 24 of the top 100 have Nvidia and/or Xeon Phi accelerators.
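
To put the numbers in point 1 in perspective, here's a minimal sketch comparing raw transfer times at the two line rates. The 40 Gbps and 20 Gbps figures come from the list above; the 1 GiB payload is an arbitrary illustration, and a real transfer would also pay latency and protocol overhead.

```python
# Back-of-the-envelope transfer-time comparison for the two link rates
# mentioned in point 1. The payload size is illustrative, not a benchmark.

GBPS = 1e9  # bits per second

links = {
    "40 Gbps QDR InfiniBand": 40 * GBPS,
    "20 Gbps T-Bolt2 peak":   20 * GBPS,
}

payload_bits = 1 * 2**30 * 8  # 1 GiB, expressed in bits

for name, rate in links.items():
    ms = payload_bits / rate * 1000
    print(f"{name}: {ms:.0f} ms to move 1 GiB (ignoring latency/overhead)")
```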
 

5) No one targets Macs for that kind of cluster app anymore, for the most part. Sure, I can use my local machines for testing code, but for scaling up... even on the older MPs, support for things like InfiniBand cards began waning years ago. And yeah, QDR IB is not only far higher throughput than any TB2 port can handle, it would also be bottlenecked by the extra bridges in the way (the TB2 controllers) even if you somehow aggregated ports.

6) The density you can achieve with even 2U boxes, let alone 1U servers or blades, absolutely destroys anything you could do with the current MP without a serious custom rack and cooling solution. Even the old MP was more easily rackable.

That said, sure, you could network the machines, compile something like Torque for queue management, and start running distributed jobs (a sketch of what such a job looks like follows below). Even just using the gig ports would do (or maybe 10-gig with a TB2-to-PCIe adapter, though I have no idea what the state of OS X support is for such cards); a lot of clusters do still run on GigE for apps that don't need much bandwidth. It's just not a task the current Mac Pro is well suited for compared to other solutions at far better prices.
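
For what it's worth, a minimal sketch of the kind of distributed job you could run over GigE once the machines are networked. It assumes an MPI implementation (e.g., Open MPI) and the mpi4py Python package are installed on each Mac, and that the hostnames listed in a hostfile resolve; the hostfile name and script name are hypothetical. Torque would just schedule and launch something like this.

```python
# Minimal MPI job: each rank computes a slice of a sum, then rank 0
# collects the total. Run with, e.g.:
#   mpirun -np 8 --hostfile hosts python partial_sum.py
# where "hosts" lists the networked Macs (hypothetical hostfile name).
import socket

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank sums every size-th integer starting at its own rank.
local = sum(range(rank, 1_000_000, size))
print(f"rank {rank}/{size} on {socket.gethostname()}: local sum = {local}")

# Combine the partial results on rank 0.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"cluster-wide total = {total}")
```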

The nMP is a quiet workstation that sits on your desk; it's not designed for the machine room, not even a little.
 
They achieved, or planned to achieve, just over 10 TFLOPS.
That could be matched by just 2 nMPs with D700s (7 TFLOPS x 2) today.
 

Supercomputers are ranked by benchmarked performance, not by the sum of the theoretical peaks of components in isolation.

The slowest systems on the Top500 are around 120 TFLOPS, with about 12,000 Sandy Bridge cores.

Also note that they're ranked by double-precision floating-point performance, and the D700 is less than 1 TFLOPS at double precision.
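
A quick sketch of the gap between theoretical peak and that 120 TFLOPS figure. The 2.6 GHz clock and 8 double-precision FLOPs per cycle (Sandy Bridge with AVX) are my assumptions for illustration; Top500 rank is set by the measured HPL result, which always comes in below the theoretical peak.

```python
# Theoretical peak (Rpeak) for ~12,000 Sandy Bridge cores, under assumed
# clock and per-cycle throughput, versus the benchmarked ~120 TFLOPS.

cores = 12_000
clock_hz = 2.6e9        # assumed per-core clock
dp_flops_per_cycle = 8  # Sandy Bridge AVX: 4-wide DP add + 4-wide DP mul

rpeak = cores * clock_hz * dp_flops_per_cycle
print(f"Rpeak under these assumptions: {rpeak / 1e12:.0f} TFLOPS")  # ~250

rmax = 120e12  # benchmarked figure quoted above
print(f"implied HPL efficiency: {rmax / rpeak:.0%}")  # ~48%, if the assumptions held
```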
 
The nMP is a quiet workstation that sits on your desk; it's not designed for the machine room, not even a little.

And yet, since I use mine to record in a home studio, to the machine room it goes, looks be damned.
 
Supercomputers are ranked by benchmarked performance, not by the sum of the theoretical peaks of components in isolation.

The slowest systems on the Top500 are around 120 TFLOPS, with about 12,000 Sandy Bridge cores.

Also note that they're ranked by double-precision floating-point performance, and the D700 is less than 1 TFLOPS at double precision.

So you'd need a 150 nMP cluster to even have a chance of making the list then?
 
Apple claims that the D700 is around 1 TFLOPS on 64-bit floats, and the tube can fit two of them.

So it would be somewhere between about 60 and 1,000, depending on how good the D700 is at scientific double-precision floating point and whether there are memory-bandwidth or thermal constraints.
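
To make that range concrete, a small sketch: the number of nodes needed to hit the ~120 TFLOPS entry point as a function of what each dual-D700 nMP actually sustains at double precision. The per-node figures are assumptions chosen to bracket the estimate above, not measurements.

```python
# Cluster size needed for ~120 TFLOPS at different assumed per-node
# sustained double-precision rates (all figures illustrative).
import math

target_tflops = 120.0

scenarios = [
    ("theoretical peak, 2 x ~1 TFLOPS DP", 2.0),
    ("middling sustained DP",              0.5),
    ("poor sustained DP",                  0.12),
]

for label, per_node in scenarios:
    print(f"{label}: {math.ceil(target_tflops / per_node)} nMPs")
# -> 60, 240, and 1000 machines respectively
```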
 
The funny thing is, no matter how you look at it, everything scales as the years pass, including the definition of a supercomputer.

So, even though it would take far fewer than 1,100 nMPs to reach 10 TFLOPS, that would be a drop in the bucket compared to today's double-digit-PFLOPS class of supers.

Given another 11 years, there will be even bigger leaps in speed, until Moore's law eventually runs into the limits of silicon-based architecture and the cycle starts over with quantum, optical, or some of the other candidate technologies.

No matter how you look at it, it's amazing to see how far we've come and speculate as to what we will consider mainstream speeds in the years to come.

My only hope is that in 11 years, Windows will be a humorous footnote in the history of computing.
 
No matter how you look at it, it's amazing to see how far we've come and speculate as to what we will consider mainstream speeds in the years to come.

Saw this recently

"You have significantly more computing power in the smartphone in your pocket now than we had in all of the computers in the seismolab in 1994," said seismologist Lucile Jones of the U.S. Geological Survey and a visiting research associate at the Seismological Laboratory at Caltech.

http://www.cnn.com/2014/01/16/us/northridge-earthquake-things-learned/index.html
 
Very true... especially for a loaded iPhone 5S, which not only has a 64-bit CPU but, fully loaded, more storage than a Fortune 500 company would've had in its entire datacenter only a few decades ago.

I've read that the Space Shuttle had essentially 286-based computers inside, and as low-end as that sounds, the "computers" that NASA used to launch men to the moon didn't even have screens.

Some people look at the control center photos with all those people staring at screens and assume they were computer monitors... not quite. That came years later. Back then people didn't anticipate computer screens, which is why even the computers on Star Trek were just blinking lights... they did, however, have the foresight to give the computer a woman's voice (Majel Barrett-Roddenberry)... like Siri.
 
And yet, since I use mine to record in a home studio, to the machine room it goes, looks be damned.

When I say machine room, I was thinking of the large ones that typically house clusters (though, ::shudders::, I still have memories of one of my uni's clusters housed in a large set of former office space, half of it powered off and the rest cooled by box fans for a summer while the machine room got redone), since that was the point of the thread, not the equipment room of a home studio.

Very true... especially for a loaded iPhone 5S, which not only has a 64-bit CPU but, fully loaded, more storage than a Fortune 500 company would've had in its entire datacenter only a few decades ago.

I've read that the Space Shuttle had essentially 286-based computers inside, and as low-end as that sounds, the "computers" that NASA used to launch men to the moon didn't even have screens.

To be fair, the reason NASA still uses older designs is a) they know every bug, every quirk, and every behavior, and b) they've spent a lot of time radiation-hardening those designs.
 