Status
Not open for further replies.

fuchsdh

macrumors 68020
Jun 19, 2014
They won't? People aren't already using flashed cards like there's no tomorrow? All those people smart enough to keep their old Mac Pros but without a future in the Mac sphere would probably jump on that in a heartbeat. You can talk about the difference between chips all day, but on a hobby machine like the nMP and its single-chip configs, you really don't see much advantage. Thermal constraints? The trashcan isn't that great as it is. You don't need to go into how it's thermally superior; I've read the advertisements and seen the dog and pony show already... Yeah yeah yeah, I KNOW... innovation my arse.

Maybe you should listen to someone who actually uses the machines, then. The Mac Pro will smoke a quad i7 Mac mini in my renders, and won't sound like a wind tunnel in the process. I'm sure if you wanted to break it out into actual price per performance, a Mac mini is a better deal, but by that metric professional gear and Xeons have always been a ripoff.
 

Derpage

Suspended
Mar 7, 2012
I totally hear you, but I'm not sure people doing rendering stick to 4-core machines much. It would seem that more cores would suit you if that's where your power needs lie. However, if you want to work in OS X and do some boss GPU calculations, why wouldn't you go with a 4-core Mac mini and an eGPU? You get your OS X, you get your GPU rendering, and you aren't too deep into the "kludgy-will-this-continue-to-work-i-don't-know-but-let's-keep-doing-it" routine that we all fall into when something has to get done. I mean, Apple is trying to provide a solution that allows for heavy GPU computation; one could say that's one of the main features of the new design. So if they were to have a 4-core mini that easily hooks up to GPUs externally, it kinda kills their marketing angle for that 4-core Mac Pro. At six or eight or whatever core count, they can then boast: look at all those cores... look at those two shiny GPUs. YOU HAVE OPTIONS, FRIEND. BUY OUR STUFF.
 

AidenShaw

macrumors P6
Feb 8, 2003
The Peninsula
Xserve was killed by NAS appliances like Synology, QNAP, Asustor, and Netgear; there was nothing Apple could do to revive it, and OS X Server remains alive thanks to the Mac mini and Thunderbolt storage.
Probably true as a file server.

For computing, it died because it was a very low-end entry-level system with no growth possibilities and little expansion capability.

It was also damaged by spectacular failures like the Virginia Tech "System X" in the HPC space.
 

fuchsdh

macrumors 68020
Jun 19, 2014
How did System X fail?
 

AidenShaw

macrumors P6
Feb 8, 2003
The Peninsula
The original (highly touted on Apple fan sites, but largely ignored by the rest of the world) System X never went into full production use - the lack of ECC memory made the MTTF less than the mean time to finish real jobs.

It did stay up long enough to make some headlines in the Top 500 list.

It had to be completely replaced within a year when Apple was forced to admit that ECC memory was important and updated the Xserve G5. (https://en.wikipedia.org/wiki/System_X_(computing))

That, and the fact that it didn't run x64 Linux, means that rather than heralding an era of Apple supercomputing - it was a short-lived Apple media darling that helped its media-hungry creator get some good gigs. Make some headlines in the Apple press with a cluster of low-end entry-level servers - get some good industry jobs - then send it to eWaste (four years ago).
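The MTTF argument above can be put in rough numbers. This is a minimal sketch assuming independent, exponentially distributed node failures (a standard reliability model); the per-node MTTF figure is an illustrative assumption, not measured System X data, though the 1,100-node count is the commonly cited size of the original cluster:

```python
import math

def p_job_completes(nodes, job_hours, node_mttf_hours):
    """Probability a job finishes before any node hits a failure,
    assuming independent exponential failures per node."""
    cluster_mttf = node_mttf_hours / nodes  # effective MTTF shrinks with node count
    return math.exp(-job_hours / cluster_mttf)

# Illustrative numbers only: with 1,100 nodes, even a generous-sounding
# per-node MTTF of ~500,000 hours gives a cluster-wide MTTF of ~455 hours.
print(p_job_completes(nodes=1100, job_hours=24, node_mttf_hours=500_000))   # ≈ 0.95
print(p_job_completes(nodes=1100, job_hours=240, node_mttf_hours=500_000))  # ≈ 0.59
```

The point is the shape of the curve, not the exact figures: without ECC, every uncorrected memory error counts as a failure, the effective per-node MTTF drops sharply, and the odds of a long-running job finishing collapse.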
 

Bubba Satori

Suspended
Feb 15, 2008
B'ham
Did that lead to Apple bailing on Xserve?
Or did they realize that a server business required more than a stunt
and lost interest when they discovered that servers required an enterprise level commitment?
 

AidenShaw

macrumors P6
Feb 8, 2003
The Peninsula
Apple's business customers "realize(d) that a server business required more than a stunt" - it needed a range of systems beyond low-end entry-level, and a near-, medium-, and long-term roadmap so that an enterprise could plan purchases and expansions.

Apple didn't have the former, and of course Apple didn't provide the latter.

The death of Xserve, however, was simply that Xserve (after getting rid of PPC) only had the distinction of being one of the highest priced and least compatible x64 low-end entry-level servers. Not the best of business models.
 

Mago

macrumors 68030
Aug 16, 2011
Beyond the Thunderdome
I'm not sure, but System X may have been the first supercomputer built on commodity hardware.

Research now is focused on moving to commodity SoCs. The Xeon Phi is actually a commodity SoC (72 Atom cores), and similar planned alternatives are based on the ARM Cortex-A72 or on custom SoCs originally targeted at smartphones (thanks to the Bitcoin miners for this).

Maybe the next generation will be based on FPGAs, which are evolving really fast, pushed by HPC market momentum. I've considered buying dev cards from Altera and Xilinx just for my personal research.
 

fuchsdh

macrumors 68020
Jun 19, 2014
I think you're reading into things. The Xserve G5s always had ECC RAM; they weren't updated ex post facto. The VT prof might have been stupid to buy X million dollars' worth of hardware without realizing that would be an issue, but that's his problem, not Apple's. And yeah, it lasted nine years, which, judging by the supercomputers from the TOP500 list I looked at, isn't bad at all. IBM's Roadrunner, for instance, lasted five years before replacement and cost $100 million.
 

Mago

macrumors 68030
Aug 16, 2011
Beyond the Thunderdome
No, System X originally didn't use ECC; it was introduced later at a cost of about $500K.

It's curious that about a decade later, a single $5,000 GPU card has the same processing power as the $6M System X.
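That claim can be sanity-checked with rough arithmetic. All figures here are approximate, and the comparison is loose (System X's Top500 number is double precision, while consumer GPU TFLOPS figures are usually single precision):

```python
# Back-of-the-envelope price-per-performance comparison. Approximate figures:
# System X peaked around 12.25 TFLOPS (Top500 Rmax, 2004) for roughly $5.8M;
# a circa-2016 high-end GPU is on the order of 10 TFLOPS single precision.
system_x_tflops = 12.25
system_x_cost = 5_800_000
gpu_tflops = 10.0          # assumed ballpark for a ~$5,000 card
gpu_cost = 5_000

print(f"System X: {system_x_cost / system_x_tflops:,.0f} $/TFLOPS")
print(f"GPU:      {gpu_cost / gpu_tflops:,.0f} $/TFLOPS")
# Price per TFLOPS dropped by roughly three orders of magnitude in a decade.
```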
 

fuchsdh

macrumors 68020
Jun 19, 2014
I know the original G5s didn't, but Aiden's comment made it sound like "Apple realized ECC RAM was important after they got System X running, and added it to the Xserves as a consequence," when there's insufficient evidence for that chain of causality.
 

AidenShaw

macrumors P6
Feb 8, 2003
The Peninsula
Not only "not sure", but simply wrong.

A year before "System X", a dual-Xeon cluster was #8, and one built with off-the-shelf Dell 2U Xeon servers was #22. http://top500.org/list/2002/11/

About the only first for "System X" was that it was the first system built with pizza.

And, to be honest, no supercomputer is built with "commodity" hardware. Even if the servers are commodity - they have specialized (and expensive) Infiniband or other network fabric interconnects.

The Xserve G5 had ECC, but System X started with PowerMac G5s without ECC. The supercomputer's reputation was destroyed by the non-ECC start.

 

jerwin

Suspended
Jun 13, 2015
In the latest TOP500 lists, the only self-made instances are Amazon's own EC2 clusters. Everyone else contracts the design to a vendor, such as Cray or IBM or SGI.

A self-design may mean that the individual nodes are supplied by a vendor, but designing the topology, the cooling systems, the hardware mix, and so on is all on the customer.
 

Mago

macrumors 68030
Aug 16, 2011
Beyond the Thunderdome
I don't care whether these are self-designed or commissioned from a vendor; the discussion was about the architecture switching from custom hardware (special CPUs and logic boards) to commodity COTS equipment such as Xeon servers.

The next big thing in HPC seems to be custom blade-like modules carrying multiple ARM 64-bit CPUs plus a powerful FPGA, with an ultra-fast fabric interconnecting thousands of these modules.
 

Mago

macrumors 68030
Aug 16, 2011
Beyond the Thunderdome
Any word on Intel's push for that FPGA space via the Altera purchase?
$5,000 million implies lots of words.

The EU is designing its next exascale system and is considering FPGAs plus GPUs, or pure ARM cores too. FPGAs are held back by an issue with floating-point processing; some breakthrough is required there for them to take off in HPC.
 

Mago

macrumors 68030
Aug 16, 2011
Beyond the Thunderdome
They'll probably drop the Mac Pro entirely, considering how things have looked so far.
Unlikely. If anything, the nMP wants an imminent update to Broadwell-EP and AMD Polaris 10/Fiji, and a full move to AMD Zen late next year is the next step. The Mac mini should also get quad-core CPUs again across its line. While Apple didn't choose the most stellar performance line, products are actually delayed or constrained by Intel's schedule and delays.
 