Radiating said:
The storage expansion has improved though, as you can now run several 2600 MB/s multi-terabyte drives over Thunderbolt, either through a few RAID arrays or by using high-end PCIe drives like the OCZ Z-Drive in PCIe boxes.
But 20Gbps is only 2500MB/s, which is only achieved by running two TB cables today. I'm wondering why you think you can run "several 2600 MB/s" data streams today over TB1.

What I mean here is that the new Mac Pro has 3 Thunderbolt 2 controllers, which allow for 3 separate 20Gb/s data streams over 3 separate cables, enabling 3 ultra high speed RAID arrays or OCZ Z-Drives.

Tests using Thunderbolt 1 show that it has no problem reaching 1300 MB/s upstream or downstream. Thunderbolt 2 combines the upstream and downstream channels to allow twice the speed in one direction at a time, at least through the cable.


Even TB v2 is probably not going to do 2500MB/s. You'll be closer to the x4 PCI-e v2.0 theoretical maximum. It likely isn't going to be able to grab all 20Gb/s. The removal of the barrier is more to allow 4K video traffic to be a road hog, not necessarily to hand all of the bandwidth over to PCI-e data traffic.

This is a good point: upping the cable speed will probably hit PCIe x4 bottlenecks. A PCIe x4 link typically guarantees at least 2GB/s, though.
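For reference, the raw unit arithmetic behind these numbers, as a rough Python sketch (decimal units, protocol overhead ignored):

```python
# Rough bandwidth arithmetic (decimal units, protocol overhead ignored).
def gbps_to_mb_per_s(gbps):
    return gbps * 1000 / 8          # 1 byte = 8 bits

tb1_channel = gbps_to_mb_per_s(10)  # one TB1 channel -> 1250.0 MB/s
tb2_link    = gbps_to_mb_per_s(20)  # combined TB2 link -> 2500.0 MB/s
pcie2_x4    = 4 * 500               # PCIe 2.0 x4 back-end -> 2000 MB/s

# Even a full 20Gb/s TB2 link is capped by the ~2000 MB/s x4 back-end.
print(tb1_channel, tb2_link, pcie2_x4)
```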

It is amazing what a difference 3-4 years of other people's hard work makes...
It does not 'give' you more room. It forces you outside. It is a pure deflection argument.
It is faster expansion, but it is in a pretty useless form right now. Even 2 real full-size PCIe slots could have stopped this entire thread from existing.
I don't understand the complete haters and I don't understand the complete lovers of this nMP. It needs to exist 1st. It needs to be priced 2nd. And Intel and Apple need to push TB hard. I am glad something was announced, even if it is hard to swallow.

Personally, I neither completely love nor hate the new MP. Like I said, all performance metrics are improved across the board, and all of your expandability options are improved across the board. And the machine is a lot more compact, which is a good thing.

BUT

If Apple expects Thunderbolt to replace physical PCIe slots, then they need to support the standard better. As an early adopter of Thunderbolt 1 for PCIe cards, I would call the experience user-hostile to set up initially. All 4 of the pricey, top-of-the-line PCIe TB boxes I tried were poorly made, with tolerances that were pretty embarrassing. Their layouts were incredibly poorly thought out: the boxes used up way too much space for what they needed to do, usually having more than 50% empty space, despite having unreasonably limited space for the actual card (such as requiring a custom low-profile PCIe 8-pin power connector, or having parts interfering with heat sinks). The PCIe boxes available were physically ugly, and their internal airflow amplified noise. The boxes were literally crooked sometimes, and looked like they were put together in a shed out of cases that were never designed for the task they were trying to do, which is probably close to what was going on. On top of that, the power supplies were loud and underpowered; you can get power supplies half as large, half as loud and 4 times as powerful for less than $40. And finally, the PCIe boxes were hugely price-gouged, 400%-800%. The reality of PCIe over Thunderbolt is that it's kind of just cobbled together; it can work great, but getting it working is a rough job.

And that's before we consider the OS X and Boot Camp driver support, or lack thereof. eGPU support through Windows 8 is literally there by accident, even though it is one of the most demanded Thunderbolt features on Macs.

So if Apple wants to push Thunderbolt instead of PCIe slots, they really need to make their own plastic or aluminum rectangle with rounded corners that fits full-length, full-size, double-wide PCIe x16 cards and provides 250W of power with no wasted space, and sell it at minimal markup (less than $250) to give the aftermarket a baseline to compete against, improve quality control and lower prices. Because it's not very good right now, and by the time it is good enough, Apple will have lost far more revenue through lost lifetime customers than the very minor savings of farming out accessory development gained them.
 
In your original post that I was quoting, you said:
If you're considering buying a Titan, and NEED a Titan, I can't really see how cost is going to be your main concern. It is already an outrageously expensive card.

I'm telling you that I have a PC with a GTX Titan and that cost would be a major concern.
The price of a nMP with a GTX Titan will be way more than I'd be willing to pay. That's why I went with a PC.

Or build a hackintosh

Yes, that is a good suggestion. I plan to hack my PC so it runs OS X.
 
I'm telling you that I have a PC with a GTX Titan and that cost would be a major concern.
The price of a nMP with a GTX Titan will be way more than I'd be willing to pay. That's why I went with a PC.

I understand your point, and I clearly cannot speak for you. However, I would still suggest that people who generally would be willing to buy a Titan (~$1000) over an HD7970 (~$400) would not be the kind of people who are overly concerned about cost.

If you personally have a reason for buying a Titan, because nothing else (i.e., a multi-GPU solution) is sufficient, that's fair enough. But instead of buying a Titan, one could buy an HD7970 and a TB enclosure, and get similar performance (i.e., within 10-20%) for likely less money than the Titan itself. Especially if TB enclosures start dropping in price reasonably soon.
 
I understand your point, and I clearly cannot speak for you. However, I would still suggest that people who generally would be willing to buy a Titan (~$1000) over an HD7970 (~$400) would not be the kind of people who are overly concerned about cost.

I think it should be noted here that from a scientific computing perspective, the people buying Titans are penny-pinching types looking for the cheapest card with a full feature set.
 
If you've never had over 10 cables coming out of the back of your Mac Pro, you're probably using it wrong.

Yeah, really. I have something in every port, every slot, every bay. Having to move to the new Mac Pro would only add to that mess, likely doubling it.
 
Yeah, really. I have something in every port, every slot, every bay. Having to move to the new Mac Pro would only add to that mess, likely doubling it.

I think that's the problem. It's not that I don't have 10 cables already. It's that just by duplicating the things I currently have in there, without *anything* that I've been eyeing, the number is now up to...16.

It's already a cable management nightmare. Apple going "Oh look, this is going to be sleek and elegant and a small form factor" doesn't make any sense in the context of how I use my machine. It's just going to be a very small cylinder sitting in the middle of a nest of cables.
 
The reason workstations are generally huge is that we want to fit as much of the stuff we use as possible inside one thing.

Currently I have:

  • My tower, Monitors, KB, Mouse
  • Raid Array
  • Time Machine USB3 Drive


Going with a new Mac Pro would need me to have

  • My tower, Monitors, KB, Mouse
  • Raid Array
  • Time Machine USB3 Drive
PLUS to match what I originally had inside my current tower
  • Optical Drive
  • 5 Bay JBOD unit
  • PCI Chassis
  • TB Blackmagic box


That's already four additional things that I will need to buy and have connected and powered just to match my current configuration.

At least 8 additional cables to wrangle.
 
Except CPU.

And some RAID scenarios. (The flash storage is about 40% as fast as what we currently run internal.)

But every workstation box maker has the same limitations in their single Xeon CPU configs.

A single CPU HP, Dell...or even BoXX or some of the other boutique workstations won't have a faster box spec for spec, because they are all limited by the same Intel processors.

None of those other guys run MacOS either.
 
But every workstation box maker has the same limitations in their single Xeon CPU configs.

A single CPU HP, Dell...or even BoXX or some of the other boutique workstations won't have a faster box spec for spec, because they are all limited by the same Intel processors.

None of those other guys run MacOS either.

Ones that offer PCI-E slots would not have the I/O limitations.

They'd also have a dual CPU option.
 
The point is the hardware cost is the least of practically all costs....

That only makes it easier to justify a poor business decision. If you can do the same work on a computer that costs $1 less, then it's the correct business choice to save that $1. Now, if you want to justify spending that extra $1 because your favorite shape is a cylinder, or whatever, fine. Spend your money how you want. But the fact that something is relatively cheap does not justify spending more than you need on it...no amount of words describing your other costs will change that fact.
 
If Apple would open up their license to do so, I would put it on a generic Intel tower and upgrade that at will in a heartbeat. I would severely miss having AppleCare, but that would be offset by not having to hope Apple gave us the things we wanted rather than what consumers wanted.

I would also miss being an Apple customer. I have been one for more than 3 decades. In addition to buying their stuff, I plopped down thousands for stock when it was in the doldrums to show my faith in the company. My main hero is Woz, and when Jobs returned to Apple my response was 'they got the wrong Steve'.

Good point!
I'd also add that for those of us actually working on Macs in large businesses, a Hackintosh is simply not an option. All of our software and hardware needs to be completely and verifiably legit. We get audited semi-regularly.
If you think businesses do not get in trouble for using pirated, semi-legal or gray-area software, just look up Ernie Ball Music Man, a rather large musical instrument manufacturer. They got screwed because of some sloppy software management by their IT staff. It pissed off the owner, Sterling Ball, so much that he dumped Windows, Microsoft and all other mainstream commercial software and went to an all-Linux solution.
Where I work, we'd rather spend way too much money making sure our nose is clean than even entertain the idea of a grey-market solution.
Heck, we have let people go for torrenting or using "workarounds"!
If Apple continues to dilute their pro offerings into prosumer territory (not just talking about the 2013 Mac Pro, but also FCP, Logic, etc.) a lot of us will also jump ship to Linux.
 
Amen, bro. ;)

Sorry for taking things a bit out of context.

No problem.

Not just small, but even mid-sized businesses can feel the impact of significant changes in hardware...

How much of an impact the new MP design will eventually have, compared to changes that happened before, is hard to say right now.

It very much depends on individual requirements.
For me, if I lost the current port-powered FireWire, I'd have a 30k problem.
I admit I'm not familiar with all the hubs etc. that exist, but a simple TB-FW dongle doesn't work with my photo gear.

That's not representative; but tell me again why I should embrace this future of computing, just because some employees at Apple figured that thinking different is so much cooler than industry standards.

Another good point. Unfortunately, I can predict the Apple Attitude, which is that your legacy gear won't be current forever, so you shouldn't whine about such problems...just continue to use your old Mac Pro for the next six months (their claim) until the manufacturer of your legacy gear cranks out a new compatible product.

Of course the fallacies with this attitude are huge: Apple doesn't know where "Legacy Company" is in their lifecycle management, nor the same for our acquisition of that product, nor the budgets for either.

As such, they have no basis to know if their change represents a minor, a major or a critical issue...

...and this isn't a new issue, unfortunately. Apple has been chronically blind to the interests & needs of consumers who actually plan ahead - - for decades. In many cases, businesses who use Apple products use them despite the efforts of Apple. Apple survives on the fickle whims of the consumer market, where they're currently riding the popularity wave of iOS.

...of course, in consumer electronics, this position used to be held by Sony.


-hh
 
Tests using Thunderbolt 1 show that it has no problem reaching 1300 MB/s upstream or downstream. Thunderbolt 2 combines the upstream and downstream channels to allow twice the speed in one direction at a time, at least through the cable.

No. Thunderbolt v2 combines the two segregated v1.0 10Gb/s streams into one stream that is shared. The total aggregate speed of the TB backbone doesn't increase at all. It is just shared differently. That's it. Nothing is faster... just different sharing.


Here is a TB v2 demo at 1200 MB/s.

http://www.youtube.com/watch?feature=player_detailpage&v=3TGVlyEYurQ#t=54s

10Gb/s ---> 1250 MB/s, and an x1 PCI-e v2.0 lane is 500MB/s, so that is about 2.5 v2 lanes' worth of bandwidth, and there is no such thing as a half lane. x4 PCI-e lanes are 2,000 MB/s, or 16Gb/s. If there is no video traffic, the 20Gb/s link will allow the full 16Gb/s to get through, even if there are multiple controllers dumping large amounts of data inbound to the host system at the same time. (Same reason why your ISP's internal network is much faster than the pipes to your place: you can aggregate without congestion if you have fatter pipes on the backbone. http://en.wikipedia.org/wiki/Fat_tree )
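To make the lane arithmetic concrete, the same calculation as a quick Python sketch (PCIe 2.0 at ~500 MB/s per lane, as stated above):

```python
import math

PCIE2_LANE_MB_S = 500               # PCIe 2.0: ~500 MB/s per lane

def lanes_for(gbps):
    """PCIe 2.0 lanes needed for a given link rate (no half lanes exist)."""
    mb_s = gbps * 1000 / 8
    return mb_s / PCIE2_LANE_MB_S, math.ceil(mb_s / PCIE2_LANE_MB_S)

print(lanes_for(10))                # (2.5, 3): 10Gb/s is 2.5 lanes' worth
print(lanes_for(16))                # (4.0, 4): an x4 link (2000 MB/s) = 16Gb/s
```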


This is a good point: upping the cable speed will probably hit PCIe x4 bottlenecks. A PCIe x4 link typically guarantees at least 2GB/s, though.

Once again, the cable speed was not upped. The bandwidth allocation is different, and it is not totally assigned to PCI-e data traffic.
What v2 is going to deliver for PCI-e data traffic is better switching speeds and latency. It isn't particularly going to go much faster unless it strictly stays out of the way of the video that is also on the backbone network. (Data that is all inbound while video is all outbound works; with mixed reads and writes the separation is not so clean.)


If Apple expects Thunderbolt to replace physical PCIe slots, then they need to support the standard better. As an early adopter of Thunderbolt 1 for PCIe cards, I would call the experience user-hostile to set up initially. All 4 of the pricey, top-of-the-line PCIe TB boxes I tried were poorly made, with tolerances that were pretty embarrassing.

That's not Apple's job. It is those companies' job to ship better products.
That has nothing to do with supporting the standard or not.


Their layouts were incredibly poorly thought out: the boxes used up way too much space for what they needed to do, usually having more than 50% empty space, despite having unreasonably limited space for the actual card (such as requiring a custom low-profile PCIe 8-pin power connector, or having parts interfering with heat sinks).

That is typically because they don't know what in the world someone is going to throw into the box, or how the fan/thermal system attached to the card is going to jack up the cooling solution they planned around their own fan. So there is much more open space as an airflow tolerance.

That has nothing to do with the standard and lots to do with the complexity of dealing with a hodgepodge of conflicting cooling solutions.


eGPU support through Windows 8 is literally there by accident, even though it is one of the most demanded Thunderbolt features on Macs.

eGPUs are not what TB is primarily aimed at. You can see that with TB v2, which is largely an adjustment so that it can transmit more GPU output from the host to an LCD screen, not to transmit raw input so that an external GPU can directly drive an LCD screen itself (or so some iGPU can copy frame buffers from remote VRAM).

It can be made to work, but it is hardly a priority because it never was the primary point.



So if Apple wants to push Thunderbolt instead of PCIe slots, they really need to make their own plastic or aluminum rectangle with rounded corners that fits full-length, full-size, double-wide PCIe x16 cards and provides 250W of power with no wasted space,

Not going to happen. If Apple loved making rectangles with slots, they could have just made one.

Look, for Thunderbolt to succeed it has to get picked up by the overall PC market. Apple screwing around, threatening to wipe out peripheral vendors' investment, isn't going to help that ecosystem grow. Apple has staked out a small subset of highly effective or highly necessary (due to possible premature port eviction by Apple) Thunderbolt devices.

Their display docking station is effective because it combines both things that TB does well: video and PCI-e data transfer. Not just one. The TB dongles are necessary because a significant number of folks do use Ethernet jacks and FireWire ports on their laptops, even though Apple is nuking those. (The iMac even drops FW, which is kind of goofy.)
 
Going with a new Mac Pro would need me to have

  • My tower, Monitors, KB, Mouse
  • Raid Array
  • Time Machine USB3 Drive
PLUS to match what I originally had inside my current tower
  • Optical Drive
  • 5 Bay JBOD unit
  • PCI Chassis
  • TB Blackmagic box
...
At least 8 additional cables to wrangle.

Other than likely the lack of enough sales, there is no reason the ODD and JBOD boxes can't be merged: a 6-7 bay box with one or two 5.25" bays slapped on top. The ODD bandwidth is lightweight, and the box already has a power supply for the other 3.5" drives.

Frankly there wasn't going to be an ODD even if Apple had stuck with the same size box.

Not sure what is in the PCI-e chassis, since the Blackmagic box means one card down from the old system.
 
MacVidCards, your repeated personal attacks and fixation on argumentative nonsense indicate a profound disinterest in being impartial and rational about your views. You are someone who clearly starts with a viewpoint and then tries to prove it instead of the other way around.

Sorry, but that's a biased view, because there have also been individuals who have made personal attacks under the premise that his opinions must be grossly biased because he has freely disclosed (yes, that's an ethical thing to do!) that he has a side business selling PCIe video cards.



Regarding the 4 vs 8 slots in terms of expandability, you are trying to compare Apples to Oranges.

True, there are some dissimilarities, but it is nevertheless a fact that four RAM slots are fewer than eight.

The new Mac Pro has 4 32GB-max RAM slots; the old one has ZERO 32GB-max RAM slots.

You're trying to split hairs based on how large of a stick a slot can accommodate, but you need to be careful to identify whether this difference is really a technical factor, or merely an artificial limitation that Apple set in firmware.

So while it is true that the current MP doesn't support the new forthcoming 32GB DIMMs, could this be remedied if Apple were to update the firmware?

Let us not forget just who is responsible for firmware settings: Apple is.

Read that a few times if you don't get it.

I find the above to be offensive due to its very condescending tone & attitude. While it may pedantically not cross the line as a TOS Violation, it undoubtedly creates a hostile environment which is counterproductive to civil discussion. I trust that I have made myself clear.


You are doing the equivalent of saying...

Sorry, your premise fails technically.

And your approach is purposefully disingenuous in how it frames the key performance metric question.

The key performance metric question is quite simply, "How much RAM can I put in this machine?".

Factors which contribute to the answer include:
How many RAM slots;
How large of a DIMM;

The first one is easy because it is a fixed integer - the second part is less so because what is technically feasible can be countered by the firmware settings electively chosen by the manufacturer (Apple)...and can be later changed, too.

"Expandability" is how much room you have. You're basically trying to say "I have eight 5 gallon buckets and you have four 10 gallon buckets therefore you have less room for stuff than I do because 8 > 4!". That is a silly premise.

It is also technically wrong on your part.

If you want a bucket analogy, it is:

Old Mac Pro: 8 buckets which are rated for 5 gallons (by firmware)
New Mac Pro: 4 buckets which are rated for 10 gallons (by firmware)

Note that physically, all the buckets are the same dimensionally.

In a simple world, we could do a "if all other factors are equal" and assume that the permissible loading of all buckets were the same. But because of firmware, we can't.
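To put the analogy in numbers, a minimal sketch (the gallon ratings here are just the analogy's, not real specs):

```python
# The bucket analogy, computed: rated total = buckets x rating.
old_mp = {"buckets": 8, "rating_gal": 5}    # old Mac Pro, per the analogy
new_mp = {"buckets": 4, "rating_gal": 10}   # new Mac Pro, per the analogy

print(old_mp["buckets"] * old_mp["rating_gal"])  # 40 gallons
print(new_mp["buckets"] * new_mp["rating_gal"])  # 40 gallons
# Counting buckets alone (8 > 4) misses the per-bucket firmware rating.
```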

Technically, the underlying question is why has Apple chosen to constrain the maximum size of the DDR3 DIMM in the current Mac Pro?

The facts are that Apple has been historically known to manipulate firmware to settings below what the hardware limits are. As such, the 64GB limit on the Mac Pros was probably a "to protect us from ourselves" rationale: it is known that Mac OS X versions prior to 10.9 Mavericks are unable to utilize more than 96GB RAM due to an operating system limitation.

Okay, but the logical follow-up question is: since 10.9 will remove this limitation, will Apple write & ship a firmware update for the 2012 Mac Pro (at least) to allow it to use larger DIMMs once 10.9 ships?

Place your bets, folks. Personally, I'm not optimistic.


BTW, your summary chart is unfortunately playing "loose" with the facts regarding the storage I/O performance claims of the current Mac Pro - specifically when running with an SSD, for which it claims 300MB/sec. I suggest that you review your claims and consider a revision, so as to reduce the potential for readers to be deceived:

First, there's technically more than one way to skin a cat and do much better. For example, my 2012 is currently benching at ~600MB/sec...and if I had the need, I could bump it to 800+, which represents 65% of Apple's claimed future max performance.

See, not all of us are deceived by the "Up to 2.5x" style marketing hype.

Second, the underlying technical reason why the current MP in stock configuration doesn't compare well is that its controllers were not updated to the current SATA3 standard...and let us note that SATA3 was published as a standard way back in 2009.


The facts of the matter here are that Apple's neglect of the Mac Pro product line resulted in a very stagnant architecture. To then use this stagnant architecture as the basis of comparison against the new Mac Pro may be pedantically 'correct', but it is functionally a disingenuously rigged comparison, because it was Apple who controlled how stagnant the old Mac Pro became.

If we really want a realistic comparison, we are obligated to look beyond just Apple's Mac Pro and seriously consider what's being used within the relevant creative industry fields.

For example, the hardware that Dell provided to Lou Borella (of "We Want a New Mac Pro" Facebook fame) to evaluate was a Dell T7600 w/32 gigs of RAM, two 8-core Sandy Bridge E chips, a 256GB SSD, 3 SATA HDDs and a Quadro 4000 with the Tesla Maximus graphics card.



-hh
 
Lots of well-written, well supported stuff...

-hh

Off topic: but DAMN, that's how you get your point of view across...

Now, back to topic...

I think it should be noted here that from a scientific computing perspective, the people buying Titans are penny-pinching types looking for the cheapest card with a full feature set.

If this is true, and the Titan is indeed significantly better than any other non-Quadro NVIDIA card at CUDA, then I stand, at least partially, corrected.

I still, however, cannot yet understand why a person who was concerned about cost would purchase a Titan for games. It does not seem to be a reasonable investment to me.
 
...
Technically, the underlying question is why has Apple chosen to constrain the maximum size of the DDR3 DIMM in the current Mac Pro?

The facts are that Apple has been historically known to manipulate firmware to settings below what the hardware limits are.

Typically because the limits aren't particularly affordable when Apple releases the machine. So they set the firmware to the configurations they actually test, as opposed to setting firmware to something they never tested (not so professional).

What Apple tends not to do is incrementally iterate on firmware updates. Major bugs? Yes. Bad interactions with 3rd party hardware? Probably yes. But "oh, it has been 1-2 years since our last firmware update, let's ship another"... err, no. Next year's products get higher priority than last year's products in terms of R&D.



As such, the 64GB limit on the Mac Pros was probably a "to protect us from ourselves" rationale: it is known that Mac OS X versions prior to 10.9 Mavericks are unable to utilize more than 96GB RAM due to an operating system limitation.

So in addition to practically no one being able to afford it, OS X didn't support it either. So how exactly does that config get tested... Apple fires up Windows or Linux for configuration testing? Probably not.


Okay, but the logical follow-up question is: since 10.9 will remove this limitation, will Apple write & ship a firmware update for the 2012 Mac Pro (at least) to allow it to use larger DIMMs once 10.9 ships?

Isn't the 10.9 limit 128GB? That is already supported.

" **Mac OS X versions prior to 10.9 Mavericks are unable to utilize more then 96GB RAM due to an operating system limitation. 128GB can be fully utilized by a 2009-2010 Mac Pro if running 10.9 Mavericks or later, Bootcamp with 64-bit versions of Windows XP and later as well as with 64-bit versions of Linux. "
http://eshop.macsales.com/shop/memory/Mac-Pro-Memory#1333-memory

What is more likely to happen: folks buy four super-duper-expensive 32GB DIMMs, or buy 8 16GB DIMMs at about 1/3 to 1/4 the cost?
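To put rough numbers on that question (the DIMM prices below are purely illustrative placeholders, not actual quotes):

```python
# Hypothetical DIMM prices for illustration only -- not actual quotes.
price_32gb, price_16gb = 1000, 160

cost_4x32 = 4 * price_32gb   # 128GB via four 32GB DIMMs  -> $4000
cost_8x16 = 8 * price_16gb   # 128GB via eight 16GB DIMMs -> $1280

print(cost_8x16 / cost_4x32) # 0.32 -- in the 1/3-to-1/4 neighborhood above
```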

Place your bets, folks. Personally, I'm not optimistic.

For the 128GB limit? I'm pretty optimistic. For single package 128GB, no. It was never promised or sold (with it latently available) that way.


Second, the underlying technical reason why the current MP in stock configuration doesn't compare well is that its controllers were not updated to the current SATA3 standard...and let us note that SATA3 was published as a standard way back in 2009.

The basic Mac Pro board was likely designed, and entered feature-complete status, in 2008. There were no major mainboard updates after that. That has as much to do with what users did and did not do as with what Apple did and didn't do. And all the major workstation vendors take their mainboards through a full tick-tock cycle before doing a revision with the socket format update. That isn't an "Apple" thing.


For example, the hardware that Dell provided to Lou Borella (of "We Want a New Mac Pro" Facebook fame) to evaluate was a Dell T7600 w/32 gigs of RAM, two 8-core Sandy Bridge E chips, a 256GB SSD, 3 SATA HDDs and a Quadro 4000 with the Tesla Maximus graphics card.

They are Sandy Bridge EP chips, not E. And in 2011 Dell didn't have squat to give him new either. There are other factors in play other than unilateral control by the system vendors themselves.
 
If this is true, and the Titan is indeed significantly better than any other non-Quadro NVIDIA card at CUDA, then I stand, at least partially, corrected.

It is not better. Just cheaper. There are two phases of doing computational science. First is getting the custom app debugged (no numeric instabilities, optimized to fit the computational function units' quirks, etc.) and running. Second is once it is sent up to "big iron" for a long batch job. You want to send up a correctly running app, so the more testing you can do on a "development" machine that is as close feature-wise, but several orders of magnitude cheaper, the better.

Some folks can't afford "big iron" batch time either, so it is run on the machine they can afford. Again, the Titan is cheaper, not necessarily faster.
 
It is not better. Just cheaper.

By "better" I meant with a higher performance/cost ratio. It seems reasonable to expect that there would be NVIDIA Quadro GPUs that would be faster, but also far more expensive. The main question is if a Titan is significantly faster than say a GTX 680/780 for CUDA to justify costing twice as much.
 
Off topic: but DAMN, that's how you get your point of view across...

Now, back to topic...



If this is true, and the Titan is indeed significantly better than any other non-Quadro NVIDIA card at CUDA, then I stand, at least partially, corrected.

I still, however, cannot yet understand why a person who was concerned about cost would purchase a Titan for games. It does not seem to be a reasonable investment to me.

I sold a Quadro K5000 + Tesla K20x combo after I tested a Titan. One Titan beat the pair of them in CUDA compute.
 
By "better" I meant with a higher performance/cost ratio. It seems reasonable to expect that there would be NVIDIA Quadro GPUs that would be faster,

The Titan has a GK110. That is what is in the K20, not the Quadros.
Features, not performance, is the biggest gap, with the Titan not having ECC and a few other things.

A long-running numerical simulation you probably don't want to run on it, but it is way cheaper and would be useful in getting something ready to run on a K20 cluster.

Media stuff would work fine too, since it's not particularly pressed about errors.
 
(on max DIMM size ratings & firmware support)
Typically because the limits aren't particularly affordable when Apple releases the machine. So they set the firmware to the configurations they actually test, as opposed to setting firmware to something they never tested (not so professional).

Agreed and an understandable business case.

What Apple tends not to do is incrementally iterate on firmware updates. Major bugs? Yes. Bad interactions with 3rd party hardware? Probably yes. But "oh, it has been 1-2 years since our last firmware update, let's ship another"... err, no. Next year's products get higher priority than last year's products in terms of R&D.

Also understandable from the business case... but to a somewhat lesser degree: Apple's published product lifecycle support policy is five (5) years, and in the specific case of the Mac Pro, they know that its customer base is essentially atypical and will push hardware to its technology limits...which includes emerging technology becoming more affordable.

So in addition to practically no one being able to afford it, OS X didn't support it either. So how exactly does that config get tested... Apple fires up Windows or Linux for configuration testing? Probably not.

...and yet because 16GB DIMMs can currently be used in a Mac Pro, the result is a Mac which can go beyond its advertised 64GB 'limit'...and pedantically can reach, today, that same 128GB that is being touted for the Tube. All that's required is a copy of 10.9. With 128GB RAM already achievable in (dual CPU) legacy Mac Pro hardware today, the basis to claim a dramatically improved RAM capacity in the Tube is at best a weak ... to an outright false ... claim.

And yes, the single CPU legacy Mac Pro is presently limited to 64GB by the use of 16GB DIMMs and will need a firmware update from Apple to use 32GB DIMMs in order to also get to 128GB...and once again, this is a technical-vs-business decision for Apple: *if* Apple wants to bother to support legacy hardware.

Precisely as you pointed out, new product is the higher business priority - - which is why I'm cynical about seeing a relevant firmware update for the legacy Mac Pro to facilitate this.


Isn't the 10.9 limit 128GB? That is already supported.

" **Mac OS X versions prior to 10.9 Mavericks are unable to utilize more then 96GB RAM due to an operating system limitation. 128GB can be fully utilized by a 2009-2010 Mac Pro if running 10.9 Mavericks or later, Bootcamp with 64-bit versions of Windows XP and later as well as with 64-bit versions of Linux. "
http://eshop.macsales.com/shop/memory/Mac-Pro-Memory#1333-memory

What is more likely to happen: folks buy four super-duper-expensive 32GB DIMMs, or buy 8 16GB DIMMs at about 1/3 to 1/4 the cost?

It depends on whether they have 8 or 4 slots to have that option.

What is happening today is that these power users with DP CPUs have 8 slots and are thus able to install (6 x 8GB + 2 x 16GB) = 80GB. This keeps them below the current OS X memory limit of 96GB ... and it is precisely because their hardware has 8, not 4, memory slots available that they're able to have this capability while avoiding the higher expense of the 32s.

Of course, the other set of Mac Pro power users, who have the single CPU configuration, do not have this option to go beyond 64GB addressability, even though OS X currently supports up to 96GB...and the basic reason is that Apple's firmware does not support 32GB DIMMs for them.
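A quick check of the 8-slot mix described above:

```python
dimms = [8] * 6 + [16] * 2       # 6 x 8GB + 2 x 16GB across 8 slots
total = sum(dimms)

assert len(dimms) <= 8           # dual-CPU legacy Mac Pro: 8 slots
assert total < 96                # stays below the pre-10.9 OS X ceiling
print(total, "GB")               # 80 GB -- no 32GB DIMMs required
```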

RE: Place your bets, folks. Personally, I'm not optimistic.
For the 128GB limit? I'm pretty optimistic. For single package 128GB, no. It was never promised or sold (with it latently available) that way.

With the way it stands today, only the single CPU versions of the legacy Mac Pro (4 slots) "have to" receive a firmware update in order to install 128GB total. I'm cynical that it will happen, because the business case discussion at Apple will invariably be that these MP customers self-selected themselves as not ... worthy ... (sorry, a hard word choice) of needing that much RAM because they didn't fork over the extra bucks for the dual CPU...and, as you point out, they're yesterday's old customers, not new business.

Unfortunately, what this business case argument misses is that for some power customer use cases, a faster-Hz-despite-fewer-cores machine provides higher productivity than a dual-CPU system with tons of slow cores. FYI, this is particularly the case with Adobe Photoshop, as it does relatively poorly in the multithreaded arena: clock speed trumps cores.

But if you're going to nevertheless be optimistic, the ramification is that technically the single CPU current Mac Pro must receive a firmware update to permit it to use 32GB DIMMs...a pretty easy litmus test. If you're interested in a friendly 'beer bet' on the prospects of Apple coming through with said firmware update, I'll put aside a 750ml Chimay Cent Cinquante that's in my larder for the cause...

The basic Mac Pro board was likely designed, and entered feature-complete status, in 2008. There were no major mainboard updates after that. That has as much to do with what users did and did not do as with what Apple did and didn't do. And all the major workstation vendors take their mainboards through a full tick-tock cycle before doing a revision with the socket format update. That isn't an "Apple" thing.

Unfortunately, this is effectively just illustrating Apple's business case, which rationalized more technological stagnation than was necessary - - and yet even here things weren't completely stagnant on Apple's part, because these "newer but still using the 2008 board" Mac Pro configurations were specifically obstructed from being able to use 10.6 (Snow Leopard).


They are Sandy Bridge EP chips, not E. And in 2011 Dell didn't have squat to give him new either. There are other factors in play other than unilateral control by the system vendors themselves.

Understood, but we also see that other vendors were nevertheless able to provide incremental upgrades around the CPU roadmap, such as SATA3 and USB3, despite Intel...that's evidence that these were technologically possible, available and affordable ... and that it was only a business decision on Apple's part not to adopt any of them in the current (legacy) Mac Pro, not a technological obstruction.


-hh
 
I just checked; to bring my '09 quad up to date, I'd have to:

- run the netkas firmware updater, free
- buy a Xeon W3680, $630 on eBay
- buy a Radeon HD5770, $244 on eBay

So for $874 I get a 2013-spec machine, pretty cool considering I bought mine used in 2011 for $1650.

Suppose I throw in a Mercury Accelsior E2 PCIe SSD (480GB), $735 from OWC, and now I can hang with the big boys for a few more years.

Let's wait and see just how unexpandable the new Mac Pro will be, but I think upgrading a four-year-old machine to current spec is an undeniable advantage that I'd hate to lose.
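Tallying the prices quoted above (plus the optional SSD):

```python
upgrades = {
    "netkas firmware updater": 0,    # free
    "Xeon W3680 (eBay)":       630,
    "Radeon HD5770 (eBay)":    244,
}
base = sum(upgrades.values())
print(base)                          # 874 -- the $874 figure above
print(base + 735)                    # 1609 with the OWC Accelsior E2 added
```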
 
So for $874 I get a 2013-spec machine, pretty cool considering I bought mine used in 2011 for $1650.

Suppose I throw in a Mercury Accelsior E2 PCIe SSD (480GB), $735 from OWC, and now I can hang with the big boys for a few more years.

Let's wait and see just how unexpandable the new Mac Pro will be, but I think upgrading a four-year-old machine to current spec is an undeniable advantage that I'd hate to lose.

My colleague bought a Mercury Accelsior E2 PCIe SSD for his 2009 Mac Pro, and it made his Mac Pro feel like a "brand new" 2013 machine. 4,1 Mac Pros are still adequate Macs. I am also using one.
 