Yes, except that Apple PR didn't say "Mac Pro" in 2013 ... they said "Something New", and that's the nuance I'm referring to.
No, this is incorrect. Go back to post 51 in this thread for the link outlining the actual facts.
Spinning Tim Cook's bungled attempt to push the info out is just spreading FUD at this point.
Wasn't really what I was thinking of. What I was thinking of was a single-CPU based "Lowest Common Denominator" as a building block so that if a particular customer's workflow process required 47 cores, he could buy them in multiples of, say, 8...so he would stack six modular boxes ... 6 x 8 = 48.
Like the Mac mini? Already on the market. It is multiples of 2 or 4 now, but it wouldn't be surprising if it's all 4s when the next revisions roll out on Ivy Bridge.
If it is cores without a large amount of memory per core, Apple's new design efforts should be trying to incorporate cards like the following:
http://www.cpu-world.com/news_2012/2012080201_Details_of_Intel_Xeon_Phi_coprocessors.html
Modular expansion could be done with x8-x16 PCI-e lanes for this kind of scaling; the rough math is sketched below.
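A back-of-the-envelope sketch of that scaling math. The 8-cores-per-module figure and the ~500 MB/s-per-lane PCIe 2.0 rate are just illustrative assumptions for this thread's example, not anything Apple has announced:

```python
import math

# Hypothetical modular build-out: all figures are assumptions for illustration.
CORES_PER_MODULE = 8    # assumed "lowest common denominator" box
PCIE2_LANE_MBPS = 500   # ~500 MB/s per PCIe 2.0 lane, each direction

def modules_needed(required_cores: int) -> int:
    """Smallest number of fixed-size modules covering the requested core count."""
    return math.ceil(required_cores / CORES_PER_MODULE)

def interconnect_mbps(lanes: int) -> int:
    """Approximate one-direction bandwidth of the PCIe link joining modules."""
    return lanes * PCIE2_LANE_MBPS

# The 47-core example from the post: 6 modules, 6 x 8 = 48 cores.
print(modules_needed(47))      # -> 6
print(interconnect_mbps(8))    # x8 link  -> ~4000 MB/s
print(interconnect_mbps(16))   # x16 link -> ~8000 MB/s
```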
Yes, this does raise a technical question of how to get the interconnects to talk to each other fast enough, but the payoff for Apple is that it collapses the cost of the Mac Pro product line down to fewer hardware configurations to design, build & support.
Not really. If you have to invent a new interconnect, that raises costs. If you have to significantly raise costs to get to a simplification, it isn't necessarily true that you have lowered overall costs.
Effectively, Apple has already done a config simplification. That's why there is a CPU/core support chip/RAM daughter card in the current models. Those simply leverage PCI-e as the interconnect, just like all of the other designs without daughter cards. No "new" interconnect technology needed.
Understood, but my point is that since it is already a separate controller chip, the incremental cost to Apple of upgunning the FW800 to (higher) shouldn't have been all that huge, and the specialty-for-Apple aftermarket would have followed.
No. It isn't that one-sided an equation. It is an additional controller chip and design costs for the peripheral vendors. It isn't a viable strategy if only 2-3 "only for Mac" aftermarket vendors followed. That isn't sufficient.
In terms of units deployed, there are many more than just FW800 external HDDs out there.
Yes, they're all external HDDs, but the alternative before TB was eSATA, and TB still isn't quite there yet for being cost-competitive.
Whether Thunderbolt is successful or not doesn't depend at all on the external single (or even dual) HDD market that eSATA and FW800 enabled. That isn't its primary usage, and that is the primary driver of why it isn't "cost competitive" (it is being used for something it wasn't particularly designed for).
Sure, but TB is still not yet on the Mac Pro. Consumers don't care about the technical challenges or reasons why that is so.
Being deployed to the Mac Pro is also largely irrelevant to Thunderbolt's eventual success or failure. TB solves problems workstations like the Mac Pro don't have: PCI-e expansion and multiple video output streams. Those are largely a non-issue for that class of machine.
Thunderbolt is oriented toward machines which have embedded graphics on the motherboard. That is not the standard workstation-class architecture. It is also oriented toward machines with little to limited (e.g., ExpressCard) expansion. Again, that is not standard workstation-class architecture.
There is some push for TB on a Mac Pro to make it "consistent" with the other Macs. However, the Mac Pro (and workstations like it) are not going to make TB viable. It is the other models it is actually targeted at that will assure that.
Okay, but that's (a) using a RAID0, and (b) a Velociraptor isn't a 3.5" disk, but a precursor to the 2.5" SSD age.
2.5" is only neccesary in that is where the higher densities are deployed earlier. With the 3.5" drives there is always the temptation to just use more platters to crank up the storage capacity. So they trail on densities.
As the densities go up, the peak sequential transfer bandwidth goes up also.
"Slower than SATA II" is not a rule of thumb that is going to hold up as the densities increase; see the back-of-the-envelope sketch below. Increasing the densities is the only way HDDs survive against the SSD onslaught, so they probably will keep increasing.
Frankly, I don't really expect the Mac Pro to get an SSD as standard unless there are other major changes in store, such as to the form factor, etc.
There is really no need for a major form factor change for that. The XServe had substantially less internal volume and offered SSDs as an option. Apple's mSATA card derivative could be incorporated relatively easily.
Too late for the 2012 model ... the window for these sorts of extra goodies closed the day it shipped.
Immaterial. The 2012 model is obviously a placeholder. Whether Apple wants to tackle the 3.5"-to-2.5" adapter market is for them to address. There is almost nothing more to be discerned about Apple's future direction from the 2012 model than from the 2009 models.
And an iMac with a Promise array prices out to roughly the same as a Mac Pro + Apple LCD + internal HDDs. Doesn't make it compelling at all.
A Promise array incorporates a RAID card. A JBOD box would be significantly cheaper, because all it needs to enclose is a straightforward, standard SATA III controller.
Given how Apple gave this away with the one XServe RAID model, I don't see them keeping such a "stack" as an in-house product when it is just a bunch of hard drives. There needs to be a brain inside that box for Apple to want to keep it as an Apple product.
There is no "brain" in the Thunderbolt display. don't think they are going to dump that as a product any time soon. No "brain" in the Ethernet or Firewire TB dongles either.
Pushing the drive sleds out to another box with another set of fans & a power supply is just a direct-attach peripheral.
The XServe RAID was a substantially different product. It was an independent system and not direct attached: not a simple peripheral, but a system unto itself. So yes, it needed a "brain".