I'm thinking in terms of systems engineering, and how this would affect users and various vendors, not specifically Apple (sorry about any confusion over that). Why is the doom in the article relevant? The article itself points out that the X79 is similar to, but different from, the Patsburg -A, -B, -D, and -T options.
From what I gathered, it appears that Intel intends to release the -A and -B variants with the initial LGA2011 parts (DMI 2.0 only in those chipsets), and the -D and -T variants at a later date. On the surface this may not seem like much, as fewer ports usually translates to a reduced bandwidth requirement (n disks * avg. throughput = max avg. storage bandwidth required), particularly since most, if not all, systems will offer fewer drive bays than the ports available on the -D and -T parts.
With mechanical disks only, this won't be a problem. But if SSDs are tossed into the mix, then it's not unreasonable for users to saturate the DMI 2.0 interface. So this is what I'm focusing on between the PCH variants, not the port counts (Hi, my name is .... and I'm an I/O junkie).
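To put some rough numbers on that, here's a quick back-of-the-envelope sketch using the "n disks * avg. throughput" estimate above. The per-drive figures and the ~2 GB/s DMI 2.0 ceiling are my own ballpark assumptions for 2011-era hardware, not Intel's numbers:

```python
# Back-of-the-envelope check of the "n disks * avg. throughput" estimate
# against the DMI 2.0 uplink. Throughput figures are rough assumptions
# (2011-era drives), not vendor specs.

DMI2_BANDWIDTH_MBPS = 2000  # ~2 GB/s per direction, theoretical peak (assumed)

def required_bandwidth(n_disks, avg_throughput_mbps):
    """n disks * average per-disk throughput = max avg. storage bandwidth required."""
    return n_disks * avg_throughput_mbps

for label, n, per_disk in [
    ("4x mechanical (~130 MB/s each)", 4, 130),
    ("4x SATA 6Gb/s SSD (~500 MB/s each)", 4, 500),
]:
    need = required_bandwidth(n, per_disk)
    print(f"{label}: {need} MB/s needed, "
          f"{'saturates' if need >= DMI2_BANDWIDTH_MBPS else 'fits within'} "
          f"DMI 2.0 (~{DMI2_BANDWIDTH_MBPS} MB/s)")
```

Four mechanical drives barely dent the link, while four decent SATA 6Gb/s SSDs can eat the whole thing on their own.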
I realize that this is and will continue to be an issue (cost reasons), but it seemed to me that by including additional PCIe lanes that could be dedicated to storage (4x of them), Intel was trying to mitigate this issue as best as possible while still keeping costs out of the stratosphere.
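As a rough sketch of the headroom those extra lanes would buy, assuming PCIe 2.0-class signaling at ~500 MB/s effective per lane (again, my assumptions, not confirmed specs for the -D/-T parts):

```python
# Rough headroom estimate if the -D/-T parts really do carry storage traffic
# over a dedicated x4 PCIe link alongside DMI 2.0. Per-lane throughput is an
# assumed figure (~500 MB/s after encoding overhead), not a published spec.

MBPS_PER_LANE = 500          # assumed effective per-lane throughput
DMI2_LANES = 4               # DMI 2.0 is electrically a x4 link
DEDICATED_STORAGE_LANES = 4  # the extra x4 attributed to the -D/-T variants

dmi_only = DMI2_LANES * MBPS_PER_LANE
with_dedicated = dmi_only + DEDICATED_STORAGE_LANES * MBPS_PER_LANE

print(f"DMI 2.0 alone:           ~{dmi_only} MB/s")
print(f"DMI 2.0 + dedicated x4:  ~{with_dedicated} MB/s")
# Four SATA 6Gb/s SSDs (~500 MB/s each) would want ~2000 MB/s on their own,
# which is why the extra lanes matter once SSDs enter the picture.
```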
Granted, I agree with you that Apple probably won't use either the -D or -T variants (they've never offered the RAID versions before), so the example situation I mentioned will be present in an LGA2011 (SB-E5) based MP regardless.
But if the -D and -T parts are delayed, other vendors may suffer a reduction in sales, since the users who can actually utilize those features will have to wait for the right chipset to be offered in a system.
Just a thought anyway...
No, they'll use the Xeon variants in the SP systems. But keep in mind that the only difference is that ECC remains enabled. That's it. The quantity pricing will be the same (clock for clock), and they use the same chipsets (i7 and SP Xeon LGA1366 both use the X58). Apple isn't likely to use the "i7 Extreme" version of the LGA2011 offerings.
Where it may get a bit strange is with the DP systems (so far, there doesn't appear to be a different part, as block diagrams have shown one of the existing announced PCHs per CPU; though I'd be surprised if this is a necessity).
I'm not arguing that an LGA2011-based MP will use the -A, or possibly the -B. I also agree with your reasoning: economics. There are 4 drive sled slots in a Mac Pro. The -A model, with just 4 6Gbps ports, is an extremely good match to that, and it controls cost (which Apple is going to do to keep margins up). On the -A model there are still 8 SATA ports, just only 4 at the 6Gbps speed. The allusion to Computex boards with 14 SATA sockets is a problem for other vendors... Seriously, Apple's design would likely only use half that number at maximum (4 sleds, two external drive slots, 1 "extra"; maybe 2 extra if they're in the reference design and harder to take out than leave in).
For me, it's about getting around the bandwidth limitation for storage in DMI 2.0, and Intel seems to have addressed this in the -D and -T as best they could (while still keeping costs under control on their end).
It wasn't the "C" that got my attention, but C1 (yet another revision, even if minor). I see it as added time is all, and am under the impression that they were pushing it to make the end of Q4 2011 on C0.A 'C' stepping for the CPU isn't particularly surprising either since PCI-e v3 controller is part of at least the package if not the die. v3 testing finalization has slid till Summer-Fall 2011, so adjustments along the way are not necessarily a big problem. Those can be fixes which everyone doing v3 stuff are adjusting to also. PCI-v 3 went final back in November.
In the case of the LGA2011 development, I'm just thinking in terms of the increased complexity (going from 1366 pins to 2011 pins is a ~47% increase in pin count, and as you know, the increase in development time isn't linear).