Why is the doom in the article relevant? The article itself points out that the X79 is similar to, but different from, the Patsburg -A, -B, -D, and -T options.
I'm thinking in terms of Systems Engineering, and how this would affect users/various vendors, not specifically Apple (sorry about any confusion over that).

From what I gathered, it appears that Intel intends to release the -A and -B variants with the initial LGA2011 parts (DMI 2.0 only in those chipsets), and the -D and -T variants at a later date. On the surface this may not seem like much, as fewer ports usually translates to reduced bandwidth requirements (n disks * avg. throughput per disk = max avg. storage bandwidth required), particularly since most, if not all, systems will offer fewer bays than the ports available on the -D and -T parts.

Using mechanical drives only, this won't be a problem. But if SSDs are tossed into the mix, then it's not unreasonable for users to saturate the DMI 2.0 interface. So this is what I'm focusing on between the PCH variants, not the port counts (Hi, my name is .... and I'm an I/O junkie :p).
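
To put rough numbers on it, here's a quick back-of-the-envelope Python sketch; the ~2000 MB/s figure for DMI 2.0 and the per-drive throughputs are my own ballpark assumptions, not anything from a spec sheet:

# How many drives does it take to saturate DMI 2.0?
# Assumptions (mine): DMI 2.0 behaves like a x4 PCIe 2.0 link at roughly
# 2000 MB/s per direction; a SATA 6Gb/s SSD sustains ~500 MB/s; a mechanical
# drive sustains ~150 MB/s.

DMI2_MBPS = 2000   # approximate usable bandwidth per direction
SSD_MBPS = 500     # assumed sustained throughput per SSD
HDD_MBPS = 150     # assumed sustained throughput per mechanical drive

def drives_to_saturate(per_drive_mbps, link_mbps=DMI2_MBPS):
    # n disks * avg. throughput = required bandwidth; solve for n at the link limit
    return link_mbps / per_drive_mbps

print(drives_to_saturate(SSD_MBPS))   # ~4 SSDs fill DMI 2.0
print(drives_to_saturate(HDD_MBPS))   # ~13 mechanical drives to do the same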

I realize that this is and will continue to be an issue (cost reasons), but it seemed to me that by including additional PCIe lanes that could be dedicated to storage (4x of them), Intel was trying to mitigate this issue as best as possible while still keeping the costs out of the stratosphere.

Granted, I agree with you that Apple probably won't use either the -D or -T variants (they've never offered the RAID versions before), so the example situation I mentioned will be present in an LGA2011 (SB-E5) based MP regardless.

But if the -D and -T parts are delayed, other vendors may suffer a reduction in sales, since the users who can actually utilize those features will have to wait for the right chipset to be offered in a system.

Just a thought anyway...

Apple isn't likely to use the "i7 Extreme" version of LGA2011 offerings.
No, they'll use the Xeon variants in the SP systems. But keep in mind that the only difference is ECC remains enabled. That's it. The quantity pricing will be the same (clock for clock), and they use the same chipsets (the i7 and SP Xeon LGA1366 both use the X58).

Where it may get a bit strange is with the DP systems (so far, there doesn't appear to be a different part, as block diagrams have shown 1x of the existing announced PCHs per CPU; though I'd be surprised if this is a necessity).

There are 4 drive sled slots in a Mac Pro. The -A model, with just 4 6Gbps ports, is an extremely good match to that and it controls cost (which Apple is going to do to keep margins up). On the -A model there are still 8 SATA ports, just only 4 at the 6Gbps speed. The allusion to Computex boards with 14 SATA sockets is a problem for other vendors... seriously, Apple's design would likely only use half that number at most (4 sleds, two external drive slots, 1 "extra". Maybe 2 extra if they're in the reference design and are harder to take out than leave in).
I'm not disputing that an LGA2011 based MP will use the -A, or possibly the -B. I also agree with your reasoning - economics.

For me, it's about getting around the bandwidth limitation for storage in DMI 2.0, and Intel seems to have addressed this in the -D and -T as best as they could (while still keeping costs under control on their end).
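
Roughly what that dedicated uplink buys you (a quick sketch; the per-lane figures are approximations I'm assuming, not vendor numbers):

# Rough comparison of the two storage paths (approximate per-lane figures):
PCIE2_LANE_MBPS = 500    # ~500 MB/s per PCIe 2.0 lane, per direction
PCIE3_LANE_MBPS = 1000   # ~1000 MB/s per PCIe 3.0 lane, per direction

dmi2_path = 4 * PCIE2_LANE_MBPS          # DMI 2.0 is roughly a x4 Gen 2 link, shared by everything on the PCH
dt_storage_uplink = 4 * PCIE3_LANE_MBPS  # extra x4 Gen 3 uplink on the -D/-T parts, dedicated to storage

print(dmi2_path)           # ~2000 MB/s
print(dt_storage_uplink)   # ~4000 MB/s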

A 'C' stepping for the CPU isn't particularly surprising either, since the PCI-e v3 controller is part of at least the package, if not the die. v3 testing finalization has slid until Summer-Fall 2011, so adjustments along the way are not necessarily a big problem. Those can be fixes that everyone doing v3 work is adjusting to as well. PCI-e v3 went final back in November.
It wasn't the "C" that got my attention, but C1 (yet another revision, even if minor). I see it as added time is all, and am under the impression that they were pushing it to make the end of Q4 2011 on C0.

In the case of the LGA2011 development, I'm just thinking in terms of the increased complexity (gone from 1366 pins to 2011 pins, a ~47% increase in pin count, and as you know, the increase in development time isn't linear).
 
I realize that this is and will continue to be an issue (cost reasons), but it seemed to me that by including additional PCIe lanes that could be dedicated to storage (4x of them), Intel was trying to mitigate this issue as best as possible while still keeping the costs out of the stratosphere.

-D and -T will probably most often be tied up with dual package offerings. Remember the PCI-e lanes come out of the 'CPU' package (at this point it is far more than a CPU, since it has both the memory controller and the high bandwidth PCI-e controller in the package).

So with two packages you have a glut of PCI-e lanes. Lashing a couple from one of the pair to the Patsburg chip is still going to leave more lanes than there is probably physical room for slots in the box (unless all the slots are made 8x). One of the two QPI connections is partially used to help move the data coming into one package over to its pair.


In the SP version, it is a toss-up. Do you blow away the bandwidth for one (or more) of your PCI-e card slots in order to hook up the built-in controller, or do you keep it open? If you were just going to put an affordable RAID controller in there anyway, then sure. If you need more network I/O and GPGPU compute time, then no.

Again, on the Mac Pro, since it will probably still only have 4 (or fewer) PCI-e slots, this probably isn't an issue. One package, in both the single and double setups, will probably be connected to the physical PCI-e slots.


No, they'll use the Xeon variants in the SP systems. But keep in mind that the only difference is ECC remains enabled. That's it.

In the SP variants the second QPI link is disabled also. :) I think the "extreme i7" will have both QPI links disabled. There are a few other small things, but yeah, ECC is one of the bigger ones.


Where it may get a bit strange is with the DP systems (so far, there doesn't appear to be a different part, as block diagrams have shown 1x of the existing announced PCHs per CPU; though I'd be surprised if this is a necessity).

It is not per CPU... the DMI only goes to one. The second package needs to pull through the QPI link if need be (and its DMI link is unconnected). If you only have one QPI link you take a hit. If you have two QPI links, you get lower average latency due to the higher bandwidth.

For me, it's about getting around the bandwidth limitation for storage in DMI 2.0, and Intel seems to have addressed this in the -D and -T as best as they could (while still keeping costs under control on their end).

It makes sense. It's really taking what used to be on a PCI-e link and putting it into the "I/O controller". If you just hook it back up to PCI-e, you still don't have the same bandwidth to the high speed PCI-e controller.


In the case of the LGA2011 development, I'm just thinking in terms of the increased complexity (gone from 1366 pins to 2011 pins, a ~47% increase in pin count, and as you know, the increase in development time isn't linear).

It is kind of hard to have PCI-e lanes without pins to hook them to. :)
In the overall system, the pin count didn't necessarily go up. The old "high bandwidth PCI-e controller and interface to the Southbridge" role got sucked into the CPU package. That means all of those pins got sucked into the CPU package too. There are fewer components on the board, but they still have to hook up to lots of other things.

With the ability to have 2 16x PCI-e v3 slots in the box and a couple of 8x slots, folks are going to be able to build some heavy I/O iron with these. Like dual package systems with two top end InfiniBand cards and four higher end 8x RAID cards, and not really break a sweat. Apple isn't going to sell one of those though.
 
If they up the base model to more powerful specs, I think more people would be willing to buy it. For $2,499 and those specs the Cinema Display should be included... I think that would make it a hot seller. $2,499 and no display... I think for many it's no deal. I've said it before: I would love to have a Mac Pro desktop, but I need to see more heavy hitting hardware in that $2,499 range to buy it. Hoping for the update.
 
-D and -T will probably most often be tied up with dual package offerings. Remember the PCI-e lanes come out of the 'CPU' package (at this point it is far more than a CPU, since it has both the memory controller and the high bandwidth PCI-e controller in the package).
I'm not talking about the PCIe 3.0 lanes on the CPU die (I've not seen anything on it being a separate die in a single package), but rather the PCIe lanes in the PCH itself (8x Gen 2.0 on all, 4x Gen 3.0 on -D & -T).

The 8x PCIe Gen 2.0 lanes are there for add-on peripheral components (PCI is finally gone), but the -D & -T versions get an additional 4x lanes of PCIe Gen 3.0 to handle the additional bandwidth generated by the extra SATA/SAS ports/RAID functions. Take a look at the following block diagram.

It's a bit small, but it's readable; it shows a DP configuration using 1x PCH. It has the SKU differences on the PCH too.

As for the -D and -T variants only showing up in DP systems, I see a market point for those in SP systems as well. Think of an SP workstation with high I/O requirements (e.g. Photoshop and some animation; DP is either not needed or out of budget), or a modest server that one of these particular PCH variants could solve more cheaply than an -A or -B with 3rd party RAID cards (say hardware RAID 1, and no more than 6 cores required, as VM is either not present or instance counts are kept low). I keep thinking of independents to SMBs here BTW, not large scale enterprise organizations where DP+ systems running high VM instance counts are almost certainly deployed.

So with two packages you have a glut of PCI-e lanes. Lashing a couple from one of the pair to the Patsburg chip is still going to leave more lanes than there is probably physical room for slots in the box (unless all the slots are made 8x). One of the two QPI connections is partially used to help move the data coming into one package over to its pair.
The wording in this bit is a bit confusing to me. So to be clear, do you mean pair = CPU pair in a DP configuration?

If so, then absolutely. PCH data (with 1x PCH tied to CPU A, for example) is passed to CPU B over QPI when needed (more attractive financially as well).

But from what I understand, it's also possible to run 2x PCHs (1x PCH per CPU). I don't expect this to make it into many board designs for cost reasons, but it nevertheless seems possible from publicly available information (I no longer have access to a Developer Account with Intel :().

There are instances where a lot of SATA/SAS ports and 4x Ethernet controllers could be useful (thinking of something like a 24 bay Norco case used for a SAN built off of Linux or OpenSolaris running RAID-Z1/2). But it's just as easy to accomplish the same thing via inexpensive non-RAID HBAs in slots, so I don't expect to see such a system.

As for physical slots in a DP system, it should be possible to stuff 76 lanes of PCIe Gen 3.0 worth of slots on a board, particularly in an E-ATX/SSI-EEB format (with 4x lanes going to the -D or -T variant of the PCH). For example, 6 slots in a 16/16/16/16/8/4 or 7 slots in a 16/16/16/8/8/8/4 configuration (potential length issues for full size cards notwithstanding).
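
A quick sanity check of that arithmetic (assuming the commonly quoted 40 Gen 3.0 lanes per LGA2011 package):

# DP lane budget: 2 packages minus the x4 uplink to a -D or -T PCH
lanes_per_cpu = 40
pch_uplink = 4

slot_lanes = 2 * lanes_per_cpu - pch_uplink
print(slot_lanes)                               # 76 lanes left over for physical slots

for layout in ([16, 16, 16, 16, 8, 4], [16, 16, 16, 8, 8, 8, 4]):
    print(layout, sum(layout) == slot_lanes)    # both example layouts add up to exactly 76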

In the SP version, it is a toss-up. Do you blow away the bandwidth for one (or more) of your PCI-e card slots in order to hook up the built-in controller, or do you keep it open? If you were just going to put an affordable RAID controller in there anyway, then sure. If you need more network I/O and GPGPU compute time, then no.
Even when 4x PCIe Gen 3.0 lanes are attached to the -D or -T variants, that still leaves 36 lanes available for slot configurations, which is what we currently have on LGA1366 systems. Not exactly peanuts, as it were... ;)

Again, on the Mac Pro, since it will probably still only have 4 (or fewer) PCI-e slots, this probably isn't an issue. One package, in both the single and double setups, will probably be connected to the physical PCI-e slots.
I don't doubt this.

In the SP variants the second QPI link is disabled also. :)
True, but the second QPI was disabled in both SP LGA1366 series (i7 & Xeon). So I left that out. :p

I think the "extreme i7" will have the both QPI links disabled. there are a few small things but yeah ECC is one of the bigger ones.
As for the SP versions of LGA2011, I'd be amazed if they didn't disable unused QPI channels. As for the rest of it, I'm trying to keep it simple so we don't lose other members. ;)

It is not per CPU... the DMI only goes to one. The second package needs to pull through the QPI link if need be (and its DMI link is unconnected). If you only have one QPI link you take a hit. If you have two QPI links, you get lower average latency due to the higher bandwidth.
I see your point, and agree.

Where the confusion came in for me is that I've seen block diagrams both ways (1x PCH for 2x CPUs, and 1x PCH per CPU). So I wasn't sure at the time if one was incorrect, or if both scenarios are possible. For financial reasons however, I expect a 1x-per-CPU board won't be produced (the cost/benefit doesn't work out well; better to use the lanes for PCIe slot configurations in this case).

It is kind of hard to have PCI-e lanes without pins to hook them to. :)
In the overall system, the pin count didn't necessarily go up. The old "high bandwidth PCI-e controller and interface to the Southbridge" role got sucked into the CPU package. That means all of those pins got sucked into the CPU package too. There are fewer components on the board, but they still have to hook up to lots of other things.
I realize this, but 40x lanes only accounts for 160 physical pins for data (4x wires per lane = 2x differential pairs). Obviously there are a few more signals to account for (SMBus, JTAG, DMI, the 4th memory channel, ...), and it all eventually consumes the pins.
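
For anyone following along, the arithmetic looks like this (quick sketch; signal pins only, ignoring power and ground):

# Signal pins consumed by the PCIe data lanes alone, vs. the overall socket growth
lanes = 40
wires_per_lane = 4                        # 2 differential pairs (1 TX + 1 RX) per lane
print(lanes * wires_per_lane)             # 160 data pins -- a small slice of the 2011-pin package

print(round((2011 - 1366) / 1366 * 100))  # ~47% more pins going from LGA1366 to LGA2011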

What I've been thinking about is the internal complexity of the CPU itself (increased complexity in the various controllers + new controllers never previously on a CPU die = increased transistor count = increased complexity = increased time to perform debugging). This is why the hints of another possible delay seem realistic to me, is all (better to get it right the first time than rush it out IMO, as the fallout could be disastrous).

With the ability to have 2 16x PCI-e v3 slots in the box and a couple of 8x slots, folks are going to be able to build some heavy I/O iron with these. Like dual package systems with two top end InfiniBand cards and four higher end 8x RAID cards, and not really break a sweat. Apple isn't going to sell one of those though.
Unfortunately I wouldn't expect Apple to produce such a system either.

36 usable lanes in an SP, or in their implementation of a DP, isn't exactly horrendous though (it's the same usable lane count as in LGA1366 based MPs). ;) Not that some MP users wouldn't like a few more, but at least it's not like what would happen if Apple decided to use an SB-E3 (LGA1155 based Xeon) + C205 chipset. :eek: :p
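
And even with the same lane count, the aggregate slot bandwidth roughly doubles going from Gen 2.0 to Gen 3.0 (rough sketch; per-lane figures are approximations I'm assuming):

# Same 36 usable slot lanes either way, but each Gen 3 lane moves roughly
# twice what a Gen 2 lane does
print(36 * 500)    # LGA1366 / X58 era: ~18000 MB/s aggregate slot bandwidth
print(36 * 1000)   # LGA2011 / SB-E:    ~36000 MB/s with the same lane count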
 
The iMac I bought is doing the job, but I'm having to run an external drive caddy with my 2TB data drive in it.

I just want the new Mac Pros already!
 