Can someone please direct me to the cMP (5,1) I/O architecture schematic I've seen somewhere on this site? It will help me calculate the maximum I/O throughput ceilings defined by the architecture itself, rather than by the visible I/O interfaces (FireWire ports, SATA interfaces and PCIe slots), which I falsely assumed in the early days of my cMP ownership were a 'true' reflection of the headroom available for concurrent data transfers.
To better explain: until I first saw this diagram/schematic, I counted PCIe x16 (x2) = 32 lanes plus PCIe x4 (x2) = 8 lanes, and thus believed my total PCIe lane count was 40 lanes. The architecture diagram is how I discovered that slots 3 and 4 are shared, and therefore the ceiling for PCIe saturation is in effect 36 lanes.
(Testing Theoretical Max 1) 36 lanes should max out at roughly 13,500-14,000 MB/s.
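To show my working, here's the back-of-the-envelope arithmetic behind Theoretical Max 1. The per-lane figure is an assumption (roughly what a PCIe 2.0 lane delivers after protocol overhead), not something I've measured:

```python
# Rough sketch of the PCIe lane arithmetic (assumed figures, not measurements).
# PCIe 2.0 is 5 GT/s per lane with 8b/10b encoding = 500 MB/s raw,
# so roughly 375-400 MB/s usable per lane after overhead.

EFFECTIVE_MB_PER_LANE = 385  # assumed real-world per-lane figure

# Slot widths as I now understand them from the architecture diagram:
# two x16 slots, plus one x4 link shared between slots 3 and 4.
slots = {"slot1": 16, "slot2": 16, "slots3_and_4_shared": 4}

total_lanes = sum(slots.values())
ceiling_mb_s = total_lanes * EFFECTIVE_MB_PER_LANE

print(f"usable lanes:   {total_lanes}")        # 36
print(f"approx ceiling: {ceiling_mb_s} MB/s")  # ~13,860 MB/s
```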
I also read a thread a while ago saying that if I plug 6x Samsung EVO SATA SSDs into the 6 direct-connect SATA II ports, I may not be able to achieve the 1,500 MB/s transfer speeds I'm expecting* because of architecture constraints.
*(Testing Theoretical Max 2) Based on a basic RAID 0 setup across 6 SSDs, all comfortably reading or writing in parallel at 250 MB/s each, for a total of 1,500 MB/s.
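Here's the same sanity check for Theoretical Max 2. The RAID 0 arithmetic is straight from above; the shared-uplink figure is a placeholder assumption to illustrate the kind of architecture constraint the thread was describing, not a confirmed cMP 5,1 spec:

```python
# Theoretical Max 2: six SATA II SSDs striped in RAID 0.
SSD_COUNT = 6
PER_SSD_MB_S = 250          # what each EVO comfortably sustains on SATA II
ASSUMED_UPLINK_MB_S = 1000  # hypothetical shared link between the SATA
                            # controller and the rest of the chipset

raid0_on_paper = SSD_COUNT * PER_SSD_MB_S        # 1,500 MB/s
likely_ceiling = min(raid0_on_paper, ASSUMED_UPLINK_MB_S)

print(f"RAID 0 on paper: {raid0_on_paper} MB/s")
print(f"likely ceiling:  {likely_ceiling} MB/s (if the uplink really is shared)")
```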
Lastly, through some of my own testing, I discovered that 4 external OWC FireWire HDDs connected to the 4 FW ports on my cMP failed to reach anywhere near the 400 MB/s throughput I expected from a RAID 0 setup. I suspect the reason is that the 4 discrete FW ports share channels at the architecture level, which is why the average transfer speed is only about 180 MB/s.
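And the same arithmetic for the FireWire case. The per-drive figure is what I expected each FW800 enclosure to push, and the shared-bus count is purely my guess at what the architecture diagram would show, which is exactly why I'd like to see the schematic:

```python
# Theoretical Max 3: four FW800 drives in RAID 0 vs. a shared-bus ceiling.
FW_PORTS = 4
PER_DRIVE_MB_S = 100        # what I expected from each FW800 drive
ASSUMED_SHARED_BUSES = 2    # hypothetical: ports sharing channels upstream
PER_BUS_MB_S = 90           # approximate real-world FireWire 800 bus ceiling

naive_expectation = FW_PORTS * PER_DRIVE_MB_S          # 400 MB/s
shared_ceiling = ASSUMED_SHARED_BUSES * PER_BUS_MB_S   # ~180 MB/s

print(f"expected (4 independent ports): {naive_expectation} MB/s")
print(f"ceiling if ports share buses:   {shared_ceiling} MB/s")
```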