I have a nMP coming soon and would love to know the best way to use both Ethernet ports. I do not have a switch or a router that supports an aggregated setup.

You would need something more than just the nMP and a switch to get utility out of two ports.

If you have fewer than two other systems trying to connect, then an aggregating and/or VLAN-capable switch probably isn't needed.

Generally the second port is useful for some internal, LAN-only traffic. If you don't often have 1 Gb/s worth of that kind of traffic, then you're not going to get much leverage out of the second port. Whether you wrap that traffic in an aggregated trunk/bundle, a VLAN, or any other "connection mechanism" is not particularly material if the raw bandwidth need isn't there.


The nMP doesn't have lots of bulk storage capacity. If you're going to a NAS solution for that, then the second port could be assigned to an independent, local NAS data-traffic network: either just the NAS box and the nMP, or perhaps another switch, the NAS box, and multiple Macs.

Two examples:

nMP ( port 1: current router port 2: NAS box )
NAS box ( port 1 : nMP )


nMP ( port 1: current router port 2 : storage_switch )
NAS box ( port 1 : storage switch )
MBP ( port 1 : storage switch ) [ wifi on router's wifi network ]


You need to do some minor configuration of the "storage network" so it isn't using the same LAN addressing, but there are no managed switches anywhere in sight. Keeping your storage network clear of general internet traffic ( Netflix, VOIP, streaming audio, flash ad videos, etc. ) should deliver more consistent and better bandwidth to the storage data.
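If you'd rather script that addressing than click through System Preferences > Network, OS X's networksetup can do it from Terminal. A minimal sketch, assuming the second port shows up as a network service named "Ethernet 2" and picking an arbitrary 10.0.99.x range for the storage side (the service name and the addresses are placeholders, not anything from this thread):

networksetup -setmanual "Ethernet 2" 10.0.99.2 255.255.255.0 10.0.99.1

Give the NAS box (or the other Mac) a static address in the same 10.0.99.x range on its storage-facing port and leave port 1 on DHCP from the normal router. The only requirement is that the storage network uses an address range different from whatever the router hands out.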

Similarly, the "NAS box" could just be another Mac that is tasked with file-sharing duties (e.g., an old MP).


You can step up to a single switch with VLAN and/or aggregating capabilities later if you get into the zone of more than a handful of devices on your storage network.

But unless you have 3 or more nodes in a highly trafficked network, there isn't much leverage for the 2nd port.
 
...
My son wanted me to make the iMac read from Titan at the same rate as the 8- and 4-core MPs could. I told him to replace the iMac with a MP. The poor olde iMac has but one ethernet port...

One port out of the box. For 2011+ models, adding another via a docking station (or a dongle) isn't hard.



Now all we need is for Apple to develop TCP/IP over Thunderbolt.... (ha ha).

They already did. What they need to do is optimize it and get the bugs out. It doesn't provide consistent throughput. (It somewhat looks like a summer intern project that worked in the context of getting something running, but isn't a "polished for production" utility.)

A switch-less, point-to-point, low-node-count network has some potential (e.g., three nMPs in a small workgroup cluster), but you can do the same now with the two ports on a nMP, just at lower bandwidth but higher consistency.
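For anyone who wants to poke at it: on Mavericks and later the IP-over-Thunderbolt link shows up as a "Thunderbolt Bridge" network service, so a quick way to check whether a machine exposes it is simply to list the services (just a check, not a tuning guide):

networksetup -listallnetworkservices

If "Thunderbolt Bridge" appears in the list, it can be given an address like any other interface for a point-to-point test between two machines.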
 
No, this is not factual. This has been discussed multiple times on this forum.

Do you think both ethernet ports are simultaneously used for a single file transmission? No. Each TCP transmission occurs using a single ethernet pair (send/receive); it doesn't split them up. If you get lucky, you can have 2 separate TCP transmissions using 2 ethernet pairs simultaneously--but this is not the same as a 2-gig pipe. A file sent, say, by ftp will only get about 120MB/s maximum speed. You won't be getting 240MB/s for a file send.

http://www.cisco.com/c/en/us/td/docs/ios/12_2sb/feature/guide/sbcelacp.html#wp1053763
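If you want to see that per-flow ceiling without disks getting in the way, iperf is a simple cross-check. A sketch, assuming iperf3 is installed on each machine (e.g. via Homebrew) and that the server is reachable as titan.local (a placeholder hostname):

On the server:  iperf3 -s
On client 1:    iperf3 -c titan.local
On client 2:    iperf3 -c titan.local

A single client tops out around 940 Mbit/s on gigabit ethernet. Running both clients at the same time against a working LAG can push the aggregate toward 2 Gbit/s if the hash puts them on different member links, but neither individual flow will exceed roughly 1 Gbit/s.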

Hi: See my post above. Thanks. :)
 
Hi: See my post above. Thanks. :)

Port 1: A Router line from a 1TB TC's LAN port.
Port 2: Connected to a 27" iMac named Saturn
Ports 9 & 10: connected to MP6,1 named Titan (these were bonded to aggregate the line data speeds and for failover).
Ports 12 & 13: connected to 8-core MP named Mercury (these were bonded to aggregate the line data speeds and for failover).
Ports 15 & 16: connected to 4-core MP named BAZ (these were bonded to aggregate the line data speeds and for failover).

I tested as follows

1) Created a large file named Test_file on Titan's internal 1TB PCIe-based flash storage
2) Setup File sharing on Titan so that Test_file could be accessed by other Macs
3) On each Mac (Saturn, Titan, Mercury and BAZ) I performed a 'time cksum Test_file' at same time

Using Activity Monitor on each Mac I observed the Network's Data received/sec and Data sent/sec

On Titan the Data sent was a steady 380 MBytes/sec
On Mercury the Data received was a steady 145 Mbytes/sec
On BAZ the Data received was a steady 155 Mbytes/sec
On Saturn the Data received was a steady 60 Mbytes/sec

Wow, that's amazing! From 2 x 1gigE ports (Ports 9 & 10) on a MP6,1 named Titan you got 380MBytes/sec!

380MB/s * 8bits/byte = 3040 Mbits/sec -- that's well over 3Gbits/sec!

That is so good, some people might think there is an error somewhere.

(FYI--I admit to being a human, and thus make mistakes and errors and realize life is a learning process and we are all living that same process.)

You might retry something simpler on the clients like:
dd bs=1000000 count=1000 if=/dev/zero of=/Volumes/TITANSHAREVOL/zerosfile.client1

That outputs things like:
1000+0 records in
1000+0 records out
1000000000 bytes transferred in 13.469611 secs (74241194 bytes/sec)

It writes zeros to an output file which should be on TITAN.
 
Wow, that's amazing! From 2 x 1gigE ports (Ports 9 & 10) on a MP6,1 named Titan you got 380MBytes/sec!

Why would Titan use the 1GbE ports to access a file on its own internal drive ????????

The experimental set up is:

I tested as follows

1) Created a large file named Test_file on Titan's internal 1TB PCIe-based flash storage
2) Setup File sharing on Titan so that Test_file could be accessed by other Macs

The "other Macs" would not normally include Titan. Even if you force Titan to "network mount" its own drive, any halfway intelligent OS network stack is going to bypass the physical ports, even if it engages the top layers of the TCP/IP stack.
 
Wow, that's amazing! From 2 x 1gigE ports (Ports 9 & 10) on a MP6,1 named Titan you got 380MBytes/sec!

380MB/s * 8bits/byte = 3040 Mbits/sec -- that's well over 3Gbits/sec!

That is so good, some people might think there is an error somewhere.

(FYI--I admit to being a human, and thus make mistakes and errors and realize life is a learning process and we are all living that same process.)

You might retry something simpler on the clients like:
dd bs=1000000 count=1000 if=/dev/zero of=/Volumes/TITANSHAREVOL/zerosfile.client1

That outputs things like:
1000+0 records in
1000+0 records out
1000000000 bytes transferred in 13.469611 secs (74241194 bytes/sec)

It writes zeros to an output file which should be on TITAN.

The 380 MBytes/sec was what Activity Monitor's Network tab was displaying as "Data sent/sec" when all Macs (all 4 of them) were accessing the test file. The idea of Titan accessing the data on its own storage was simply to simulate what would be going on in reality when real video editing was being done collectively by all 4 Macs in the office.

If you sum the data received on BAZ, Mercury and Saturn (145+155+60) this comes to 360 which is close to what Titan (MP6,1) was reporting as "Data sent/sec".

The 12-core MP6,1 has 64GB RAM so it's a real workhorse and has plenty of horsepower to service i/o demands from the 3 other Macs.

It could be that the 380 MBytes was a peak and the average over the test period of some 15 minutes was lower. Activity Monitor's Network tab shows a graph of the Data received/sec (blue) and Data sent/sec (red) that rises and falls as the test proceeds.

The data being accessed by all 4 Macs was resident on a 4TB LaCie 2big RAID-0 Thunderbolt-1 device which can deliver around 320 MBytes/sec when pushed. The test data I mentioned in my earlier post was resident on Titan's internal PCIe flash-based storage, and was just one test I performed to eliminate any spinning-disk i/o bottleneck. I performed the test again using a 50 GB file on the LaCie 2big created with Apple's mkfile program. This was a more realistic test, as this is the way the office project will be performed. That is, all the original project files will reside on Titan's LaCie disks and be accessed by all 4 Macs as they do their work. During their work they read only from the stock footage video data files and return results periodically to Titan's LaCie disks. In this way ALL the project data resides in one place.

We used to have to copy ALL the original stock footage data to each Mac, and then each Mac would use its copy, do its work with local i/o, and save results on the local disks. This was a nightmare to control, and many times resultant work would get lost and/or be very difficult to find.

The LaCie can certainly offer up the 320 MBytes/sec as the 4 Macs demand its i/o. I'm unsure if the kernel's file cache/buffer is used in any way to buffer the data during this test. I've done a little research and, according to Activity Monitor's Memory tab, the kernel's file cache is not employed for network data transfers between the Macs, unless you're using something like iTunes to download a large movie file... but that's a different story and a different network issue compared to the local LAN I've set up.
 
The 380 MBytes/sec was what Activity Monitor's Network tab was displaying as "Data sent/sec" when all Macs (all 4 of them) were accessing the test file.

If you sum the data received on BAZ, Mercury and Saturn (145+155+60) this comes to 360 which is close to what Titan (MP6,1) was reporting as "Data sent/sec".

That still doesn't explain how 3Gb/s of data gets onto 2Gb/s of hardware bandwidth.


The data being accessed by all 4 Macs was resident on a 4TB LaCie RAID-0 Thunderbolt-1 device which can deliver around 320 MBytes/sec when pushed.

That is quite different from your initial description of the experimental setup serving the data from the internal SSD.

Are you looking at the Network or Disk tab when doing the Titan measurement? The local array access bandwidth would be around the max of the local array.

Likewise, a virtual socket connection that was pulling through the local array would be around max local array speeds. (For the internal SSD that would be relatively slow speeds, though.)
 
That still doesn't explain how 3Gb/s of data gets onto 2Gb/s of hardware bandwidth.


The data being accessed by all 4 Macs was resident on a 4TB LaCie RAID-0 Thunderbolt-1 device which can deliver around 320 MBytes/sec when pushed.

That is quite different from your initial description of the experimental setup serving the data from the internal SSD.

Are you looking at the Network or Disk tab when doing the Titan measurement? The local array access bandwidth would be around the max of the local array.

Likewise, a virtual socket connection that was pulling through the local array would be around max local array speeds. (For the internal SSD that would be relatively slow speeds, though.)

Yes, my first test was using a test file resident on Titan's internal SSD storage. As I said, I did this to have the best possible scenario where Titan's spinning disk was not involved.

My second test was to have the test file resident on Titan's LaCie 2big disk (max delivery at around 320 MB/s).

Both tests gave very similar results.

You say "how 3Gbps of data gets onto 2Gbps of hardware bandwidth"...

So the 380 MBytes/sec is approximately 3 Gbps, right? The three links from the Cisco switch to Mercury, BAZ and Saturn each provide 1GbE, forgetting for the time being that the links to Mercury and BAZ are via 2x 1GbE wires that each form a LAG configured by the Cisco switch. Then Titan has 2x 1GbE wires going to the Cisco switch and these form a LAG also.

I understand the Cisco switch's LAGs provide better 'throughput' but their 'bandwidth' is limited to 1GbE, right?

So, Titan is pushing data out over the 2x 1GbE LAG at 1Gbps max.

Activity Monitor is displaying that Mercury, BAZ and Saturn are collectively getting 360 MBytes/sec. This amounts to the switch somehow delivering 2.88 Gbps to these 3 Macs...!!!!

On the face of it I agree with you... but then how do we explain what Activity Monitor is displaying on each Mac I observed? Titan's Activity Monitor's Network tab was showing 380 MB/s as Data sent/sec and the other 3 Macs were showing 145, 155 and 60, which sums to 360.

Does the Cisco switch somehow buffer data ? Dunno.

Thanks for the discussion as I surely want to understand what I'm seeing.

[EDIT]
So another question I have concerning these 2x 1GbE LAGs: can/could/do both wires get used to, say, send data along one wire while the other one is receiving data, or are they both always used in unison to send or receive data?
 
Can/could/do both wires get used to, say, send data along one wire while the other one is receiving data, or are they both always used in unison to send or receive data?

Full duplex means equal bandwidth in and out simultaneously. And unless you've got something configured really poorly, your switch ports and Mac ports should have auto-negotiated themselves to full duplex.
 
Full duplex means equal bandwidth in and out simultaneously. And unless you've got something configured really poorly, your switch ports and Mac ports should have auto-negotiated themselves to full duplex.

Thanks...
I do have the full duplex configured. So to test this aspect I could send data from say BAZ to Titan while also sending data from Titan to BAZ and verify that each is seeing around a 1GbE rate.
 
Thanks...
I do have the full duplex configured. So to test this aspect I could send data from say BAZ to Titan while also sending data from Titan to BAZ and verify that each is seeing around a 1GbE rate.

I tested for full duplex with a dd command on Titan sending data to Mercury, and the same dd command on Mercury sending data to Titan, both at the same time.
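Something along these lines (a sketch of that test; MERCURYSHARE and TITANSHARE are placeholder names for whatever volume each machine had mounted from the other, not the actual share names used):

On Titan:    dd bs=1000000 count=10000 if=/dev/zero of=/Volumes/MERCURYSHARE/duplex_test.titan
On Mercury:  dd bs=1000000 count=10000 if=/dev/zero of=/Volumes/TITANSHARE/duplex_test.mercury

Each dd prints its own bytes/sec when it finishes, giving an independent number on each side of the duplex link.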

This gave each around 94 Mbytes/sec as reported by dd.

This tells me full duplex is working on the LAG-bonded ports. Thus the LAGs support sending and receiving at the same time.

I also noted what Activity Monitor was reporting for Data sent/sec and Data received/sec on Titan and Mercury. On both it was reporting around 204 MBytes/sec being sent and received per second. On the face of it this is incorrect. However, I'm of the opinion that Activity Monitor is displaying the TOTAL of what it's seeing as network i/o. The same thing happens when writing to a RAID-0 device with, say, two disks, where the application is actually seeing say 150 MBytes/sec yet Activity Monitor will be displaying around 300 MBytes/sec (that is, the data rate to both disks is being summed and displayed by Activity Monitor).

If my speculation is correct, then all of the data rates I've reported from Activity Monitor in previous posts should be divided by 2.
 
Yes, my first test was using a test file resident on Titan's internal SSD storage. ....to have the best possible scenario where Titan's spinning disk was not involved. ...

My second test was to have the test file resident on Titan's LaCie 2big disk (max delivery at around 320 MB/s).

Both tests gave very similar results.

There is no way the local Titan time should be very similar in those two cases. There is something seriously flawed in your measurement approach if those two are turning in very similar results, for exactly the reasons you outline.


You say "how 3Gbps of data gets onto 2Gbps of hardware bandwidth"...

So the 380 Mbytes/sec is approximately 3 Gbps, right? The three links from the Cisco switch to Mercury, BAZ and Saturn are

The three links are completely immaterial. Those links don't make a lick of difference to the 'red flag' here. The links from the server to the switch are the choke point.



Then Titan has 2x 1GbE wires going to the Cisco switch and these form a LAG also.

That is the core issue: how to drive 3Gb/s of data over 2Gb/s of wire capacity.


I understand the Cisco switch's LAGs provide better 'throughput' but their 'bandwidth' is limited to 1GbE, right?

Throughput is better when you have an n:1 or n:m (where n >> m) setup, where you're going to have congestion if you make all the clients go "single file" through a single 1GbE link to a server. That single link becomes a choke point. You can decrease congestion by effectively increasing the number of lanes on the freeway. It doesn't make a single car faster than it is, but it makes the collection of cars flow faster. That's throughput.

Activity Monitor is displaying that Mercury, BAZ and Saturn are collectively getting 360 MBytes/sec. This amounts to the switch somehow delivering 2.88 Gbps to these 3 Macs...!!!!

I'm a bit suspicious about where OS X is probing to get the data flow numbers from. If it is double counting somehow due to the bonded ethernet ports (counting flow in/out of the unified virtual port and counting the physical ones too), then Activity Monitor's numbers could be inflated. If the auditing system isn't aware of bonding it may have problems. [Activity Monitor shows hyperthread "cores" too.]


but then how do we explain what Activity Monitor is displaying on each Mac I observed? Titan's Activity Monitor's Network tab was showing 380 MB/s as Data sent/sec and the other 3 Macs were showing 145, 155 and 60, which sums to 360.

Double counting on all of the bonded nodes should still add up.

If you went to single links on the clients, kept the dual link on the server, and the numbers then stopped adding up, that would be an indicator of what is going on.


Does the Cisco switch somehow buffer data ? Dunno.

You can't cache what hasn't reached the switch. If the server-switch link has 2Gb/s of data, you can't cache more than that. The cache is only going to reduce the latency until the data can actually arrive (eliminating resends of 'lost' packets and/or avoiding congestion back-off adjustments).

This might work if the ports on both the Mac and switch somehow negotiated a non-standard mode (e.g. 1.25Gb/s). [Most 10GbE and some optical transceivers can be set to 1.25, so it's not completely random, just non-standard for copper.] I don't think that is the most likely case, and with Cat6 and reasonable distances the wire isn't in the way.


Thanks for the discussion as I surely want to understand what I'm seeing.

If you "time" the checksum, I'd go back and check that file size and "time" to get a secondary measure of bandwidth: size/time --> GB/s. If that is waaaaaaaay different than what Activity Monitor is reporting, then that would be a clue. There is overhead to 'time' and the clock precision is perhaps at a different granularity, so it won't be exactly the same, but large differences would be an issue.
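As a worked illustration of that cross-check (the numbers are made up, not taken from the tests above): if Test_file is 10,000,000,000 bytes and

time cksum Test_file

reports a real time of 80 seconds, the effective read rate is 10,000,000,000 bytes / 80 s, or about 125 MB/s, roughly one saturated gigabit link. If Activity Monitor claimed several hundred MB/s for the same run, that gap would point at the double-counting suspicion above.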
 
I'm a bit suspicious about where OS X is probing to get the data flow numbers from. If it is double counting somehow due to the bonded ethernet ports (counting flow in/out of the unified virtual port and counting the physical ones too), then Activity Monitor's numbers could be inflated. If the auditing system isn't aware of bonding it may have problems. [Activity Monitor shows hyperthread "cores" too.]

This is most likely what is taking place. When sending traffic over a bonded network interface on OS X, the same traffic will also show up on the physical interfaces that make up that bond. Activity Monitor is most likely adding the traffic from all interfaces together, so the bandwidth reported should be divided by 2 to get the true values. (Even if the Mac Pro is pushing 180MB/sec, that's still above a single 1Gbit pipe.)

I noticed this side-effect when developing my own system monitor, XRG. To compensate, I allow users to select a particular network interface to monitor as an alternative to showing a summation of all interfaces. Just choose bond0 to monitor and XRG will show correct values.
 
There is no way the local Titan time should be very similar in those two cases. There is something seriously flawed in your measurement approach if those two are turning in very similar results for exactly the reasons you outline....snip...

I think you're misunderstanding what timings I took, or maybe I was not clear about it. No matter; all timings were network data rates (as provided by Activity Monitor) and not the actual time used to read or write a file to Titan's local internal storage, which is many times faster than the 1GbE network.

The intent of reading/writing the test file by a Titan application was simply to simulate what happens in reality... which is: all Macs want to read/write to Titan's internal storage or its external LaCie 2big devices. Thus, the client Macs are all accessing Titan's data over the LAN via the Cisco switch's LAGs while the Titan application is locally accessing the same data.

On the issue of Activity Monitor's readings displaying Data received/sec and Data sent/sec, I firmly believe they are bogus for the purpose of measuring network traffic data rates. If not bogus, they're very misleading.

The true network data rates IMO can only be obtained by having each client measure its own data transmissions to/from Titan. This has been done and I'm satisfied that each client sees close to 1 GbE if they're the only one moving data.

If multiple Mac clients are all sending data to or receiving data from Titan, they will all share the 1 GbE LAG ports in the Cisco switch connected to Titan's dual 1 GbE ports.

If one Client Mac is reading from Titan and simultaneously another Mac client is writing to Titan then each will see close to 1GbE (Full Duplex). This has been tested and I'm satisfied this is correct.

I've abandoned my use of Activity Monitor for measuring network activity in this regard as I find its numbers are either bogus or extremely misleading.

Thanks... :)
 
This is most likely what is taking place. When sending traffic over a bonded network interface on OS X, the same traffic will also show up on the physical interfaces that make up that bond. Activity Monitor is most likely adding the traffic from all interfaces together, so the bandwidth reported should be divided by 2 to get the true values. (Even if the Mac Pro is pushing 180MB/sec, that's still above a single 1Gbit pipe.)

I noticed this side-effect when developing my own system monitor, XRG. To compensate, I allow users to select a particular network interface to monitor as an alternative to showing a summation of all interfaces. Just choose bond0 to monitor and XRG will show correct values.

Is your XRG compatible with the new MacPro 6,1 ? Thanks... :)
 
My nMP hasn't arrived yet, so I haven't been able to test XRG fully on it. I've heard from others, though, that XRG runs fine on their nMPs. It's free, and doesn't do anything dangerous (doesn't even require admin privileges). Give it a shot…

http://www.gauchosoft.com/xrg/
 
Just wanted to say thanks for all the info.

I learned a lot today. Thanks





ya, they aren't answering you anymore..they are too busy having their own self-involved, egotistical, narcissistic conversation. We are eagerly awaiting the emergence of the alpha dog. :p
 
I wish I had noticed this earlier.

In case some searches for it later it might be useful to add a couple of observations.

Activity monitor simply reports the total network activity for network devices on the system. This can be misleading, because most people think of devices as physical things, and think of network activity as external/inter-system.

The problem is that neither of those interpretations are true for a unix system.

The loopback device, lo0, is also a network device. If you have a lot of intra-system activity via IP, it can throw you off. The bond0 interface is also special. In function, it serves as a special kind of network router. Packets written to that device, instead of heading out via hardware, are delivered as input to a software layer which then writes the packets on other network devices like en0, en1, etc.

On the inbound side, packets received from en0 or en1, are then read by the intermediate layer for reordering or other twiddling and queued as input to bond0 for final output to the processes it serves.

So, to see what is actually happening on a specific network device you need to query the specific device.

Perhaps the easiest way to do this is the netstat command.
Look at the bonded layer device with:
netstat -n -I bond0
would report total IO since boot (or since the stat counters were last cleared).

This command:
netstat -n -I bond0 -w 10
would gather and report activity summed over 10 second intervals, and continue doing so until canceled.
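Since the -w output is per interval rather than per second, divide by the interval length to compare against Activity Monitor. For example (made-up numbers): if a 10-second sample line for bond0 shows 1,200,000,000 output bytes, that works out to 1,200,000,000 / 10 = 120 MB/s actually leaving the bonded interface.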

To see more fine grained data, just open multiple shell windows. For instance to see the data flowing through the bonded interface and see how it is being mapped via the physical interfaces, you would use something like these commands:

netstat -n -I en0 -w 60
netstat -n -I en1 -w 60
netstat -n -I bond0 -w 60

These commands, running in separate windows (or otherwise running in parallel) would continuously report packet and byte level I/O for the last 60 seconds on the bonded device and the devices which map to the underlying ethernet transport.
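It can also be worth confirming which physical ports are actually members of the bond before trusting the per-interface numbers. On OS X, ifconfig shows the membership (your member interfaces may differ):

ifconfig bond0

Look for the "bond interfaces:" line in the output; it lists the en devices that currently belong to the bond.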

A similar kind of thing used to confuse folks looking at disk I/O when file vault was implemented using .dmg virtual disk devices. Aggregate disk I/O for the underlying hard disk would be confused with the file based I/O of the encrypted disk it was hosting. System wide figures would be inflated by the fact that each I/O passed through 2 different unix devices when only 1 piece of hardware served both streams.

It is too late to help the OP, since you guys figured it out already, but it might shed light on the issue for others in the future.
 
The primary use for two Ethernet ports is to have a secure private network between a separate group of machines that's separate from the main network.

For example, if I was building a cluster of Mac Pros to do stuff, and they needed both internet access and to be able to talk to each other, I would have a private LAN between the Mac Pros so that their network activity between each other is more direct and not clogging up the main LAN with other users on it. Then they would also have a standard Ethernet connection as well. It's more secure that way since there's no physical connection from the main LAN to the private LAN.

This is what got everyone all upset when the NSA tapped Google's internal network lines apparently. It forced them to encrypt the connections on the internal network, which is good practice anyway, but not typically necessary assuming no foul play.
 
When I've used 2x 1Gbit LACP on a Mac, I've noticed Activity Monitor always reports DOUBLE the actual network I/O.

I've seen this on Mac Pros and Mac Minis.

I assume it's a bug that Apple hasn't bothered to address.

----------

One use is with Parallels - I've noticed with LACP, the Mac can (and does) use one gigabit link, and Parallels can use the other.

Or obviously you can do that without LACP too.
 
It's interesting that the OP asked this question...

Over the past weekend I succeeded in my goal of bonding all 3 MacPros (a 4-core Nehalem, an 8-core Nehalem and a new 12-core MacPro 6,1) at my son's office. My only difficulty was figuring out the IP address of the Cisco switch so I could monitor the link states and obtain statistics on the packets moved over each port and on any errors. There were no errors on any of the links. I disconnected one of each pair of bonded ethernet ports to make sure failover worked, and that presented no issues.

The switch I used was the Cisco Small Business SG200-18 Gigabit switch. It's noiseless (has no fan) and runs quite cool; after 3 hours of use it was slightly warm on its top surface. This switch has the capacity to process 36 Gbps (just a tad over 4 GBytes/sec), supports IEEE 802.3ad LACP with up to 4 groups of dynamic link aggregation, and has remote web browser-based management providing configuration, a system dashboard, system maintenance and monitoring... Very nice indeed. The switch was purchased from NewEgg for $207. I checked with Cisco and NewEgg is a certified reseller for Cisco switches. Here's a ref to the switch... http://www.newegg.com/Product/Product.aspx?Item=N82E16833150120. Interestingly enough, I noted that as soon as I placed my NewEgg order for this switch they increased the price by $1.00 - very weird.

The Cisco Small Business 200 Series Smart Switch Administration Guide states the following on page 116:

"Link Aggregation Control Protocol (LACP) is part of IEEE specification (802.3az) that enables you to bundle several physical ports together to form a single logical channel (LAG). LAGs multiply the bandwidth, increase port flexibility, and provide link redundancy between two devices.

Two type of LAGs are supported: Static and Dynamic"

I decided to configure Dynamic LAGs for each of the 3 MacPros in my Son's office.

A LAG supports Load Balancing: Traffic forwarded to a LAG is load-balanced across the active member ports, thus achieving an effective bandwidth close to the aggregate bandwidth of all the active member ports of the LAG. Traffic load balancing over the active member ports of a LAG is managed by a hash-based distribution function that distributes Unicast and Multicast traffic based on Layer 2 or Layer 3 packet header information.
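In other words, the switch picks one member link per conversation, not per packet. As a rough illustration (the exact header fields depend on whether L2 or L3 balancing is configured):

link = hash(source address, destination address) mod number_of_member_links

Because a single client/server pair always hashes to the same value, one file transfer stays on one 1GbE member link; only multiple distinct conversations can spread across both links and approach the aggregate bandwidth.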

This Cisco managed smart switch has 18 ports. I configured things as follows, using the switch's web-based management software, which was easy to use. I actually configured the switch at home several days before going to my son's office.

I used CAT6 wires everywhere, although Cisco indicated CAT5 would be sufficient so long as they were not 100s of feet long.

This Cisco switch can have up to 4 LAGs configured. I only wanted to use 3 for the 3 MacPros that each had 2x 1GbE ports.

With this many cables it's important to label the cable ends, especially the ends that plug into the switch.

Port 1: A Router line from a 1TB TC's LAN port.
Port 2: Connected to a 27" iMac named Saturn
Ports 9 & 10: connected to MP6,1 named Titan (these were bonded to aggregate the line data speeds and for failover).
Ports 12 & 13: connected to 8-core MP named Mercury (these were bonded to aggregate the line data speeds and for failover).
Ports 15 & 16: connected to 4-core MP named BAZ (these were bonded to aggregate the line data speeds and for failover).

...snip...

The above quote from my earlier post is simply presented here to associate my Macs and LAN configuration for reference for my current testing. Much of the discussion here has helped me refine my testing to reflect reality.

I used XRG http://www.gauchosoft.com/Products/XRG/ on Titan (12-core MacPro 6,1) for measuring/observing the Transmit and Receive data rates on my new MacPro6,1. XRG allows Titan's "bond0" to be exclusively monitored. Activity Monitor was not used for the measurements, but I did watch its Network send/receive data rates, which were always way more than what was actually being transmitted when compared to XRG's display and the dd command transfer-rate results.

I used the dd command for writing files from Mercury to Titan and for writing files from Titan to Mercury.

Mercury's internal disk and Titan's 4TB RAID-0 LaCie 2big Thunderbolt-1 disk were used in all test cases.

The dd command used was dd bs=1000000 count=10000 if=/dev/zero of=/Volumes/...../test_file

The following were my test results

1) Sending data from Titan to Mercury using a single dd command gave 101 Mbytes/sec and was confirmed by XRG.

2) Sending data from Mercury to Titan using a single dd command gave 101 Mbytes/sec and was confirmed by XRG.

3) Sending data from Titan to Mercury using two separate dd commands run at same time showed approximately 60 Mbytes/sec for each dd command and was confirmed by XRG that displayed approximately 120 MBytes/sec.

4) Sending data from Mercury to Titan using two separate dd commands run at same time showed approximately 60 Mbytes/sec for each dd command and was confirmed by XRG that displayed approximately 120 MBytes/sec.

5) Simultaneously sending data from Mercury to Titan and Titan to Mercury (making use of Full Duplex). Each dd command reported 100 MBytes/sec and this was confirmed by XRG on Titan displaying Transmit and Receive data rates.

From these tests I have concluded

1) Any single data transfer running alone between the Macs will see approximately 100 Mbytes/sec.
2) Multiple data transfers to a single Mac will share the dual-ethernet bonded LAG link. That is, with say two transfers occurring at the same time, each will see around 60 MBytes/sec.
3) Any Mac sending a single data stream while also receiving a single data stream from another Mac will see 100 MBytes/sec being transmitted and 100 MBytes/sec being received.
 