Link Aggregation in Action

To get an idea of how much link aggregation really helps, we first set up the NAS with just a single active network link. The first set of tests downloads the Blu-ray folder from the NAS, starting with the PC connected to port 3, followed by a simultaneous download of two different copies of the content from the NAS to the PCs connected to ports 3 and 4. The same concept is extended to three simultaneous downloads via ports 3, 4 and 5. A similar set of tests is run to evaluate the uplink direction (i.e., data moving from the PCs to the NAS). The final set of tests involves simultaneous upload and download activity from the different PCs in the setup.

The upload and download speeds of the wired NICs on the PCs were monitored and graphed. This gives an idea of the maximum possible throughput from the NAS's viewpoint and also enables us to check if link aggregation works as intended.

The above graph shows that both download and upload are limited to under 1 Gbps (after accounting for the transfer inefficiencies introduced by the various small files in the folder). However, the full-duplex nature of the NAS's 1 Gbps network link enables greater than 1 Gbps of aggregate throughput when handling simultaneous uploads and downloads.

In our second wired experiment, we teamed the ports on the NAS with the default options (other than explicitly changing the teaming type to 802.3ad LACP). This left the hash type at Layer 2. Running our transfer experiments showed that there was no improvement over the single link results from the previous test.

In our test setup and configuration, Layer 2 as the transmit hash policy turned out to be ineffective. Readers interested in understanding more about the transmit hash policies, which determine how traffic is distributed across the different physical ports in a team, should peruse the Linux kernel documentation here (search for the description of the parameter 'xmit_hash_policy').
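
To make the difference between the policies more concrete, the short Python sketch below is loosely modeled on the policy descriptions in that documentation. It is an illustration rather than the exact kernel code, and the MAC and IP addresses in it are hypothetical: it simply shows how the chosen policy maps a NAS-to-client flow onto one of the two physical ports in a team.

# Simplified illustration only: loosely based on the xmit_hash_policy
# descriptions in the Linux kernel bonding documentation, not the exact
# kernel implementation. All addresses below are made up for the example.

def layer2_hash(src_mac: bytes, dst_mac: bytes, n_slaves: int = 2) -> int:
    # Layer 2 policy: XOR of the last MAC octets, reduced modulo the slave count.
    return (src_mac[-1] ^ dst_mac[-1]) % n_slaves

def layer2_3_hash(src_mac: bytes, dst_mac: bytes,
                  src_ip: int, dst_ip: int, n_slaves: int = 2) -> int:
    # Layer 2+3 policy: also mixes in the source and destination IP addresses,
    # so flows to different hosts are more likely to use different ports.
    h = src_mac[-1] ^ dst_mac[-1]
    h ^= src_ip ^ dst_ip
    h ^= (h >> 16) ^ (h >> 8)
    return h % n_slaves

nas_mac = bytes.fromhex("a0b1c2d3e400")   # hypothetical NAS team MAC
nas_ip = 0xC0A80164                        # 192.168.1.100
clients = {
    "Machine 1": (bytes.fromhex("10fe45ab0c02"), 0xC0A80165),  # 192.168.1.101
    "Machine 2": (bytes.fromhex("10fe45ab0c04"), 0xC0A80166),  # 192.168.1.102
}
for name, (mac, ip) in clients.items():
    print(name,
          "layer2 ->", layer2_hash(nas_mac, mac),
          "layer2+3 ->", layer2_3_hash(nas_mac, mac, nas_ip, ip))

In this particular example, the two clients happen to hash onto the same physical port under the Layer 2 policy, while the Layer 2+3 policy spreads them across both ports; the actual distribution in any real setup depends on the specific addresses involved.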

After we altered the hash policy to Layer 2+3 in the ReadyNAS UI, the effectiveness of link aggregation became evident.

In the pure download case with two PCs, we can see each of the PCs getting around 800 Mbps (implying that the NAS was sending out data on both of the physical NICs in the team). An interesting aspect of the pure download case with three PCs is that Machine 1 (connected to port 3) manages the same 800 Mbps as in the previous case, while the download rates on Machines 2 and 3 (connected to ports 4 and 5) add up to a similar amount. This shows that network ports 4 and 5 are bottlenecked by a single 1 Gbps connection to the switch chip handling the link-aggregated ports. This is also the reason Netgear suggests using port 3 as one of the ports for handling data transfer to/from the link-aggregated ports. Simultaneous uploads and downloads also show some improvement, but the pure upload case is no different from the single-link case. This could be attributed to the limitations of the NAS itself. Note that these experiments use real-world data streams transferred with the OS file copy tools, not artificial benchmarking programs.

Comments

  • Dunkurs1987 - Tuesday, January 5, 2016 - link

I think Synology has competition here with the RT1900ac router:
    http://www.span.com/product/Synology-Wireless-Rout...
  • iwod - Thursday, December 31, 2015 - link

Maybe the industry should move faster to NBase-T (2.5 Gbps / 5 Gbps) and think about 10 Gbps on prosumer / SME networks instead?

They say 802.11ax will substantially improve real-world single-client performance, which I hope is true because, as the article has shown, we aren't anywhere near 1 Gbps real-world Wi-Fi speeds yet.
  • cdillon - Thursday, December 31, 2015 - link

    I would love to see more reasonably-priced 10GBASE-T equipment. It's already not too bad from a prosumer standpoint. There would be little point to developing 2.5GBASE-T or 5GBASE-T at this point, though, since 10GBASE-T has been available for nearly a decade now.
  • iwod - Friday, January 1, 2016 - link

The idea for NBase-T is that with a new controller (router, NAS, and your computer) you could get 2.5 / 5 Gbps over CAT-5e, depending on cable quality and length. For most home users, I don't see why they can't achieve 5 Gbps unless they live in a castle-sized home.

My problem is that by the time NBase-T is standardized and widespread in consumer gear, it will be 2018+. Why are we not forward-thinking enough to put 10 Gbps on the same controller as well, so it can negotiate the best transfer speed for the cable? I am sure that for a lot of SMEs and homes, the cables are decent and the runs short enough to go 10 Gbps.
  • jhh - Sunday, January 3, 2016 - link

    By 2018, enterprises will have moved to 25G over copper, so the 10G parts might get cheaper as they try to milk the last revenue out of their 10G switches and NICs. The 25/50/100G ports are already in the market, but you need a PCIe3 x16 to handle 100G, along with interrupt steering to distribute the packets to multiple cores.
  • p1esk - Friday, January 1, 2016 - link

Exactly. Real-world Wi-Fi speeds are nowhere near 1 Gbps. This link aggregation is really a solution to a problem from fantasy land.
  • ganeshts - Friday, January 1, 2016 - link

    I would think working MU-MIMO might alter the situation a bit. As you can see, we do get almost 1 Gbps with the 3x3 dual-radio configuration and link aggregation does help there.

    Also, there is the matter of Qualcomm Atheros 4x4 routers that I am working on right now. They might be able to reach multi-gigabit throughput.
  • magreen - Friday, January 1, 2016 - link

    @plesk: Couldn't agree more on the fantasy land comment.

Their solution is a solution to a marketing problem: their inflated marketing numbers have now outstripped the maximum that one Cat5e cable can provide. So they aggregate links so that it's not theoretically impossible for those inflated numbers to be correct.

    It's like "aggregating" two 64-bit CPUs so that you can claim your CPU is 128-bit. Which is useful in the real world for...precisely nothing.
  • keithjeff - Saturday, January 2, 2016 - link

Oh dear, have you seen the prices of Cisco NBase-T switches (or, as they call them, mGig)?

    The enterprise price is bad enough at a fairly large discount. The one-off price would stagger you. And yes, they will come down, but I have no idea when.

    Cheers,
  • zodiacfml - Thursday, December 31, 2015 - link

1024-QAM seems not to be working. Do we have a review or test of the efficiency of increasing spatial streams? I once tested my Nexus 5 with a speed test and remember getting around 300 to 320 Mbps, which is pretty close to the theoretical 433 Mbps and not far from the 400 to 500 Mbps actual speeds of three spatial streams.
