Networking – Application Optimized Routing on a load balancing router

internet, load-balancer, networking, router, routing

I've got 2 incoming broadband lines, both c. 10 Mb down / c. 0.8 Mb up (one line is slightly faster than the other).

I've recently set up a TP-Link TL-R470T+ load balancing router after researching it and watching THIS VIDEO. I run the 2 separate ADSL lines into my 2 separate modems, from there I run 2 Cat5 cables from the modems to the load balancer, and then 1 Cat5 from the load balancer to a Wi-Fi router; all devices connect via the Wi-Fi router.

In the video they talk about disabling 'Enable Application Optimized Routing'. If I run a speed test with it enabled, I basically just get the speed result of the faster broadband line. If I disable the setting and run a speed test, I get the combined speed of both lines.

Next to the option it says the following:

Enable Application Optimized Routing: With this box checked, all the data packets of the same network application on multi-connections will be forwarded via the same WAN ports, which avoids abnormity caused by forwarding the data packets of this application via different WAN ports.

What does that mean exactly? I use quite a few 'live syncing' sites like Google Drive and Trello, which use a mixture of sockets, Node.js and long polling to stream data back and forth over a continuous connection. Would these services be affected?

I also use a cloud backup service on a few machines; would something like this be affected?

I understand that if I have this setting enabled I can still get the automatic switching benefit of both lines, but not use both of them at the same time. What sort of issues, and which sort of services, will or could run into trouble if I leave this option unchecked?

Best Answer

With respect to IPv4:

A TCP connection (not UDP, not multicast, etc.), upon which applications establish sessions to conduct transactions and present content, exists between one and only one source IP:port and one and only one destination IP:port. The protocol does not permit one-to-many connections for a single session, as far as the public Internet is concerned. Due to the stateful nature of TCP, while it may be possible to have several private hosts conduct parts of a single session, brokered by a load balancer, it is likely not practical.
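To make the one-to-one nature of a TCP connection concrete, here is a minimal Python sketch of the connection table a stateful device (balancer, NAT, firewall) keeps. The addresses, ports, and the `track` helper are all hypothetical, purely for illustration:

```python
# A TCP connection is identified by exactly one
# (src IP, src port, dst IP, dst port) tuple. A stateful device keeps a
# table keyed on that tuple; a second flow from a different source port
# is a distinct connection, even to the same server.

conn_table = {}

def track(src_ip, src_port, dst_ip, dst_port, wan):
    """Record which WAN link a connection was assigned to."""
    key = (src_ip, src_port, dst_ip, dst_port)
    # Reuse the existing entry, so every packet of the flow takes the same link.
    return conn_table.setdefault(key, wan)

# First packet of a flow pins it to WAN1...
assert track("192.168.0.10", 50000, "142.250.1.100", 443, "WAN1") == "WAN1"
# ...and later packets of the same flow stay there, even if the balancer
# would otherwise now prefer WAN2.
assert track("192.168.0.10", 50000, "142.250.1.100", 443, "WAN2") == "WAN1"
# A different source port is a brand-new connection and may go elsewhere.
assert track("192.168.0.10", 50001, "142.250.1.100", 443, "WAN2") == "WAN2"
```

This per-connection state is exactly what a flow-aware balancing mode preserves, and what naive per-packet distribution ignores.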

The route between these two IP:port hosts may be infinitely dynamic, insofar as neither host runs out of resources or exceeds any timers. This includes gracefully handling out-of-sequence packets, as long as no limits, hard or soft, are exceeded.

This means that in order to load-balance a session over two separate links in the outbound direction, both paths must be able to forward traffic from the same source IP to the same destination IP.

When the two links belong to the same ISP, this is usually not a problem, unless there are strict source IP filters (explicit or implicit) on each connection. In fact, if there are no specific restrictions, one can balance in the outbound direction over two separate links without any assistance from the ISP.

Not so for load-balancing the inbound traffic, however. The ISP almost always has to step in to enable load-balancing in the inbound direction.

Let's assume the ISP is on board with implementing load balancing for you:

One of the easiest ways to accomplish this is to assign you your own subnet, apart from the usual networks served by the DSLAM. This subnet could be as small as a single /32 host, or, for an office, perhaps even several hundred hosts.
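To put numbers on those subnet sizes, a quick check with Python's standard `ipaddress` module (the prefixes are just the examples used in this answer):

```python
import ipaddress

# A /32 is a single host; a /24 holds a few hundred, enough for an office.
single = ipaddress.ip_network("65.172.1.1/32")
office = ipaddress.ip_network("65.172.1.0/24")

print(single.num_addresses)      # 1 address -> one host
print(office.num_addresses - 2)  # 254 usable hosts (network + broadcast reserved)
```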

For reliable load balancing between two IP links and customer premise equipment (CPE), the load balancer ought to have at least 3 separate interfaces, and the two ISP-facing interfaces ought to belong to two different networks, to eliminate any ambiguous routing or switching decisions.

Say one of your ISP-facing load-balancer interfaces is 10.2.2.2/30, the other 10.2.2.254/30. Your CPE network is 65.172.1.0/24 and the load balancer's CPE-facing interface is 65.172.1.1.

Your load balancer would have to do some form of the following:

ip route 0.0.0.0 0.0.0.0 10.2.2.1
ip route 0.0.0.0 0.0.0.0 10.2.2.253

This creates two static default routes of equal priority to each connection to the ISP.

On a Cisco router acting as a load balancer, the default method was to load-balance per destination; given the way route-cache flow works, that is less work for the router. However, there was the option

ip load-sharing per-packet

which would forward traffic that has more than one equivalent route, in a round-robin out both interfaces.

ip load-sharing per-destination

sets it back to its default scheme.
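The difference between the two schemes can be sketched in a few lines of Python. This is only a model, with `zlib.crc32` standing in for the router's real hash and made-up destination addresses:

```python
import itertools
import zlib

links = ["WAN1", "WAN2"]

# Per-packet: a simple round robin over the two equal-cost routes.
rr = itertools.cycle(links)
def per_packet(_dst):
    return next(rr)

# Per-destination: hash the destination address, so every packet to a given
# host leaves via the same link (crc32 stands in for the router's hash).
def per_destination(dst):
    return links[zlib.crc32(dst.encode()) % len(links)]

packets = ["8.8.8.8", "8.8.8.8", "1.1.1.1", "8.8.8.8"]
print([per_packet(d) for d in packets])       # -> ['WAN1', 'WAN2', 'WAN1', 'WAN2']
print([per_destination(d) for d in packets])  # same host always maps to one link
```

Per-packet fills both links evenly but can reorder a flow's packets; per-destination keeps each conversation on one link at the cost of less even utilisation.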

This setup would load-balance your outbound connections.

Your ISP would have to configure these two static routes on their device, with the same per-packet or per-destination option, most likely the former:

ip route 65.172.1.0 255.255.255.0 10.2.2.2
ip route 65.172.1.0 255.255.255.0 10.2.2.254

If set up properly on both sides, both your load balancer's WAN interfaces ought to report the same packet-per-second received, and the same packet-per-second transmitted statistics.

The features you inquire about are very similar to per-packet and per-destination load-sharing. However, if it's the same ISP, you can safely leave it on per-packet; the 'optimized' option is more for those load-balancing two connections to different providers. Note that changing this option only affects your outbound traffic, and has no effect on inbound.
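This also explains the speed-test behaviour described in the question. A rough Python sketch, assuming the 'optimized' option keys on (source, destination) while the unchecked mode also keys on the source port (the keying details and addresses are my assumption, not TP-Link's documented algorithm):

```python
import zlib

links = ["WAN1", "WAN2"]

def pick_link(flow, per_app=True):
    # per_app=True models the 'Application Optimized Routing' behaviour: every
    # connection from this host to this server shares one WAN. With it off,
    # the source port joins the key and parallel connections can spread
    # across both lines. Illustrative only.
    src_ip, src_port, dst_ip = flow
    key = (src_ip, dst_ip) if per_app else (src_ip, src_port, dst_ip)
    return links[zlib.crc32(repr(key).encode()) % len(links)]

# Four parallel speed-test connections to one server:
flows = [("192.168.0.10", p, "151.101.2.219") for p in (50000, 50001, 50002, 50003)]

print({pick_link(f, per_app=True) for f in flows})   # one link -> single-line speed
print({pick_link(f, per_app=False) for f in flows})  # may use both -> combined speed
```

A speed test opens several parallel connections, so with the option off they can land on both WANs and the results add up; with it on, they all share one line.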

It's quite unlikely that you'll be able to implement a two-way load-balanced connection without help (and likely a fee) from your ISP. Your ISP ought to be able to advise you on settings that suit your situation.

However, it is my opinion, given what I know about your network design, that there will not be any noticeable trouble with per-packet.
