Speeding up mobile networks with FQ CoDel and MPTCP

Fast Times

Lead Image © Kirsty Pargeter, 123RF.com

Article from Issue 175/2015

Bufferbloat can take a toll on mobile TCP connections. We'll show you a pair of experimental protocols designed to improve throughput and reduce latency.

Smartphone sales overtook the sales of PCs as early as 2010 [1][2], and every year, mobile devices work more intensively with data. The Cisco Visual Networking Index (VNI), which predicts global data traffic, expects that the volume of mobile data will increase by a factor of 11 between 2013 and 2018 and will overtake the data volume for wired connections by the end of 2018 [3].

Wireless interfaces come with complications that aren't present in conventional network devices. For instance, power consumption is a limiting factor [4]. Mobile connections also have special needs when it comes to network performance and the quality of service of the end-to-end connection between two devices. Performance is measured in terms of throughput, latency, packet loss rate, and jitter. Connection setup, error correction, and flow control all reduce the visible data rate and increase the response time [5] [6] [7].

The central parameters for determining performance are thus goodput and response time. Goodput measures the data rate actually available to applications, and response time describes the time that passes between a client's request and the server's first response. Both metrics are influenced by processes at all layers of the standard network layer model.
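As a minimal illustration, both metrics can be approximated from user space with nothing more than a TCP socket and a clock. The following Python sketch measures the time to the first response byte and the average delivery rate of a single download; the host, port, and path are placeholders, and the byte count includes HTTP headers, so the numbers are only approximate:

# Minimal sketch: approximate response time (time to first byte) and goodput
# for a single HTTP/1.1 request. Host, port, and path are placeholder values.
import socket, time

HOST, PORT, PATH = "example.com", 80, "/"

request = (f"GET {PATH} HTTP/1.1\r\n"
           f"Host: {HOST}\r\nConnection: close\r\n\r\n").encode()

start = time.monotonic()
with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(request)

    first_chunk = sock.recv(4096)              # block until the first bytes arrive
    response_time = time.monotonic() - start   # time to first response

    received = len(first_chunk)
    while True:                                # read until the server closes
        chunk = sock.recv(4096)
        if not chunk:
            break
        received += len(chunk)

total = time.monotonic() - start
goodput = received / total                     # bytes per second, headers included

print(f"response time: {response_time * 1000:.1f} ms")
print(f"goodput:       {goodput / 1024:.1f} KiB/s")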

On Layers 1 and 2, mobile phones use WiFi or cellular networks such as GSM, UMTS, and LTE. Throughput and latency depend both on the radio technology and on the quality of the data transmission within the radio cell [8]. From the operating system's perspective, these influencing factors are fixed and cannot be changed.

The operating system can only intervene at Layers 3 and 4, with the IP and TCP protocols. Goodput and response time thus essentially depend on the TCP algorithms and on how these algorithms handle connection setup, packet loss, and connection failures.

Bufferbloat

Any (mobile) Internet connection consists of many components with different speeds and latencies. As a result, routers do not always have free capacity on the outbound connection to forward incoming packets immediately. To even out differences in speed and to absorb load peaks, a router buffers incoming packets until the outgoing line is free again ([9], p. 1).

You might expect that routers should simply have buffers large enough to minimize the number of discarded packets. Practical experience, however, shows that this expectation is wrong. The reason for the counterintuitive behavior: TCP characteristically adapts to the data rate of the slowest part of the connection [10]. The protocol takes discarded packets as a sign that the connection is saturated and slows down accordingly [11]. The overall speed of the connection therefore never exceeds the speed of the slowest component, regardless of how much capacity you build into the rest of the system.
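The toy loop below (with invented numbers, not the kernel's actual congestion control code) illustrates this additive-increase/multiplicative-decrease behavior: the congestion window grows by one segment per round trip until a loss signals saturation, at which point it is halved, so the sending rate oscillates around the bottleneck capacity:

# Toy additive-increase/multiplicative-decrease (AIMD) loop, illustrating why
# TCP's rate converges on the slowest link: the window grows until the
# bottleneck drops a packet, then is cut in half. All numbers are arbitrary.
BOTTLENECK = 100          # bottleneck capacity in segments per RTT
cwnd = 10                 # congestion window in segments

for rtt in range(1, 31):
    loss = cwnd > BOTTLENECK      # drops occur once the link is saturated
    if loss:
        cwnd = max(cwnd // 2, 1)  # multiplicative decrease on loss
    else:
        cwnd += 1                 # additive increase per round trip
    print(f"RTT {rtt:2d}: cwnd = {cwnd:3d} segments, loss = {loss}")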

In Figure 1, the connection is already overloaded at time t0. Because of a large buffer in the network device (router, access point, mobile phone) in front of the bottleneck, the first packets are not discarded until much later, at t1, so TCP continues to increase the data rate despite the overload. Because of the long delay, TCP only recognizes the packet loss at t2 and then reduces the transmission rate. Meanwhile, the queue and the latency at the bottleneck keep growing without any increase in the actual data rate.

Figure 1: Queue length, bandwidth, and latency over the course of a bufferbloat scenario.
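A toy model of this scenario (all values invented) makes the effect visible in a few lines of Python: the sender keeps raising its rate, the bottleneck drains at a fixed speed, and an oversized tail-drop buffer soaks up the difference, so queuing delay climbs for a long time before the first packet is ever dropped:

# Toy model of the bufferbloat scenario in Figure 1 (all numbers invented).
# The sender ramps up its rate past the bottleneck; an oversized buffer
# absorbs the excess, so delay grows long before the first packet is dropped.
BOTTLENECK = 100      # packets the link can forward per time step
BUFFER     = 2000     # oversized buffer (packets)

queue, rate = 0, 50   # current queue length and sender rate (packets/step)
for t in range(40):
    dropped = max(0, queue + rate - BUFFER)    # tail drop only when completely full
    queue   = min(queue + rate, BUFFER)        # packets enter the buffer
    queue   = max(0, queue - BOTTLENECK)       # the link drains at a fixed speed
    delay   = queue / BOTTLENECK               # queuing delay in time steps
    print(f"t={t:2d}  rate={rate:3d}  queue={queue:4d}  "
          f"delay={delay:5.1f}  dropped={dropped}")
    if dropped == 0:
        rate += 10                             # no loss signal yet: keep ramping up
    else:
        rate = max(rate // 2, 1)               # loss finally noticed: back off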

As the packets' round trip time (RTT) grows, so does the time TCP needs to detect a packet loss, and with it the time needed to recover from the loss ([9], p. 61 and [12], p. 2). If response packets from the remote station are delayed long enough by the swollen queues, the connection can even be dropped.
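How long TCP waits before declaring a timeout follows directly from the measured RTT. The standard calculation (RFC 6298) maintains a smoothed RTT and a variance estimate and sets the retransmission timeout to the smoothed RTT plus four times the variance. The sketch below, with invented RTT samples and without the usual lower clamp, shows how a queue-inflated RTT pushes loss detection ever further into the future:

# Sketch of the standard retransmission-timeout calculation (RFC 6298):
# the RTO tracks a smoothed RTT plus four times its variance, so an RTT
# inflated by a bloated queue also delays the detection of a lost packet.
ALPHA, BETA, K = 1/8, 1/4, 4

def rto_trace(rtt_samples_s):
    srtt = rttvar = None
    for r in rtt_samples_s:
        if srtt is None:                      # first measurement
            srtt, rttvar = r, r / 2
        else:
            rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - r)
            srtt   = (1 - ALPHA) * srtt + ALPHA * r
        rto = srtt + K * rttvar               # lower clamp omitted in this sketch
        yield srtt, rto

# RTT climbing from 50 ms to 800 ms as the queue fills (invented values)
for srtt, rto in rto_trace([0.05, 0.1, 0.2, 0.4, 0.8]):
    print(f"SRTT = {srtt * 1000:6.1f} ms  ->  RTO = {rto * 1000:6.1f} ms")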

TCP's conservative attitude toward discarded packets and the falling price of memory modules have encouraged this development ([9], p. 59). Overly large buffers (bufferbloat) therefore reduce the performance of connections in modern networks.

No Longer with Wires

TCP was specified for a purely wired Internet and makes certain assumptions that no longer apply to wireless interfaces. First, wireless interfaces vary their transfer rates and latencies dynamically, whereas TCP expects static data rates and latencies.

Most wireless links also offer less performance than their wired counterparts, so for many connections the bottleneck sits just before the device. Because buffers are still designed for static data rates and latencies, this bottleneck promotes bufferbloat directly at the mobile device ([9], p. 62).

Second, wireless transmission methods suffer from transmission errors and dropouts. On wired lines, lost packets usually indicate congestion; on wireless connections, however, packet losses occur in bursts even during normal operation, for example, when the link breaks down for a moment [13].

TCP, however, assumes that packet losses are invariably a sign of congestion. It therefore throttles wireless connections after a few bit errors, even though the line is not busy at all. Furthermore, every switch between cellular and WiFi networks tears down the existing connection and forces a new one to be established.

Third, connections on mobile phones tend to be short-lived. Many applications transfer only small amounts of data or keep their connections open only briefly. In a web browser, for instance, the typical website component (image, stylesheet, JavaScript file) is only about 2KB [14]. If such transfers are handled with the still widely used HTTP/1.1 [15] [16], a large number of short-lived TCP connections results.

An individual transfer therefore often completes before TCP has ramped up to its maximum speed – and in the mass, this is noticeably detrimental. The brevity of the connections also means that sporadic packet losses are detected only very late relative to the connection's lifetime.
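A back-of-the-envelope calculation shows why. Assuming a typical 1460-byte segment size and an initial window of ten segments (both illustrative values), a 2KB object fits into the very first flight of data, so such a connection is over before slow start has probed the available bandwidth at all:

# Back-of-the-envelope: how many round trips does slow start need to move an
# object of a given size? (MSS and initial window are typical illustrative
# values, not measurements.)
MSS = 1460            # bytes per segment
INIT_CWND = 10        # initial congestion window in segments

def rtts_needed(object_bytes):
    cwnd, sent, rtts = INIT_CWND, 0, 0
    while sent < object_bytes:
        sent += cwnd * MSS    # one flight of data per round trip
        cwnd *= 2             # slow start doubles the window each RTT
        rtts += 1
    return rtts

for size in (2_000, 100_000, 5_000_000):
    print(f"{size:>9} bytes -> {rtts_needed(size)} RTT(s) of data transfer")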

Finally, smartphone hardware behaves differently from a typical Linux laptop, desktop, or server system. On the one hand, the baseband processor (i.e., the radio modem) is not open in most mobile phones.

Because the baseband processor runs proprietary firmware, the operating system cannot control its buffering behavior [17] [18]. On the other hand, battery life is critical to the success of mobile phones, yet traditional TCP implementations are not designed to optimize battery life.

Finding Solutions

The problems associated with TCP connections are not new. Engineers have developed some established practices that address separate aspects of the overall problem in order to improve response time or goodput. These practices center around the treatment of:

  • connections
  • error handling or standard operation
  • transmitters, receivers, or infrastructure (routers).

TCP Fast Open [19] and a larger initial congestion window [20], which reduce the time needed to establish a connection and start transferring data, mainly concern the TCP endpoints. Tail Loss Probe [21] and Early Retransmit [22] analogously try to detect packet losses more quickly or to respond to them more gracefully (Proportional Rate Reduction [22]).
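On Linux, TCP Fast Open can be tried out directly from user space. The Python sketch below is only an illustration and makes two assumptions: the kernel permits client-side Fast Open (net.ipv4.tcp_fastopen), and the host and request are placeholders. The sendto() call with MSG_FASTOPEN connects and sends in one step, so the request can travel in the SYN once the server has handed out a Fast Open cookie:

# Sketch of a TCP Fast Open client on Linux: the request rides in the SYN,
# saving one round trip when the server has issued a Fast Open cookie.
# Requires a kernel with client-side Fast Open enabled; host is a placeholder.
import socket

MSG_FASTOPEN = getattr(socket, "MSG_FASTOPEN", 0x20000000)  # Linux constant

HOST, PORT = "example.com", 80                               # placeholders
request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# sendto() with MSG_FASTOPEN connects and sends in one step; without a cached
# cookie the kernel silently falls back to an ordinary three-way handshake.
sock.sendto(request, MSG_FASTOPEN, (HOST, PORT))

reply = sock.recv(4096)
print(reply.split(b"\r\n", 1)[0].decode(errors="replace"))
sock.close()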

This article focuses on two more recent approaches to improving TCP performance. Both promise the operating systems at the communication endpoints an effective lever for exerting influence. The first approach, Active Queue Management, is important for the entire Internet, not just for mobile devices.
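To give a flavor of what is coming, the drop decision at the heart of CoDel (the algorithm inside fq_codel) can be sketched in a few lines of Python. This is a simplification of the published control law, not the kernel code: packets are dropped only after their queuing delay has exceeded a small target for a whole interval, and subsequent drops follow at intervals shrinking with the square root of the drop count:

# Simplified sketch of CoDel's control law (the AQM inside fq_codel), not the
# kernel implementation: drop when packets have been queued longer than TARGET
# for at least INTERVAL, then drop again at intervals shrinking with 1/sqrt(n).
from math import sqrt

TARGET   = 0.005   # 5 ms acceptable queuing delay
INTERVAL = 0.100   # 100 ms worst-case RTT estimate

class CoDel:
    def __init__(self):
        self.first_above = None   # when the sojourn time first exceeded TARGET
        self.dropping = False
        self.count = 0
        self.next_drop = 0.0

    def should_drop(self, sojourn, now):
        if sojourn < TARGET:                  # queue is short enough: stand down
            self.first_above = None
            self.dropping = False
            return False
        if self.first_above is None:
            self.first_above = now
        if not self.dropping and now - self.first_above >= INTERVAL:
            self.dropping = True              # delay persisted for a whole interval
            self.count = 0
            self.next_drop = now
        if self.dropping and now >= self.next_drop:
            self.count += 1
            self.next_drop = now + INTERVAL / sqrt(self.count)
            return True                       # signal the sender early
        return False

In fq_codel, this controller additionally runs per flow, so a single bulk transfer cannot inflate the delay of competing interactive flows.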
