Bufferbloat
Bufferbloat is a phenomenon in a packet-switched computer network whereby excess buffering of packets inside the network causes high latency and packet delay variation (jitter), as well as reducing overall network throughput. The term was coined by Jim Gettys in late 2010.

This problem is caused mainly by router and switch manufacturers making incorrect assumptions about whether packets should be buffered or dropped. As a general rule, packets should not be buffered for more than a few milliseconds. Longer buffering interferes with TCP's congestion-avoidance algorithms, causing high and variable latency and choking the network bottleneck for all other flows: the buffer fills with the packets of one TCP stream, and packets from other flows are then dropped. The buffers then take some time to drain before the TCP connection ramps back up to speed and floods them again.

When bufferbloat is present and the network is under load, normal web page loads can take many seconds to complete. Any service that requires consistent throughput (whether low or high bandwidth), such as VoIP, networked gaming, text and video chat, or interactive applications such as remote login, becomes next to impossible to use.

Although latency has been identified as more important than bandwidth for many years, the problem of bufferbloat has become increasingly obvious as the falling price of RAM has made large buffers extremely cheap to implement.

The problem can be eliminated by simply reducing the buffer size on the network hardware; however, buffer size is not configurable on most routers and switches, especially when the oversized buffers sit in third-party networks such as an ISP's.
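As a rough illustration of what "a few milliseconds" of buffering means in practice (a sketch only; the link rates below are hypothetical examples, not values from this article), the buffer that meets a given delay target is simply the target delay multiplied by the link rate:

def buffer_bytes_for_delay(link_rate_bps: float, max_delay_s: float) -> float:
    """Largest FIFO backlog (in bytes) that still drains within max_delay_s."""
    return link_rate_bps / 8 * max_delay_s

# Illustrative link rates; 5 ms is used as the "few milliseconds" target.
for rate_mbps in (1, 10, 100):
    size = buffer_bytes_for_delay(rate_mbps * 1e6, 0.005)
    print(f"{rate_mbps:>4} Mbit/s link: keep the buffer under ~{size / 1024:.0f} KiB")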

Details

The TCP congestion avoidance algorithm relies on packet drops to determine the available bandwidth. It speeds up the data transfer until packets start to drop, then slows the connection down; ideally it oscillates around an equilibrium equal to the speed of the link. For this to work, however, the packet drops must occur in a timely manner, so that the algorithm can select a suitable transfer speed. With a large buffer, packets still arrive, but with higher latency: no packet is dropped, so TCP does not slow down even though it should. It only slows down once it has sent so far beyond the capacity of the link that the buffer fills and begins dropping packets, by which point it has far overestimated the speed of the link.
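The following toy additive-increase/multiplicative-decrease loop (a sketch in Python with made-up numbers, not real TCP, which reacts per ACK rather than per round) shows how delayed loss feedback plays out: with a deep buffer the sender keeps increasing its window well past the link capacity, and a large standing queue, i.e. latency, persists even after the eventual loss.

def aimd_with_deep_buffer(capacity=20, buffer=200, rounds=60):
    """Each round: inject cwnd packets, drain `capacity`, queue the excess."""
    cwnd, backlog = 1, 0
    for rnd in range(rounds):
        backlog += cwnd                    # packets injected this round
        backlog -= min(backlog, capacity)  # link drains at most `capacity`
        if backlog > buffer:               # queue overflows -> packets lost
            backlog = buffer
            cwnd = max(1, cwnd // 2)       # multiplicative decrease on loss
        else:
            cwnd += 1                      # additive increase while no loss
        if rnd % 5 == 0:
            delay = backlog / capacity     # standing-queue delay, in rounds
            print(f"round {rnd:2d}: cwnd={cwnd:3d}  queued={backlog:3d}  "
                  f"queueing delay ~ {delay:4.1f} RTT")

aimd_with_deep_buffer()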

In a network buffer, packets are queued before being transmitted and are dropped only when the buffer is full. On older routers, buffers were fairly small, so they filled quickly and packets began to drop shortly after the link became saturated, allowing TCP to adjust. On newer routers, buffers have become large enough to hold several megabytes of data, which translates to 10 seconds or more at the 1 Mbit/s line rate used for residential Internet access. This causes the TCP algorithm to behave erratically and possibly even time out completely.
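A back-of-the-envelope check of that figure (the buffer sizes below are illustrative): the worst-case queueing delay of a full buffer is its size divided by the line rate.

def drain_time_s(buffer_bytes: float, line_rate_bps: float) -> float:
    """Seconds needed to transmit a full buffer at the given line rate."""
    return buffer_bytes * 8 / line_rate_bps

for megabytes in (1.25, 4, 16):
    t = drain_time_s(megabytes * 1e6, 1e6)   # 1 Mbit/s residential line rate
    print(f"{megabytes:5.2f} MB buffered at 1 Mbit/s -> {t:5.0f} s of queueing delay")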

The problem also affects other protocols. The buffer can easily build up several seconds' worth of data before packets start to drop, and every packet must wait in the buffer until it is transmitted; this reduces the interactivity of interactive applications and causes latency problems for gamers and VoIP. This remains the case when DiffServ is used to prioritise traffic, which places each class of traffic in its own buffer (queue): HTTP and VoIP may then be buffered independently, but each buffer is still independently susceptible to bufferbloat.
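Below is a minimal sketch of that per-class behaviour, assuming two hypothetical classes ("bulk" and "voip") and ignoring how the scheduler shares the link between them: separating the queues keeps bulk traffic from sitting in front of VoIP packets, yet any class that outruns the link still builds a standing queue of its own.

LINK_RATE_BPS = 1_000_000                # illustrative 1 Mbit/s bottleneck

class ClassQueue:
    """One FIFO per traffic class; delay depends only on this class's backlog."""
    def __init__(self, name: str):
        self.name = name
        self.backlog_bytes = 0

    def enqueue(self, size_bytes: int) -> float:
        # Queueing delay this packet will see: the bytes already ahead of it.
        delay_s = self.backlog_bytes * 8 / LINK_RATE_BPS
        self.backlog_bytes += size_bytes
        return delay_s

bulk, voip = ClassQueue("bulk"), ClassQueue("voip")

for _ in range(2000):                    # a bulk transfer floods its own queue
    bulk.enqueue(1500)

print(f"next bulk packet waits ~{bulk.enqueue(1500):.0f} s")         # bloated class
print(f"next VoIP packet waits ~{voip.enqueue(200) * 1000:.1f} ms")  # empty class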

With TCP, bufferbloat causes extra delays during network congestion, limiting the speed of Internet connections. Other network protocols, including UDP-based protocols, also appear to be affected. This restricts the speed of connections and degrades interactive Web 2.0 applications, gaming and VoIP.
