Fair queueing
Fair queuing is a scheduling algorithm used in computer and telecommunications networks to allow multiple packet flows to share the link capacity fairly. The advantage over conventional first-in, first-out (FIFO) queuing is that a high-data-rate flow, consisting of large or many data packets, cannot take more than its fair share of the link capacity. Fair queuing can be interpreted as a packet-based approximation of generalized processor sharing (GPS). It was proposed by John Nagle in 1985 and has since become one of the most studied scheduling algorithms.

Properties

Fair queuing is used in routers, switches, and statistical multiplexers that forward packets from a buffer. The buffer works as a queuing system, where data packets are stored temporarily until they are transmitted. The buffer space is divided into many queues, each of which holds the packets of one flow, defined for instance by source and destination IP addresses.
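
As a rough sketch of this per-flow buffering (not taken from the article; the flow key and function names are assumptions), the queues can be held in a dictionary keyed by a flow identifier such as the source/destination address pair:

    from collections import deque

    # Hypothetical per-flow FIFO queues, keyed by (source IP, destination IP).
    # A real classifier might use the full 5-tuple (addresses, ports, protocol).
    queues = {}

    def enqueue(packet, src_ip, dst_ip):
        """Append a packet to its flow's queue, creating the queue on demand."""
        queues.setdefault((src_ip, dst_ip), deque()).append(packet)

    def active_flows():
        """Flows with non-empty queues; only these compete for the link."""
        return [flow for flow, q in queues.items() if q]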

With a link data rate of R, at any given time the N active data flows (the ones with non-empty queues) are each serviced with an average data rate of R / N. Over short time intervals the data rate may fluctuate around this value, since packets are delivered sequentially.

Fair queuing achieves max-min fairness: its first priority is to maximize the minimum data rate that any of the active data flows experiences, its second priority is to maximize the second-lowest data rate, and so on. This results in lower total throughput (lower system spectral efficiency in wireless networks) than maximum-throughput scheduling, but it avoids starvation of expensive flows.
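
To make the max-min criterion concrete, the following sketch (an illustration only, not from the article) computes a max-min fair division of a link by progressive filling: flows with small demands are satisfied in full, and the leftover capacity is split evenly among the rest:

    def max_min_allocation(capacity, demands):
        """Max-min fair division of `capacity` among flows with given demands."""
        allocation = [0.0] * len(demands)
        remaining = sorted(range(len(demands)), key=lambda i: demands[i])
        while remaining:
            share = capacity / len(remaining)
            i = remaining[0]
            if demands[i] <= share:
                # This flow needs less than an equal share: satisfy it fully.
                allocation[i] = demands[i]
                capacity -= demands[i]
                remaining.pop(0)
            else:
                # Every remaining flow can use an equal share of what is left.
                for j in remaining:
                    allocation[j] = share
                break
        return allocation

    # Three flows demanding 2, 4 and 10 Mbit/s on a 12 Mbit/s link
    # receive 2, 4 and 6 Mbit/s respectively.
    print(max_min_allocation(12, [2, 4, 10]))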

In contrast to round-robin scheduling, fair queuing takes data packet sizes into account, so that each flow is given the opportunity to transmit an equal amount of data rather than an equal number of packets. A weighted version of fair queuing is called weighted fair queuing (and is itself sometimes referred to simply as fair queuing). Weighting is achieved by multiplying the packet size considered by the fair queuing algorithm by the inverse of the weight of the associated queue. Fair queuing is the special case of weighted fair queuing in which all queues have equal weights.
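
A minimal sketch of this weighting step (the function and parameter names are assumptions, not taken from the article): the length the scheduler accounts for is divided by the queue's weight, so a queue with a larger weight appears to send shorter packets and is therefore served more often:

    def effective_length(packet_size_bits, weight=1.0):
        """Packet length as seen by a weighted fair queuing scheduler."""
        return packet_size_bits / weight

    # With weight 1 for every queue this reduces to plain fair queuing.
    # A 1500-byte packet on a weight-2 queue is charged as 6000 virtual bits
    # instead of 12000, roughly doubling that queue's share.
    print(effective_length(1500 * 8, weight=2.0))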

Algorithm

Fair queuing attempts to emulate the fairness of bitwise round-robin sharing of link resources among competing flows. Packet-based flows, however, must be transmitted packet by packet and in sequence. Fair queuing selects the transmission order of the packets by modeling the finish time of each packet as if they could be transmitted bit by bit in round-robin fashion. The packet with the earliest finish time under this model is selected next for transmission.

Modeling the actual finish times, while feasible, is computationally intensive. The model needs to be substantially recomputed every time a packet is selected for transmission and every time a new packet arrives in any queue.

To reduce the computational load, the concept of virtual time is introduced. The finish time of each packet is computed on this alternative, monotonically increasing, virtual timescale. While virtual time does not accurately model the instants at which packets complete their transmissions, it does accurately model the order in which the transmissions must occur to meet the objectives of the full model. Using virtual time, it is unnecessary to recompute the finish times of previously queued packets. Although the finish time, in absolute terms, of an existing packet is potentially affected by new arrivals, its finish time on the virtual timescale is unchanged: the virtual timescale warps with respect to real time to accommodate any new transmission.

The virtual finish time of a newly queued packet is given by the virtual finish time of the packet queued ahead of it in its flow plus the packet's own size. If no packets are queued for the flow, the virtual finish time is given by the current virtual time plus the packet's size, where the current virtual time is the virtual finish time assigned to the packet that most recently completed transmission, plus the progress of the current transmission (if any).
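
A minimal sketch of this rule (the function and argument names are assumptions for illustration):

    def virtual_finish_time(packet_size, last_finish_in_flow, current_virtual_time):
        """Virtual finish time of a newly queued packet.

        last_finish_in_flow is the virtual finish time of the packet queued
        ahead of it in the same flow, or None if the flow's queue is empty.
        """
        if last_finish_in_flow is not None:
            # Queue behind the previous packet of the same flow.
            return last_finish_in_flow + packet_size
        # Empty flow queue: start from the current virtual time.
        return current_virtual_time + packet_size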

Once the virtual finish times of all candidate packets (i.e., the packets at the head of all non-empty flow queues) have been computed, fair queuing compares them and transmits the packet with the minimum virtual finish time.
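
Putting the pieces together, the following is a hedged, self-contained sketch of the whole procedure in the spirit of the description above (class and method names are assumptions; for simplicity, virtual time is advanced to the finish time of each packet as it is selected, and progress within an ongoing transmission is ignored):

    from collections import deque

    class FairQueueScheduler:
        """Simplified fair queuing sketch; sizes and virtual times share one unit."""

        def __init__(self):
            self.queues = {}        # flow id -> deque of (size, virtual finish time)
            self.last_finish = {}   # flow id -> virtual finish time of its newest packet
            self.virtual_time = 0.0

        def enqueue(self, flow_id, size):
            queue = self.queues.setdefault(flow_id, deque())
            # Behind earlier packets of the same flow, or at the current virtual time.
            start = self.last_finish[flow_id] if queue else self.virtual_time
            finish = start + size
            queue.append((size, finish))
            self.last_finish[flow_id] = finish

        def dequeue(self):
            """Return (flow id, size) of the head packet with the smallest virtual finish time."""
            candidates = [(q[0][1], flow) for flow, q in self.queues.items() if q]
            if not candidates:
                return None
            finish, flow = min(candidates)
            size, _ = self.queues[flow].popleft()
            self.virtual_time = max(self.virtual_time, finish)
            return flow, size

    # Example: the three 500-byte packets are sent around the first 1500-byte
    # packet, so both flows transmit roughly equal bytes while both are backlogged.
    fq = FairQueueScheduler()
    for _ in range(3):
        fq.enqueue("big", 1500)
        fq.enqueue("small", 500)
    print([fq.dequeue() for _ in range(6)])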

See also

  • Scheduling (computing)
  • Scheduling algorithm
  • Max-min fairness
  • Weighted fair queuing
  • Deficit round robin
  • Weighted round robin
  • Statistical multiplexing

The source of this article is Wikipedia, the free encyclopedia. The text of this article is licensed under the GFDL.
 