Bandwidth management
Bandwidth management is the process of measuring and controlling the communications (traffic, packets) on a network link, to avoid filling the link to capacity or overfilling the link, which would result in network congestion and poor performance of the network.

Management

Bandwidth management mechanisms may be used to further engineer performance and include:
  • Traffic shaping (rate limiting):
    • Token bucket (see the sketch after this list)
    • Leaky bucket
    • TCP rate control - artificially adjusting the TCP window size as well as controlling the rate at which ACKs are returned to the sender
  • Scheduling algorithms:
    • Weighted fair queuing (WFQ)
    • Class-based weighted fair queuing
    • Weighted round robin (WRR)
    • Deficit weighted round robin (DWRR)
    • Hierarchical Fair Service Curve (HFSC)
  • Congestion avoidance:
    • RED, WRED - lessen the possibility of port queue buffer tail drops, which lowers the likelihood of TCP global synchronization
    • Policing (marking/dropping packets in excess of the committed traffic rate and burst size)
    • Explicit congestion notification (ECN)
    • Buffer tuning
  • Bandwidth reservation protocols / algorithms:
    • Resource Reservation Protocol (RSVP)
    • Constraint-based Routing Label Distribution Protocol (CR-LDP)
    • Top-nodes algorithm
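
The token bucket algorithm in particular is compact enough to show in code. The following Python sketch is illustrative only (the class name, parameters, and byte-based accounting are assumptions, not taken from any particular shaper implementation): tokens accumulate at a configured rate up to a fixed capacity, and a packet conforms only if enough tokens are available, which bounds both the average rate and the burst size.

# Minimal token-bucket rate limiter sketch (hypothetical, not a specific
# router or OS implementation). Tokens refill at `rate` bytes per second
# up to `capacity`; a packet conforms only if enough tokens are available.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate              # refill rate (bytes per second)
        self.capacity = capacity      # maximum burst size (bytes)
        self.tokens = capacity        # start with a full bucket
        self.last = time.monotonic()

    def allow(self, size):
        """Return True if a packet of `size` bytes conforms; consume tokens if so."""
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False                  # non-conformant: drop or delay the packet

# Example: limit a flow to 125 kB/s with bursts of up to 10 kB.
bucket = TokenBucket(rate=125_000, capacity=10_000)
for packet_size in (1500, 1500, 9000, 1500):
    print(packet_size, "bytes:", "send" if bucket.allow(packet_size) else "drop/delay")

A leaky bucket shaper can be sketched in much the same way, by draining a queue at a fixed rate rather than accumulating credit for bursts.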

Link performance

Issues which may limit the performance of a given link include:
  • TCP determines the capacity of a connection by flooding it until packets start being dropped (slow start; see the sketch after this list)
  • Queueing in routers results in higher latency and jitter as the network approaches (and occasionally exceeds) capacity
  • TCP global synchronization when the network reaches capacity results in wasted bandwidth
  • Burstiness of web traffic requires spare bandwidth to rapidly accommodate bursty traffic
  • Lack of widespread support for explicit congestion notification and Quality of service management on the Internet
  • Internet service providers typically retain control over queue management and quality of service at their end of the link
  • Window shaping allows higher-end products to reduce traffic flows, which reduces queue depth and allows more users to share bandwidth fairly
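
The slow-start behaviour in the first bullet can be illustrated with a toy simulation. This is a sketch under simplifying assumptions (the window is measured in segments, the link has a fixed capacity, and loss occurs exactly when the window exceeds it); it does not model any particular TCP implementation.

# Toy sketch of TCP slow start and congestion avoidance (assumed idealized
# link, window in segments). Real TCP stacks work in bytes and add many
# refinements such as fast retransmit, SACK, and pacing.
def simulate_tcp(link_capacity=64, rtts=20):
    cwnd = 1             # congestion window (segments)
    ssthresh = 1 << 30   # effectively "no threshold yet"
    history = []
    for _ in range(rtts):
        history.append(cwnd)
        if cwnd > link_capacity:
            # Loss detected: halve the window and leave slow start.
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2    # slow start: exponential growth per round trip
        else:
            cwnd += 1    # congestion avoidance: linear growth per round trip
    return history

print(simulate_tcp())

The printed window sizes show the characteristic pattern: exponential growth until the first loss fills the link, followed by the much slower linear growth of congestion avoidance.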

See also

  • INASP runs bandwidth management training workshops and produces reports
  • Bandwidth management using NetFlow and IPFIX data
  • Network congestion avoidance lists some techniques for prevention and management of congestion on routers
  • Traffic shaping and Rate limiting are bandwidth management (traffic control) techniques
  • Bandwidth management is a subset of Network management and Performance management
  • Network traffic measurement is a subset of Network monitoring
  • Bandwidth throttling
  • Bandwidth cap

