End-to-end principle
The end-to-end principle is a classic design principle of computer networking which states that application-specific functions ought to reside in the end hosts of a network rather than in intermediary nodes, provided they can be implemented "completely and correctly" in the end hosts. First explicitly articulated in a 1981 conference paper by Saltzer, Reed, and Clark, it has inspired and informed many subsequent debates on the proper distribution of functions in the Internet and in communication networks more generally.

Going back to Paul Baran's work on obtaining reliability from unreliable parts in the early 1960s, the basic intuition behind the original principle is that the payoffs from adding functions to the network quickly diminish, especially in those cases where the end hosts will have to implement the functions anyway for reasons of "completeness and correctness", any efforts of the network notwithstanding. The canonical example for the end-to-end principle is that of arbitrarily reliable data transfer between two end hosts. However, the principle has also been shown to apply, for example, to the question of buffer sizes in intermediary nodes (see bufferbloat).
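
As a rough illustration of why oversized buffers in intermediary nodes are a problem, the short sketch below computes the worst-case queueing delay a full buffer adds; the buffer size and link speed are arbitrary example figures, not measurements from any real network.

```python
# Worst-case queueing delay added by a full buffer in an intermediary node:
# delay = buffered bits / link rate.
buffer_bytes = 1_000_000          # 1 MB of buffering (example figure)
link_bits_per_s = 1_000_000       # 1 Mbit/s bottleneck link (example figure)
delay_s = buffer_bytes * 8 / link_bits_per_s
print(delay_s)                    # 8.0 seconds of added latency when full
```

The end hosts cannot compensate for this delay, which is why buffer sizing is a case where adding capacity inside the network actively hurts the endpoints.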

A common misperception about the end-to-end principle is that it implies network neutrality or a "dumb" network. First, there are functions that are orthogonal to application specifics or that simply cannot be implemented "completely and correctly" at the network ends; and second, there may be good reasons to offload functions from end hosts to intermediary nodes, for performance reasons or when the intermediary nodes are better equipped to perform the functions or parts thereof.

History

The concept of end-to-end connectivity and network intelligence at the end nodes dates back to research on packet-switching networks in the 1970s, notably CYCLADES. A 1981 academic paper entitled End-to-end arguments in system design by Jerome H. Saltzer, David P. Reed, and David D. Clark argued that reliable systems tend to require end-to-end processing to operate correctly, in addition to any processing in the intermediate system. They pointed out that most features in the lowest level of a communications system have costs for all higher-layer clients, even if those clients do not need the features, and are redundant if the clients have to reimplement the features on an end-to-end basis.

This is sometimes said to lead to a model of a dumb, minimal network with smart terminals, a completely different model from the previous paradigm of the smart network with dumb terminals.

In 1995, the Federal Networking Council adopted a resolution defining the Internet as a “global information system” that is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and provides, uses or makes accessible, either publicly or privately, high level services layered on this communications and related infrastructure.

Limitations

The rapid expansion of the Internet and the resulting IPv4 address exhaustion problem have forced some design changes in the originally conceived architecture of the IP address space, in terms of both address assignment and routing technologies. Technologies invented to alleviate the exhaustion problem, such as network address translation (NAT), have introduced elements that do not abide by the end-to-end principle. Because end-to-end connectivity is lost, some network protocols require specific support from network elements for NAT traversal. This impediment hinders the deployment of many new, often interactive, applications, including security protocols (IPsec), migration to IPv6 (tunneling IPv6 in IPv4), peer-to-peer applications, and networked games.
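
To make concrete why NAT breaks end-to-end reachability, the following toy model (hypothetical addresses and ports, not a real translator implementation) captures the essential behavior of a NAT mapping table: outbound packets create mappings, while unsolicited inbound packets find no mapping and are dropped.

```python
# Toy model of a NAT translation table (illustrative only; real translators
# also track protocol state, timeouts, and port-allocation policy).

class Nat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000                  # arbitrary starting port
        self.out = {}    # (private_ip, private_port) -> public_port
        self.back = {}   # public_port -> (private_ip, private_port)

    def outbound(self, src, dst):
        """Rewrite an outbound packet, creating a mapping on first use."""
        if src not in self.out:
            self.out[src] = self.next_port
            self.back[self.next_port] = src
            self.next_port += 1
        return (self.public_ip, self.out[src]), dst

    def inbound(self, src, dst_port):
        """Deliver an inbound packet only if a mapping already exists."""
        if dst_port not in self.back:
            return None                         # unsolicited: dropped
        return src, self.back[dst_port]

nat = Nat("203.0.113.5")
# A host behind the NAT contacts an outside server: a mapping is created.
print(nat.outbound(("192.168.0.10", 5000), ("198.51.100.7", 80)))
# An unsolicited connection attempt from outside finds no mapping.
print(nat.inbound(("198.51.100.7", 80), 49999))   # -> None
```

Protocols that carry addresses inside their payloads, or that authenticate the IP header as IPsec AH does, cannot be repaired by such a table alone, which is why they need the explicit NAT traversal support mentioned above.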

Sometimes end-to-end connectivity is deliberately, if misguidedly, broken as a means of implementing network security: address translation also limits the routing scope, so computers behind NAT cannot be addressed directly from untrusted zones. However, the consensus among security experts is that this does not provide proper security and may in fact impede the implementation of proper techniques.

Such implementation trends divide Internet users into those who have "real" Internet connectivity and those who are restricted to applications that use only outbound network connections.

Examples

In the Internet protocol suite, the Internet Protocol (IP) is a simple ("dumb"), stateless protocol that moves datagrams across the network, while TCP is a smart transport protocol providing error detection, retransmission, congestion control, and flow control end-to-end. The network itself (the routers) needs only to support the simple, lightweight IP; the endpoints run the heavier TCP on top of it when needed.
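
To make the division of labor concrete, here is a sketch of reliability implemented entirely at the endpoints over an unreliable datagram service, with UDP standing in for the "dumb" IP layer. It is a bare stop-and-wait scheme: the port, chunk size, timeout, and retry limit are illustrative choices, connection teardown is simplified, and real TCP adds sliding windows, congestion control, and connection state on top of this basic idea.

```python
import socket

PORT, CHUNK, TIMEOUT = 9999, 1024, 0.5       # illustrative values, not standard

def send_reliably(data: bytes, dest=("127.0.0.1", PORT)):
    """Number each chunk and retransmit until the receiver acknowledges it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT)
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] + [b""]
    for seq, chunk in enumerate(chunks):      # empty chunk marks the end
        packet = seq.to_bytes(4, "big") + chunk
        for _attempt in range(50):            # give up eventually
            sock.sendto(packet, dest)         # the network may drop this
            try:
                ack, _ = sock.recvfrom(16)
                if int.from_bytes(ack, "big") == seq:
                    break                     # acknowledged, move to next chunk
            except socket.timeout:
                continue                      # no ACK in time, retransmit
        else:
            raise TimeoutError("peer unreachable")
    sock.close()

def receive_reliably(listen=("0.0.0.0", PORT)) -> bytes:
    """ACK every packet; deliver chunks to the application only in order."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(listen)
    expected, parts = 0, []
    while True:
        packet, sender = sock.recvfrom(CHUNK + 4)
        seq = int.from_bytes(packet[:4], "big")
        sock.sendto(packet[:4], sender)       # ACK whatever arrived
        if seq == expected:                   # duplicates were re-ACKed above
            if packet[4:] == b"":             # end marker: transfer complete
                sock.close()
                return b"".join(parts)
            parts.append(packet[4:])
            expected += 1
```

Running `receive_reliably()` in one process and `send_reliably(b"some payload")` in another illustrates the point of the example: the routers in between never need to know anything about sequence numbers or acknowledgments.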

A second canonical example is that of file transfer. Every reliable file transfer protocol and file transfer program should include a checksum that is validated only after everything has been successfully stored on disk. Disk errors, router errors, and file transfer software errors all make an end-to-end checksum necessary. Therefore, there is a limit to how strong the TCP checksum needs to be: any robust application has to implement an end-to-end check of its own anyway.
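
A minimal sketch of such an end-to-end check, assuming the sender's SHA-256 digest is communicated out of band (the function name and digest exchange are illustrative, not part of any particular protocol): the receiver validates only after the data has actually been written to disk, so that disk, router, and software errors are all covered by the same check.

```python
import hashlib
import os

def verify_after_storing(path: str, expected_sha256: str, data: bytes) -> bool:
    """Write received data to disk, then checksum the stored copy."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())          # make sure the bytes reached the disk
    with open(path, "rb") as f:       # re-read the stored copy, not the buffer
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == expected_sha256  # end-to-end check covers every hop
```

The point of the original argument is that this check belongs in the application: even a perfectly reliable TCP connection cannot vouch for what the disk or the file transfer software did with the data afterwards.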

A third example (not from the original paper) is the EtherType field of Ethernet. An Ethernet frame does not attempt to provide any interpretation for the 16 bits of the type field. Adding special interpretation to some of these bits would reduce the total number of available EtherTypes and hurt the scalability of higher-layer protocols, i.e. all higher-layer protocols would pay a price for the benefit of just a few. Attempts to add more elaborate interpretation (e.g. the IEEE 802 SSAP/DSAP service access points) have generally been ignored by most network designs.
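
For illustration, here is a small sketch of how an end host reads the EtherType field. The frame format itself assigns no meaning to the 16 bits beyond the registered type codes; interpretation lives in the receiving host's own dispatch table. The 0x0800 (IPv4), 0x86DD (IPv6), and 0x0806 (ARP) values are standard assignments, while the dispatch function and handler labels are hypothetical.

```python
import struct

# Parse an Ethernet II header: destination MAC, source MAC, 16-bit EtherType.
# The frame format treats the type purely as an opaque number; what to do
# with the payload is decided by the receiving host's dispatch table.

ETHERTYPE_HANDLERS = {
    0x0800: "IPv4",    # standard EtherType assignments; the handler names
    0x86DD: "IPv6",    # here are just labels standing in for host-side code
    0x0806: "ARP",
}

def dispatch_frame(frame: bytes):
    if len(frame) < 14:
        return None                  # not a complete Ethernet header
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    protocol = ETHERTYPE_HANDLERS.get(ethertype)
    if protocol is None:
        return None                  # unknown type: the frame format itself
                                     # imposes no further interpretation
    return protocol, frame[14:]      # payload handed to that higher layer
```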

Views on the principle

Some view the end-to-end principle as a pragmatic engineering philosophy for network system design that merely prefers putting intelligence towards the end points. It does not forbid intelligence in the network itself if it makes more practical sense.

From the beginning, the end-to-end arguments revolved around requirements that could be implemented correctly at the end-points; if implementation inside the network is the only way to accomplish the requirement, then an end-to-end argument isn't appropriate in the first place.


Indeed, as noted in RFC 1958, edited by Brian Carpenter in June 1996 and entitled “Architectural Principles of the Internet”: “[i]n searching for Internet architectural principles, we must remember that technical change is continuous in the information technology industry. The Internet reflects this. ... In this environment, some architectural principles inevitably change. Principles that seemed inviolable a few years ago are deprecated tomorrow. The principle of constant change is perhaps the only principle of the Internet that should survive indefinitely.” This is particularly true with respect to the so-called “end-to-end” principle.

As noted by Bob Kahn, co-inventor of the Internet Protocol:
The original Internet involved three individual networks, namely the ARPANET, the Packet Radio network and the Packet Satellite network, all three of which had been developed with DARPA support. One early consideration that was rejected was to change each of these networks to be able to interpret and route internet packets so that there would be no need for external devices to route the traffic. However, this would have required major changes to all three networks and would have required synchronized changes in all three to accommodate protocol evolutions. Instead, it was decided to create what were called “gateways,” the forerunner of today’s routers, to handle the IP protocol-based networks. Reliable packet communication was handled by a combination of factors, but, ultimately, the TCP protocol provided an end-to-end means of reassembly of packet fragments, error checking and acknowledgment back to the source. The resulting fact that no changes were needed in the individual networks was interpreted by some as implying that the Internet design assumed only dumb networks with all the smarts being at the boundaries. Nothing could have been further from the truth. The initial choice of using gateways/routers was purely pragmatic and should imply nothing about how the Internet might operate in the future.


In comments submitted by Patrice Lyons to the United Nations Working Group on Internet Governance (November 4, 2004), entitled “The End-End Principle and the Definition of Internet,” on behalf of Bob Kahn’s non-profit research organization, the Corporation for National Research Initiatives (CNRI), it was noted that:
To argue today that the only stateful elements that may be active in the Internet environment should be located at the edges of the Internet is to ignore the evolution of software and other technologies to provide a host of services throughout the Internet. The layering approach has many advantages and should be retained along with more integrated system architectures; the approach was a practical way of overlaying the Internet architecture over existing networks when it was difficult to coordinate the modification of these networks, if indeed such modifications could have been agreed upon and implemented. For some newer applications, maintaining state information within the network may now be desirable for efficiency if not overall performance effectiveness. In addition, current research efforts may need to draw upon innovative methods to increase security of communications, develop new forms of structuring data, create and deploy dynamic metadata repositories, or real-time authentication of the information itself.


Specifically, CNRI proposed that, in the third element of the FNC definition of Internet, after the phrase "high level services layered on", it is advisable to add the following words: "or integrated with", and observed that this point is "directly relevant to the ongoing discussions about the so-called ‘end-to-end’ principle that is often viewed as essential to an understanding of the Internet". Further, while the end-to-end principle may have been relevant in the environment where the Internet originated, it has not been critical for a number of years going back "at least to the early work on mobile programs, distributed searching, and certain aspects of collaborative computing".
