The end-to-end principle is one of the central design principles of the Internet and is implemented in the design of the underlying methods and protocols in the Internet Protocol Suite. It is also used in other distributed systems. The principle states that, whenever possible, communications protocol operations should be defined to occur at the end-points of a communications system, or as close as possible to the resource being controlled.

According to the end-to-end principle, protocol features are justified in the lower layers of a system only if they are a performance optimization. Hence, TCP retransmission for reliability is still justified, but efforts to improve TCP reliability should stop once further effort no longer pays for itself in performance, since the end-points must perform their own end-to-end checks regardless.

History

The concept and research of end-to-end connectivity and network intelligence at the end-nodes reach back to packet-switching networks in the 1970s, cf. CYCLADES. A 1981 paper entitled "End-to-end arguments in system design" by Jerome H. Saltzer, David P. Reed, and David D. Clark argued that reliable systems tend to require end-to-end processing to operate correctly, in addition to any processing in the intermediate system. They pointed out that most features in the lowest level of a communications system impose costs on all higher-layer clients, even those that do not need the features, and are redundant if the clients have to reimplement the features on an end-to-end basis.

This leads to the model of a "dumb, minimal network" with smart terminals, a completely different model from the previous paradigm of the smart network with dumb terminals. However, the end-to-end principle was always meant to be a pragmatic engineering philosophy for network system design that merely prefers putting intelligence towards the end points; it does not forbid intelligence in the network itself when it makes more practical sense to place certain intelligence in the network rather than at the end-points. Writing in 2000 in "Rethinking the design of the Internet: The end to end arguments vs. the brave new world", David D. Clark and Marjory S. Blumenthal put it this way:

from the beginning, the end to end arguments revolved around requirements that could be implemented correctly at the end-points; if implementation inside the network is the only way to accomplish the requirement, then an end to end argument isn't appropriate in the first place.

Examples

In the Internet Protocol Suite, the Internet Protocol is a simple ("dumb"), stateless protocol that moves datagrams across the network, and TCP is a smart transport protocol providing error detection, retransmission, congestion control, and flow control end-to-end. The network itself (the routers) needs only to support the simple, lightweight IP; the endpoints run the heavier TCP on top of it when needed.
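
As an illustrative sketch only (not from the original paper), the Python fragment below rebuilds a tiny piece of that division of labour on top of UDP, which, like IP, delivers datagrams with no guarantees; the endpoint supplies reliability by retransmitting until an acknowledgement arrives. The receiver address, timeout, retry limit and the b"ACK" convention are assumptions made for the example; real TCP additionally involves sequence numbers, congestion control and flow control.

    import socket

    # Minimal stop-and-wait sender over UDP (an unreliable datagram service,
    # much like raw IP). Reliability is implemented at the endpoint, not in
    # the network, by retransmitting until the receiver echoes b"ACK".
    RECEIVER_ADDR = ("127.0.0.1", 9000)   # assumed address of a cooperating receiver
    TIMEOUT_S = 0.5                       # retransmit if no ACK arrives in this window
    MAX_TRIES = 5

    def send_reliably(payload: bytes) -> bool:
        """Send one datagram and retransmit until an acknowledgement arrives."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(TIMEOUT_S)
            for attempt in range(1, MAX_TRIES + 1):
                sock.sendto(payload, RECEIVER_ADDR)
                try:
                    reply, _ = sock.recvfrom(16)
                    if reply == b"ACK":
                        return True       # end-to-end confirmation received
                except OSError:           # timeout or ICMP error: no ACK seen
                    print(f"attempt {attempt}: no ACK, retransmitting")
            return False                  # give up; the application decides what to do

    if __name__ == "__main__":
        print("delivered:", send_reliably(b"hello, end to end"))

The retransmission logic lives entirely in the endpoint; the datagram service underneath stays as simple as possible.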

A second canonical example is that of file transfer. Every reliable file transfer protocol and file transfer program should contain a checksum, which is validated only after everything has been successfully stored on disk. Disk errors, router errors, and file transfer software errors all make an end-to-end checksum necessary. Consequently, there is a limit to how strong the TCP checksum needs to be, because any robust end-to-end application has to reimplement the check itself anyway.
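
A minimal sketch of such a check, assuming SHA-256 as the checksum and a local file as the destination (both are illustrative choices, not part of the argument): the receiver declares success only after re-reading the stored bytes from disk and comparing them with the digest the sender computed.

    import hashlib
    from pathlib import Path

    # End-to-end file transfer check: the transfer counts as successful only
    # once the bytes re-read from disk match the sender's checksum, so errors
    # introduced by the network, the disk, or the transfer software itself are
    # all caught by this single final check.
    def sender_checksum(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def receive_and_verify(data: bytes, expected: str, destination: Path) -> bool:
        destination.write_bytes(data)        # store the received bytes first...
        stored = destination.read_bytes()    # ...then re-read them from disk
        return hashlib.sha256(stored).hexdigest() == expected

    if __name__ == "__main__":
        payload = b"file contents travelling across an unreliable path"
        digest = sender_checksum(payload)
        print("verified end to end:",
              receive_and_verify(payload, digest, Path("received.bin")))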

A third example (not from the original paper) is the EtherType field of Ethernet. Ethernet does not attempt to interpret the 16-bit type field of a frame; it simply carries it for the higher layers. Adding special interpretation to some of these bits would reduce the total number of available EtherTypes, hurting the scalability of higher-layer protocols, i.e. all higher-layer protocols would pay a price for the benefit of just a few. Attempts to add elaborate interpretation (e.g. IEEE 802 SSAP/DSAP) have generally been ignored by most network designs, which follow the end-to-end principle.
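
A rough sketch of what this looks like from a receiving stack's point of view: the link layer treats the EtherType purely as an opaque demultiplexing key and hands the payload, uninterpreted, to whichever higher layer registered that value. The handler table and the synthetic frame below are invented for the example.

    import struct

    # Destination MAC, source MAC, EtherType (network byte order); the link
    # layer extracts the 16-bit type but attaches no meaning to it.
    ETHERNET_HEADER = struct.Struct("!6s6sH")

    # Illustrative dispatch table; in a real stack the higher layers register here.
    HANDLERS = {
        0x0800: lambda payload: f"IPv4 packet, {len(payload)} bytes",
        0x0806: lambda payload: f"ARP packet, {len(payload)} bytes",
        0x86DD: lambda payload: f"IPv6 packet, {len(payload)} bytes",
    }

    def demultiplex(frame: bytes) -> str:
        dst, src, ethertype = ETHERNET_HEADER.unpack_from(frame)
        payload = frame[ETHERNET_HEADER.size:]
        handler = HANDLERS.get(ethertype)
        if handler is None:
            return f"unknown EtherType 0x{ethertype:04x}; frame dropped"
        return handler(payload)             # interpretation happens above the link layer

    if __name__ == "__main__":
        frame = bytes(6) + bytes(6) + struct.pack("!H", 0x0800) + bytes(20)
        print(demultiplex(frame))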

Applicability

The end-to-end principle has proved to work well for applications that require a high degree of data accuracy combined with a high tolerance for delay, such as file transfer, and much less well for real-time applications such as telephony, where low latency matters more than absolute data accuracy. The end-to-end model is also not well suited to large multicast and broadcast networks, especially those with high loss such as some wireless systems, because the retransmission overhead it imposes is too high for most applications to bear.[citation needed]

Network neutrality debate

Some advocates of network neutrality argue that neutrality is needed in order to preserve the end-to-end principle. Under this principle, a neutral network is a dumb network, merely passing packets regardless of the applications they support. This point of view was expressed by David S. Isenberg in his seminal paper, "The Rise of the Stupid Network",[1] to wit:

A new network "philosophy and architecture," is replacing the vision of an Intelligent Network. The vision is one in which the public communications network would be engineered for "always-on" use, not intermittence and scarcity. It would be engineered for intelligence at the end-user's device, not in the network. And the network would be engineered simply to "Deliver the Bits, Stupid," not for fancy network routing or "smart" number translation. . . . In the Stupid Network, the data would tell the network where it needs to go. (In contrast, in a circuit network, the network tells the data where to go.) In a Stupid Network, the data on it would be the boss. . . .End user devices would be free to behave flexibly because, in the Stupid Network the data is boss, bits are essentially free, and there is no assumption that the data is of a single data rate or data type.

Terms such as "intelligent" and "stupid" here merely signify the network's level of knowledge about, and influence over, the packets it handles; they carry no connotations of inferiority or superiority.

The seminal paper on the end-to-end principle, "End-to-end arguments in system design" by Saltzer, Reed, and Clark,[2] actually argues that network intelligence does not relieve end systems of the requirement to check inbound data for errors and to rate-limit the sender; it does not argue for a wholesale removal of intelligence from the network core. The end-to-end argument is one of many design tools, not a universal one:

The end-to-end argument does not tell us where to put the early checks, since either layer can do this performance-enhancement job. Placing the early retry protocol in the file transfer application simplifies the communication system, but may increase overall cost, since the communication system is shared by other applications and each application must now provide its own reliability enhancement. Placing the early retry protocol in the communication system may be more efficient, since it may be performed inside the network on a hop-by-hop basis, reducing the delay involved in correcting a failure. At the same time, there may be some application that finds the cost of the enhancement is not worth the result but it now has no choice in the matter.

The appropriate placement of functions in a protocol stack depends on many factors.


References

  1. ^ Isenberg, David (1996-08-01). "The Rise of the Stupid Network". Retrieved 2006-08-19.
  2. ^ "End-to-end arguments in system design", Jerome H. Saltzer, David P. Reed, and David D. Clark, ACM Transactions on Computer Systems 2, 4 (November 1984) pages 277-288