IETF News

Internet Society Panel: Evolution of the Internet’s End-to-End Architecture

By: Carolyn Duffy Marsan

Date: July 1, 2014


On 4 March, concurrent with the IETF 89 meeting in London, the Internet Society held a panel discussion about the Internet’s underlying end-to-end principle—and whether it’s worth retaining.

The end-to-end principle, which is often understood as ‘smart endpoints and a dumb network’, has been a guideline for Internet engineers for decades. The principle originated from the idea that it was best not to put functionality in a communications network, if that functionality could only be completely and correctly implemented with cooperation from the application residing at the endpoint. Over the years, the Internet’s end-to-end principle also proved valuable in maintaining openness, increasing reliability, and enabling new service development.
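The classic illustration of this argument is careful file transfer: even if every link along the path delivers data reliably, only the communicating endpoints can confirm that the file arrived intact, so the application performs its own check regardless of what the network guarantees. The following minimal Python sketch shows that endpoint check; the function names and hashing choice are purely illustrative and were not part of the panel discussion.

    import hashlib

    def send_file(data: bytes):
        """Sender endpoint: compute an end-to-end checksum before handing the
        data to the network, whatever reliability the network itself offers."""
        digest = hashlib.sha256(data).hexdigest()
        return data, digest

    def receive_file(data: bytes, expected_digest: str) -> bytes:
        """Receiver endpoint: only the endpoint can confirm the transfer truly
        succeeded, so it re-verifies the checksum and asks for a retry on mismatch."""
        if hashlib.sha256(data).hexdigest() != expected_digest:
            raise ValueError("end-to-end check failed; request retransmission")
        return data

    # Intermediate hops may or may not be reliable; the endpoints still
    # perform the final correctness check themselves.
    payload, digest = send_file(b"example payload")
    receive_file(payload, digest)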

Leslie Daigle, chief Internet technology officer for the Internet Society, opened the panel discussion by questioning whether or not the end-to-end principle can survive in an era of pervasive monitoring.

“Revelations of pervasive monitoring, and any number of reactions to these revelations, may or may not take us in directions that are less than optimal for the end-to-end principle, such as the drive for localization of data for services that are meant to be global or madly encrypting everything, everywhere, all of the time,” Daigle said.

She opened the discussion by asking a panel of experts: “Does the end-to-end principle matter in today’s Internet and going forward? If the answer is yes, and I heartily hope it is, how does it matter?”

Fred Baker, a Cisco Fellow and former IETF chair, argued that the Internet has never been a truly dumb network.

“When the application gives the network an address and says, ‘Please send this parcel of information over there,’ it doesn’t tell the network how to get it there. The network is presumed to have the intelligence to get it there,” Baker explained.

He argued that what the end-to-end principle defines is a system in which a lower layer of the network should always do what the upper layer expects it to do. He added that the end-to-end principle should be violated only if there is good reason with measurable benefit.

“I don’t think it’s fair to call the network stupid, but I also don’t want it to be smart,” Baker said. “The statement of the end-to-end principle as I would understand it is that the network should be predictable. It should do what I expect it to do.”

Having a predictable network benefits both network operators and users, Baker explained. It enables network operators to route traffic in the manner that they prefer and to fix problems as they occur. It also allows users to accomplish their tasks.

“The network might be behaving intermittently, where sometimes I can get a packet through and sometimes I can’t. That causes users to call somebody,” Baker said. “That’s the case we don’t want to happen. If I’m an operator, and I’m trying to run a network, that costs me money. If I’m a user, and I’m trying to get a file from here to there, then I’m unable to do what I set out to do.”

Baker said one challenge on the horizon for the end-to-end principle is carriers treating their Internet capacity as a private resource, a trend he refers to as “walled gardens.” This decreases interoperability, he warned. However, he said one positive shift is the pressure carriers feel to simplify their networks and adopt IPv6.

Andrew Sullivan, an IAB member and Dyn principal architect, said the end-to-end principle is a challenge for Internet infrastructure companies that provide DNS or email services to corporate customers, because doing so means deploying smart middleboxes inside what is supposed to be a dumb network.

Infrastructure operators “rely on this predictable network,” Sullivan said. “But we also have to alter network behavior based on the user because our customers are only going to buy stuff from us if it is roughly as good as if they were running it themselves. We don’t want to put something at every point of presence because the whole point of this is that we’re going to make money based on the economies of scale. We end up having to modify network behavior. The difficulty is there are a lot of us, and we’re all doing it at once.”

Sullivan said he is hoping the Internet engineering community will create protocols that improve how infrastructure operators modify network behavior.

“One thing that I hope will emerge, although this might be wishful thinking, is that we develop protocols that give enough hints that we can do the stupid DNS or email tricks in the middle without being really harmful,” Sullivan said. “But we need protocols that allow you to make intelligent decisions in the middle and allow the infrastructure to give different kinds of tailored responses in an effort to make that experience as good as possible and make the latency as low as possible.”

Sullivan pointed out that today’s Internet applications don’t operate according to the assumptions that were in place when the end-to-end principle originated. He explained that applications are no longer using a client/server model, where there are clear endpoints with a network in between. Instead, Internet applications are stitched together on-the-fly from components scattered around the network.

“The network needs to be able to stitch the stuff together or you don’t have an application at all,” he added.

Harald Alvestrand, a former IETF chair who works for Google, said the end-to-end principle is challenging for applications because they don’t have a direct relationship with the network. Instead, they have direct relationships with users and their own backend resources.

“For an application, the network is not my friend,” Alvestrand said. “None of the operators have my wellbeing at heart… and none are under my control. But I depend on the network infrastructure to reach my customers. If the network goes away, I have no purpose in life.”

Alvestrand explained that applications use the network in an end-to-end fashion, but that involves traversing a network they don’t control.

“I have to extend trust to the network and to some degree do it blindly,” he said. “I will extend the absolute minimum amount of trust that I can get away with. I will take resources I can get away with. And if I don’t get the network behavior I’m expecting, I will do what it takes to make things work. If I could wish for what the network would do for me, me being an application, I would wish for consistency that at least if things go haywire, let them go wrong in only one way.”

Going forward, Alvestrand recommended using the end-to-end principle to keep the network simple.

“Seen from the application perspective, I want to extend trust to the minimal amount possible and deliver a service to the user because that is my purpose,” Alvestrand said.

All of the panelists asserted that the end-to-end model is important for future protocol development.

The end-to-end model as an enabler of a predictable network “is terribly important because the network operators can’t deliver a service they can sell if that’s not true,” Baker summarized.

Alvestrand said the principle helps hold back the floodgates of complexity and keep the network as simple as possible.

“We need to make it happen in practice because it’s one of the guidelines for protocols and services,” Alvestrand said. “If you can’t elucidate the specific function for what the ends are, your design is wrong.”

Sullivan added that end-to-end is a pragmatic principle. “I think the economic pressure is there to keep this as one of the core principles of how we build this network,” Sullivan said.