Networking Research

The Untethered Future of the Internet

By: Leslie Daigle

Date: March 6, 2011

RFC 1122: Requirements for Internet Hosts—Communication Layers and RFC 1123: Requirements for Internet Hosts—Application and Support lay out the basic, somewhat idealized, expectations of Internet hosts, circa 1989. As we look at the Internet that exists today, we can see that much has already changed from the ideal laid out in those documents, with more change to come as people increasingly use devices that can operate “untethered”—from any particular network, or fixed source of power. This article reviews the historical perspective from those documents, looks at today’s reality in comparison, and draws some conclusions about the approach to updating our notions of Internet host requirements in the face of future realities for devices and the Internet.

RFC 1122 and RFC 1123 lay out the basic, somewhat idealized, expectations of Internet hosts, circa 1989. They acknowledged that the Internet’s reality was changing, and expressed the expectation that updates would follow. In point of fact, there have been no major updates (beyond RFCs updating specific points of protocol usage), and these documents remain the baseline ideal for Internet host requirements.

These RFCs enumerate standard protocols that a host connected to the Internet must use, with the expectation that the specifications of these documents “must be followed to meet the general goal of arbitrary host interoperation across the diversity and complexity of the Internet system.” These documents recognize that Internet hosts span a wide range of size, speed, and function, ranging in size “from small microprocessors through workstations to mainframes and supercomputers”, and ranging in function from “single-purpose hosts (such as terminal servers) to full-service hosts that support a variety of online network services, typically including remote login, file transfer, and electronic mail.”

To give a sense of the expectations from those documents, their introductory paragraphs outline the following (a short sketch after the list illustrates assumption (b)):

  • A host computer, or simply “host,” is the ultimate consumer of communication services. A host generally executes applications programs on behalf of user(s), employing network and/or Internet communication services in support of this function. […]
  • An Internet communication system consists of interconnected packet networks supporting communication among host computers using the Internet protocols. The networks are interconnected using packet-switching computers called “gateways” or “IP routers” by the Internet community[…].
  • The current Internet architecture is based on a set of assumptions about the communication system. The assumptions most relevant to hosts are as follows:
    • (a) The Internet is a network of networks. Each host is directly connected to some particular network(s); its connection to the Internet is only conceptual. Two hosts on the same network communicate with each other using the same set of protocols that they would use to communicate with hosts on distant networks.
    • (b) Gateways don’t keep connection state information. To improve robustness of the communication system, gateways are designed to be stateless, forwarding each IP datagram independently of other datagrams. As a result, redundant paths can be exploited to provide robust service in spite of failures of intervening gateways and networks. All state information required for end-to-end flow control and reliability is implemented in the hosts, in the transport layer or in application programs. All connection control information is thus colocated with the end points of the communication, so it will be lost only if an end point fails.
    • (c) Routing complexity should be in the gateways. Routing is a complex and difficult problem, and ought to be performed by the gateways, not the hosts. An important objective is to insulate host software from changes caused by the inevitable evolution of the Internet routing architecture.
    • (d) The system must tolerate wide network variation. A basic objective of the Internet design is to tolerate a wide range of network characteristics—e.g., bandwidth, delay, packet loss, packet reordering, and maximum packet size. Another objective is robustness against failure of individual networks, gateways, and hosts, using whatever bandwidth is still available. Finally, the goal is full “open system interconnection”: an Internet host must be able to interoperate robustly and effectively with any other Internet host, across diverse Internet paths.
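
As a small illustration of assumption (b), the following sketch (Python, using illustrative loopback addresses and an ephemeral port) shows where connection state actually lives: the two endpoint sockets hold all of it, while any gateway between them would simply forward each datagram independently.

```python
# A minimal sketch of assumption (b): all connection state lives in the
# endpoints. The two sockets below hold the transport-layer state (sequence
# numbers, buffers); any router between them would forward each IP datagram
# independently. The loopback address and port are illustrative.
import socket
import threading

def echo_server(listener: socket.socket) -> None:
    conn, _addr = listener.accept()       # connection state is created here,
    with conn:                            # in the host, not in any gateway
        data = conn.recv(1024)
        conn.sendall(data)

listener = socket.create_server(("127.0.0.1", 0))   # OS picks a free port
port = listener.getsockname()[1]
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")              # reliability and flow control are
    print(client.recv(1024))              # enforced end to end, per RFC 1122
```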

Experiences of the past 20 years have already challenged some of the key points in RFCs 1122 and 1123. The development and deployment of network address translators (NATs), as a mechanism for using a single IP address to give several computers access to the Internet, inherently challenges the very notion of an “Internet host.” Either the NAT “is” the Internet host, or the computers behind it are nonconforming hosts (because they are not individually addressable on the Internet—the “end-to-end” principle outlined in 1122/1123).
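
To make that addressability point concrete, here is a toy model of NAT port translation (a sketch only, not any particular implementation; the public address and port pool are illustrative): several private hosts share one public address, so none of them is individually reachable from the outside.

```python
# A toy model of NAT address/port translation: private hosts share a single
# public IPv4 address, so externally only (public_ip, public_port) pairs are
# visible. The address (from TEST-NET-3) and port numbers are illustrative.
from itertools import count

PUBLIC_IP = "203.0.113.7"         # illustrative shared public address
_next_port = count(start=40000)   # illustrative external port pool
table = {}                        # (private_ip, private_port) -> public_port

def translate_outbound(private_ip: str, private_port: int) -> tuple[str, int]:
    key = (private_ip, private_port)
    if key not in table:
        table[key] = next(_next_port)   # allocate a public port on first use
    return PUBLIC_IP, table[key]

# Two "hosts" behind the NAT appear as the same public IP externally:
print(translate_outbound("192.168.1.10", 5000))  # ('203.0.113.7', 40000)
print(translate_outbound("192.168.1.11", 5000))  # ('203.0.113.7', 40001)
```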

Nor do Internet hosts typically conform to the application expectations outlined in RFC 1123. In general, there has been a trend away from each Internet host running a full suite of application services. The endpoints that Internet service providers created by giving home consumers access were not naturally equipped or maintained as host servers. ISPs prevented customers from running their own server software (Web, mail, other), or charged extra (“business service”) for it. This was argued on the basis that such servers generated unwanted traffic—whether illegitimate (spam) or simply high in volume.

As the final unused IPv4 addresses are assigned, further distance from the requirements outlined in RFC 1122/23 can be expected in the IPv4 Internet, at least, with the deployment of “Carrier-Grade NATs” (CGNs), which share a single IP address across multiple customer networks at a time.
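
Some back-of-the-envelope arithmetic, using assumed rather than sourced numbers, shows why this sharing strains the host model: a single IPv4 address has a finite transport-layer port space, and dividing it among many subscribers leaves each one only a small budget of concurrent connections.

```python
# Illustrative CGN arithmetic (the subscriber count and reserved range are
# assumptions, not figures from this article): one IPv4 address offers at
# most 65,535 ports per transport protocol.
TOTAL_PORTS = 65535
RESERVED = 1024                   # assume well-known ports are not handed out
subscribers = 1000                # assumed subscribers per shared address
print((TOTAL_PORTS - RESERVED) // subscribers)  # ~64 concurrent ports each
```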

There is little argument that the Internet is still in full “growth mode”. More users are coming online, and more people have more devices connected to the Internet at any given time, among their desktops, laptops, and (smart) mobile phones. They may even have some devices of which they are not aware—their home entertainment boxes, their thermostats, and maybe eventually their refrigerators.

Some “size” numbers, from IDC:

“There were more than 450 million mobile Internet users worldwide in 2009, a number that is expected to more than double by the end of 2013. Driven by the popularity and affordability of mobile phones, smartphones, and other wireless devices, IDC’s Worldwide Digital Marketplace Model and Forecast (an IDC Database service) expects the number of mobile devices accessing the Internet to surpass the one billion mark over the next four years.

[…]

The most popular online activities of mobile Internet users are similar to those of other Internet users: using search engines, reading news and sports information, downloading music and videos, and sending/receiving email and instant messages.”

and

“• More than 1.6 billion devices worldwide were used to access the Internet in 2009, including PCs, mobile phones, and online videogame consoles. By 2013, the total number of devices accessing the Internet will increase to more than 2.7 billion.

• China continues to have more Internet users than any other country, with 359 million in 2009. This number is expected to grow to 566 million by 2013. The United States had 261 million Internet users in 2009, a figure that will reach 280 million in 2013. India will have one of the fastest growing Internet populations, growing almost two-fold between 2009 and 2013.”

Apart from the obvious indicators of growth, what these data show is that the future Internet will feature many more untethered devices, and, importantly, that people expect to be able to do all their “usual” Internet activities while on the move.

The realities faced by mobile networks and other networks of small devices (sensor networks) were discussed during the week of IETF 79 at an Internet Society-hosted panel discussion (see page 5). Some of the issues identified are not actually new: constrained bandwidth, and concerns that processing power is insufficient to support the full Internet protocol suite. However, the sheer number and scope of the expected growth in Internet access through mobile handsets, sensor networks, and the like suggest that the expected impact and design decisions should be reviewed.

Untethered devices are typically more constrained in their processing capabilities than traditional Internet hosts. Sensor hardware developers have pushed back on implementing the full Internet protocol stack, citing a lack of processing capability (and a lack of perceived need for all those features). While the modern smartphone has more processing power than the average Internet host had when 1122/23 were written, its display and input capabilities are still quite limited compared with those of more general-purpose Internet hosts.

Power is a real concern for untethered devices: it is finite. Furthermore, it may be necessary to keep some power in reserve in order to carry out a primary function (e.g., making a phone call, or communicating an update from a sensor). Therefore, untethered devices tend not to be “always on”, and cannot reasonably be the policy enforcement point for deciding which traffic to ignore: unwanted traffic is expensive, and a device deciding whether traffic is acceptable has already received at least some portion of it.

Connectivity often poses a problem as well. Bandwidth may be relatively constrained, and it is currently costly to the end user. These constraints are somewhat tied to the business models of the access providers, which are in turn influenced by the finite availability of spectrum and the cost of obtaining licenses, for example.

Altogether, these untethered devices highlight further possible stretching of the expectations of Internet hosts. The number of users (people) associated with a given host may be 0 (a sensor), 1 (a mobile handset), several (server machines, shared computers), or even fractional (one user’s context spread across several devices). This has implications for identifier expectations—for hosts and for users. In today’s Internet, there is an (often inaccurate) operational assumption that individual accountability can be tied to an IP (host) address. That assumption will become increasingly inaccurate as the users-to-connections model evolves.

The notion of connectivity is also put into question by untethered devices that must cope with limited power reserves. Rather than being always on and always reachable, individual hosts may choose to be selective about the time and type of connections they accept. This is consistent with the 1122/23 model of putting control at the endpoints, but it challenges the premise of supporting a set of always-on services in each conforming Internet host. Alternatively, a “split node” approach, in which a fixed server implements the policies governing which traffic gets forwarded to the untethered device, would allow those application services to be supported in the split-node host, but it may challenge the principle of putting control at the endpoint (the untethered device, in this case). A small sketch of this idea follows.
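
As a minimal sketch of that split-node idea (the message kinds and policy rules here are hypothetical), a fixed, always-on server can evaluate the user’s policy so that the untethered device never spends power receiving traffic the policy would reject:

```python
# A sketch of the "split node" approach: the user's policy lives on a fixed,
# always-on server, which decides which traffic is forwarded to the
# power-constrained device. Message kinds and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    kind: str        # e.g., "call", "sensor-update", "bulk-sync"

def device_accepts(msg: Message, *, battery_pct: int) -> bool:
    """User-controlled policy, evaluated on the fixed server."""
    if msg.kind == "call":
        return True                   # always forward the primary function
    if battery_pct < 20:
        return False                  # preserve power reserves for calls
    return msg.kind != "bulk-sync"    # defer bulky traffic until tethered

inbound = [Message("alice", "call"), Message("cdn", "bulk-sync")]
forwarded = [m for m in inbound if device_accepts(m, battery_pct=15)]
print(forwarded)   # only the call reaches the untethered device
```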

Untethered devices further challenge the notion of network positioning: future Internet hosts may be stable in the network, mobile within one network type, mobile between network types (e.g., wifi and mobile data), or even presenting multiple network interfaces with different policies in place at the same time. That is, a mobile handset may be open to all traditional Internet host connections over its wifi interface while simultaneously operating in selective mode over a mobile data network, as sketched below.
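
What such per-interface policy might look like, expressed as data, is sketched here (the interface names and policy fields are hypothetical):

```python
# Per-interface policy on one handset: fully open on wifi, selective over
# mobile data, at the same time. Interface names ("wlan0", "rmnet0") and
# policy fields are hypothetical illustrations.
policies = {
    "wlan0":  {"mode": "open",      "accept": ["*"]},
    "rmnet0": {"mode": "selective", "accept": ["call", "sensor-update"]},
}

def accepts(interface: str, kind: str) -> bool:
    policy = policies[interface]
    return policy["mode"] == "open" or kind in policy["accept"]

print(accepts("wlan0", "bulk-sync"))   # True: open over wifi
print(accepts("rmnet0", "bulk-sync"))  # False: selective over mobile data
```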

The challenge, going forward, will be to determine which of these present new Internet architecture design requirements, because of a change in nature or scale, and which of them represent technology growing pains that have been seen before and will be overcome again.

Certainly, there is, and has long been, work within the IETF to address some of the base issues. There have been a number of working groups focused on mobile IP (the Mobile IP WG and follow-on work) and policy frameworks (e.g., COPS-PR, RFC 3084). Application protocols have looked to accommodate different user realities (numbers of users per device, device capabilities, etc.). Delay-Tolerant Networking has explored the issue of handling networking in a context with unprecedented round-trip times and other related constraints—in interplanetary IP. And there are ongoing discussions of whether or how to introduce new identifiers within the application or routing infrastructures. Each represents a fascinating challenge in its own right. The questions raised, but not answered, during the briefing panel suggest answers that run through the fabric of many working groups and recognize the changing landscape of Internet hosts, rather than point solutions.

Perhaps the best way to look at the future is to look away from the trendlines, and focus on “what good looks like”.

For users, the important thing is for their experience of Internet-delivered services to be consistent across network locations and hosts. In the last decade, this has been at the heart of arguments for a single DNS root and of efforts to prevent “balkanization” of the Internet. The principle remains important going forward, even as differences among access platforms will necessarily challenge the definition of “consistent”.

Diversity and openness remain critical in Internet deployment, in order to continue to foster innovation. 1122/23 stress the importance of recognizing that individual networks will be set up and operated according to local design choices. The open internetwork application framework is what has permitted the creation of novel applications without requiring permission from network operators or device manufacturers for deployment. The World Wide Web was one such idea, and it spread like wildfire. Time and again, users and usage of the Internet have laid the groundwork for the Internet’s evolution, not some master control. It is important to retain the ability to support that kind of innovation and open experience, as provided for in the host requirements of 1122/23. In that light, an open-standard “split node” model, with individual users establishing and controlling their policy preferences, would allow more growth and innovation than, for instance, “one size fits all” policy assumptions implemented as network operators’ private controls.

In the end, then, it is clear the future Internet will support many users and uses based on untethered devices, and will thus feature hosts that stretch beyond the expectations of 1122/23. But the framework in those documents is sound: provide a set of requirements for interoperation at the transport and application levels, and unfettered innovation will follow. Time will tell whether host requirements are updated to accommodate the practical realities of the power and bandwidth constraints now understood for untethered devices, or whether the “host” becomes, for example, a tethered server supporting multiple roaming devices. The only wrong choice is no choice at all.