Network Management

Evolution of the IP Model

By: Dave Thaler

Date: February 7, 2009

Figure 1: The model exposed by IP to higher-layer protocols and applications

In the technical plenary, the Internet Architecture Board (IAB) presented its work on the Evolution of the IP (Internet Protocol) model. For purposes of this work, the IP model refers to the service model exposed by the IP layer to upper-layer protocols and applications (figure 1). That is, the IP model can be viewed either as a set of behaviours that can be relied on by higher layers or as a set of expectations that higher layers have of IP. In this sense, it is similar to a loosely defined contract that has evolved over time.

A Short History Lesson

Dave Thaler speaking at the technical plenary at IETF 73. Photo by Peter Löthberg

IP was first published in 1978 as an Internet Experiment Note (IEN). At the time, IENs were a separate series from RFCs; the two were later merged into the RFC series. After several updates as IENs, IP became RFC 760 in 1980; finally, the version we cite today, RFC 791, was published in 1981. There was considerable evolution in IP during those three years.

The evolution didn’t stop there, however, and the IP model has continued to change over the years to meet new demands. Some of those changes were intentional. Some were because we found deficiencies. Others were the result of new capabilities. Often, the changes were a consequence of trying to do something else.

By 1989, there was already some confusion concerning the IP model. RFC 1122 was written in an effort to clear up some of that confusion as well as to extend the service model. Many other RFCs have also offered advice on specific aspects of the IP model; as a result, gaining an understanding of the IP model meant searching through many RFCs.

Another RFC appeared in 2004, and it is probably the one closest in spirit to this work. That one, RFC 3819, offered advice for link-layer protocol designers on how to minimize the impact on the layers above the link layer. Hence, it dealt with the service model at the bottom of IP, whereas the present work deals with the service model at the top of IP.

Through it all, many applications and higher-layer protocols have been built on top of IP. Beyond what was actually documented in those RFCs, they made various assumptions about IP. Those assumptions are not listed in one place today. They’re not necessarily well known. They’re not necessarily thought about when changes are being made. And, as we’ll see, increasingly they’re not even true.

The goals of the IAB work were, first, to collect these assumptions (or, increasingly, myths) in one place, or at least to provide references to the various other places that already have them, and second, to document to what extent those assumptions are true and to what extent they are not. Beyond that, we were interested in providing some guidance for the community.

The collected assumptions were previously presented to various subsets of this community. For example, much of the information about the assumptions was presented in the Internet area meeting at IETF 72 in Dublin, and another subset was presented in the EXPLISP BoF. The IAB solicited input from the community (or at least those subsets of the community) so we could go off and come up with guidance. We have now done that between Dublin and Minneapolis. The focus in the technical plenary was thus to discuss the IAB guidance as a working session.

Assumptions

The basic IP service model described in RFC 791 indicated that senders are able to send to an address without signalling a priori. Receivers can listen on some address they’ve already obtained-without signalling a priori. Packets can be of variable size, and there’s no guarantee of reliability, ordering, or lack of duplication. That’s the model that we held up as the great IP service model.
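As a minimal sketch of that service model (using Python's socket API, with a hypothetical port and a documentation address), a receiver simply listens on an address it already has, and a sender transmits a variable-size datagram with no signalling beforehand; nothing guarantees delivery, ordering, or absence of duplicates.

    import socket

    # Receiver: listen on an address the host already has; no signalling a priori.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("0.0.0.0", 5000))  # hypothetical port

    # Sender: send a variable-size packet to an address, again with no setup.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"x" * 512, ("192.0.2.1", 5000))  # example (documentation) address

    # The IP model promises none of: delivery, ordering, or absence of duplicates.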

That left a lot unstated, however. RFC 1122 added some clarification, for example, with respect to defining the notion of strong-host (or end-system) versus weak-host models, both of which were allowed and supported on different platforms. On one hand, in the strong-host model, when a host sends a packet from a particular source address, it has to send the packet out on an interface that corresponds to that source address. Similarly, if a packet comes in to a particular destination address, it has to arrive on an interface corresponding to that destination address, or the host will drop it. In the weak-host model, on the other hand, the host can send and receive packets on any interface. Routers, for example, follow the weak-host model when forwarding is enabled.
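The distinction can be captured in a small sketch (a hypothetical helper, assuming we know which addresses are assigned to which interfaces):

    def accept_incoming(packet_dst, arrival_interface, interfaces, strong_host=True):
        """Decide whether a host should accept a received packet.

        interfaces maps an interface name to the set of addresses assigned to it.
        """
        if strong_host:
            # Strong-host model: the destination address must belong to the
            # interface the packet actually arrived on.
            return packet_dst in interfaces[arrival_interface]
        # Weak-host model: any address assigned to the host will do,
        # regardless of which interface the packet arrived on.
        return any(packet_dst in addrs for addrs in interfaces.values())

    interfaces = {"eth0": {"192.0.2.10"}, "eth1": {"198.51.100.10"}}
    accept_incoming("198.51.100.10", "eth0", interfaces, strong_host=True)   # False: dropped
    accept_incoming("198.51.100.10", "eth0", interfaces, strong_host=False)  # True: accepted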

So while RFC 1122 added some clarification, we had two different behaviours and so ended up with two different variants of the IP model. Since different platforms implemented different behaviours, applications could not rely on a single behaviour.

Even with RFC 1122 and other RFCs, many other assumptions made by upper-layer protocols and applications are not well documented. Such assumptions or myths generally fall into four categories: assumptions about IP connectivity, assumptions about addressing, assumptions about upper-layer protocol extensibility, and assumptions about security.

When talking about claims that may or may not be true, Snopes.com has a nice model that includes a claim, examples, and a status of true or false or partially true. We will use this same model below. Note that all of the assumptions we will talk about are at best partially true, but that hasn’t stopped applications from making the assumption anyway.

Assumptions about IP Connectivity

The document covers a number of connectivity-related assumptions, three of which we will mention here by way of example.

Claim: Reachability is symmetric, or “If I can reach you, you can reach me.”
Some examples of upper-layer protocols and applications that make this assumption include request-response protocols; in other words, if my request can reach you right now, then your response can come back to me. That’s a fairly small time window. Then there are much broader assumptions, such as, “If I can reach you today, then you can reach me tomorrow.”

There are lots of reasons that this is not entirely true. For example, we have NATs, firewalls, one-way media such as satellite links, and even wireless situations such as 802.11 ad hoc mode, whereby if my radio is stronger than yours, I can get a packet to you, but you can’t get one back to me. There have also been some efforts to try to make this claim more true. For example, RFC 3077 was one effort to help restore this for some of these cases. Today, request-response paradigms usually work, but callbacks over a much longer time frame often do not.

Claim: Reachability is transitive, or “If I can reach you and you can reach her, then I can reach her.”
The same sorts of things (NATs, firewalls, packet radio technologies, etc.) interfere with this assumption too, so today you not only have lack of symmetry, but you also have lack of transitivity.

Claim: The latency of the first packet that you send to a destination is typical of what you’ll see after that.
This assumption is commonly made when picking from a set of candidate servers or addresses. Many applications and protocols send a packet to each of them and then use the one that responds first, assuming that it’s the best one to talk to. A number of things interfere with that assumption today. First, the first packet may incur additional latency due to a routing-related cache miss in an intermediary (e.g., ARP or flow-based routers); while the next hop is being resolved, the packet may be queued. When comparing two different destinations, if one of them needs such a resolution and the other doesn’t, the result can differ from what you might expect. Second, a number of protocols have the notion of path switching. For example, we see this in Mobile IPv6, Protocol Independent Multicast-Sparse Mode (PIM-SM), the Multicast Source Discovery Protocol (MSDP), and various Routing Research Group proposals, wherein packets initially follow one path and then quickly switch to a more efficient path. That means the first burst may have a much longer latency than subsequent packets do. If another candidate destination has already switched to its efficient path, you might wrongly conclude that it is the closer one. As a result, making this assumption can, in some cases, lead to highly suboptimal choices, which in turn can mean longer paths, lower throughput, and a higher load on the Internet.
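The selection pattern that relies on this assumption looks roughly like the following sketch (the candidate addresses and probe port are hypothetical): probe every candidate once and keep whichever answers first, implicitly treating first-packet latency as representative.

    import select
    import socket

    def pick_first_responder(candidates, port=7, timeout=2.0):
        """Probe each candidate address once and return whichever replies first.

        This is the pattern that assumes first-packet latency is typical; a cache
        miss or a path switch affecting one candidate can skew the choice.
        """
        socks = {}
        for addr in candidates:
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setblocking(False)
            s.sendto(b"probe", (addr, port))  # hypothetical echo-style probe
            socks[s] = addr

        readable, _, _ = select.select(list(socks), [], [], timeout)
        winner = socks[readable[0]] if readable else None
        for s in socks:
            s.close()
        return winner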

In terms of IAB guidance around IP connectivity-related assumptions, we first observed that the reasons they are no longer true (or at least much less true) tend to fall into two categories: either they are effects of something done at the link layer independent of IP, or they are effects of things done specifically by network-layer technologies.

Usually, the link-layer effects are not intentional attempts to break IP. Designers don’t set out to invent a link type that is difficult to run IP over; rather, it is when defining IP over such links that we inadvertently create problems. RFC 3819 gives good advice to link-layer designers about what they can do. The other piece of guidance the IAB added for those of us who define IP over various link types is to try to recognize the gaps mentioned in the document and compensate for them as much as is practical. Examples of where the IETF has actually made attempts at doing this are RFC 3077, which attempts to compensate for unidirectional links such as satellite links, and RFC 2491, which attempted to compensate for non-broadcast-capable links. A notable gap today is in the area of compensating for lack of transitivity, such as with IP over 802.11 ad hoc mode.

IETF 73 plenary attendees line up at the mic. Photo by Peter Löthberg

As for network-layer technologies that interfere with these assumptions, in the IETF we like the notion of reachability. We say that everyone should be able to talk to everyone. That’s true, of course, until we start getting stuff we don’t like getting. Most of us then realize we don’t actually want to be reachable by everyone; we want to be reached only by the good guys. The notion of restricting reachability to only some portion of those who might want to communicate with us is already a part of the current IP model. An example is IPsec (Internet Protocol Security), which is now a core part of IP itself. The point is that blocking communication to or from unauthorized parties is legitimate.

When reachability is affected for reasons beyond simply restricting access to authorized parties, the IETF should attempt to proactively avoid such hindrances for new technologies, or solve them for existing technologies (e.g., 802.11 ad hoc mode). Referring back to figure 1, the IP model is what is exposed to the transport layer and above on the source and destination. One approach designers sometimes use to avoid some of the effects (e.g., non-transitivity) of odd link types is to hide the link from hosts that run upper-layer protocols and applications and to use such links only between routers.

We give the following principle, with wording inspired by another principle from the late Jon Postel: “When defining a protocol, be liberal in what effects you accept from lower layers, and conservative in what effects you cause to upper layers.” Applying this general principle of being liberal and conservative, we have roughly two categories of work. Perhaps half of the work in the IETF is around the network layer and below, and half is around the transport layer and above. Many of us are actually in both camps. Work at the higher layers should avoid making these assumptions when practical, and at least consider them in the writing of requirements and applicability statements. Work at the lower layers should avoid making the assumptions less true when practical, and similarly document any remaining effects on the assumptions made by upper layers so that other designers and administrators are aware of the impact.

Note the use of the word practical earlier. To illustrate what we mean by this, let’s look at a specific example. IAB RFC 4903 on multilink subnet issues talks about non-broadcast multi-access (NBMA) links. An example of an NBMA link is 6to4, which is not intended to go across a particular network but across the Internet. It doesn’t support multicast, but we often don’t have multicast deployed across the public Internet anyway. So 6to4 is an example wherein trying to add multicast may not be practical: it would add complexity that would not be needed until wide-area multicast is actually deployed across the same environment.

Assumptions about IP Addressing

Claim: Addresses are stable over long periods of time.
Once upon a time, that was mostly true, at least until we started having things like the Dynamic Host Configuration Protocol (DHCP) and hosts that move around. In common application programming interfaces (APIs), such as the sockets API that many of us use, applications call a name resolution API, such as gethostbyname or getaddrinfo, and then connect to one or more resolved addresses. When a name is resolved in DNS, the requester gets back a time to live (TTL) together with the addresses, but this TTL is not exposed by the API, so applications may cache the answers longer than the DNS indicates, eventually resulting in communication with the wrong entity.
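For illustration, a typical resolve-and-cache pattern looks like the sketch below; getaddrinfo hands back addresses but no DNS TTL, so the application-chosen cache lifetime here (a hypothetical value) can easily outlive the record.

    import socket
    import time

    _cache = {}  # hypothetical application-level cache: name -> (expiry, addresses)

    def resolve(name, app_cache_seconds=86400):
        """Resolve a name, caching the result for an application-chosen time.

        getaddrinfo returns addresses but not the DNS TTL, so this cache lifetime
        is a guess; if it exceeds the real TTL, the application may keep using an
        address the name no longer points to.
        """
        now = time.time()
        if name in _cache and _cache[name][0] > now:
            return _cache[name][1]
        infos = socket.getaddrinfo(name, None)
        addresses = [info[4][0] for info in infos]
        _cache[name] = (now + app_cache_seconds, addresses)
        return addresses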

We also see some efforts that, intentionally or as a side effect, try to restore this assumption to some extent. Proxy Mobile IPv6 (RFC 5213), for example, tries to restore it for some level of mobility within a network. Protocols such as Mobile IP and the Host Identity Protocol (HIP) try to provide stable addresses by adding an additional stable address that an application can use. Hence, if applications that make this assumption use the stable address, they work better.

Claim: A host has only one address and one interface.
Unfortunately, there exist many applications that resolve a name to a set of addresses and then simply pick the first one and use it. We saw this in a lot of applications when we started porting them to use IPv6, for example. Other applications use an address to identify a user or a machine and get confused if multiple users or multiple machines use the same address or if the same user or machine uses multiple addresses. Another common problem is that there are many DHCP options for per-machine information, whereas DHCP options are obtained over a particular interface from a particular network, and it is often not specified how the host then converts per-interface information (such as a set of DNS servers) into machine-wide information when it gets multiple answers from different interfaces or networks.

So, of course, this assumption is much less true today. Many hosts have multiple interfaces, and hosts have both IPv4 and IPv6 addresses even if they have only one interface. The use of virtual private networks (VPNs) is also fairly common and results in multiple interfaces. To some extent, protocols like Mobile IP and HIP are trying to restore this by adding another “address” that applications that make this assumption can use and be isolated from the use of multiple other addresses the host may have.
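One way to avoid the single-address assumption, sketched below, is to try every address that getaddrinfo returns (IPv4 and IPv6 alike) rather than just the first:

    import socket

    def connect_by_name(name, port):
        """Try each resolved address in turn instead of assuming there is only one."""
        last_error = None
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                name, port, type=socket.SOCK_STREAM):
            s = socket.socket(family, socktype, proto)
            try:
                s.connect(sockaddr)
                return s  # first address that actually works
            except OSError as err:
                last_error = err
                s.close()
        raise last_error or OSError("no addresses returned for " + name)

This is also a small step toward the guidance below about using names rather than addresses, since the application never needs to handle an address directly.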

Claim: An “address” used by an application is the same as the “address” used for routing.
What some call an ID/locator split is an example of when this is not true. Many applications have assumptions, however incorrect they may be, about the relationship between proximity in the address space and proximity in the topology. That is, if you and I have similar addresses, you must be close to me, and hence you’re a better peer to talk to than someone with an address that appears to be much less similar.

IETF 73 plenary audience. Photo by Peter Löthberg

Similarly, in some applications or protocols, a service selects the addresses to put in a referral to a client based on the client’s address and how it relates to the potential candidates’ addresses. This assumption is certainly not true with tunnelling to and from hosts, because applications see the address in the inner IP header, whereas routing uses the address in the outer IP header. Similarly, it is not true with ID/locator split schemes that split them at a host.

Again, the assumptions mentioned earlier are examples that serve to motivate the guidance that follows, and more assumptions are discussed in the draft.

So, what does the IAB think about these? If we look back at architectural principles of the Internet, there’s a good statement in RFC 1958: “In general, user applications should use names rather than addresses.” If only that were true!

Today we have many APIs that unnecessarily expose addresses to applications, and many applications have to deal with the concept of an address only because they need to use it to open a transport connection instead of being able to connect by name, as Stuart Cheshire discussed in the plenary in Dublin. Today it’s often an implementation issue, not a protocol issue, but there are also protocols defined to carry only IP addresses instead of carrying names when doing a referral or request for a later callback.

Figure 2: Steve Deering’s hourglass showing the “waist” of the Internet

In general, anything that’s already dependent on some naming system should try to avoid using addresses and use only names. As a side effect, this happens to ease the transition to IPv6 because applications that know nothing about IP addresses generally work without changes.

Assumptions about Upper-Layer Extensibility

Claim: New transport-layer protocols can work across the Internet.
Figure 2 shows the hourglass from Steve Deering’s presentation at the IETF 51 plenary back in 2001 regarding the “waist” of the Internet. Besides TCP and UDP, it shows “…”. The IP model is not static, and neither are other layers. Some applications use raw sockets and make this assumption, or at least want this assumption to become more true. But today devices such as NATs and firewalls support only UDP and TCP or, even worse, support only HTTP. As a result, many applications and protocols (such as the whole Web Services architecture) today are built on top of HTTP instead of TCP or UDP, and we even see IP over HTTP, resulting in an architecture more like that shown in figure 3.

Claim: If one stream to a destination can get through, then so can others.
For example, you have applications that open multiple connections to get better throughput, and you have applications such as FTP that open separate connections for data and control. However, a number of factors may interfere with that assumption. Some firewalls may block specific ports, for example. Also, some middleboxes keep per-connection state and may run out of memory or ports when an application attempts multiple connections. This has come up in discussions of carrier-grade NATs, for example. Just because you can get one connection doesn’t necessarily mean you can get a hundred.

Figure 3: An updated hourglass showing an architecture often seen today

In considering these assumptions, we observe that RFC 791 doesn’t actually describe what the requirements were, but there’s a great paper by Dave Clark referenced in the document that does list the requirements that were discussed when IP was first designed. One such requirement was to support the widest possible range of applications by supporting a variety of types of service at the transport level.

The issues with this today arise either in the name of security or as a side effect of something else, such as address shortage, and the same guidance applies as for the IP connectivity-related assumptions.

Assumptions about Security

In terms of security, the examples are well-known, including modification of packets in transit, lack of privacy, and forged source addresses. Fortunately, RFC 3552, which talks about how to write security considerations, already has excellent guidance for what people should do about these assumptions.

It’s also worth mentioning that changes to any assumptions, not just assumptions about security itself, can impact security if some application or upper-layer protocol bases its behaviour on that assumption. For example, consider an application that binds to an IP address, running on a host with two interfaces: one on a “safer” network and one on some untrusted network that you don’t want to do certain things on. If the application binds to an address on the good network, will it see only traffic that comes in across the good network? There are applications that assume this is the case. We saw earlier, however, that this is true for strong-host systems and false for weak-host systems. As a result, great care should be taken when making an assumption less true. Upper layers should also carefully consider the impact of basing security on any such assumption.
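The pattern in question looks like the sketch below (the address and port are hypothetical): binding to the address of the “safer” interface guarantees that accepted traffic arrived over that interface only on a strong-host system, so it is fragile to base security on the bind alone.

    import socket

    SAFE_ADDR = "192.0.2.10"  # hypothetical address on the "safer" interface

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind((SAFE_ADDR, 8080))  # hypothetical port
    listener.listen()

    # On a strong-host system, traffic to SAFE_ADDR must arrive on the matching
    # interface.  On a weak-host system, a packet addressed to SAFE_ADDR may be
    # accepted even if it arrived over the untrusted interface, so this bind is
    # not, by itself, a guarantee about which network the traffic came from.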

Of course, many assumption violations are actually done in the name of security even though they break some applications.

Conclusions

Unless you can enumerate all possible applications that are run, any changes to the assumptions listed in the document will probably break some applications. We’ve realized that it becomes harder and harder over time to evolve the IP model, because there are more and more things that might have assumptions built in. Still, the IP model is not static, and continuing to evolve it to meet new demands is important. Changes must be made with extreme care, however. Adding functionality that has no impact unless the upper layer asks for it is generally safe, but fewer entities will actually use it.

For those who make changes to the network layer or below, write down the effects on upper layers (for example, as part of requirements and applicability statements). For those who work on technologies at the transport layer or above, avoid these assumptions whenever practical, and if you do depend on any, write them down (for example, in requirements and applicability statements).