A Retrospective View of NAT

By: Lixia Zhang

Date: October 7, 2007

Today, network address translators are everywhere. Their ubiquitous adoption was promoted neither by design nor by planning but by the continued growth of the Internet, which brings an ever-increasing demand not only for IP address space but also for other functions that network address translation (NAT) is perceived to facilitate. This article offers a personal perspective on the history of NAT, the lessons we may learn from it, and some thoughts on the best ways forward from where we are today.

Introduction

NAT commonly refers to a box that interconnects a local network to the public Internet, where the local network runs on a block of private IPv4 addresses as specified in RFC 1918. In the original Internet architecture design, each IP address is defined to be globally unique and globally reachable. In contrast, a private IPv4 address is meaningful only within the scope of the local network behind a NAT; as such, the same private address block can be reused in multiple local networks, as long as those networks do not talk to each other directly. Instead, they communicate with each other, and with the rest of the Internet, through NAT boxes.

Like most unexpected successes, NAT’s ubiquitous adoption was not foreseen when the idea first emerged more than 15 years ago (RFC 1287 and RFC 1335). Had anyone foreseen back then where NAT would be today, NAT deployment might have followed a different path: one that was better planned and standardised. The set of Internet protocols developed over the past 15 years might also have evolved differently, and we might have seen less overall complexity in the Internet than we have today.

Although the clock cannot be turned back, I believe it is a worthwhile exercise to revisit the history of NAT so that we may learn some useful lessons. It may also be worthwhile to assess, or reassess, the pros and cons of NAT, as well as to take a look at where we are today in our handling of NAT and how best to proceed into the future.

I would like to emphasise that this writing represents a personal view, and my recall of history is likely to be incomplete and to contain errors. My personal view on this subject has also changed substantially over time, and it may continue to evolve, as we are all in a continuing process of understanding this fascinating and dynamically changing Internet.

How NAT Works

Photo: Lixia Zhang and Tony Li at IETF 69 in Chicago

As I mentioned earlier, IP addresses were designed to be globally unique and globally reachable. This property of the IP address is a fundamental building block of the Internet’s end-to-end architecture. Until very recently, almost all Internet protocol designs, especially those below the application layer, were based on this IP address model. However, the explosive growth of the Internet in the early 1990s not only signalled the danger of IP address space exhaustion but also created an instant demand for addresses: suddenly, large numbers of user networks and home computers needed IP addresses, in large quantity and on short notice. Such demand could not possibly be met through the regular IP address allocation process. NAT came into play to meet it.

Because NAT was not standardised before its wide deployment, a number of different NAT products exist today, each with somewhat different functionality and different technical details. Since this article is about the history of NAT deployment, not an explanation of how to traverse specific NAT boxes, I will describe a popular NAT implementation as an illustrative example. Interested readers may visit Wikipedia to find out more about the various NAT products.

A NAT box N has a public IP address on the interface connecting to the global Internet and a private address on the interface facing the internal network. N serves as the default router for all destinations outside the local NAT address block. When internal host H sends an IP packet P to a public destination address D in the global Internet, the packet is routed to N. N translates the private source IP address in P's header to its own public IP address and adds an entry to an internal table that keeps track of the mapping between the internal host and the outgoing packet. This entry represents a piece of state, which enables all subsequent packet exchanges between H and D. For example, when D sends a packet P' in response to P, P' arrives at N; N finds the corresponding entry in its mapping table and replaces the destination IP address, which is its own public IP address, with the real destination address H, so that P' is delivered to H. The mapping entry times out after a certain period of idleness, normally set to a vendor-specific value. In the process of changing the IP address carried in the IP header of each passing packet, a NAT box must also recalculate the IP header checksum, as well as the transport protocol's checksum if that is calculated over the IP addresses, as is the case for TCP and UDP checksums.
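
To make the mapping-table mechanics concrete, here is a minimal Python sketch of the behaviour just described. It is an illustration under stated assumptions, not a description of any particular product: real NAT boxes typically translate port numbers as well (a variant often called NAPT, which this sketch models), and the addresses, port range, and timeout below are invented for the example.

    import time

    PUBLIC_IP = "203.0.113.1"   # the NAT box's single public address (example value)
    IDLE_TIMEOUT = 300          # idle timeout in seconds; vendor specific in practice

    class NatBox:
        def __init__(self):
            self.next_port = 40000   # arbitrary start of the translation port range
            self.out_map = {}        # (int_ip, int_port, dst_ip, dst_port) -> public port
            self.in_map = {}         # public port -> (int_ip, int_port, last_used)

        def outbound(self, int_ip, int_port, dst_ip, dst_port):
            """Rewrite the source of an outgoing packet, creating mapping state."""
            key = (int_ip, int_port, dst_ip, dst_port)
            if key not in self.out_map:
                self.out_map[key] = self.next_port
                self.next_port += 1
            pub_port = self.out_map[key]
            self.in_map[pub_port] = (int_ip, int_port, time.time())
            return PUBLIC_IP, pub_port       # the packet now appears to come from N

        def inbound(self, pub_port):
            """Map a reply packet back to the internal host, or drop it."""
            entry = self.in_map.get(pub_port)
            if entry is None or time.time() - entry[2] > IDLE_TIMEOUT:
                return None                  # no (live) state: the packet is dropped
            return entry[0], entry[1]        # the real internal destination

    nat = NatBox()
    print(nat.outbound("10.0.0.5", 5060, "198.51.100.7", 80))  # ('203.0.113.1', 40000)
    print(nat.inbound(40000))                                  # ('10.0.0.5', 5060)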

From this description, it is easy to see the major benefit of NAT: one can connect a large number of hosts to the global Internet by using a single public IP address. Other benefits of NAT also became clear over time, as discussed in more detail later.

At the same time, a number of NAT’s drawbacks can also be identified immediately. First and foremost, NAT changed the end-to-end communication model of the Internet architecture in a fundamental way: instead of any host being able to talk directly to any other host on the Internet, hosts behind a NAT must go through the NAT to reach others, and communication through a NAT box can be initiated only by an internal host, because only outgoing packets set up the mapping entries. In addition, since ongoing data exchanges depend on the mapping entries kept at the NAT box, the box represents a single point of failure: if the NAT box crashes, it may lose all of the existing state, and the data exchanges between internal and external hosts will have to be restarted. This is in contrast to the original IP goal of delivering packets to their destinations as long as any physical connectivity exists between source and destination. Furthermore, because NAT alters the IP addresses carried in a packet, all protocols that depend on IP addresses are affected. In certain cases, such as the TCP checksum, which includes the IP addresses in its calculation, the NAT box can hide the address change by recalculating the checksum when forwarding a packet. Other protocols that make direct use of IP addresses, such as IPSec, can no longer operate on an end-to-end basis as originally designed; and application protocols that embed IP addresses in the application data require application-level gateways to handle the address rewrite. As discussed later, NAT also introduced drawbacks that surfaced only recently.
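
The checksum point is easy to demonstrate. The sketch below, using invented example addresses, computes the standard Internet checksum (the ones'-complement sum defined in RFC 1071) over a TCP-style pseudo header before and after an address rewrite; because the pseudo header includes the IP addresses, the old checksum no longer matches and the NAT must recompute it.

    import socket

    def internet_checksum(data: bytes) -> int:
        """Ones'-complement sum of 16-bit words, per RFC 1071."""
        if len(data) % 2:
            data += b"\x00"                            # pad odd-length input
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
        return ~total & 0xFFFF

    def pseudo_header(src_ip, dst_ip, proto, length):
        """The TCP/UDP pseudo header, which covers both IP addresses."""
        return (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
                + bytes([0, proto]) + length.to_bytes(2, "big"))

    segment = b"example TCP segment"
    before = internet_checksum(
        pseudo_header("10.0.0.5", "198.51.100.7", 6, len(segment)) + segment)
    after = internet_checksum(
        pseudo_header("203.0.113.1", "198.51.100.7", 6, len(segment)) + segment)
    print(hex(before), hex(after))   # the values differ: NAT must rewrite the checksum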

A Recall of NAT History

I started graduate school at the Massachusetts Institute of Technology to work on network research at the same time as RFC 791, the Internet Protocol specification, was published in September 1981. Thus I was fortunate to witness the fascinating unfolding of this new system called the Internet. During the next 10 years, the Internet grew rapidly. RFC 1287, Towards the Future Internet Architecture, was published in 1991 and is probably the first RFC to raise concern about IP address space exhaustion in the foreseeable future.

RFC 1287 also discussed three possible directions for extending IP address space. The first one pointed to a direction similar to today’s NAT:

Replace the 32 bit field with a field of the same size but with different meaning. Instead of being globally unique, it would now be unique only within some smaller region …

RFC 1335, published in May 1992, provides a more elaborate description of the use of internal IP addresses (in other words, private IP addresses) as a solution to IP address exhaustion. The first paper describing the NAT idea, “Extending the IP Internet Through Address Reuse,” appeared in the January 1993 issue of Computer Communication Review and was published a year later as RFC 1631. Although these RFCs may be considered forerunners in the development of NAT, for various reasons explained later, the IETF did not take action to standardise NAT.

The invention of the Web further accelerated Internet growth in the early 1990s. The explosive growth underscored the urgency of solving both the routing scalability problem and the address shortage. The IETF took several follow-up steps, which eventually led to the launch of the IPng development effort. I believe the expectation at the time was that a new IP would be developed within a few years, followed by quick deployment. However, the actual deployment during the next 10 years took a rather unexpected path.

The planned solution

As pointed out in RFC 1287, the continued growth of the Internet exposed strains in the Internet architecture as originally designed, the two most urgent of which were routing system scalability and the exhaustion of IP address space. Since long-term solutions require a long lead time to develop and deploy, efforts started on developing both a short-term solution and a long-term solution to those problems.

Classless Inter-Domain Routing, or CIDR, was proposed as the short-term solution. CIDR removed the class boundaries embedded in the IP address structure, thus enabling more efficient address allocation, which helped extend the lifetime of the IP address space. CIDR also facilitated route aggregation, which slowed the growth of the routing table. However, as stated in RFC 1481, IAB Recommendation for an Intermediate Strategy to Address the Issue of Scaling, “This strategy (CIDR) presumes that a suitable long-term solution is being addressed within the Internet technical community.” Indeed, a number of new IETF working groups started in late 1992 aimed at developing a new IP as the long-term solution, and the Internet Engineering Steering Group (IESG) set up a new IPng area in 1993 to coordinate the efforts.
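
The aggregation benefit of dropping class boundaries is easy to illustrate. In the small Python snippet below, four contiguous blocks of former class C size collapse into a single /22 announcement; the prefixes are arbitrary values chosen only for the example.

    import ipaddress

    # Four contiguous /24s (each the size of an old class C network)...
    blocks = [ipaddress.ip_network(f"192.0.{i}.0/24") for i in range(4)]

    # ...aggregate into one classless /22, i.e. one routing-table entry.
    print(list(ipaddress.collapse_addresses(blocks)))
    # [IPv4Network('192.0.0.0/22')]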

CIDR was rolled out quickly and effectively slowed the growth of the global Internet routing table. Because it was a quick fix, however, CIDR did not address emerging issues in routing scalability, in particular the issue of site multihoming. A multihomed site wants to be reachable through any of its multiple provider networks. In the existing routing architecture, this requirement translates into having the prefix, or prefixes, of the site listed in the global routing table, thereby rendering provider-based prefix aggregation ineffective. (Interested readers are referred to the article “An Overview of Multihoming and Open Issues in GSE,” published in the September 2006 issue of the IETF Journal, for a more detailed description of multihoming and its impact on routing scalability.)

The creation of the IPng working group (later renamed to IPv6) was announced on 7 November 1994.

The new IP development effort, on the other hand, took much longer than anyone imagined when the effort first began. At the time of this writing, the IETF is finally wrapping up the IPv6 working group, almost 13 years after its establishment.

The IPv6 deployment has also been slow in coming. As of today, there have been a small number of IPv6 trial deployments. There is one known operational deployment in a provider network, but there are no known commercial user sites that use IPv6 as the primary protocol for their Internet connectivity.

If one day someone sits down to write an Internet protocol development history, it would be very interesting to look back and understand the major reasons for the slow development and adoption of IPv6. But even without doing any research, one could say with confidence that NAT played a major role in meeting the IP address need that arose out of the Internet growth, which at least deferred the demand for a new IP.

The unplanned reality

While largely unexpected, NAT has played a major role in the explosive growth of Internet access; in fact, much of that growth was carried by NAT. Nowadays it is common to see multiple computers, or even multiple LANs, in a single home. It would be unthinkable for every home to obtain multiple IP addresses from its network service provider. Instead, a common setting for home networking is to install a NAT box that connects one or more home networks to a local provider. Similarly, most enterprise networks deploy NAT as well. It is also well known that countries with large populations, such as India and China, have most of their hosts behind NAT boxes; the same is true for countries that were connected to the Internet only recently. Without NAT, the IPv4 address space would have been exhausted long ago.

For reasons discussed later, the IETF did not standardise NAT implementation or operations. However, despite the lack of standards, NAT was implemented by multiple vendors, and its deployment spread like wildfire. This is because NAT has several attractions, as described below.

Why NAT Succeeded

NAT started as a short-term solution while we were waiting for a new IP to be developed as the longer-term solution. The first set of recognised NAT advantages was stated in RFC 1918:

With the described scheme many large enterprises will need only a relatively small block of addresses from the globally unique IP address space. The Internet at large benefits through conservation of globally unique address space which will effectively lengthen the lifetime of the IP address space. The enterprises benefit from the increased flexibility provided by a relatively large private address space.

Today, NAT is believed to offer advantages well beyond that modest claim. Essentially, the mapping table of a NAT provides one level of indirection between hosts behind the NAT and the global Internet. As the popular saying goes, “Any problem in computer science can be solved with another layer of indirection.” This one level of indirection enables the following features associated with NAT:

  • NAT can be deployed unilaterally by any end site, without requiring coordination with anyone else.
  • One can use a large block of private IP addresses, up to 16 million (see the sketch after this list), without asking for permission, and connect to the rest of the Internet by using only a single allocated IP address. In fact, for most user sites it is difficult to get an IP address block much bigger than their immediate need.
  • This one level of indirection means that one never needs to worry about renumbering the internal network when changing providers, other than renumbering the NAT box itself.
  • Similarly, a NAT box makes multihoming easy. One NAT box can be connected to multiple providers and use one IP address from each. Not only does the NAT box hide the multiple-ISP connectivity from all the internal hosts, but it also does not require any of its providers to “punch a hole” in routing announcements (such as making an ISP deaggregate its address block). Such a hole punch would be needed if the multihomed site took an IP address block from one of its providers and asked the other providers to announce the prefix.
  • This one level of indirection is also perceived as one level of protection, because external hosts can neither directly initiate communication with hosts behind a NAT nor easily figure out the internal topology.
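
As a quick illustration of the private address space mentioned above (not NAT code; the sample addresses are arbitrary), Python’s standard ipaddress module can classify the RFC 1918 ranges and show the size of the largest one, 10.0.0.0/8:

    import ipaddress

    for addr in ["10.1.2.3", "172.16.0.1", "192.168.1.1", "8.8.8.8"]:
        ip = ipaddress.ip_address(addr)
        print(addr, "private" if ip.is_private else "public")

    # 10.0.0.0/8 alone holds 2**24, roughly 16.8 million, addresses:
    print(ipaddress.ip_network("10.0.0.0/8").num_addresses)   # 16777216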

Last, but not least, another important reason for NAT’s quick adoption is that its gains were realised on day one, while its drawbacks showed up only slowly and much later.

The other side of the NAT

NAT prevents the hosts behind it from being reachable by external hosts and hence from acting as servers. However, in the early days of NAT deployment, many people believed they would have no need to run servers behind a NAT. Thus this architectural constraint was viewed as a security feature and believed to have little impact on users or network usage otherwise. For example, RFC 1335 gave four reasons for the use of private addresses:

  1. In most networks, the majority of the traffic is confined to its local area networks. This is due to the nature of networking applications and the bandwidth constraints on internetwork links.
  2. The number of machines that act as Internet servers, i.e., running programs waiting to be called by machines in other networks, is often limited and certainly much smaller than the total number of machines.
  3. There are an increasingly large number of personal machines entering the Internet. The use of these machines is primarily limited to their local environment. They may also be used as “clients” such as ftp and telnet to access other machines.
  4. For security reasons, many large organisations, such as banks, government departments, military institutions and some companies, may only allow a very limited number of their machines to have access to the global Internet. The majority of their machines are purely for internal use.

As time went on, however, the above reasoning was largely proved wrong.

Today, network bandwidth is no longer a fundamental constraint. In the past few years, VoIP (voice over IP) has become a popular application. VoIP changed the communication paradigm from the client-server model to a peer-to-peer one, meaning that any host may call any other host. Given that more than half of Internet hosts are behind a NAT, a number of NAT traversal solutions had to be developed to support VoIP. Other peer-to-peer applications, such as BitTorrent, have also become popular recently, and each has had to develop its own NAT traversal solution.
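
Most of these traversal solutions share one core idea, sketched schematically below in Python: a host behind a NAT learns from a public rendezvous server which public address and port its NAT assigned, exchanges that mapping with its peer out of band, and then sends a datagram directly so the NAT installs state that lets the peer’s packets in (often called UDP hole punching). The server name, port, and peer address are hypothetical, and all error handling is omitted.

    import socket

    RENDEZVOUS = ("rendezvous.example.net", 3478)   # hypothetical public server

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # Ask the server what source address/port it sees; that is the
    # public mapping the NAT created for this socket.
    sock.sendto(b"what is my public address?", RENDEZVOUS)
    reply, _ = sock.recvfrom(1024)
    public_mapping = reply.decode()      # e.g. "203.0.113.1:40000"

    # After the peers exchange mappings via the server, each sends a
    # datagram directly to the other; the outgoing packet "punches the
    # hole" that allows the peer's inbound packets through the NAT.
    peer = ("203.0.113.9", 41234)        # the peer's mapping, learned out of band
    sock.sendto(b"hello, peer", peer)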

In addition to the change in application patterns, a few other problems have also arisen from NAT’s use of private IP addresses. For instance, a number of business acquisitions and mergers have run into situations where two networks behind NATs needed to be interconnected but, unfortunately, were running on the same private address block, resulting in address conflicts. Another problem emerged more recently. The largest allocated private address block is 10.0.0.0/8, commonly referred to as “net 10.” The business growth of some provider and enterprise networks is leading to, or has already resulted in, net 10 address exhaustion. An open question facing these networks is what to do next. One provider network migrated to IPv6; a number of others simply decided on their own to use another, unallocated IP address block.
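
The merger problem can be stated in one line: two independently chosen net 10 blocks may well collide. A tiny check of this kind (with made-up prefixes) is shown below.

    import ipaddress

    company_a = ipaddress.ip_network("10.0.0.0/16")     # one merging network
    company_b = ipaddress.ip_network("10.0.128.0/17")   # the other one

    # True: the blocks collide, so at least one side must renumber
    # (or the two networks must be NATted toward each other).
    print(company_a.overlaps(company_b))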

It is also a common misperception that a NAT box makes an effective firewall. This may be due partly to the fact that where NAT is deployed, the firewall function is often implemented in the NAT box. A NAT box alone, however, does not make an effective firewall. Numerous home computers behind NAT boxes have been compromised and used as launchpads for spam or DDoS attacks. Firewalls set up control policies on both incoming and outgoing packets to minimise the chances of internal computers’ being compromised or abused. Making a firewall serve as a NAT box does not make it more effective at fencing off bad packets; good control policies do.

Why the IETF Missed the Opportunity to Standardise NAT

During the decade following NAT’s first deployment, a big debate arose in the IETF community about whether NAT should, or should not, be deployed. Through its use of private addresses, NAT moved away from IP’s basic model of providing end-to-end reachability between any hosts, thus representing a fundamental departure from the original Internet architecture. This debate went on for years. As late as April 2000, a message posted to an IETF mailing list stated that NATs are “architecturally unsound” and that the IETF and the IESG “should in no way endorse their use or development.” Whoever posted that message was certainly not alone in holding that position.

These days most people would accept the position that the IETF made a mistake in not standardising NAT early on. How did we miss the opportunity? A simple answer could be that the crystal ball was cloudy. I believe a little digging reveals the factors that clouded our vision at the time. From my personal viewpoint, the following played a major role.

First, I believe the feasibility of designing and deploying a brand-new IP was misjudged, as were the time and effort needed for such an undertaking. Those who opposed standardising NAT had hoped to get a new IP developed in time to meet the needs of a growing Internet. However, the estimate was off by perhaps an order of magnitude. While the development of a new IP was taking its time, Internet growth did not wait. NAT was simply an inevitable consequence, one the IETF community failed to see clearly at the time.

Another closely related factor was an inadequate understanding of how to make engineering trade-offs. Architectural principles should be treated as guidelines for problem solving; they help guide us toward better overall solutions. When the end-to-end reachability model was instead interpreted as an absolute rule, it ruled out NAT as a feasible means of meeting the demand for IP addresses at the time. Furthermore, viewing the architectural model in absolute terms contributed to a one-sided view of NAT’s drawbacks, and hence to a lack of full appreciation of NAT’s advantages as covered earlier, let alone any effort to develop a NAT solution that could minimise the impact on end-to-end reachability.

The misjudgment on NAT cost us dearly. While the big debate went on, NAT deployment rolled out, and the absence of a standard led to a number of different behaviours among the various NAT products. A number of new Internet protocols were also developed or finalised during this time, such as IPSec, SAP, and SIP, to name a few. All of their designs were based on the original IP architecture model, wherein IP addresses are assumed to be globally unique and globally reachable. When those protocols became ready for deployment, they faced a world that was mismatched with their design: not only did they have to solve the NAT traversal problem, but their solutions also had to deal with a variety of NAT box behaviours.

Although NAT has been accepted as a reality today, not all of the confusion around NAT deployment has been cleared up. One example is the recent debate over Class-E address block usage. Class-E refers to the IP address block 240.0.0.0/4, which has been held in reserve until now. As such, many existing router and host implementations block the use of Class-E addresses. Putting aside the issue of the router and host changes required to facilitate Class-E usage, the fundamental debate is whether the address block should go to the public address allocation pool or to the collection of private address allocations. The latter would give the networks facing net-10 exhaustion a much bigger private address block to use. However, this gain is also one of the main arguments against it, raised in an effort to press those networks to migrate to IPv6 instead of staying with NAT. Such a desire sounds familiar, because similar arguments were used against NAT standardisation in the past. If the past is any indication of the future, we should know that pressure does not dictate protocol deployment; economic feasibility does. This argument does not imply that migrating to IPv6 is not economically feasible. On the contrary, I believe it is. New efforts are needed both in protocol development (to make the migration a reality) and in documentation (to show clearly the short- and long-term gains of moving to IPv6).
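
To put the debated block in concrete terms (a quick check with Python’s standard ipaddress module): 240.0.0.0/4 is one sixteenth of the entire IPv4 space, and current host stacks still classify its addresses as reserved.

    import ipaddress

    # Class E is one sixteenth of the IPv4 address space:
    print(ipaddress.ip_network("240.0.0.0/4").num_addresses)   # 268435456

    # ...and host stacks still treat it as reserved:
    print(ipaddress.ip_address("240.0.0.1").is_reserved)       # True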

What Can and Should Be Done Now?

The long-predicted IPv4 address space exhaustion is finally upon us, yet IPv6 deployment is barely visible on the horizon. What can and should the IETF do to enable the Internet to grow along the best path into the future? I hope this review of NAT history helps shed some light on the answer.

First, we should recognise not only that IPv4 NAT is widely deployed today but also that some form of network address translation box will likely be with us forever. We should make a full appraisal of the pros and cons of such boxes; the earlier discussion of IPv4 NAT merely serves as a starting point.

We should not view all network address translation approaches as a “bad thing” to be avoided at all cost. Several years ago, an IPv4-to-IPv6 transition scheme called Network Address Translation-Protocol Translation (NAT-PT, RFC 2766) was developed but was later reclassified to historical status, meaning the protocol is considered obsolete and removed from the Internet standard protocol set, due mainly to concerns that (1) it works in much the same way as an IPv4 NAT does and (2) it does not handle all of the transition cases. However, in view of IPv4 NAT history, it seems worthwhile to revisit that decision. IPv4, as well as IPv4 NAT, will be with us for years to come. NAT-PT seems to offer unique value in bridging IPv4-only hosts and applications with IPv6-enabled hosts and networks. There have also been discussions of the desire to perform address translation between IPv6 networks, which deserve further attention. The Internet would be better off with well-engineered standards and operational guidelines for bridging the IPv4 and IPv6 worlds and for traversing IPv4 and IPv6 NATs, guidelines that aim at maximising interoperability rather than repeating IPv4 mistakes.

Accepting the existence of NAT in today’s architecture does not mean we simply take the existing NAT traversal solutions as given. Instead, we should fully explore the NAT traversal design space to steer the solution development toward adherence to the Internet architecture model. A new effort in this direction is the NAT Traversal through Tunneling (NATTT) project.

In contrast to most existing NAT traversal solutions, which are server based and protocol specific, NATTT aims to provide generic, incrementally deployable NAT traversal support for all applications and transport protocols.

Last, but not least, I believe it is important to understand that successful network architectures can and should change over time. All new systems start small. Once successful, they grow larger. That growth brings the system into an entirely new environment that the original designers may not have envisioned, together with a new set of requirements that must be met. In order to properly adjust a successful architecture, we must fully understand its key building blocks as well as the potential impact of any changes to them. I believe the IP address is one such key building block, one that touches, directly or indirectly, all other major components of the Internet architecture. The impact of IPv4 NAT, which changed IP address semantics, provides ample evidence. During IPv6 development, much of the effort also involved changes to IP address semantics, such as the introduction of new concepts like the link-local address and the site-local address. The site-local address was later abolished and partially replaced by Unique Local IPv6 Unicast Addresses (ULA), another new type of IP address. The debate over the exact meaning of ULA is still going on.

The original IP design clearly defined an IP address as being globally unique, globally reachable, and identifying an attachment point to the Internet. As the Internet architecture evolves, proposals to change that original definition continue to arise. What should be the definition, or definitions, of an IP address today? I believe an overall examination of the IP address’s role in today’s changing architecture deserves special attention at this critical time in the Internet’s growth.

Acknowledgment

I sincerely thank Mirjam Kühne for her encouragement and patience in helping put this article together. I also thank Wendy Rickard for her hard work in making the article more readable.