Congestion Exposure: We’re All in This Together

By: Philip Eardley

Date: January 1, 2010


One of the fundamental characteristics of the Internet’s architecture is that capacity is shared on a packet-by-packet basis (there are no circuits). Today, several mechanisms are used to achieve such capacity sharing, including TCP, deep-packet inspection (DPI), and volume caps. The mechanisms for achieving capacity sharing are partly cooperative (such as between two users’ TCP algorithms) and partly competitive (such as between TCP and DPI or between one user’s 10 TCP flows and another’s single TCP flow). This is part of what is called the tussle in cyberspace.[1]

Capacity sharing and congestion are two sides of the same coin. A (good) transport protocol needs to fill the bottleneck link, or else the network will be underutilized. That means a certain amount of congestion is a good thing. Thus, sharing out capacity is also about sharing out congestion. Note also that the amount of congestion a sender suffers is equal to the amount it causes.

All of this suggests that it is sensible to try to make capacity management more cooperative among and between users, content providers, Internet service providers (ISPs), and carriers. We believe doing so will lead to more-effective use of bottleneck links and encourage deployment of more capacity where required.

Congestion exposure seeks to promote such cooperative capacity management. It would constitute a new capability for the Internet: one in which any node would be able to see the rest-of-path congestion; that is, the congestion between the node and the destination.

IETF 76 in Hiroshima, Japan, saw a well-attended BoF session, where it was decided that the IETF should work on congestion exposure. A new working group (WG) called conex is expected to be formed by IETF 77.

How Congestion Exposure Works

Currently, information about congestion is visible only at the transport layer in the end systems (the information is hidden from the network layer). We would like to see the congestion information in the header of each IP datagram, which would make it visible to all nodes in the network. This will require a new mechanism that enables senders to inform the network of the level of congestion they expect their packets to encounter.

Such a mechanism means having the sender include in the IP header an indication of the current congestion across the path to the destination. A router suffering congestion adds Explicit Congestion Notification (ECN) marks to packets (as defined in RFC 3168) or else drops them. (The latter course of action is less desirable, but proper discussion of this point is beyond the scope of this article.) The destination then reports back to the sender the total congestion, which completes the loop (Figure 1).

The whole-path-congestion information, which is added by the sender, does not get modified en route. So, by subtracting the congestion-so-far (the information carried by the ECN marks), any node can infer the rest-of-path congestion.
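The subtraction described above can be sketched in a few lines of code. This is an illustrative model only, not the actual ConEx wire format: the field names and the use of fractional congestion values are assumptions made for clarity.

```python
# Sketch: each packet carries the sender's declaration of whole-path
# congestion (set at origin, never modified en route) plus the ECN
# marks accumulated so far (updated by congested routers).
from dataclasses import dataclass

@dataclass
class Packet:
    declared_whole_path: float  # sender's declaration of path congestion
    marks_so_far: float         # congestion-so-far, carried as ECN marks

def router_forward(pkt: Packet, local_marking_rate: float) -> Packet:
    """A congested router adds ECN marks; the declaration is untouched."""
    pkt.marks_so_far += local_marking_rate
    return pkt

def rest_of_path(pkt: Packet) -> float:
    """Any node infers downstream congestion by simple subtraction."""
    return pkt.declared_whole_path - pkt.marks_so_far

pkt = Packet(declared_whole_path=0.05, marks_so_far=0.0)
pkt = router_forward(pkt, 0.02)  # 2% marking upstream of this node
print(round(rest_of_path(pkt), 4))  # 0.03 congestion expected downstream
```

The key property the sketch illustrates is that no per-flow state is needed: the difference between the two fields of a single packet is enough for any node on the path to estimate rest-of-path congestion.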

One obvious requirement is that the mechanism works only if the declared information is accurate. For example, can the sender lie about the amount of congestion it is causing in order to gain an advantage? If so, one potential mechanism for preventing this type of cheating might involve having the final network on the end-to-end path check that the sender’s declaration of the total congestion matches the actual congestion experienced. If there is a discrepancy, then the final network may impose a punishment, such as dropped packets.
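Such an audit function might work roughly as follows. This is a hypothetical sketch: the class name, the running-total approach, and the tolerance threshold are all invented for illustration, not taken from any ConEx specification.

```python
# Hypothetical audit at the network nearest the receiver: compare the
# sender's running declarations against the ECN marks actually observed,
# and penalize a persistent deficit by dropping packets.
class Auditor:
    def __init__(self, tolerance: float = 0.01):
        self.declared = 0.0     # sum of sender's declarations seen
        self.experienced = 0.0  # sum of ECN marks actually observed
        self.tolerance = tolerance

    def observe(self, declared_marks: float, ecn_marks: float) -> bool:
        """Return True to forward the packet, False to drop it."""
        self.declared += declared_marks
        self.experienced += ecn_marks
        deficit = self.experienced - self.declared
        return deficit <= self.tolerance  # understating senders get dropped

auditor = Auditor()
print(auditor.observe(0.02, 0.02))  # honest declaration: forwarded
print(auditor.observe(0.00, 0.05))  # understates congestion: dropped
```

Keeping running totals rather than per-packet comparisons lets an honest sender whose declarations merely lag the latest feedback pass the audit, while a persistently dishonest one accumulates a deficit.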

How to Use Congestion Exposure

First, let us consider how an ISP might use our new congestion exposure metric (we will discuss the benefits to end users shortly). Today, most ISPs use techniques like DPI, volume capping, and FairShare (see The Bandwidth Bandwagon on page 1 for an explanation of the FairShare system as deployed by Comcast) to deal with bandwidth shortages, especially in the busy hour. Essentially, they are trying to improve the experience of the majority of users at the expense of a limited group of high-bandwidth users and bandwidth-intensive applications. For instance, the ISP may use DPI to bias against applications it considers low value (perhaps peer to peer) in order to improve the experience for other users and applications.

Congestion exposure helps the operator by improving the granularity of the information available to inform bandwidth-management mechanisms. With congestion exposure, the operator sees the current congestion, which is an indication of the actual stress on capacity, as opposed to just a count of the volume over the course of a month (or the busy hour), which is an extremely crude metric. The operator also sees the whole-path congestion rather than just conditions within its own network.

Contracts between ISPs and their users could be modified to take account of the congestion exposure information, such as by attaching a congestion allowance clause. The allowance could be in the form of a token bucket, wherein a user is allowed to cause congestion (on the whole path) up to a particular rate and burst size, but anything higher would result in consequences, such as dropped traffic or traffic that gets forwarded at a lower priority. All contracts could look the same as today (at least to nongeek users) and include a tier of options, such as basic, medium, and advanced.
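A token-bucket congestion allowance of this kind can be sketched directly. The rates and numbers below are invented for illustration; a real contract would set them commercially.

```python
# A congestion allowance as a token bucket (illustrative sketch).
# Tokens refill at `rate` per second up to a maximum of `burst`;
# each unit of congestion the user causes spends tokens. An empty
# bucket means the ISP may drop or deprioritize the user's traffic.
class CongestionAllowance:
    def __init__(self, rate: float, burst: float):
        self.rate = rate    # allowed congestion per second
        self.burst = burst  # maximum saved-up allowance
        self.tokens = burst

    def tick(self, seconds: float) -> None:
        """Refill the allowance as time passes, capped at the burst size."""
        self.tokens = min(self.burst, self.tokens + self.rate * seconds)

    def cause_congestion(self, amount: float) -> bool:
        """Spend allowance; False means the user is over quota."""
        if self.tokens >= amount:
            self.tokens -= amount
            return True
        return False

bucket = CongestionAllowance(rate=0.1, burst=1.0)
print(bucket.cause_congestion(0.8))  # within allowance
print(bucket.cause_congestion(0.5))  # bucket nearly empty: over quota
bucket.tick(5)                       # 5 seconds later, 0.5 tokens refilled
print(bucket.cause_congestion(0.5))  # back within allowance
```

The same two-parameter shape (rate and burst) is what makes tiered contracts simple to express: a basic, medium, or advanced tier is just a different pair of numbers.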

The key difference between this new congestion exposure paradigm and what is happening today is that in the new paradigm, an end host’s operating system could potentially optimize the user’s experience by favouring some of its applications. The operating system deduces when the user’s congestion allowance is endangered, and it then balances congestion between its applications. Perhaps during a period of heavy congestion, the user’s videoconferencing would continue at full rate while a file download would be paused. Similarly, if the download happened to be more important, then that could be favoured instead.
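A minimal sketch of such an operating-system policy, assuming the host assigns each flow a priority, might look like this. The function and the priority scheme are hypothetical; nothing here is mandated by congestion exposure itself.

```python
# Hypothetical OS-level policy: when the congestion allowance is
# running low, pause the lower-priority flows so the most important
# ones keep their full rate.
def balance(flows, allowance_low: bool):
    """flows: list of (name, priority, rate). Returns name -> rate."""
    if not allowance_low:
        return {name: rate for name, _, rate in flows}
    top = max(priority for _, priority, _ in flows)
    return {name: (rate if priority == top else 0.0)
            for name, priority, rate in flows}

flows = [("videoconference", 10, 2.0), ("file-download", 1, 8.0)]
print(balance(flows, allowance_low=True))
# the videoconference keeps its full rate; the download is paused
```

If the user instead marked the download as more important, the same policy would favour it: the point is that the choice sits with the end host, not the network.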

Applied in this fashion, congestion exposure would encourage the use of scavenger transports (similar in aim to LEDBAT) that preserve the user’s congestion allowance for more-important applications. The extra benefit of congestion exposure is that it enables fully flexible yet simple differentiation of QoS (quality of service) under the control of the end user’s applications and operating system.

By enabling greater end-system control and freedom over transmissions, congestion exposure makes it possible for operators to increase their network’s efficiency. In particular, the combined utility of an operator’s users will increase. It is exactly this kind of cooperation between users and networks that lies at the heart of our hopes for congestion exposure.

Interestingly, congestion exposure is application and protocol agnostic. Which applications consume the user’s congestion allowance is governed by the user’s own preferences; in other words, the network doesn’t care, so DPI is no longer needed. Thus, regulators requiring nondiscriminatory traffic management should also be happy.

Other Benefits of Congestion Exposure

There are other potential benefits of congestion exposure, for both users and networks. For example, when there is little congestion, the application can run at a much faster rate than TCP would achieve. When there is congestion, a sender is likely to favour short transfers over large ones (because they use up less congestion allowance); for example, a sender might favour Web browsing over peer to peer. As Figure 2 shows, the result is that the short transfer now completes much more quickly, while the larger one takes barely any longer.

We also believe that congestion exposure can improve an operator’s incentive to invest in new capacity. One of the stumbling blocks to investment that we face today is that a few users tend to grab nearly all of the extra bandwidth while the cost is spread out over all users through fees. With congestion exposure, an operator is motivated to invest because the benefit is more evenly spread out (or targets those who want to pay for it). An operator may also be able to identify which of its links have the greatest incipient demand and hence determine where to focus that investment.

Other potential benefits include new tools that exploit the congestion information for DoS mitigation, traffic engineering, and internetwork service-level agreements.

The Future of Congestion Exposure at the IETF

The proposed conex WG will concentrate on the specification of how rest-of-path congestion information is carried in IP packets. This does, however, require standardizing a change to IP, which is not an insignificant step!

Conex will also work on how to transport the whole-path-congestion information from the destination to the sender, as well as how to prevent parties from being less than truthful about congestion. The WG will encourage experiments to determine which use cases are most useful and hence worthy of deeper consideration.

It is hoped that the work of the proposed conex WG will lead to better cooperation between users and networks, and so achieve better capacity sharing on the Internet.

For more information…


1. D. Clark, J. Wroclawski, K. Sollins, and R. Braden, “Tussle in Cyberspace: Defining Tomorrow’s Internet,” IEEE/ACM Transactions on Networking (ToN), Vol. 13, No. 3, June 2005.

This article was posted on 20 January 2010.