IETF News

The Bandwidth Bandwagon

By: Mat Ford

Date: January 1, 2010

A panel discussion at IETF 76 helps shed light on the realities of bandwidth growth, operator responses to a changing landscape, and new, relevant IETF work.

While the Internet did experience episodes of “congestion collapse” more than 20 years ago, the mechanisms implemented at that time to address the problem have largely stood the test of time. Despite this, rumours of imminent network meltdown are never very far away. In November 2009, the Internet Society organized a panel discussion in Hiroshima, Japan, adjacent to the IETF 76 meeting, for the purpose of making the issues of growing Internet bandwidth accessible to a wider audience: in essence, “pulling the message out of engineering and talking to the real world,” as moderator Leslie Daigle, the Internet Society’s chief Internet technology officer, put it in her introduction.

Leslie said decision makers are increasingly trying to understand the parameters of Internet bandwidth growth and management because these have implications both for network-neutrality debates and for business decisions based on predictions of growth and usage. The panel was intended to bring new clarity to such questions as: What are the bottlenecks? What causes congestion? Is congestion bad? What is its impact? And what is being done about it? The panel was composed of individuals with real data, real network issues to resolve, and real technologies to make Internet bandwidth use more effective and efficient for all.

Broadband Landscape in Japan

First up was Kenjiro Cho, a senior researcher at Internet Initiative Japan. Kenjiro presented research results based on data collected from six ISPs in Japan starting in 2004 and covering 42 percent of Japanese Internet traffic. As of June 2009, there were 30.9 million broadband subscribers in Japan, and the market is relatively mature: penetration grew by only 3 percent of households in 2008, reaching 63 percent of Japanese homes. While growth of cable deployments remains steady, the great majority of households enjoy fibre-to-the-home (FTTH) connections, and existing DSL customers are shifting to FTTH in large numbers. In the Japanese market, 100-Mbps, bidirectional connectivity via FTTH costs USD 40 a month. The relatively high access bandwidth also skews the distribution of per-user bandwidth consumption: there is more variability from one user’s consumption profile to the next.

ISPs are starting to see the value in sharing traffic growth data as a way to help others better understand their concerns. Of course, ISPs make internal measurements, but measurement methodologies and policies will typically differ from one ISP to the next. By aggregating standardized and anonymized measurements, ISPs can help third parties come to understand the pressures, concerns, and motivations that are shaping their perspective.

Understanding traffic growth on the Internet is critically important, as it is one of the key factors driving investment decisions in new technologies and infrastructure. The balance between supply and demand is crucial. Kenjiro has observed modest growth of about 40 percent per annum since 2005 based on traffic peaks at major Japanese Internet exchanges. For residential traffic, growth rates are similar, at around 30 percent per annum. As network capacity is observed to grow at approximately 50 percent per annum, according to various sources, there does not appear to be a problem in catering to Internet traffic growth, at least at the macro scale.
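To make that supply-and-demand comparison concrete, the back-of-the-envelope calculation below converts the quoted growth rates into doubling times. It assumes simple compound annual growth, an illustrative assumption rather than part of Kenjiro’s analysis.

import math

def doubling_time(annual_growth: float) -> float:
    """Years for a quantity growing at `annual_growth` per annum to double."""
    return math.log(2) / math.log(1 + annual_growth)

print(round(doubling_time(0.40), 1))  # traffic at 40% per annum: ~2.1 years
print(round(doubling_time(0.50), 1))  # capacity at 50% per annum: ~1.7 years

On these figures, capacity doubles faster than traffic, which is what underpins the conclusion that there is no problem at the macro scale.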

Kenjiro discussed some of the observed shifts in residential user behaviour in the period 2005-09. In 2005, the ratio of inbound to outbound traffic was almost 1:1, suggesting that file sharing was a very widespread use of the network at that time. In 2009, outbound traffic (download, from the user’s perspective) was noticeably greater, suggesting a shift from peer-to-peer file sharing to streamed content services. The modal download volume grew more over the period (nearly 4 times, from 32 MB to 114 MB per day) than upload volumes did (less than 2 times, from 3.5 MB to 6 MB per day), while average download volume is now 1 GB per day per user.

In analyzing a scatterplot of in/out volumes per user in 2009, Kenjiro observed that while there are two clusters (client-type users and peer-type heavy hitters), there is no clear boundary between the two groups. This is an important point to bear in mind when considering the effectiveness of the coarse-grained bandwidth management techniques deployed by some ISPs today. Most users make some use of both client-server-style and peer-to-peer-style applications.

Kenjiro concluded with the observation that while the data is interesting, it is nevertheless difficult to predict the future of Internet bandwidth given the variety of technical, economic, and political factors at play.

ISPs Working with IETF

The next presenter was Richard Woundy, senior vice president at Comcast, a large, U.S.-based cable ISP. Richard began by explaining some of the ISP’s motivations for congestion management: the need to be responsive to very dissimilar customer application demands; the need to balance the competing concerns of the Internet community, regulators, investors, and others; and the fact that network capacity increases are not instantaneous. Richard noted the daily challenge of tuning Comcast’s network to ensure, for example, that VoIP service providers aren’t disadvantaged, and then checking that, in the process, another third-party service, such as a video provider, hasn’t been unintentionally disadvantaged. For Comcast, the goal of congestion management practice is to ensure consistent performance of Internet applications even in the presence of heavy background traffic, such as from peer-to-peer file sharing. Comcast aims to be both protocol and application agnostic and compatible with current Internet standards. “We’re always worried about what our customers think of our service,” said Richard. “For an ISP it’s a balancing act.”

For best-effort traffic over the cable network, the Comcast congestion management plan utilizes two different Quality of Service (QoS) levels: Priority Best Effort (PBE), which is the default QoS level, and Best Effort (BE). When levels of traffic on a particular port exceed a set threshold, that port enters a near-congestion state. Customers determined to be contributing disproportionately to the total traffic volume of a port in the near-congestion state will have their traffic marked as BE for a short duration. That marking impacts the traffic of users marked BE only when congestion is actually present; otherwise, PBE and BE traffic are treated identically. In the presence of congestion, traffic marked BE will experience additional latency (on the order of a few milliseconds) as it gets queued, while PBE traffic takes priority. Less than 1 percent of Comcast’s customer base is impacted by this congestion management plan, said Richard.
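For readers who prefer code to prose, the Python sketch below illustrates the general shape of such a two-tier scheme. It is a minimal illustration under assumed thresholds and data structures, not Comcast’s deployed system; every constant and name in it is invented for the example.

# Illustrative sketch of a two-tier (PBE/BE) congestion management scheme.
# The thresholds, measurement window, and structure are assumptions, not Comcast's values.

from dataclasses import dataclass, field

PORT_UTILISATION_THRESHOLD = 0.80   # port considered "near congestion" above 80% utilisation (assumed)
USER_SHARE_THRESHOLD = 0.70         # user considered a disproportionate contributor above 70%
                                    # of their provisioned volume for the window (assumed)

@dataclass
class User:
    name: str
    recent_bytes: int = 0            # bytes sent in the current measurement window
    provisioned_bytes: int = 10**9   # bytes the user could send at full provisioned rate in the window
    qos_level: str = "PBE"           # default level: Priority Best Effort

@dataclass
class Port:
    capacity_bytes: int              # bytes the port can carry in the measurement window
    users: list = field(default_factory=list)

    def utilisation(self) -> float:
        return sum(u.recent_bytes for u in self.users) / self.capacity_bytes

def manage_congestion(port: Port) -> None:
    """Mark heavy contributors BE while the port is near congestion; otherwise restore PBE."""
    if port.utilisation() >= PORT_UTILISATION_THRESHOLD:
        for user in port.users:
            if user.recent_bytes / user.provisioned_bytes >= USER_SHARE_THRESHOLD:
                user.qos_level = "BE"     # deprioritised only for as long as congestion persists
    else:
        for user in port.users:
            user.qos_level = "PBE"        # no congestion: everyone is back at the default level

if __name__ == "__main__":
    port = Port(capacity_bytes=10**10,
                users=[User("light", recent_bytes=10**8),
                       User("heavy", recent_bytes=9 * 10**9)])
    manage_congestion(port)
    print({u.name: u.qos_level for u in port.users})   # {'light': 'PBE', 'heavy': 'BE'}

The point of the sketch is simply that the decision is based on aggregate contribution during congestion; it never inspects which protocols or applications generated the traffic.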

Richard also highlighted the work Comcast is doing to collaborate with the IETF on new protocols that could form part of future solutions for end-to-end congestion management, such as the conex BoF and the alto and ledbat working groups (WGs), described in more detail later. Richard said, “It’s about making sure, while we’re executing a reasonable upgrade schedule, that when flash crowds happen or some new streaming application appears that chews up bandwidth, we can handle all those services gracefully.”

Hypergiants, Port 80, and a Flatter Internet

Danny McPherson, chief security officer at Arbor Networks, then presented recent results from the ATLAS Internet Observatory, a collaborative research effort among Arbor Networks, the University of Michigan, and Merit Network. The ATLAS Internet Observatory utilizes a commercial probe infrastructure deployed at more than 110 participating ISPs and content providers to monitor traffic flow data across hundreds of routers. It is believed to be the largest Internet monitoring infrastructure in the world, and the observatory’s results represent the first global traffic engineering study of Internet evolution.

Major findings from the ATLAS project are, first, the consolidation of content around so-called hypergiants: the 30 companies that now account for 30 percent of all Internet traffic. Content is migrating from the enterprise or network edge to large content aggregators. Consolidation of large Internet properties has progressed to the point where just 150 Autonomous Systems now contribute 50 percent of all observed traffic.

Second, applications are consolidating around TCP port 80, because the Web browser is increasingly the application front end for diverse content types, such as e-mail and video. For application developers, TCP port 80 works more deterministically due to the presence of middleboxes in the network that filter or otherwise interfere with traffic using different transports and alternative ports.

Third, evolution of the Internet core and economic innovation mean that the majority of traffic is now peered directly between consumers and content. Declining transit prices have not prevented this disintermediation from taking place on a large scale. High-value-content owners are starting to experiment with a paid-peering model and dispensing with transit altogether, meaning that if your ISP doesn’t pay to play, then you won’t be able to view that content at all, although this phenomenon is difficult to quantify due to the inevitable commercial secrecy surrounding such deals. Disintermediation of the historical tier-1 networks means a flatter Internet with much higher interconnection density.

ATLAS also observed the trend away from peer-to-peer and toward streaming video distribution mentioned by Kenjiro earlier. Observations of the Internet’s size (9 exabytes per month) and growth rate (44.5 percent compound annual growth) also agree with others’ analyses. While those numbers certainly indicate significant growth, they’re well within projected increases in gross network capacity and so no cause for concern.

Danny concluded by observing that the Internet appears to be at an inflection point as it transitions from a focus on connectivity to a focus on content.

Sharing Means Caring

The final panellist was Lars Eggert, principal scientist at Nokia Research Center and Transport Area director at the IETF, who briefly introduced the IETF activities related to bandwidth management, or capacity sharing. Lars observed that the Internet is all about capacity sharing. The connectionless, best-effort, end-to-end nature of the Internet enabled it to scale and resulted in the tremendous innovation that we all now take for granted. Sharing Internet resources as the Internet does requires congestion control mechanisms at the transport layer and requires applications to be “social” in their behaviour toward each other. “Sharing means caring,” as Lars explained.

The architectural principles of the Internet mean that, in general, the responsibility is split between the applications and the network. The network is required to provide neutral information about path conditions in a timely manner, while applications and transport protocols choose how to act on that information. But the smart-edge, dumb-core paradigm gets you only so far, and there’s a valid role for the network, as exemplified by the Comcast experience. As Lars observed, “It’s not all about the edges.”

The IETF toolbox includes TCP and TCP-friendly congestion control, which allows hosts to determine their transmission rate according to path conditions based on observed round-trip time and packet loss. Extensions and optimizations include Explicit Congestion Notification (ECN) and Active Queue Management (AQM). However, as Lars observed, “Mechanisms like ECN and AQM were developed in the 1990s, when core speeds were around 45 Mbps. Now we have those speeds in the access network. Stuff that we did back then for the core should be revisited to see what we could use in the access network.”
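As a simple illustration of the host side of that toolbox, the sketch below shows the additive-increase/multiplicative-decrease reaction at the heart of TCP-style congestion control, treating an ECN mark the same way as a packet loss. It is a teaching sketch with assumed constants, not an implementation of any particular TCP variant.

# Minimal AIMD sketch: the sender grows its window gradually and halves it whenever the
# network signals congestion (a packet loss or an ECN mark). Constants are illustrative.

MSS = 1460          # maximum segment size in bytes
MIN_WINDOW = MSS    # never shrink below one segment

def update_congestion_window(cwnd: float, congestion_signalled: bool) -> float:
    """Return the new congestion window after one round-trip time."""
    if congestion_signalled:                 # loss detected or ECN-Echo received
        return max(cwnd / 2, MIN_WINDOW)     # multiplicative decrease
    return cwnd + MSS                        # additive increase (congestion avoidance)

if __name__ == "__main__":
    cwnd = 10 * MSS
    for signal in [False, False, True, False]:
        cwnd = update_congestion_window(cwnd, signal)
        print(int(cwnd))                     # 16060, 17520, 8760, 10220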

Congestion Collapse
In the past, when more packets were sent than intermediate routers could handle, the routers discarded many of them, expecting the end points of the network to retransmit the information. However, early TCP implementations had very bad retransmission behaviour. When this packet loss occurred, the end points sent extra packets that repeated the information lost, thereby doubling the data rate sent, exactly the opposite of what should be done during congestion. This pushed the entire network into a congestion collapse, wherein most packets were lost and the resultant throughput was negligible.
Source: Wikipedia

A new IETF WG (Low Extra Delay Background Transport, or ledbat) is standardizing a congestion control algorithm to allow hosts to transmit bulk data without substantially affecting the delay seen by other users and applications. Another new WG (Multipath TCP, or mptcp) is endeavouring to extend TCP so as to enable one connection to transmit data along multiple paths between the same two end systems. This effectively pools the capacity and reliability of multiple paths into a single resource and enables traffic to quickly move away from congested paths.
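A rough sketch of the ledbat idea, in the same illustrative spirit: the sender estimates queuing delay from its delay measurements and backs off before the queue grows enough to hurt other traffic. The target delay, gain, and update rule below are simplifying assumptions, not the algorithm the WG was standardizing.

# Simplified delay-based window update in the spirit of ledbat: keep queuing delay near a
# small target so that background transfers yield to interactive traffic. Constants are assumed.

TARGET_DELAY = 0.025   # seconds of self-induced queuing delay the flow aims for (assumed)
GAIN = 1.0             # how aggressively the window reacts (assumed)
MSS = 1460.0           # bytes

def ledbat_style_update(cwnd: float, base_delay: float, current_delay: float) -> float:
    """Grow the window while measured queuing delay is below target, shrink it above."""
    queuing_delay = current_delay - base_delay          # estimate of our own queue contribution
    off_target = (TARGET_DELAY - queuing_delay) / TARGET_DELAY
    cwnd += GAIN * off_target * MSS * MSS / cwnd        # scaled per-round-trip adjustment
    return max(cwnd, MSS)                               # never below one segment

if __name__ == "__main__":
    cwnd, base = 20 * MSS, 0.040                        # 40 ms base delay
    for measured in [0.045, 0.060, 0.080, 0.090]:       # the queue builds up over time
        cwnd = ledbat_style_update(cwnd, base, measured)
        print(round(cwnd))                              # rises at first, then backs off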

The Application Layer Traffic Optimization, or alto, WG is focused on improving peer-to-peer application performance while simultaneously aligning peer-to-peer traffic better with ISP constraints. Providing peer-to-peer applications with network, topology, and other information should enable them to make better-than-random peer selection, thereby improving performance for the application and alignment with ISP preferences.
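The kind of guidance alto envisages can be pictured as a ranking step in a peer-to-peer client: instead of choosing peers at random, the client consults a cost map supplied by the ISP and prefers cheaper (for example, on-net or nearby) peers. The peers and cost map below are invented for the example; this is not the alto protocol itself, which was still being defined at the time.

# Illustrative alto-style peer selection: prefer peers the ISP's cost map says are "cheaper"
# (for example, on-net or topologically close). The peer names and costs are invented data.

import random

def select_peers(candidates: list[str], cost_map: dict[str, int], count: int) -> list[str]:
    """Pick `count` peers, lowest ISP-provided cost first; peers with no entry rank last."""
    worst = max(cost_map.values()) + 1
    return sorted(candidates, key=lambda peer: cost_map.get(peer, worst))[:count]

if __name__ == "__main__":
    candidates = ["peer-a", "peer-b", "peer-c", "peer-d"]
    cost_map = {"peer-a": 10, "peer-b": 1, "peer-c": 5}   # hypothetical ISP-provided cost map
    random.shuffle(candidates)                            # the order a tracker might return
    print(select_peers(candidates, cost_map, count=2))    # ['peer-b', 'peer-c']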

A new BoF meeting at IETF 76 was Congestion Exposure (conex), which targets exposing the expected congestion along an Internet path. This would be a new capability and could allow even greater freedom over how capacity is shared than we have today. “This is a very powerful mechanism that provides an information exchange between the network core and the edges that wasn’t there before, and it has lots of potential uses,” said Lars. Such a capability could be used for a variety of purposes, such as congestion policing, accountability, service-level agreements, and traffic engineering.
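One way to picture what congestion exposure could enable is a per-user congestion-volume allowance: the network counts not how many bytes a user sends but how many of those bytes were congestion-marked, and polices only against that. The sketch below is a thought experiment consistent with the ideas discussed at the BoF, not a specified mechanism, and its numbers are invented.

# Thought-experiment sketch of congestion-volume policing: a user is constrained by how many
# of their bytes contributed to congestion, not by raw volume. All numbers are invented.

CONGESTION_ALLOWANCE_BYTES = 50_000_000   # hypothetical monthly congestion-volume budget

def within_allowance(congestion_marked_bytes: int) -> bool:
    """True while the user's congestion-marked traffic stays inside the allowance."""
    return congestion_marked_bytes <= CONGESTION_ALLOWANCE_BYTES

if __name__ == "__main__":
    # A heavy off-peak user: lots of traffic, almost none of it marked as causing congestion.
    print(within_allowance(congestion_marked_bytes=2_000_000))    # True: no reason to police
    # A user whose traffic keeps hitting congested paths at peak times.
    print(within_allowance(congestion_marked_bytes=80_000_000))   # False: a candidate for policing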

Finally, Lars drew attention to another BoF meeting taking place during IETF 76 on recommendations for home gateways: homegate, which is intended to collect requirements from disparate RFCs and provide an overview for implementers of home gateway devices. The goal is to improve the network experience for an end user using a home gateway to access the Internet.

There are already many tools to share Internet capacity fairly, effectively, and efficiently, and the IETF is designing new and better tools where needed. Lars concluded by noting that a lot could be gained by more consistently and appropriately using the tools we already have.

A Balanced Approach to the Impact of Broadband

In discussion after the panel, both Lars Eggert and Richard Woundy observed that it is largely the impact of broadband on the network that has exposed these congestion management issues. “I’m very glad that this discussion has picked up over the last few years with the rise of broadband,” said Lars. “We need mechanisms to handle the [new broadband] speeds safely.” Higher access speeds make it possible for individual end users to have a significant impact on the network.

Bandwidth Bandwagon Panellists
Leslie Daigle, Internet Society (moderator)
Kenjiro Cho, Internet Initiative Japan
Lars Eggert, Nokia
Danny McPherson, Arbor Networks
Richard Woundy, Comcast

Richard emphasized that it would be a mistake to conclude from all of this that ISPs want to stop investing. The concern is, rather, to ensure ISPs are able to deliver a good customer experience for all, even when traffic increases in unexpected ways (doubles overnight, for example). “That’s the kind of situation where congestion management makes sense, but it needs to be followed up with capacity upgrades,” Richard said. “You don’t do one without the other; otherwise, you’re just letting your service fall apart.”

Details of the event, a set of slides, audio, and a transcript are available from the ISOC Web site.

IETF Toolbox

alto: http://www.ietf.org/dyn/wg/charter/alto-charter.html
conex: http://www.ietf.org/proceedings/09nov/agenda/conex.txt
homegate: http://www.ietf.org/proceedings/09nov/minutes/homegate.htm
ledbat: http://www.ietf.org/dyn/wg/charter/ledbat-charter.html
mptcp: http://www.ietf.org/dyn/wg/charter/mptcp-charter.html
TCP road map: http://www.ietf.org/rfc/rfc4614.txt

This article was posted on 20 January 2010.

Full Caption Text:

Image 1: IETF 76 participants listen as panellists discuss bandwidth issues.
Image 2: Panellist Richard Woundy at IETF 76.
Image 3: Bandwidth Bandwagon panellists (from left) Kenjiro Cho, Danny McPherson, Richard Woundy, and Lars Eggert.
Image 4: Panel moderator Leslie Daigle of the Internet Society.
Image 5: IETF 76 participant and ISOC Board member Bert Wijnen attends the panel discussion.
Image 6: Panellist Kenjiro Cho, of Internet Initiative Japan, gives a panel presentation.
Image 7: Bob Hinden asks a question during the open-mic portion of the IETF 76 panel discussion.