
How Internet Traffic Measurements Can Bolster Protocol Engineering

By: Carolyn Duffy Marsan

Date: April 17, 2016


Can the IETF adopt measurement-driven engineering in the design of Internet protocols? That was the technical topic at the IETF meeting held in Yokohama, with presenters Brian Trammell, who leads the Internet Architecture Board’s (IAB’s) IP Stack Evolution Program, and Alberto Dainotti, a research scientist with the Center for Applied Internet Data Analysis (CAIDA).

Trammell introduced the topic by saying that measurement-driven engineering would allow protocols to be designed for common occurrences, while taking into account the risks of uncommon occurrences.

“We’d like to apply measurement wherever we can to know the difference” between these two situations, he explained, adding that it might even be possible to take measurements at runtime.

Trammell focused his talk on the role that measurement can play in writing protocols related to IP stack evolution and path impairment. He noted that many solutions assume that the Internet can be run over User Datagram Protocol (UDP), but said “we need more data” before making that decision.

Trammell showed a picture of the evolving IP stack, which resembles a two-stem martini glass, and noted that we now have two layer threes, with IPv4 and IPv6 coexisting “more or less well.”

However, he said that we have problems with fuzzy boundaries between layers three and four, with Transmission Control Protocol (TCP), Transport Layer Security (TLS), and Hypertext Transfer Protocol (HTTP) on top of IPv4, while UDP and new transports will be layered on top of IPv6.

“We’d like to fix this problem by putting in new transport layers and by rethinking the layer boundary with UDP encapsulation, crypto to reinforce the boundary between endpoint and path visible headers… and add explicit cooperation to give back transport and application semantics the path they actually need,” he explained.

Trammell said the IETF has been working on the assumption that all of this explicit relayering can be done with UDP encapsulation. “We assume that UDP works. Does it?” he asked.

Trammell explained that it’s important to measure path impairment, which shows the likelihood that traffic with given characteristics will experience problems on a given path. These problems might include increased latency, reordering, connectivity failure, or selective disablement of features. The goal of measuring path impairment is to discover how and how often a proposed feature would break.

“Basically, the way we measure this is we put a bunch of packets on the Internet, and we see what happens,” Trammell said.
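To make that concrete, the sketch below shows one toy version of such a probe: it tests whether Explicit Congestion Notification (ECN) negotiation survives toward a single destination. It uses Scapy, needs raw-socket privileges, and the target addresses are placeholders; it is only an illustration of the approach, not the tooling actually used by the testbeds Trammell cited.

```python
# Toy active probe in the spirit of "put packets on the Internet and see what
# happens": does ECN negotiation survive toward one destination?
# Illustrative only; requires root and Scapy, targets are placeholder addresses.
from scapy.all import IP, TCP, sr1

def ecn_negotiation_succeeds(dst, dport=80, timeout=3):
    # An ECN-setup SYN carries both the ECE and CWR flags (RFC 3168)
    syn = IP(dst=dst) / TCP(dport=dport, sport=44321, flags="SEC")
    reply = sr1(syn, timeout=timeout, verbose=False)
    if reply is None or not reply.haslayer(TCP):
        return None                                # no answer: possible connectivity impairment
    return bool(int(reply[TCP].flags) & 0x40)      # ECE set in the SYN-ACK means ECN negotiated

for host in ("192.0.2.10", "198.51.100.20"):       # placeholder probe targets
    print(host, ecn_negotiation_succeeds(host))
```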

He provided results from two testbeds, PlanetLab and Ark, which together cover about 10,000 paths yet produce widely different results when measuring the percentage of paths that modify a selected packet feature. For example, for modification of the TCP Initial Sequence Number (ISN), PlanetLab measures a rate of 10.7% while Ark measures 1.8%.

These two testbeds represent “a really tiny fraction of the Internet, which has billions and billions of paths,” Trammell noted. “So the results are highly dependent on the vantage point.”

Further, the two testbeds share the same bias, since both are deployed by people who are knowledgeable about networking, yet they produce widely different results.

“We need more data here and more diversity,” Trammell added.

Trammell said the IETF needs to engineer protocols that work for path impairments that are common, such as Network Address Translators (NATs), but shouldn’t create a lot of excess code to deal with rare problems.

“We need information about the prevalence of these [situations] in order to make informed decisions,” he said.

Trammell pointed out several challenges for measurement-driven protocol engineering. First, measuring the Internet is hard, and measurements don’t always measure what you want. Second, the Internet is not homogeneous, so it is difficult to extrapolate from measurements on any given link. Finally, researchers face the problem of having too little data and too much data at the same time.

Trammell recommended that the IETF consider using measurements that protocols already gather incidentally, such as TCP’s own estimate of its round-trip time. Further, he suggested that the IETF design protocols with built-in measurement in mind, thereby making instrumentation accessible and operational at runtime.
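As an illustration of the kind of measurement a transport already keeps for its own purposes, the Linux kernel exposes TCP’s smoothed round-trip-time estimate through the TCP_INFO socket option. The sketch below reads it for a live connection; the byte offset of tcpi_rtt in struct tcp_info is a Linux-specific assumption and can vary between kernel versions, so treat it as illustrative rather than portable.

```python
# Read the RTT that TCP itself already measures, via Linux's TCP_INFO socket
# option. The offset of tcpi_rtt within struct tcp_info (68 bytes, value in
# microseconds) is an assumption that matches common Linux layouts.
import socket, struct

TCP_INFO = 11          # Linux value of the TCP_INFO socket option
TCPI_RTT_OFFSET = 68   # assumed byte offset of tcpi_rtt (tcpi_rttvar follows it)

with socket.create_connection(("example.org", 80), timeout=5) as s:
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
    s.recv(4096)                                    # force at least one round trip
    info = s.getsockopt(socket.IPPROTO_TCP, TCP_INFO, 192)
    rtt_us, rttvar_us = struct.unpack_from("II", info, TCPI_RTT_OFFSET)
    print(f"kernel-smoothed RTT: {rtt_us / 1000:.1f} ms, "
          f"rttvar: {rttvar_us / 1000:.1f} ms")
```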

He also suggested that the IETF enhance the available testbeds, such as PlanetLab and Ark, as well as existing measurement efforts such as Large-Scale Measurement of Broadband Performance (LMAP). The IETF should use these testbeds and tools to create a framework that brings comparability and repeatability to observations.

“The goal would be to combine measurements from different vantage points and data sources for wider and deeper insight,” he explained. “Here are a couple things we can do: develop common information models and query sources, and develop common coordination and control protocols.”
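Trammell did not present a concrete schema, but a “common information model” could be as simple as having every testbed emit path observations in one agreed record format that a shared query layer can consume. The record below is purely an illustrative assumption, not anything defined by the IAB or IETF.

```python
# Hypothetical shared record format for path observations, so that results
# from different vantage points (PlanetLab, Ark, ...) can be merged and
# queried together. All field names are illustrative assumptions.
from dataclasses import dataclass, asdict
import json, time

@dataclass
class PathObservation:
    vantage_point: str   # identifier of the probing node
    destination: str     # probed address
    feature: str         # e.g. "udp-reachability", "ecn-negotiation", "tcp-isn"
    impaired: bool       # True if the feature was blocked or modified on this path
    timestamp: float     # when the probe ran (UNIX time)

obs = PathObservation("ark-node-example", "198.51.100.7",
                      "udp-reachability", False, time.time())
print(json.dumps(asdict(obs)))   # one line any collector could emit
```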

Next, Dainotti considered the role that Internet traffic measurement can play in the development of protocols related to the Border Gateway Protocol (BGP).

“BGP is the central nervous system of the Internet,” Dainotti said. “BGP design is known to contribute to issues in availability, performance, and security. So we know that we need to engineer protocol evolution. However, it’s difficult to make protocol engineering decisions because we know very little about the structure and dynamics of the BGP ecosystem.”

Dainotti said researchers need more and better data about BGP operations, including more information from routers, as well as more data collectors and more experimental testbeds. Further, the Internet engineering community needs better tools to learn from the data, so that data analysis is easier, faster, and better able to cope with larger data sets. Researchers would like to monitor BGP in near real-time and tighten data collection, processing, and visualization.

Dainotti shared his research related to BGP outages, including those caused by country-level Internet blackouts and natural disasters. Before his Internet Outage Detection and Analysis (IODA) project, it took four months to analyze an Internet shutdown such as those seen during the Arab Spring in 2011. With IODA’s live Internet monitoring, he is able to detect Internet outages in near real-time, such as a 20-minute outage experienced by Time Warner Cable in September 2015.

“We built some complex software and hardware to track outages in near real-time and to perform additional measurements while the event is actually happening,” Dainotti explained. “Christmas last year, we were able to follow the outages in North Korea in almost real-time—just a 30-minute delay. This was thanks to the infrastructure we built.”

Now CAIDA is making these tools more generally available. For example, it has a new tool called BGPstream that provides a software framework for historical and live BGP data analysis. The tool is open source and available at bgpstream.caida.org.

BGPstream is “used mostly by the scientific and operational community,” Dainotti said. “It efficiently deals with large amounts of distributed BGP data from multiple BGP collectors. The main library offers a time-ordered data stream from heterogeneous sources. It supports near real-time data processing and targets a broad range of applications and users.”

CAIDA has built several tools including PyBGPstream, which can be used to study AS path inflation, and BGPcorsaro for monitoring address space. Another project tracks BGP hijacking attacks.
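To give a flavor of what working with BGPstream looks like, the sketch below uses the PyBGPStream bindings to read ten minutes of BGP updates from one Route Views collector and count announcements per origin AS. The constructor arguments and field names follow the current Python API and may differ between PyBGPStream releases; the time window and collector are arbitrary choices for illustration.

```python
# Count BGP announcements per origin AS over a short window using the
# PyBGPStream bindings to BGPstream. Arguments and element fields follow
# the v2 Python API; the window and collector are arbitrary examples.
from collections import Counter
import pybgpstream

stream = pybgpstream.BGPStream(
    from_time="2015-09-27 10:00:00", until_time="2015-09-27 10:10:00",
    collectors=["route-views2"],
    record_type="updates",
)

origins = Counter()
for elem in stream:
    if elem.type == "A":                       # announcements only
        path = elem.fields.get("as-path", "").split()
        if path:
            origins[path[-1]] += 1             # last hop in the AS path is the origin

for asn, count in origins.most_common(10):
    print(asn, count)
```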

In conclusion, Dainotti described a BGP Hackathon that CAIDA hosted in February, focused on live BGP measurements and monitoring.

“How you can contribute is to… propose problems that are worth addressing and things you would like to see in tools used to study BGP,” he concluded.

To start the Q&A discussion, Trammell returned to the question of why the IETF doesn’t have enough data to support the idea of running the Internet over UDP.

“This is a question I’m spending a fair amount of time working on,” he said. “Lots of firewalls block or limit or impair UDP for security reasons, particularly DDoS attacks. So they turn it off… Depending on which of the commonly available testbeds that we consider, we see 2% to 6% of access networks are actually blocking UDP. That’s kind of a high number. We’d like to understand the shape of that impairment before we talk about UDP encapsulation.”
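For readers curious how UDP blocking might be probed from a single access network, a crude check is to send one DNS query over UDP to a public resolver and see whether anything comes back; no reply suggests, but does not prove, that UDP is blocked or rate-limited somewhere on the path. This is only a sketch from one vantage point, not the methodology behind the 2% to 6% figures Trammell cited, and the resolver address is just an example.

```python
# Crude single-vantage-point check for UDP impairment: send one DNS A query
# over UDP to a public resolver and see whether any reply comes back.
# Illustrative only; a missing reply can have many causes besides blocking.
import socket, struct, random

def udp_dns_works(resolver="8.8.8.8", name="example.com", timeout=3):
    qid = random.randint(0, 0xFFFF)
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)    # RD=1, one question
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)                  # QTYPE=A, QCLASS=IN
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(header + question, (resolver, 53))
        data, _ = sock.recvfrom(512)
        return len(data) >= 12 and data[:2] == header[:2]        # reply with matching ID
    except socket.timeout:
        return False       # no reply: UDP may be blocked or rate-limited on this path
    finally:
        sock.close()

print("UDP DNS reachable:", udp_dns_works())
```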

Audience members questioned both the overhead costs and the privacy risks of having protocols take live measurements, since such measurements essentially monitor user behavior.

“There is a huge cost in data, not just in storage of it but in privacy,” Dainotti admitted, adding that the research community has many ways to anonymize data and doesn’t need to retain the data forever.

In response to another question, Trammell said two protocols that could benefit from additional measurement are DNS and DNSSEC. “We should be measuring the tradeoffs of assurance versus the ability to use it for attacks,” he said.