Network Performance

IAB Plenary Explores Challenges of Network Performance Measurements

By: Carolyn Duffy Marsan

Date: March 1, 2013


Whether the Internet Engineering Task Force (IETF) should create a unified set of standards for measuring network performance was the topic of discussion at the Internet Architecture Board (IAB) technical plenary in Atlanta.

The discussion was prompted by recent efforts to create global testbeds and frameworks for measuring the performance of Internet access networks. Among the measurements that are typically collected by these efforts are packet loss, delay, and throughput of the broadband Internet service.
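As a rough illustration of the kind of metrics these efforts gather, the sketch below boils raw probe samples down into the loss, delay, and throughput figures a platform would report. The sample format and field names are invented for illustration and are not any real platform's schema; actual methodologies (including SamKnows') are considerably more careful.

```python
# Sketch: summarizing raw probe samples into loss/delay/throughput.
# The input format is illustrative, not a real measurement platform's schema.

def summarize_probe_samples(rtts_ms, bytes_received, elapsed_s):
    """rtts_ms: per-packet round-trip times in ms, None for a lost packet."""
    sent = len(rtts_ms)
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (sent - len(received)) / sent
    avg_delay_ms = sum(received) / len(received) if received else float("nan")
    throughput_mbps = (bytes_received * 8) / (elapsed_s * 1e6)
    return {"loss_pct": loss_pct,
            "avg_delay_ms": avg_delay_ms,
            "throughput_mbps": throughput_mbps}

# Example: 10 probe packets, one lost, 25 MB transferred in 20 seconds
stats = summarize_probe_samples(
    [32.1, 30.8, None, 31.5, 29.9, 33.0, 30.2, 31.1, 30.7, 32.4],
    bytes_received=25_000_000, elapsed_s=20.0)
print(stats)  # 10% loss, ~31.3 ms average delay, 10 Mbps throughput
```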

Sam Crawford, a network engineer who operates the SamKnows broadband performance measurement service, said end-to-end performance measurements help Internet service providers (ISPs) pinpoint the cause of service problems. For example, SamKnows measurements helped one ISP discover why its throughput rates dropped 14 percent over a six-month period.

“Our probes were seeing massive drops in throughput, but the users weren’t complaining,” Crawford said. “Later, we realized wired throughput was being limited, but wireless access in homes was unaffected. The ultimate cause was that the ISP’s latest consumer premises equipment had a bug, which caused this massive degradation in wired spaces but not in wireless spaces.”

Begun in 2008, the SamKnows measurement service has 50,000 hardware probes deployed in 34 countries taking performance measurements 24/7. In addition to the probes, SamKnows operates hundreds of measurement servers that process data from the probes. The data collection infrastructure not only gathers data from the probes, it also handles command and control and scheduling of measurements. SamKnows compiles its network performance measurements for regulators and ISPs.

“Management of the measurement probes is one of the key areas that I think would benefit from standards work,” Crawford said.

SamKnows collects, processes, and archives more than 1 billion data points per month, and Crawford said it needs to do extensive post-processing to turn the data into useful information. “If we’re considering building large-scale measurement platforms, should we be giving consideration to some of the post-processing challenges of working with the data as well?” he asked.
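The post-processing Crawford describes is essentially large-scale aggregation: rolling raw data points up into figures useful to regulators and ISPs. A toy pass over invented records might look like the following; the record layout is purely illustrative, and a real platform handling a billion points per month would use a proper data pipeline rather than in-memory dictionaries.

```python
# Toy post-processing pass: roll raw per-probe data points up into
# per-ISP hourly medians. Record layout is invented for illustration.
from collections import defaultdict
from statistics import median

def hourly_medians(records):
    """records: iterable of (isp, hour_of_day, download_mbps) tuples."""
    buckets = defaultdict(list)
    for isp, hour, mbps in records:
        buckets[(isp, hour)].append(mbps)
    # Medians resist the outliers that individual probe runs produce.
    return {key: median(vals) for key, vals in buckets.items()}

raw = [("isp-a", 20, 48.2), ("isp-a", 20, 51.0), ("isp-a", 20, 49.5),
       ("isp-b", 20, 9.8), ("isp-b", 20, 10.4)]
print(hourly_medians(raw))
# {('isp-a', 20): 49.5, ('isp-b', 20): 10.1}
```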

In addition to the data challenges, Crawford discussed the operational challenges involved with shipping and maintaining the tens of thousands of devices that gather network performance data. “All of this goes away if we embed measurement software in customer gateways, rather than shipping a separate probe,” Crawford explained. “I hope that would be another part of the standardization effort.”

SamKnows is looking toward conducting measurements of network performance for mobile devices, such as smartphones and their applications, and comparisons of IPv4 and IPv6 performance. Crawford said “there is a fair amount that could be standardized” in the area of network performance measurements.

Henning Schulzrinne, chief technology officer for the U.S. Federal Communications Commission (FCC), said existing network performance measurements don’t work well—they don’t provide usable, reliable data to consumers, nor do they scale well enough to provide detailed data about network delays to service providers.

“Users need to be able to diagnose and validate their own connectivity. They need to find out if they are getting the performance that they bought,” Schulzrinne said, adding that this requires a better network management infrastructure. “For those of us in public policy, we want to check on how broadband is evolving, and if it is getting faster or not. Whether in rural areas or urban areas, consumers should be able to make a good choice.”

Schulzrinne pointed out that while the FCC has traditionally acquired and analyzed network performance data for legacy telephone networks, only during the last two years has it begun to measure the performance of broadband services delivered to consumers.

The FCC’s Measuring Broadband America effort includes 13 ISPs that cover 86 percent of the U.S. population, as well as other vendors, trade groups, and academic institutions. Approximately 9,000 consumers participate in the study, which measures 16 metrics including sustained download and upload rates, packet loss, domain name system (DNS) failures, and latency under load. The FCC issued two annual reports describing the results of its survey and providing spreadsheets of all study measurements.

“We’re only trying to measure a small part of the existing infrastructure,” Schulzrinne said. “Currently, we’re focused on the stretch between the Internet connection to the home network and the point where the consumer ISP connects to the Internet-at-large. We do not measure backbone ISP performance or the home network, but we recognize that these are important to consumers.”

Schulzrinne said that the FCC has found that most ISPs deliver close to their advertised rates during peak hours. He also said that overall ISP performance improved between 2011 and 2012, which he attributed to the FCC’s measurement effort. “You improve what you measure,” he explained.

This year, the FCC plans to measure the network performance offered by four major wireless providers. However, Schulzrinne said that mobile performance measurements have several challenges, including the variation in the capabilities of mobile devices and the need to ensure location privacy.

Schulzrinne said there are many things the FCC can’t measure, including network performance and the reliability of small ISPs, as well as access to such advanced features as IPv6 and DNSSEC. “We want to figure out which country has the cheapest broadband and why, what drives consumer adoption, and why one-third of the United States does not use the Internet at home,” he said.

Schulzrinne spoke positively about the proposed Large-Scale Measurement of Access Network Performance (LMAP) effort, which, if chartered, would standardize an architecture and a small number of infrastructure-agnostic protocols.

“Good telecom policy needs good data,” Schulzrinne concluded, urging the IETF to help the FCC to improve its Internet performance measurement effort. “We want to reuse measurements for three purposes: ISP diagnostics and planning, consumer diagnostics, and public policy data gathering.”

Attendees at the IAB panel expressed support for the IETF creating a standards-based architecture for network performance measurements. In particular, they mentioned the need to explore the performance and network behavior surrounding emerging technologies, such as IPv6 and DNSSEC.

“Being able to measure performance for IPv6 is critical,” said Yannick Pouffary, a distinguished technologist with Hewlett Packard, adding that this would demonstrate the difference between ISPs that roll out IPv6 natively and those that use carrier-grade network address translation (NAT).
