Network Performance

Internet Society Panel Tackles Transient Congestion

By: Carolyn Duffy Marsan

Date: November 1, 2013

Alongside the IETF meeting, the Internet Society hosted a panel discussion about the impact of transient congestion on the end-user experience and whether the IETF could develop transport-layer strategies to improve overall network performance.

Leslie Daigle, chief Internet technology officer for the Internet Society, moderated the discussion, which was entitled “Improving Internet Experience: All Together, Now.” Participants considered how to improve overall network performance by addressing latency, throughput, jitter, and other issues that affect application performance.

Daigle explained that alleviating congestion problems requires attention from a variety of players: application software developers, operating system developers, access network hardware vendors, access network operators, transit network operators, and other infrastructure providers. Each party optimizes systems for its own needs, which does not necessarily lead to overall network optimization, she said.

“It can lead to brittleness,” Daigle said. “People are making assumptions that may not play well together… And there is the very real fact that the existing model of the Internet may not fit with today’s reality of mobile handsets as the primary mode of access of the Internet.”

Panelist Patrick McManus, who is responsible for the networking module for Mozilla Firefox, said he considers application responsiveness to be one of the key areas needing improvement in order for the Internet to continue to grow and innovate.

McManus gave the example of downloading a 1-kilobyte image to illustrate how much extra communication the various protocols involved in this simple user request—DNS, DNSSEC, TCP, SSL, and HTTP—add before any image data arrives.

“By the time you’re done, that’s about one second to get your 1-kilobyte image,” McManus said. “It doesn’t matter if you upgrade your home from 2 megabits/second access to 50 megabits/second because of the many protocol interactions involved. The issue is: How do I get reliable congestion control? I’m just moving 1 kilobyte of data. Why does it take me one second to do that? How do I deploy this application so it can work faster with all the middle boxes and firewalls of the Internet?”
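
His arithmetic is easy to reconstruct. The sketch below assumes a 100-millisecond round-trip time and classic handshake counts for DNS, TCP, and SSL/TLS—illustrative figures, not numbers cited on the panel—and shows that the protocol round trips dominate the total delay, which is why a faster access link barely changes the result:

```typescript
// Back-of-envelope model of the delays McManus describes.
// All numbers are illustrative assumptions, not panel figures.
const rttMs = 100; // assumed round-trip time to the server

const roundTrips = {
  dns: 1,  // DNS lookup
  tcp: 1,  // TCP three-way handshake
  tls: 2,  // classic SSL/TLS handshake (pre-TLS 1.3)
  http: 1, // HTTP GET request and response
};

const handshakeMs =
  Object.values(roundTrips).reduce((sum, n) => sum + n, 0) * rttMs;

// Time to move the 1-kilobyte image itself at two access speeds.
const bytes = 1024;
const transferMs = (mbps: number) => ((bytes * 8) / (mbps * 1_000_000)) * 1000;

console.log(`protocol round trips: ~${handshakeMs} ms`);              // ~500 ms
console.log(`payload at 2 Mb/s:    ~${transferMs(2).toFixed(2)} ms`);  // ~4.1 ms
console.log(`payload at 50 Mb/s:   ~${transferMs(50).toFixed(2)} ms`); // ~0.16 ms
```

Under these assumptions, upgrading the access link from 2 to 50 megabits/second shaves a few milliseconds off a delay measured in hundreds of milliseconds.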

McManus said that developers are increasingly creating new transport protocols, such as Google’s QUIC, that run over UDP in order to reduce the overhead associated with TCP and SSL.
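
Part of the appeal is that UDP imposes no handshake of its own, so a protocol layered on top of it can carry useful data in the very first packet. The toy sketch below, written for Node.js, illustrates only that zero-round-trip property; it is not QUIC, and a real protocol would add its own encryption, loss recovery, and congestion control (the host name, port, and payload are placeholders):

```typescript
import dgram from "node:dgram";

// Toy illustration of a transport tunneled over UDP: application data
// rides in the very first datagram, with no separate connection or
// security handshake. This is NOT QUIC; real UDP-based transports add
// encryption, retransmission, and congestion control of their own.
const socket = dgram.createSocket("udp4");
const firstFlight = Buffer.from("request: /tiny-image"); // placeholder payload

socket.send(firstFlight, 4433, "example.net", (err) => {
  if (err) console.error("send failed:", err);
  socket.close(); // one packet out the door, zero handshake round trips
});
```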

“The question for us as an Internet community is… How do we design more robust building blocks where we don’t reinvent the entire wheel?” he said.

Jason Livingood, vice president of Internet and communications engineering at Comcast Cable, said he is intrigued by efforts to measure the end user’s experience on the Internet. He pointed to the IETF’s Large-Scale Measurement of Broadband Performance (LMAP) working group, which is looking at aspects of Internet performance including throughput, latency, and jitter.

“In the past, people mostly focused on throughput: speed, speed, speed, and that’s all,” Livingood said. “There’s a lot more to your Internet experience than that. One of the big [issues] is latency.”

He added that excess buffering of packets can have a big impact on end-user performance, as can the protocols used by content delivery networks. “Optimizations to your part of it can have unexpected effects on other parts of the Internet,” he said.

Stuart Cheshire, distinguished engineer at Apple, said it is important to reduce the number of round trips that Internet protocols require for particular applications in order to improve responsiveness. However, he warned that “coordinating all these different improvements that are going on so they don’t conflict with each other is really challenging.”

Daigle asked the panelists to identify one change that they wish they could make to improve Internet performance.

McManus said he’d address the lack of transport security. “In Firefox, we see 20 percent of our traffic in some form of SSL (secure sockets layer). That’s just appalling,” he said. “That’s largely a technological failure, and it’s a process failure.”

Cheshire said he’d find a way to improve performance through middle boxes. “Everything is really centered on the Web: stock quotes, weather forecasts, maps. Everything is an HTTP GET,” he said. “If you’re on a network that requires you to use HTTP proxies, you can only use HTTP-based applications. That’s fairly depressing because everything is getting forced into a very narrow pipe.”

To illustrate the point, Cheshire cited newer IETF protocols such as WebSocket, which maintains a persistent, two-way connection between a Web browser and server over TCP to support live content, and WebRTC, which supports browser-to-browser applications.
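
As a concrete illustration of the first of these, the minimal browser-side sketch below opens a single WebSocket connection and then reuses it for messages in both directions, rather than issuing a fresh HTTP GET for every update (the URL and message format are placeholders, not a real service):

```typescript
// Minimal sketch of the WebSocket pattern: one TCP connection, upgraded
// from an initial HTTP request, then held open for two-way messaging.
const ws = new WebSocket("wss://example.org/live-quotes");

ws.onopen = () => {
  // Subscribe once; later updates arrive over the same connection.
  ws.send(JSON.stringify({ subscribe: "stock-quotes" }));
};

ws.onmessage = (event) => {
  console.log("server push:", event.data);
};

ws.onclose = () => console.log("connection closed");
```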

Livingood said he would tackle Wi-Fi network performance. “There are millions of consumer electronic devices in people’s houses that will take years to get upgraded,” he said. “There’s a big tail on what end users think of as Internet performance.”

While some of these changes, such as eliminating middle boxes, seem out of reach, Daigle asked the panelists to identify actions the IETF can take to reduce congestion.

Cheshire pointed to Minion, a new TCP-based service model and conceptual API (application programming interface) being developed jointly with Janardhan Iyengar at Google, as a way of improving Internet application performance through middle boxes. “Building everything in UDP (user datagram protocol) is not a panacea,” he warned. “The issue is whether we carry on down the path of trying to use TCP (transmission control protocol) or do we give up on it completely.”

McManus said he likes Minion because it offers connection management and congestion control. “With today’s protocols, developers have to choose: Do you want to use TCP and pick up latency delays? Or do you want to roll your own on top of UDP? That’s never a satisfying choice,” he said. “Minion is a great example of ways we can go forward.”

McManus also suggested looking for ways to create a single building block that combines the behavior of more than one protocol, such as the stream control transmission protocol (SCTP) and datagram transport layer security (DTLS). “SCTP and DTLS are really made for each other, and yet you establish one before the other and you get this strip of serial things going on,” he explained. “You could establish one building block that is those two things mushed together.”

Livingood pointed out that for any of these new transport protocols that address congestion issues, such as Minion, it will be difficult to adequately test end-to-end performance across the Internet given the diversity of access capabilities.

“One of the benefits we get in this community of the IETF is that the people who participate here have a good understanding of how varied the network is,” Cheshire argued. “One of our responsibilities at the transport area is to improve TCP, to cut out some of the round trips, and to make it more responsive and make it better for low-latency, real-time data. The longer we fail to do that, the more developers are pushed towards doing their own thing with UDP.”

The audience responded positively to the panelists and to their suggestions for new transport-area work to reduce the risk of excessive congestion caused by poorly designed UDP-based transport protocols that lack TCP’s congestion control protections.

“These are really hard questions that are largely being ignored,” said long-time IETF participant Dave Crocker during the Q&A period. “The IETF is not attending to these issues, and it would be great if this panel triggers something.”

Crocker suggested that the IETF leadership consider the issue of latency-related round trips when it charters working groups much as it asks about security and privacy concerns. “It could get interesting if we try to press for charters to make some assertions… along a set of parameters such as latency or jitter,” he said.