ISOC Chicago Arranges for Experts’ Panel at IETF 69

By: Henri Wohlfarth

Date: October 7, 2007


When the IETF comes to town, the ISOC chapters in the region are encouraged to take advantage of the experts. At IETF 69, not only did ISOC Chicago members get free passes to attend the Newcomers’ Tutorial and the Plenary session; they also organised a panel discussion featuring a handful of IAB members. The panel was intended to help the area’s chapter members gain insight into current issues as well as developments at the IETF and on the Internet in general.

While the speakers are currently members of the IAB and former members of the IESG, their comments were made on their own behalf.

Panellists: Brian Carpenter (former chair of the IETF), Olaf Kolkman (IAB chair), Danny McPherson, Dave Oran, and Lixia Zhang
Moderator: Bill Slater, ISOC Chicago chapter president

Chapter Panel
Together with chapter members, ISOC Chicago chapter panellists gathered at IETF 69 with ISOC president Lynn St. Amour (seated, fourth from left). Pictured (seated) are moderator Bill Slater (third from left) and panellists Olaf Kolkman (fifth from left) and Brian Carpenter (far right). Also pictured (standing, second row) are panellists Lixia Zhang (fourth from left) and Danny McPherson (far right). Not pictured: Dave Oran.

What do you see as current threats to the Internet, and how are they being addressed within the IETF?

Danny: Unwanted traffic and security cross all areas of the IETF. There is a lot of work going on in many IETF working groups on this topic, from infrastructure security for the DNS and routing systems to application-layer security. We've got a great deal of work to do, and there's no 'silver bullet.' It's all about layered security and incremental advances.

Dave: Making protocols less susceptible to threats is also important. Very often, adding features that are intended to prevent threats can be counterproductive. Some of those mechanisms overreact, such as by not allowing any traffic to pass through. Therefore, we need to be sufficiently aware of the work going on in other areas in the IETF.

Olaf: There are a number of areas where problems arise because of the content of the data being transmitted, and the IETF is not able to prevent that bad content from occurring in the payload of its protocols, as happens with viruses, botnets, and spam. The IETF develops protocols through which entities communicate regardless of the content of the communication. You can include protection mechanisms when developing a protocol, such as DKIM, but you cannot foresee what the actual data being transmitted might be.

Brian: This is an important point. For example, when the mail protocol delivers spam, it's doing exactly what it's supposed to do, at least from a protocol point of view.

How does the work of ICANN affect the IETF and its work?

Brian: The IETF does not engage in politics. There's an MoU [memorandum of understanding] between the IETF and ICANN that draws a precise line: IANA assigns technical parameters according to instructions it gets from the IETF, except for policy questions related to the assignment of TLDs and IP address space, unless those TLDs and IP addresses are used for purely technical purposes. As long as we stay within those boundaries, things are clear for the IETF, so we don't need to care about which TLD is delegated and why.

What are the big challenges for the future of the Internet in both the near term and the long term, and how do you propose to meet those challenges?

Lixia: It’s always hard to predict the future, but looking at the past can offer some hints for the future. Back in the early days of networking, there were two difficult problems: congestion and routing. Over the years, we seem to have gotten a good handle on the congestion problem: not only did we develop successful congestion control protocols, but also several technological advances helped out tremendously. Congestion control can prevent congestion collapses, but good performance requires adequate bandwidth, which is met by technology advances.

Routing, however, remains a major problem today, not because we've made no progress but because the problem has changed: the goal used to be picking the best or shortest path, whereas nowadays data must follow the paths that cost the least money, and complex policies have been introduced into routing. In addition, the global routing table is growing out of control. Late last year we passed the 200,000-entry threshold; today we have 240,000 entries, and the growth is faster than linear.

Aside from the ongoing routing challenge, we also now face the relatively new challenge of network security. This is a much tougher problem than scalability. However, we shouldn't be surprised that we have a security problem today. Some research papers say the original design of the Internet did not take security into account, but that assessment is not entirely fair: the Internet was designed for the specific environment it was expected to operate in. And the original designers of the Internet did a great job, which is the reason the Internet has been able to grow to its current size.

We should keep in mind, however, that good design is not the sole enabler: without the evolution of technology, especially affordable and ever-faster computers, the Internet would not have been able to grow as big or as quickly as it did. Unfortunately, the adage that "everything that can be used can also be abused" applies to the Internet. Affordable computers with Internet connectivity have enabled innovation and changed society, but they have also been used for more dubious purposes.

Olaf: Indeed, the Internet grew more than anyone expected. As a result, serious reimplementation is needed to improve its scaling properties so the Internet can keep scaling for the next 15 to 20 years. Such reimplementation not only needs to happen but also needs to be paid for. We also want the Internet to be affordable to everyone, which is a challenge. The Internet is an important mechanism for making information accessible to everyone, including people in developing countries, but we shouldn't forget that solving the scalability issues will create more complexity. In the near term, in addition to the routing problem, we're still working with IPv4 and with the fact that IPv4 address space is limited and will run out fairly soon. IPv6 has been developed, and we expected that it would get picked up by the industry. Now the deployment of IPv6 is becoming more and more important, and some of the problems associated with that transition are becoming more pressing.

Brian: There’s a strong temptation for ISPs to keep their dinosaur business models alive and to protect their walled gardens-that is, closed service with lower quality that cannot fully reach the Internet. WiMAX could become such a limited service, but I’m not sure the IETF can do anything about that except preach.

Dave: I agree. This is a substantial danger, and one that could cause serious fragmentation. But I see other challenges as well. The first is that the nature of peer-to-peer traffic today is substantially different from what we've seen before, and only now are we starting to understand it both economically and technically. Peer-to-peer traffic has the effect of finding spare bandwidth wherever it can and using it; the result is that an ISP adds capacity, and before it can make any profit from the increased capacity, the bandwidth is already being consumed by peer-to-peer traffic.

What is peer-to-peer traffic? It means that people are sharing data, legally or otherwise. From a technical standpoint, they form a dynamic community that makes available everything its members are interested in and allows them to fetch the data in small pieces from one another. The traffic patterns look much more random, and traffic engineering is much more challenging.
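The piece-exchange idea Dave describes can be sketched in a few lines. This is a toy illustration only, with invented names; real protocols such as BitTorrent add piece hashing, peer discovery, and rate-control algorithms on top:

```python
# Toy sketch of peer-to-peer piece exchange: a file is split into fixed-size
# pieces, different peers hold different subsets, and a downloader pulls
# pieces from whichever peer has them, in any order.

def split_into_pieces(data: bytes, piece_size: int) -> list[bytes]:
    """Split a file into fixed-size pieces that peers can exchange."""
    return [data[i:i + piece_size] for i in range(0, len(data), piece_size)]

def assemble(pieces_by_index: dict[int, bytes], total: int) -> bytes:
    """Reassemble the file once every piece has arrived."""
    return b"".join(pieces_by_index[i] for i in range(total))

file_data = b"example payload shared across a swarm of peers"
pieces = split_into_pieces(file_data, piece_size=8)

# Two hypothetical peers, each holding only half the pieces.
peer_a = {i: p for i, p in enumerate(pieces) if i % 2 == 0}  # even pieces
peer_b = {i: p for i, p in enumerate(pieces) if i % 2 == 1}  # odd pieces

collected = {**peer_a, **peer_b}  # fetch each piece from whoever has it
assert assemble(collected, len(pieces)) == file_data
```

Because any peer with a piece can serve it, the traffic fans out across many source-destination pairs at once, which is exactly why the resulting patterns look random to a traffic engineer.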

Another challenge is the evolution of mobile devices. Today a very small fraction of those devices is Internet enabled, but the number is likely to grow dramatically. Yet another challenge is what I refer to as the Internet of things. The number of those devices can be extremely large. Every light-bulb, switch, and so on will have the potential to be Internet enabled.

Danny: I believe mobility and scalability are going to put pressure on the Internet. Another challenge is the convergence of various security threats. For example, the perception is that your ISP is sending 'filthy water', and you think, All this junk is coming down my pipe; isn't there anything my ISP can do to filter it out? No, there isn't anything they can do, or at least doing so is much more complex than most folks realise. Much of the infrastructure does not allow segmentation of traffic or services based on individual users. Then there are consumer privacy rights, the need to provide subscribers with the ability to clean their systems, and regulatory and other service requirements, such as maintaining the availability of VoIP-enhanced 911 services.

Brian: Botnets are serious threats to the Internet, as described in the Unwanted Traffic Report. We don’t know how to deal with the botnet problem. However, it’s important to point out that it’s not a problem of the network; it’s a problem of the end system.

Marcos Sanz (ISOC chapter member): I've read the Unwanted Traffic Report and, since then, I've had nightmares. Can someone offer some comforting words so I can get my sleep back?

Olaf: If after reading the Unwanted Traffic Report you’re having sleepless nights, then the report was a success. People need to be woken up. We also need to reach out to people outside the technical community, and we’re working with ISOC to do just that.

Brian: Many enterprises and organisations spend a lot of money to keep unwanted traffic out of their networks. But this is a small price to pay compared with not doing business on the Net at all.

Olaf: It’s cynical, but the bad guys are interested in keeping the Net running because they want to use it to do their bad business.Danny: We’ve performed a lot of analysis on those security threats. Network congestion-inducing worms, such as Slammer, are not used so much anymore, mainly because they melted parts of the network when propagating and they were far too visible. Nowadays, threats happen much more quietly; they fly under the radar while compromising and remotely administering systems, rather than appearing as loud infection and propagation vectors. Much of the threat today is economically motivated and, believe it or not, the miscreants often provide service-level availability agreements as well. If the network is not up, they can’t collect their spoils.

Olaf: People who engage in this kind of activity are highly skilled. They probably cash big paychecks.

How many IPv6 addresses are there, and is there a name for this number?

Olaf: 340 undecillion, or about 3.4 x 10^38 (340 trillion trillion trillion). That's not the number of addresses that is truly usable: the space is basically chopped into halves, with 64 bits identifying the individual network and 64 bits identifying a station on that network. Still, with many trillions of addresses, we don't think we'll run out of addresses very soon.

Is the impact of the IPv4-to-IPv6 transition comparable to Y2K, the switch from 1999 to 2000? Are there reasons, from an end-user perspective, that we need to be concerned about the transition?

Brian: The transition from IPv4 to IPv6 is different from the Y2K switch, primarily because there is no drop-dead date. Still, in terms of strategic planning, one should start now. In a few years, the Regional Internet Registries will not have IPv4 addresses to hand out. People think there will be a market for IPv4 address space, but at some point it will be cheaper to switch to IPv6. As I said, though, this will not happen on a fixed date, as it did in the case of Y2K. There is one way, however, in which it is the same as Y2K: you have to check whether your routers and the rest of your equipment and software are IPv6 compatible. The devil is in the details.
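Brian's "check your equipment" advice can start as close to home as one's own protocol stack. A minimal sketch, assuming only a host with Python installed, that asks whether the operating system will even hand out an IPv6 socket; real readiness audits would go much further, into routers, DNS, and application software:

```python
# Minimal local IPv6 capability probe: can this host's OS and Python build
# create an IPv6 socket at all? This checks only the local stack, not
# whether the network path or ISP actually carries IPv6 traffic.
import socket

def host_supports_ipv6() -> bool:
    if not socket.has_ipv6:      # interpreter built without IPv6 support
        return False
    try:
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM):
            return True          # the OS handed us a v6 socket
    except OSError:
        return False             # kernel or libc refused

print("IPv6 sockets available:", host_supports_ipv6())
```

A True here is only the first rung of the ladder: equipment such as the IP phones Dave mentions next may lack the memory to run a dual IPv4/IPv6 stack at all.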

Dave: Actually, the problem is much worse than Y2K. Much of the industry is still only building IPv4. There are 2 million to 4 million Cisco IP phones that have little or no ability to be field upgraded to run IPv6. They don’t even have enough memory to run dual stack. There are millions of devices being built every month by myriad manufacturers that are not IPv6 capable.

Danny: One of the big challenges with IPv6 is translating between IPv6 and IPv4. Strictly speaking, transition is the wrong term. There will be IPv6, and we need to provide solutions that ease deployment burdens, such as enhanced NAT-PT, but IPv4 will still be around for a very long time.

Brian: The good news is that people are starting to understand that something has to be done.

Olaf: But people really need to start looking at their networks to see what needs to be done in order to move over to IPv6.

It's possible that by 2016, Moore's law will become void due to the limitations of physics and current manufacturing technologies, such as photolithography. It is projected that the impact of this on the computing world will be a requirement to write better and more-optimised software, because we could be stuck with the latest, fastest processor for several years.

Does the IETF foresee any such potentially disruptive events in the world of the Internet in the coming years?

Brian: we’re going to be moving toward more parallel processing as well as other mechanisms. We also have to work on power issues. Otherwise, the computer will simply be melting.

Dave: This will hit the router community long before 2016. A lot of parallelism will have to be developed, such as channelised inverse multiplexing.

WiFi and WiMAX are everywhere. The average person might think this is magic.

Are there other technologies on the horizon? Is there a model to make this kind of thing profitable?

Danny: If it’s not a strict-access, services-based subscription model and the question is, Who is subsidising WiFi and WiMAX? the answer is likely “the advertisers,” or at least the folks who find advertising revenue. The providers will give you access, but it’s demographic-based advertising that’s paying for it. There are economic motivators here as well, I assure you.

Olaf: This kind of advertising-sponsored access is a typical use case for the World Wide Web and not for applications that run over IP.

Lixia: The value of the Internet is in its applications. If we step up a level and look at a bigger picture, we may see a different view regarding whether [offering ubiquitous wireless access] is something to be subsidised or something that will return value in another way. In the early days, people kept looking for ‘killer applications’ but the reality has taught us better: No one can predict what the next killer applications will be. The only thing we know for sure is that they will come. Look at MySpace, YouTube, and Facebook. Those killer apps keep popping up out of nowhere. Their inventors were nobodies. Look at Wikipedia, which serves as a showcase of what the online community as a whole can accomplish. You give connectivity to people, and you open the door to infinite innovations in great applications.

