Plenary Report

By: Mirjam Kühne

Date: December 7, 2007

Mirjam Kühne

Following a warm welcome by IETF chair Russ Housley, Stephen Wolff of the Cisco Research Center, one of the hosts of IETF 70 together with Microsoft, gave a presentation in which he reflected on Internet research.

Stephen’s participation in the IETF goes back to its beginnings. He recalled that at the second meeting of the IETF, in April 1986, which was considerably smaller than today’s meetings and had a much smaller network, a presentation by Bob Hinden showed the actual size of the Internet: “131 Networks, 85+ Gateways, 160,000,000 packets/week.”

At the time, there was not a lot of Internet-related research and there weren’t many textbooks on networking. In fact, the entire library of Internet-related books would most likely have fit on one shelf.

However, even at that time, Stephen said, the Internet was a rich source of problems. On the technical side there were routing failures, collapses through congestion, fast long-distance networks, and lack of security. But there were nontechnical problems as well.

In his presentation Stephen described one research project of particular interest: the Gigabit Testbeds (1990-1995). Run in cooperation with CNRI (the Corporation for National Research Initiatives) and funded with 20 million USD over five years, the project was dubbed “How Slow Is One Gigabit per Second?” by Craig Partridge. With a total of five testbeds, it had mixed success, but according to Stephen, the community gained a much better understanding of the challenges of speed over long distances.

Much has happened since. Today there are two big research projects:

  1. GENI (Global Environment for Network Innovation), which, with a budget of approximately 367 million USD, is much larger in scope than the Gigabit Testbeds project but similarly organised, and
  2. FIND (Future Internet Network Design), a major new long-term initiative of the NSF NETS research programme, with a budget of 30 million to 40 million USD per year.

Together they add up to more than 40 different projects. “Many of them are good,” said Stephen, who encouraged everyone who is interested in research to look over the programmes.

There is also the work of the Internet Research Task Force (IRTF). A number of IRTF research groups (RGs) have funded FIND proposals, such as dtnrg, eme, end2end, mrg, p2prg, and rrg. The Crypto Forum RG (cfrg) seeded the GCM mode for IPsec (RFC 4106) and UMAC message authentication (RFC 4418).

Stephen closed his presentation by welcoming unsolicited proposals to the Cisco Research Center.

Following Stephen’s presentation was the Network Operations Center (NOC) report, which was presented by Morgan Sackett of VeriLAN. The NOC was again run by VeriLAN staff and volunteers. Upstream connectivity was provided by Telus and BCNET. An IPv6 tunnel was set up to ISC. Once again, the NOC did a fantastic job. The network operated without disruptions during the entire meeting.

Lakshminath Dondeti, chair of the Nominations Committee (NomCom), introduced the new NomCom members and gave a status report. The NomCom regularly sends requests for feedback about the various candidates. The return rate is, at best, 12 percent and, at worst, 4 percent. This is not good enough. More feedback is needed in the future. Community input is crucial for this process to work effectively.

Henrik Levkowetz put together some extremely useful tools. Lakshminath thanked Henrik and the NomCom members for all the work they have put into this process.


A number of people were recognised for their contributions to the IETF and the Internet. Mark Foster, chief technology officer of NeuStar, was recognised for his pivotal role in the administrative restructuring of the IETF. Without his assistance, the restructuring would have taken much longer and would have been much more difficult.

Jon Postel Award

The Jon Postel Award Committee announced that the 2007 Jon Postel Award was given to Nii Quaynor for “his vision and pioneering work that helped countless others to spread the Internet across Africa.”

Jon Postel Award Winner Nii Quaynor
Photo Credit: Peter Löthberg, with permission

The award is traditionally presented to honour an individual who, like Jon Postel, has made outstanding contributions in service to the data communications community through sustained and substantial technical contributions, service to the community, and leadership. With respect to leadership, the committee places particular emphasis on candidates who, in addition to their own individual accomplishments, have supported and enabled others to achieve success.

Nii established the first Internet services in West Africa in 1993. He is the founding chair of AfriNIC, the African numbers registry, and has been convener of the African Network Operators Group (AfNOG) since 2000. Earlier in his career, Nii established the computer science department at the University of Cape Coast in Ghana in 1979. He was awarded a Ph.D. in computer science from the State University of New York at Stony Brook in 1977 and worked at Digital Equipment Corporation from 1977 to 1992. Currently, Nii is chair of Network Computer Systems in Ghana and professor of computer science at the University of Cape Coast.

Nii thanked ISOC and the IETF, saying he felt humbled by the award and by what it represents. “Africa thanks ISOC and the IETF for this recognition. Africa will be very pleased with this contribution.” He thanked his colleagues in Africa who supported his efforts and pushed him along. He also recognised the IETF community for contributing in such areas as how the number and name resources have been defined, which has helped Africa’s underlying understanding and its self-organisation.

Nii plans to use the award of 20,000 USD to establish a new fund for technical engineers in Africa. The fund will be managed by AfriNIC and AfNOG.

Stats and Updates

In his IETF chair report, Russ Housley provided some meeting statistics as well as an update on IANA and RFC activities. He also thanked the team of volunteers responsible for the audio streaming. The IETF received outstanding support from the Network Startup Resource Center and the University of Oregon.

Kurtis Lindqvist, chair of the IETF Administrative Oversight Committee (IAOC), reported that Association Management Solutions (AMS) has won the RFP for the IETF Secretariat services. A transition plan is being worked out. The IAOC thanked the staff of NeuStar Secretariat Services. The 2008 IETF budget was submitted to the ISOC Board of Trustees and was approved shortly after IETF 70.

Ray Pelletier, IETF administrative director, announced that contracts with venues are already in place for meetings in 2008. In fact, the entire meeting-planning process is now happening with much longer lead times. After a survey of the IETF community, it was decided to follow a 3-2-1 model with respect to meeting locations: Within each two-year period there will be three meetings in North America, two in Europe, and one in Asia. This seems appropriate for meeting the needs of attendees.

At the end of his presentation, Ray acknowledged the ISOC Fellowship Programme, which brings engineers from developing countries to IETF meetings. All costs are covered by ISOC. Each fellow is paired with an experienced IETF participant, who acts as a mentor. Many thanks to the mentors and sponsors of the programme. (See page 9 for more information about the ISOC fellows and mentors at IETF 70.)

Open Mic

A short discussion regarding tools development, directed mainly toward the IAOC, took place at the beginning of the open-mic session. While tools development and maintenance fall within the responsibility of the IETF Secretariat, the volunteer effort to develop tools is still seen as critically important, both to save money and because it is a hands-on community effort. Some people would like to see a plan for moving forward with tools development and maintenance and to learn more about how the plan will support the community. The IAOC is working on both the plan and a license agreement.

With regard to the IETF Trust, Ray said that while the IAOC has not kept an inventory of the nearly 2,000 RFCs that have by now been signed over to the Trust, all names are listed on the IAOC Web site. In addition, businesses have signed their RFCs over to the Trust, which means that all documents published by employees of those companies are automatically signed over as well. It is estimated that approximately half of all RFCs have by now been signed over to the IETF Trust, which is a positive development.

Discussion on NAT and IPv6 Continues

Network address translation (NAT) was a topic raised again during the plenary session. The behave working group (WG) was chartered to define how NATs can behave more reasonably and according to specification; one of the desired properties is incremental deployability. One speaker was concerned that incremental changes to NATs would be the wrong approach. On the other hand, there are ongoing discussions within the STUN (Simple Traversal of UDP through NATs) and ICE (Interactive Connectivity Establishment) communities that describe why a general solution will not work. Those discussions will have to be continued by the appropriate working groups.
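As background to that discussion: STUN works by having a client send a Binding Request to a server on the public side of the NAT, which replies with the address and port it observed, thereby revealing the NAT's external mapping. The sketch below is a simplified illustration only, following the message-header layout of the revised STUN specification (RFC 5389, which succeeded the original RFC 3489); in practice the request would be sent over UDP to a real STUN server, which is omitted here.

```python
import os
import struct

# STUN constants from RFC 5389.
BINDING_REQUEST = 0x0001
MAGIC_COOKIE = 0x2112A442

def build_binding_request() -> bytes:
    """Build a 20-byte STUN Binding Request header with no attributes."""
    transaction_id = os.urandom(12)  # 96-bit random transaction ID
    # Message type (2 bytes), message length (2 bytes; 0 = no attributes),
    # magic cookie (4 bytes), then the 12-byte transaction ID.
    return struct.pack("!HHI", BINDING_REQUEST, 0, MAGIC_COOKIE) + transaction_id

def parse_header(packet: bytes):
    """Split a STUN header back into its fields."""
    msg_type, length, cookie = struct.unpack("!HHI", packet[:8])
    return msg_type, length, cookie, packet[8:20]

request = build_binding_request()
msg_type, length, cookie, tid = parse_header(request)
```

The fixed magic cookie is what lets a server distinguish STUN traffic multiplexed on the same port, and the random transaction ID pairs each response with its request.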

IPv6 also remains a big topic for the IETF. Even though IPv6 deployment was not officially on the IESG or IAB plenary agenda at this meeting, there were several discussions, both during various working group meetings and in the hallways. IESG member Ross Callon said he hopes that at some point “the pain will be high enough to deploy IPv6.”

Sam Hartman, one of the two security area directors, disagreed, saying, “IPv6 is being incrementally deployed and is catching on where it has value.” He wondered whether it really makes that much sense for everyone, at some point, to switch over to IPv6. Another attendee added that deployment of IPv6 is not always straightforward, leading to agreement that better documentation is needed and that the IETF community could help with that.

Jari Arkko, an IESG member who is active in the area of IPv6, reiterated that the IETF can also help by making sure all the necessary pieces are in place so that users can deploy IPv6. The v6ops WG is looking into whether all transition mechanisms are in place. Outside the IETF, education is needed, and the IETF is working with ISOC to address that issue. Overall, participants stressed that no seamless transition is possible: IPv4 and IPv6 are disjoint address spaces, so proper mechanisms for moving between the two versions are essential.

What followed was a discussion about what would motivate people to use IPv6 in their networks. Some people believe only a killer application or more features will help. Others disagree, saying nobody is going to develop an application that runs only on an infrastructure that hardly anyone uses yet. The biggest benefit of IPv6 is a much bigger address space. Alain Durand summarised it as follows: “The motto for many years has been ‘Bandwidth, bandwidth, bandwidth.’ Now the motto has changed to ‘Address space, address space, address space.’ It’s as simple as that.”

The immediate address shortage of IPv4 has so far been mitigated by the introduction of NAT, but large corporations that need a lot of address space are starting to use IPv6 now that there aren’t enough IPv4 addresses. Adiel Akplogan, CEO of AfriNIC, the African numbers registry, agrees that IPv6 is indeed happening and that the IETF has been doing a lot to encourage people to use it. “All RIRs are working with their communities on that,” he said. “Maybe the IETF can help by sending the right message to the operator community that the protocol is ready.”

Echoing much of the discussion at IETF 69 in Chicago, it was concluded in Vancouver that most of the IETF’s work on IPv6 has been completed. What is left to do is education. And vendors need to be encouraged to implement IPv6. “There are some bugs and issues with IPv6 equipment,” said Jari. “With more users, those will be fixed faster. It’s a matter of time.”

Technical Plenary

The technical part of the plenary session at IETF 70 was devoted primarily to two technical presentations. The first, called What Makes for a Successful Protocol, was presented by Dave Thaler (see page 20). The authors were applauded by attendees for their excellent work in this area, and the presentation was followed by a constructive and lively discussion. It was suggested that not only the IESG but also working groups need to pay attention to this work so that newer protocols have a better chance of becoming successful. Economic alignment and deployability are already being used as criteria for successful protocol design in some WGs. This is a positive development, as Olaf Kolkman noted.

Leslie Daigle warned that technical superiority is not necessarily a factor in becoming successful. As she explained, older protocols that were brought into the IETF could also be more successful because at that time new work was more elastic and more experimental in nature. It was easier to bring in new ideas. “Nowadays we have to check if things are successful before we start looking at them, in order not to bring the whole industry down,” she said. “This is a challenge.”

Dave Thaler emphasised that success often doesn’t become clear until sufficient time has passed. “In hindsight it’s easy to tell what is successful,” he said.

“It’s much more difficult when you’re designing it. Sometimes you just can’t tell. It may be successful later for other reasons,” Bob Hinden added.

Some interesting suggestions were made during the discussion. For example, one could identify protocols that were developed outside the IETF, along with the factors that influenced the decision not to take them on as IETF work items. Some of them became successes. One could look at those cases and see why they succeeded and why they were developed elsewhere.

Another speaker suggested that one reason protocols succeed outside the IETF could be the faster turnover of development there, and that perhaps one should look at how such protocols could be adopted into the IETF.

There are also cases where efforts were made to kill a protocol but it survived nonetheless. What were the reasons for that? Is it possible that working groups sometimes try too hard to predict what is going to be successful?

Olaf gave DNSSEC some thought in that context. “This has been in the IETF for a long time,” he said. “One success criterion is whether there is a perceived benefit. This is very hard to sell for a security mechanism. There is very little the IETF can do except make the case that some things are important. Then the marketplace decides.”

“There are always various demands on a protocol,” said Mark Crispin in closing. “The processes of the past cannot be applied today. Are there other organisations that have a faster turnaround and the same diversity as we have? Our diversity is our strength.”

The second technical presentation covered a topic that is a bit unusual for the IETF but was received positively. It was called Energy Engineering for Protocols and Networks, and it was presented by Bruce Nordman, a researcher at the Environmental Energy Technologies Division of Lawrence Berkeley National Laboratory.

“Why might we care about energy engineering?” asked Elwyn Davies as he introduced Bruce. “Is there a way, when we design protocols, to keep the amount of total energy use down?”

Bruce presented a number of statistics demonstrating that most energy is used at the edges and that energy use is affected by applications and protocols, not just hardware. A number of research projects are exploring that topic, including a project called Energy-Efficient Ethernet and one called Network Presence Proxying, which focuses on the significant amount of energy that is used when devices are idle. Bruce is currently working with the industry to draft the content of a proxying standard. A related initiative is an NSF/FIND project called Energy-Aware Network Architecture.

What can the IETF do to help reduce energy usage? Bruce made a number of suggestions that IETF engineers and network operators could act on to save energy:

  • Facilitate multiple forms of reduced presence (instead of always assuming full presence of all edge devices)
  • Enable optional reduced speeds (instead of assuming network links always run at maximum speed)
  • Expose knowledge of acceptable latencies
  • Determine when and how to facilitate slower acceleration
  • Facilitate powering-down links when capacity is not needed (instead of maximising interconnections)
Sidewalk signage in Vancouver
Photo Credit: Mirjam Kühne, with permission

The IETF can also ask itself which existing and developing protocols have features that inadvertently work against energy savings and, conversely, which protocols facilitate energy savings. “Could there be some guiding principles that might ensure that protocols maximise energy savings?” he asked. “And could existing protocols be modified to follow those principles in future revisions?” In closing, Bruce suggested the IETF community make energy savings an integral part of protocol design, just like security.

The IETF has two WGs dealing with related issues: 6lowpan (IPv6 over Low-Power WPAN) and, possibly, rl2n (Routing for Low Power and Lossy Networks). The latter was a BoF at IETF 70 and might soon become a WG.