In February 2010, IETF chair Russ Housley announced the launch of a new wiki dealing with “IETF Outcomes.” The wiki, which can be found on the IETF tools site at http://trac.tools.ietf.org/misc/outcomes, features technologies and services that were developed within the IETF and that represent notable successes and failures. It is the result of a collaborative effort by IETF participants, who are invited to use it to provide feedback about the utility of IETF work, and it serves as a mechanism for improving public understanding of IETF work and its impact.
Dave Crocker at IETF 77 in Anaheim, California
The IETF Journal took the opportunity of IETF 77 in Anaheim, California, to meet with Dave Crocker, a driving force behind the creation of the wiki, to chat about the motivations that gave rise to its development and about expectations for its future.
IETF Journal: What motivated the creation of the wiki?
Dave Crocker: For many of us, the usual measure of success is the publication of an RFC, but we’re in the communications business, and in the real world this involves closing the loop with feedback. The need for assessment was clear; the question was how to do it. I focused on finding a way to help the community develop an internal sense of accountability. Wikis possess a classic grassroots quality: they are developed by the community, they are transparent, and they permit resolving disagreement through open debate. To get this started, I talked with a few people over the space of about a month. It began as a simple table, but a wiki became the obvious choice once the need arose to support continuing change contributed by the community. In classic Internet terms, it scales better. After the initial group exercise stabilized, I approached the IETF management. As a grassroots, ongoing exercise, the status of the wiki is inherently informal, which nicely matches its placement in the tools.ietf.org portion of the IETF website. There’s an accompanying mailing list for discussing issues with the wiki in general. We’re slowly but surely seeing people take the initiative to contribute.
IETF Journal: How do you measure the success of a standard in the marketplace? Is it always subjective?
DC: There’s some text in the wiki that describes ratings, but in general it’s pretty subjective. One of the columns of the wiki table is called usage, which might seem an odd term, but ultimately, the reason we make stuff is so it gets used. What does it mean to get used? I don’t think that having software implement a spec qualifies it as a success. I think having somebody use that software makes it a success. The difference is very important. The IETF is driven largely by an industry that produces things, not by an industry that uses those things. The rating system is only a five-point scale, from complete failure to massive success. When you’re doing survey research, that’s as many points as you want for a casual audience. If there’s a lot of debate about the rating for a given standard, then we don’t know enough to rate it.
It’s not only usage that matters; it’s also the extent to which a piece of work prompted derivative works. It turns out you can have something that’s a complete failure but that triggers derivative work of importance. An example of that is PEM [Privacy Enhanced Mail], which generated a lot of useful outcomes, even though the protocol itself was a complete failure. (For a more detailed discussion on this general point, see Leslie Daigle’s article on URNs on page 11.)
IETF Journal: What’s the incentive for somebody to update the wiki with information about a standard that has failed but that may have involved considerable effort to create?
DC: That’s a very interesting question. Frequently, people are brutally honest about their own work, and the IETF environment encourages that level of honesty. People don’t beat themselves up in public very often, but within the community there is respect for learning what didn’t work and then using that information. Not surprisingly, there is also a normal tendency for people to point out others’ failures. So if somebody wants to hurt somebody else’s feelings by putting an entry in the wiki, the only relevant question is: Is the criticism accurate? I am aware of the concerns about the possible social and political downsides. I myself have had concerns about the wiki format, such as the possibility that it could create competition between areas. But what’s bad about that?
One subtlety that has developed as a result relates to some IETF technical efforts that had a number of false starts, such as DNSSEC [Domain Name System Security Extensions]. A number of long-time DNS experts worked on this topic because it’s so important. As a result, we’ve developed multiple entries to try to capture multiple phases of work and differing outcomes.
IETF Journal: Do we need a methodology that is applicable to other standards development organizations?
DC: This methodology for producing outcomes ratings of IETF work is so simple that I’d expect it could be applied to any group; whether groups want to or not is their choice. But note that as a grassroots tool, it does not require the blessing of the organization. It could be interesting to try to generalize to the W3C [World Wide Web Consortium].
IETF Journal: What do you think the IETF might learn from the development of this evaluation tool?
DC: Given that this is done with subjective, coarse-grained data, I hope that all we learn are subjective and coarse-grained things. We may see that some areas of work have better track records than do others. The most interesting thing I hope we’ll learn is some sense of which approaches to doing work tend to be successful and which approaches tend not to be successful. That’s ambitious to hope for, and it requires a lot of effort and thinking, but it would be pretty nice if we could get there.
I think debates over what is the right way to assess a particular effort are very useful because getting clarity about what succeeds and what doesn’t will help the next time. Just getting people to worry about the long term could be the biggest benefit. Engineers tend to project acceptance of their wonderful ideas, but the market doesn’t work that way. In the earlier days of the IETF, market pull was a consideration when chartering new work. Now we measure only whether there are people interested in working on something, so we end up with things being worked on for a long time that don’t always get used. I hope the wiki inspires people to think first about who’s going to use what they’re interested in creating.
The discussion at the IETF plenary, where, among other things, debate concerned workload on the IESG, made me think that while computer networking is about sharing limited resources, we also need to do this for ourselves. Improving quality-control mechanisms can help the IETF leadership decide how to ease its workload. I hope the wiki can be a part of that.
IETF Journal: Thanks for your time, Dave.
This article was posted on 26 June 2010.