HTTP/HTML versus Gopher. IPv4 versus IPX. Interdomain IP Multicast versus application-layer overlays. As we learned from the more mainstream VHS-versus-Betamax format war, the reasons that one technology or protocol takes off while another one crashes and burns are obvious only in retrospect. Success may not be easy to predict, but it’s rarely if ever an accident or simply a matter of luck or timing (though timing can be a critical ingredient in achieving success). More often than not, success happens when a problem gets solved or a need gets addressed in a manner that is cost-efficient, easy to deploy, and useful for more than a minute and a half.
Sound simple? It is, as long as it’s understood that simple and easy are not the same thing. Even if a formula existed for designing the perfect protocol, the Internet – together with all that is layered on top of it – is too vast, too changeable, and too complex to make any proposed solution or fix a sure thing. Fortunately, though, the sheer number of Internet protocols developed, published, and deployed in the past few decades offers valuable opportunities for determining the factors that contribute to a protocol’s success or failure.
What does it mean for a protocol to be successful? Is a protocol successful if it has met its original goals but is not widely deployed? Perhaps, but for purposes of this article, we define a successful protocol as one that both meets its original goals and is widely deployed. Perhaps the best examples of successful protocols are IPv4 (RFC 791), TCP (RFC 793), HTTP (RFC 2616), and DNS (RFC 1035).
Success, however, is multidimensional. When a protocol is designed, it is intended not only for some range of purposes; it is also designed to be used on a particular scale. The two most important measures by which a protocol can be evaluated, as shown in Figure 1, are therefore purpose and scale.
According to those metrics, a successful protocol is one that is used for its original purpose and at its originally intended scale. A wildly successful protocol is one that exceeds its original goals either in terms of purpose (it is used in scenarios that extend beyond the initial design) or in terms of scale (it is deployed on a scale much greater than originally envisaged) or in terms of both; that is, the protocol has overgrown its bounds and has ventured out into the wild.
If we apply those definitions, then a protocol such as HTTP is defined as wildly successful because it exceeded its design in both purpose and scale. Another example of a wildly successful protocol is IPv4. Although it was designed for all purposes (“Everything over IP and IP over Everything”), it has been deployed on a far greater scale than originally envisaged. Still another example is ARP (Address Resolution Protocol). Originally designed for a more general purpose (namely, resolving network-layer addresses to link-layer addresses regardless of media type or network-layer protocol), ARP was widely deployed for a narrower scope of uses (resolution of IPv4 addresses to Ethernet MAC addresses). More recently, it has been adopted for other uses, such as detecting network attachment (DNAv4 [RFC 4436]). Like IPv4, ARP is deployed on a much greater scale (in terms of the number of machines, though not in terms of the number on the same subnet) than originally expected.
As with most success stories, to be wildly successful can be both good and bad. A wildly successful protocol is one that solves more problems or that addresses more scenarios or devices than originally intended or envisioned. When this happens, it may mean it’s time to revise the protocol to better accommodate the new space. However, if a protocol is used for purposes other than the one for which it was designed, there can be undesirable side effects – such as performance problems. The design decisions that are appropriate for the intended purpose may be inappropriate for another purpose. Worse, wildly successful protocols tend to become popular, which means they can be attractive targets for attackers.
When failure becomes an option
Unlike a major motion picture, which can be dubbed a failure at the box office within a week or two of theatrical release, the failure of a protocol can be determined only after a sufficient amount of time has passed – generally 5 to 10 years for an average protocol. To be considered a failure, a protocol must be lacking in three key areas:
- Mainstream implementation (little or no support in hosts, routers, or other classes of relevant devices),
- Deployment (devices that support the protocol are not deployed, or, if they are, the protocol is not enabled), and
- Use (the protocol may be deployed but there are no applications or scenarios that actually use the protocol).
It’s important to note that at the time a protocol is first designed, there is of course no implementation, deployment, or use, which is why it’s important to allow sufficient time to pass before evaluating the success or failure of a protocol.
Identifying success factors
A series of case studies examined by the authors laid the groundwork for determining the key factors that contribute to a successful or a wildly successful protocol as well as the relative importance of those factors. Note that just as a successful protocol may not necessarily include all of the factors, a failed protocol could very well include some of the factors that determine success.
Positive net value (meeting a real need). The success of a protocol depends largely on the notion that the benefits of deploying the protocol (monetary or otherwise) outweigh the costs – such as the costs of hardware, operations, configuration, and management – as well as costs associated with any changes to the business model that might be required. A few key benefits might include pain relief (lower cost than before), opportunities to enable new scenarios (though this type has a higher risk of failure than the other types), and incremental improvements (for example, better video quality).
Success seems more likely when the natural incentive structure is aligned with the deployment requirements – that is, when those who are required to deploy, manage, or configure a protocol are the same as those who gain the most benefit. In other words, it’s best if there is significant positive net value at each organisation where a change is required.
Incremental deployability. A protocol is incrementally deployable if early adopters gain some benefit even if the rest of the Internet does not support the protocol. It also appears that protocols that can be deployed by a single group or team have a greater chance of success than do those that require cooperation across organisations (or, worse, those that require a flag day where everyone has to change simultaneously).
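Incremental deployability often comes down to a concrete design pattern: advertise an optional capability, use it only when the peer confirms support, and fall back silently otherwise. The following sketch is purely illustrative – the message format and field names are invented for this example – but it shows how an early adopter gains a benefit (compressed responses) while remaining fully interoperable with peers that have not upgraded.

```python
# Hypothetical illustration of an incrementally deployable extension:
# a client advertises optional compression, but works unchanged with
# legacy servers that ignore the advertisement.
import gzip


def build_request(path: str, supports_compression: bool) -> dict:
    """Build a request that advertises optional capabilities."""
    request = {"path": path}
    if supports_compression:
        # A legacy server simply ignores keys it does not recognise,
        # so advertising the extension is safe before the rest of the
        # network has upgraded.
        request["accept-encoding"] = "gzip"
    return request


def handle_response(response: dict) -> bytes:
    """Use the extension only when the peer confirmed support."""
    body = response["body"]
    if response.get("content-encoding") == "gzip":
        body = gzip.decompress(body)
    return body
```

The key property is that neither side requires a flag day: the extension is negotiated per exchange, so the benefit accrues to whichever pairs of endpoints happen to support it.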
Open code availability. Perhaps the next most important technical consideration is that a protocol have freely available implementation code. Open code availability may well have been a deciding factor in the choice between IPv4 and IPX, the latter of which was, in many ways, the technically superior of the two at the time.
Freedom from usage restrictions. A protocol is far more likely to succeed if anyone who wishes to implement or deploy it can do so without legal or financial hindrance. Within the IETF, this point often comes up when choosing among competing technologies; for example, the one with no known intellectual property restrictions is the one most likely to be chosen even if it’s technically inferior.
Open specification availability. One thing that remains true for all RFCs – and that has contributed to the success of protocols both within and outside the IETF – is that the specifications are made available to anyone who wishes to use them. Open availability might include worldwide distribution (accessible from anywhere in the world), unrestricted distribution (no legal restrictions on getting the specification), permanence (the specification remains available even after its creator is gone), and stability (it doesn’t change).
Open maintenance processes. The protocol is maintained by open processes; mechanisms exist for public comment; and participation by all constituencies affected by the protocol is possible.
Good technical design. The protocol follows good design principles that lead to ease of implementation and interoperability.
What makes a protocol wildly successful?
The following factors do not seem to significantly affect initial success, but they can affect whether a protocol is wildly successful.
Extensible. An extensible protocol is one that carries general-purpose payloads and options or easily accommodates the addition of new payload and option types. Such extensibility is desirable for protocols that are intended for application to all purposes, such as IP. However, for protocols designed for a specialised purpose, extensibility should be considered carefully.
No hard scalability bound. Protocols that have no inherent limit near the edge of the originally envisioned scale are more likely to be wildly successful in terms of scale.
Threats sufficiently mitigated. Protocols with security flaws may still become wildly successful provided they are extensible enough to allow the flaws to be addressed in subsequent revisions. However, the combination of security flaws and limited extensibility tends to be deadly.
It can’t be emphasised enough that the most important factor that contributes to the success of a protocol is that the protocol fill a real need. It also helps if the protocol can be deployed incrementally. When there are competing proposals of comparable benefit and deployability, open specifications and code become increasingly significant success factors. Open source availability is initially more important to success than is open specification maintenance.
In most cases, technical quality was not a primary factor with regard to initial success. The initial design of many protocols that have become successful would not pass IESG review today. Technically inferior proposals can win if they are openly available. Factors that do not seem to be significant in determining initial success (but that may affect wild success) include good design, security, and having an open-specification-maintenance process.
Many of the case studies we evaluated concern protocols that were originally developed outside the IETF but that the IETF played a role in improving after their initial success was certain. While the IETF focuses on design quality, which is not a significant factor in determining initial protocol success, once a protocol succeeds, a good technical design may be key to its continuing success. Allowing extensibility in an initial design enables initial shortcomings to be addressed.
Security vulnerabilities do not seem to limit initial success, most likely because vulnerabilities often attract attackers only after the protocol becomes deployed widely enough to become a useful target. Finally, open specification maintenance is not very important to initial success, because many successful protocols were initially developed outside the IETF or other standards bodies; they were, in fact, standardised later.
In light of our conclusions, we recommend that the following questions be asked during the evaluation of a protocol design:
- Does the protocol exhibit the critical initial success factors?
- Are there customers (especially high-profile customers) that are ready to deploy the technology?
- Are there potential niches where the technology is compelling? If so, can complexity be removed to reduce cost?
- Is there a potential killer application? Or can the technology work underneath existing, unmodified applications?
- Is the protocol sufficiently extensible to allow potential deficiencies to be addressed in the future?
- If it is not known whether the protocol will be successful, should the market decide first? Or should the IETF work on multiple alternatives and let the market decide among them?
- Are there success factors that may predict which among multiple alternatives is most likely to succeed?
In the early stages of protocol design, evaluating the factors that may influence initial success is important in facilitating success. Similarly, efforts to revise or revive unsuccessful protocols should include an evaluation of whether the initial success factors (or enough of them) were present rather than focusing on wild success, which is not yet a problem. For a revision of a successful protocol, on the other hand, focusing on the wild-success factors is more appropriate.