
Security Protocol Failures

By: Phillip Hallam-Baker

Date: December 7, 2007


This article is a condensed version of the argument made in The dotCrime Manifesto: How to Stop Internet Crime, in which the question of how to fix these problems is considered.


The Internet is insecure, so what went wrong? Contrary to widely held belief, the reasons for Internet security protocol failure are not primarily technical. Failure to understand the risk model and failure to meet actual user requirements are much more significant causes of security failure. The economics of security protocol deployment and of security usability engineering are also key: a protocol might as well not exist if it is not used.

Is It Safe?

Is the Internet safe? To paraphrase Douglas Adams, yes, the Internet is perfectly safe: it’s the rest of us who have to worry.

The Internet was built to meet a specific set of needs and be adaptable beyond those needs. Contrary to common assertion, security was a consideration early in the design of Internet architecture and protocols. Saltzer et al. addressed security at some length in their seminal end-to-end-arguments paper of 1981.(1)

There are many reasons why cryptographic security was not embedded in the Internet from the start, not the least of them the limited computing power available. But even if more powerful machines had existed, the risks had not yet arrived: there were no shops or banks on the primordial Internet. Military secrets were isolated on an essentially separate network – albeit not isolated enough, as subsequent events would prove.(2)

Although the primordial Internet lacked cryptographic security, it did have a strong and effective accountability mechanism. Access to the Internet required access to one of the tiny number of computers connected to it. Miscreants faced a real risk of consequences: a visit to the dean’s office, loss of computing privileges, and, in extreme cases, expulsion.

The Internet protocols were capable of scaling to support a billion users; the accountability mechanisms were not. At the same time, the Internet became, in Willie Sutton’s infamous phrase, “where the money is.” Consequently, the urgent need to retrofit security to the Internet became sharply apparent, particularly with the rise of the Web beginning in 1993.

Yet here we are, 15 years later. Internet crime is a multibillion-dollar nuisance, and cybersecurity is an issue in the U.S. presidential campaign.(3)

What went wrong?

Systems Failure

According to the traditional view, the first concern in security protocol design is to get the job done right: “Bad security is worse than no security.” But while this may have been true for Mary, Queen of Scots, and the Rosenbergs (executed as a result of misplaced faith in a faulty cipher), it is certainly not the major cause of Internet security failures.

Mistakes matter rather less than is often supposed. The most elementary of errors – complete lack of any authentication capability – was discovered in SSL (Secure Sockets Layer) 1.0 just 10 minutes into the first public presentation on the design. That error was fixed in SSL 2.0, but this time the designers made no effort to obtain public review prior to release, and further design errors were identified. It wasn’t until the design of SSL 3.0 that an experienced designer of cryptographic protocols was engaged to evaluate the design – but for only 10 days.(4)

Rather more significant than the making of the mistake itself is an architecture that allows the mistake to matter. Lampson’s security reference monitor(5) does not make it less likely that the programmer will make a mistake but does reduce the number of places where a mistake is likely to matter.

Commitment Failure

Fear of making a mistake has frequently caused security protocol design to take far longer than it should.

Despite breaking every accepted rule of open standards design, SSL and its IETF successor TLS (Transport Layer Security) are the only Internet security protocols to have achieved ubiquitous use. Getting the protocol specification as correct as possible should certainly be the first concern of the protocol designer who wants to find future employment, but nobody is served by the designer who is perpetually unable to commit to a design that can be tried in the real world.

Requirements Failure

The Internet is a work in progress, not an absolute truth. It was not the original purpose of the Internet to provide a communication network; it was to provide an environment for the research and development of computer networks. The World Wide Web was not imagined in 1980, nor were the security requirements for employing the Web as the ubiquitous engine of electronic commerce understood in 1995. It is only with experience of use that these requirements have become better understood.

The IETF has produced four specifications for an end-to-end e-mail security protocol: PEM, MOSS, Open-PGP, and S/MIME. None is widely used. For many years it has been asserted that the lack of use of S/MIME was due to the inadequate deployment base of capable clients – despite the fact that Outlook, Thunderbird, Notes, AOL, and their Express variants have all supported S/MIME for almost a decade.

It is time to admit that one of the many reasons for this failure is that none of the end-to-end mail security protocols actually met users’ real requirements. Ease of deployment and use ranked far higher on most users’ list of real requirements than did the theoretical possibility of an attack by the mail server administrator. And today, users who are concerned about end-to-end security think of it in terms of the end-to-end life cycle of their confidentiality-sensitive documents.

Political Failure

Another reason for the failure of end-to-end e-mail security is political: the community remains split between two rival standards. S/MIME has the wider deployment, but Open-PGP is still the leader in mindshare, and a community divided between two standards is unlikely to deliver the critical mass that either needs.

Infrastructure Failure

Another commonly cited reason for the failure of end-to-end e-mail security protocols is the lack of a deployed public key infrastructure (PKI), but this explanation may confuse cause with effect. There is a large and robust market providing PKI for SSL – albeit a commercial, for-profit infrastructure rather than a free one.

A more convincing explanation of the failure to establish an end-user PKI is that both Open-PGP and S/MIME resort to what can only be described as hand-waving arguments where the question of public key discovery is concerned. If the Open-PGP web of trust is to be taken seriously, we should expect to see a rich maintenance protocol offering features similar to those being discussed in the areas of social networking and Identity 2.0. S/MIME lays the responsibility off onto PKIX, which in turn lays it off onto an entirely underspecified Lightweight Directory Access Protocol (LDAP) instantiation.
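
To make the underspecification concrete, below is a minimal sketch, in Python with the ldap3 library, of what S/MIME certificate discovery over LDAP might look like. The userCertificate;binary attribute is the conventional home for X.509 certificates in a directory, but the server address, search base, and filter are all assumptions here, and that is exactly the gap: no specification tells a mail client what values to use.

```python
# Hypothetical sketch of S/MIME certificate discovery over LDAP.
# Every deployment-specific value (server, search base, filter) is an
# assumption -- the specifications leave them unspecified, which is
# the point being made.
from ldap3 import Server, Connection, ALL

def find_recipient_cert(email):
    # Which server? The specifications do not say how to discover it.
    server = Server("ldap.example.com", get_info=ALL)
    conn = Connection(server, auto_bind=True)  # anonymous bind assumed

    # Which search base and filter? Also unspecified.
    conn.search(
        search_base="dc=example,dc=com",
        search_filter=f"(mail={email})",
        attributes=["userCertificate;binary"],
    )
    for entry in conn.entries:
        # DER-encoded X.509 certificate(s), if the directory stores any.
        return entry["userCertificate;binary"].raw_values
    return None
```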

The problem, then, is not merely that the necessary PKI has failed to deploy, as is often claimed; we lack the necessary S/MIME infrastructure to make use of such a PKI even if it were deployed.

Context Failure

Many security failures result from security by analogy: the assumption that if a security control is adequate in one context, it will be adequate in another. If a four-digit PIN is good enough for securing an automatic teller machine (ATM) transaction, then it’s good enough for online banking. If sending passwords in the clear is good enough for FTP, then it’s good enough for HTTP.

The problem with security by analogy is that while it can certainly be effective in identifying possible risks (i.e., if protocol A fails due to X, then look for the potential for X in protocol B), it is rather too easy to overlook significant differences in the context in which the protocols are applied. At the ATM, the PIN is only one factor in a two-factor authentication scheme (the physical card is the other), and there is a maximum daily limit on withdrawals. In an online brokerage application, the PIN is the only authentication factor, and there is no transaction limit.
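
The difference is easy to quantify. Below is a back-of-the-envelope sketch in Python; the lockout threshold and guess rate are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope comparison of a four-digit PIN in two contexts.
# The lockout threshold and guess rate are illustrative assumptions.

pin_space = 10 ** 4  # 10,000 possible four-digit PINs

# ATM context: the attacker must also hold the card (second factor),
# and the card is typically retained after a few wrong guesses.
atm_guesses_allowed = 3
p_atm = atm_guesses_allowed / pin_space
print(f"ATM: P(success with a stolen card) ~ {p_atm:.2%}")

# Online context with no second factor and no effective rate limit:
# an attacker who can try every PIN succeeds with certainty, on
# average after half the key space.
guesses_per_second = 10  # assumed scripted guess rate
expected_guesses = pin_space / 2
print(f"Online: expected time to break one account ~ "
      f"{expected_guesses / guesses_per_second / 60:.0f} minutes")
```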

The name of the Wired Equivalent Privacy (WEP) protocol used to secure IEEE 802.11b wireless Ethernet demonstrates another form of context failure. The designers of WEP assumed that the principal change in the security context of moving from a wired to a wireless LAN was the risk of disclosure. Consequently, they designed a protocol intended to provide strong confidentiality protection, while the authentication component consisted of a single secret key shared by every machine on the network. Among the practical security implications of this model were terminated employees surfing the corporate network from the parking lot.
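
The structural weakness fits in a few lines of code. Below is a minimal Python sketch of WEP’s per-packet keying, with the CRC-32 integrity value omitted and a placeholder key value; the point is that every station on the network, including the ex-employee’s laptop, holds the same secret.

```python
# Minimal sketch of WEP's per-packet keying model (CRC-32 integrity
# value omitted, key value a placeholder). One shared secret for the
# whole network means every holder of it -- including an ex-employee --
# can read and forge all traffic.
import os

SHARED_KEY = b"\x01\x02\x03\x04\x05"  # placeholder 40-bit network secret

def rc4(key, data):
    # Textbook RC4: key scheduling, then the keystream generator.
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(byte ^ s[(s[i] + s[j]) % 256])
    return bytes(out)

def wep_encrypt(plaintext):
    iv = os.urandom(3)  # 24-bit IV, sent in the clear
    # Per-packet key = IV || shared secret: the only secret is shared
    # by every station, so there is no per-user authentication at all.
    return iv + rc4(iv + SHARED_KEY, plaintext)

def wep_decrypt(packet):
    iv, ciphertext = packet[:3], packet[3:]
    return rc4(iv + SHARED_KEY, ciphertext)  # RC4 is symmetric
```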

Experience Failure

It is an old but true saying that familiarity breeds contempt. While almost everyone has an Internet security story to tell, the telephone network raises rather fewer concerns than it should. The security posture of the telephone system in virtually every industrialised country is predicated on the assumption that there is a single, monopoly operator whose employees are absolutely trustworthy.

I have a fax machine in my office because some people insist that the Internet is not secure enough. The fax is served by a VoIP connection and forwards the messages to me by e-mail.

Usability Failure

Designing security protocols is not enough. If we wish to secure the Internet, we need people to use those protocols. Until recently, the field of security usability engineering was virtually ignored. Today it is much more widely appreciated: a security protocol that people do not use might as well not exist, and people do not use what they cannot understand.

Much has been written about the need for end-to-end security. On the Internet, the ends of the communication are the user’s brain and the person or organisation the user is interacting with. To provide end-to-end security, we must secure the last two feet between the user’s eyeballs and the screen. Secure Internet Letterhead(6) was proposed to that end: the customer recognises the bank by the bank’s brand on the ATM, the bank card, the branch, and so on. We should adopt the same cues on the Internet (e.g., via the certificate logotypes of RFC 3709).
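
RFC 3709 defines the X.509 logotype extension on which letterhead-style presentation would build. Below is a minimal sketch, assuming a placeholder certificate file, that checks whether a certificate carries the extension; the pyca/cryptography library does not parse logotypes, so the raw DER content is all it can hand back.

```python
# Sketch: does a certificate carry the RFC 3709 logotype extension?
# Uses the pyca/cryptography library; the file name is a placeholder.
from cryptography import x509

# id-pe-logotype, as assigned in RFC 3709
ID_PE_LOGOTYPE = x509.ObjectIdentifier("1.3.6.1.5.5.7.1.12")

with open("bank.pem", "rb") as f:  # placeholder certificate file
    cert = x509.load_pem_x509_certificate(f.read())

try:
    ext = cert.extensions.get_extension_for_oid(ID_PE_LOGOTYPE)
    # The library does not parse this extension, so we get the raw
    # DER of the LogotypeExtn structure (image data or URIs).
    print("Logotype extension present,", len(ext.value.value), "bytes of DER")
except x509.ExtensionNotFound:
    print("No logotype extension; nothing for the browser chrome to display")
```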

Recognising the need for usability is much easier than achieving usability. The entire computing field is facing a usability crisis, and such techniques as exist tend to be designed by and for usability experts. Much work remains to be done before the techniques are part of every security engineer’s tool kit.

Accountability Failure

When the Internet crime wave first hit, a great deal of effort was put into consumer education. Such efforts frequently missed the point that Internet crime is neither the consumers’ fault nor the consumers’ responsibility. The design flaws are in the financial infrastructures and the Internet. Consumers were not responsible for the security design of either.

Responsibility for security must lie with the party best able to provide it. Customers put their money in a bank because they believe that the bank is better able to keep the money safe than they are. If the bank starts telling customers that safety is customers’ responsibility, the bank undermines its own purpose.

We cannot hope to hold a billion Internet users accountable for their actions, but we can hold ISPs accountable for allowing SYN floods and packets with spoofed source addresses onto their networks, just as we now hold them accountable for spam. We cannot hold application providers accountable for every last bug in their systems, but we can hold them accountable for allowing the bugs to matter, and we can hold them accountable for systems whose default behaviour is to automatically run unknown programmes from unknown sources with full user privileges.
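
For the spoofed-source case, the required control reduces to a single check at the ISP’s customer edge, the ingress filtering codified in BCP 38. A minimal sketch, with the customer prefix as a placeholder value:

```python
# Sketch of BCP 38-style ingress filtering at an ISP's customer edge:
# drop any packet whose source address is not in the prefix delegated
# to that customer port. The prefix is a placeholder value.
import ipaddress

CUSTOMER_PREFIX = ipaddress.ip_network("203.0.113.0/24")  # example range

def accept_from_customer(src_addr: str) -> bool:
    # A spoofed source (say, a SYN-flood bot forging addresses) fails
    # this test and never reaches the wider Internet.
    return ipaddress.ip_address(src_addr) in CUSTOMER_PREFIX

assert accept_from_customer("203.0.113.17")      # legitimate customer
assert not accept_from_customer("198.51.100.9")  # spoofed source, dropped
```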

Deployment Failure

Perhaps the most common reason for Internet protocol failure is that the protocol is never used. Security specialists have been considering the economics of profit-driven Internet crime for some time. Recently, attention has focused on the economics of protocol deployment.(7) A study of the deployment of the SSH protocol by Rosasco and Larochelle(8) concluded that the reason for the protocol’s success lay not in the specific security features of the SSH protocol itself but in the additional, nonsecurity functionality that the SSH application made possible – in particular, the ability to tunnel X Window System sessions over SSH.
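
The codeployment point can be made concrete. The sketch below drives the standard OpenSSH client from Python, with placeholder host names: the -X flag tunnels X Window System traffic, and -L forwards an arbitrary TCP port, the bundled nonsecurity conveniences that drove adoption.

```python
# The codeployment lesson in miniature: the commands users actually ran.
# SSH's draw was not the cryptography but what came bundled with it.
# Host names below are placeholders.
import subprocess

# Remote login that also tunnels the X Window System, so remote GUI
# applications appear on the local display -- the feature Rosasco and
# Larochelle credit for much of SSH's uptake.
subprocess.run(["ssh", "-X", "user@host.example.com"])

# The same channel doubles as a generic secure tunnel: forward local
# port 8080 to a service reachable only from the remote machine.
subprocess.run(["ssh", "-L", "8080:intranet.example.com:80",
                "user@host.example.com"])
```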

Many opportunities for applying this codeployment strategy remain untapped. Establishing an Internet webcam session in the presence of firewalls and NAT remains a largely futile effort. In The dotCrime Manifesto(9) I make the case that simplified network administration could be the killer application for Default Deny Infrastructure.

Conclusions

Despite our best efforts in applying our core skill sets, the Internet remains unacceptably insecure. Internet crime is a large and growing problem. Neither Internet crime nor the failure to deploy the necessary effective security protocols is an exclusively technical problem. We must therefore look beyond the narrow focus of our own expertise to other communities of experts that can help us.

The list of problems to be addressed is reassuringly large: if we had no idea what the cause of the problem might be, we would have no way to fix it. While each cause of failure is significant, all are readily fixed once the problem is identified. All we need is the will to do so.

Some have objected that these concerns are not engineering and thus lie outside the scope of the IETF. This is not my view. In Europe a person with mere domain expertise is known as a technician. Only once a candidate has demonstrated the ability to combine personal expertise with whatever other expertise is necessary (managerial, legal, commercial) to solve problems does the candidate qualify for the title engineer.

References

  1. Jerome H. Saltzer, David P. Reed, and David D. Clark. “End-to-End Arguments in System Design.” Proceedings of the 2nd International Conference on Distributed Computing Systems, Paris, 1981, IEEE Computer Society, pp. 509-512.
  2. Clifford Stoll. The Cuckoo’s Egg: Tracking a Spy through the Maze of Computer Espionage. Pocket, 2000. ISBN 0743411463.
  3. Rudolph Giuliani. “The Resilient Society: A Blueprint for Homeland Security.” Wall Street Journal, January 9, 2008.
  4. Paul C. Kocher. Personal communication.
  5. Butler Lampson. “Hints for Computer System Design.” ACM Operating Systems Review (SIGOPS), vol. 17, no. 5, 1983, pp. 33-48.
  6. Phillip Hallam-Baker. “Secure Internet Letterhead.” Presented at the W3C Workshop on Transparency and Usability of Web Authentication.
  7. Ross Anderson and Tyler Moore. “Information Security Economics – and Beyond.” August 21, 2007.
  8. Nicholas Rosasco and David Larochelle. “How and Why More Secure Technologies Succeed in Legacy Markets: Lessons from the Success of SSH.” 2004.
  9. Phillip M. Hallam-Baker. The dotCrime Manifesto: How to Stop Internet Crime. Addison-Wesley Professional, 2008.