Revisiting Unwanted Traffic

By: Leslie Daigle

Date: February 7, 2009


In March 2006, the Internet Architecture Board (IAB) held an invitational workshop to look at the problem of unwanted traffic. The official workshop report was published as RFC 4948, and a more complete discussion of the implications appears in an article by Elwyn Davies that was published in the IETF Journal in December 2007 (Volume 3, Issue 3).

The workshop noted that the primary source of unwanted traffic comes from the so-called underground economy; that is, individuals who make use of the open systems of the Internet by leveraging hacked hosts and routing hardware to carry out activities for financial gain. Many of those activities, such as spam, stretch the bounds of civil behaviour; others are outright illegal, such as selling stolen credit card information.

Recognizing that the development cycle for new technologies was too long to yield specific development plans at the time, the workshop focused on mitigation strategies. It was never the plan to stop there, of course. Now, almost three years later, it is valuable to look back and see what pieces of development have occurred in the interim that will address some of the core issues of Internet security and stability.

Some of the vulnerabilities stated in RFC 4948 are the following:

“BGP route hijacking: in a survey conducted by Arbor Networks, route hijacking together with source address spoofing are listed as the two most critical vulnerabilities on the Internet. It has been observed that miscreants hijack bogon prefixes for spam message injections. Such hijacks do not affect normal packet delivery and thus have a low chance of being noticed.”
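The bogon-hijacking pattern described above can be illustrated with a toy filter: an announcement for a prefix that falls inside reserved or otherwise unallocatable address space should never appear in the global routing table. The sketch below, in Python, uses a deliberately abbreviated, illustrative bogon list; real deployments consume a maintained feed rather than a hard-coded table.

```python
import ipaddress

# Illustrative (incomplete) bogon list: special-use blocks that should
# never be originated in the global routing table. Operators use a
# regularly updated feed in practice; this table is only a sketch.
BOGONS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("127.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("240.0.0.0/4"),
]

def is_bogon_announcement(prefix: str) -> bool:
    """Return True if the announced prefix lies inside bogon space."""
    announced = ipaddress.ip_network(prefix)
    return any(announced.subnet_of(bogon) for bogon in BOGONS)

print(is_bogon_announcement("10.1.0.0/16"))   # True: inside private space
print(is_bogon_announcement("8.8.8.0/24"))    # False: allocated, routable space
```

Because such hijacks do not disturb normal packet delivery, a passive check of announcements against bogon space is one of the few ways they surface at all.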

“Everyone comes from Everywhere: in the earlier life of the Internet it had been possible to get some indication of the authenticity of traffic from a specific sender based for example on the Time To Live (TTL). The TTL would stay almost constant when traffic from a certain sender to a specific host entered an operator’s network, since the sender will ‘always’ set the TTL to the same value. If a change in the TTL value occurred without an accompanying change in the routing, one could draw the conclusion that this was potential unwanted traffic. However, since hosts have become mobile, they may be roaming within an operator’s network and the resulting path changes may put more (or less) hops between the source and the destination. Thus, it is no longer possible to interpret a change in the TTL value, even if it occurs without any corresponding change in routing, as an indication that the traffic has been subverted.”
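The pre-mobility heuristic the workshop report describes can be reduced to a few lines. A minimal sketch, with illustrative names and values: senders set a fixed initial TTL, so the TTL observed at the operator's edge stays roughly constant unless the path length changes.

```python
def ttl_anomaly(observed_ttl: int, baseline_ttl: int,
                route_changed: bool, tolerance: int = 0) -> bool:
    """Flag traffic whose TTL shifted without a corresponding route change.

    This mirrors the old heuristic: a TTL shift with no routing change
    once suggested subverted traffic. Host mobility now produces the
    same signature legitimately, which is why the heuristic broke down.
    """
    if route_changed:
        return False  # a path change legitimately alters the hop count
    return abs(observed_ttl - baseline_ttl) > tolerance

# Stable path, stable TTL: nothing suspicious.
print(ttl_anomaly(observed_ttl=57, baseline_ttl=57, route_changed=False))  # False
# TTL shifted with no routing change: the old heuristic flags this --
# but a roaming mobile host now produces exactly the same pattern.
print(ttl_anomaly(observed_ttl=52, baseline_ttl=57, route_changed=False))  # True
```

The sketch makes the report's point concrete: the signal itself still computes, but mobility destroyed its meaning as an indicator of unwanted traffic.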

“Packet source address spoofing: there has been speculation that attacks using spoofed source addresses are decreasing, due to the proliferation of botnets, which can be used to launch various attacks without using spoofed source addresses. It is certainly true that not all the attacks use spoofed addresses; however, many attacks, especially reflection attacks, do use spoofed source addresses.”
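The classic countermeasure to the spoofing described above is ingress filtering at the network edge (BCP 38): drop packets whose source address does not belong to the prefixes assigned to the customer interface they arrived on. A hypothetical sketch, with made-up interface names and an assumed per-interface assignment table:

```python
import ipaddress

# Hypothetical per-interface table: which customer prefixes are
# legitimately reachable behind each edge interface.
ASSIGNED = {
    "cust-eth0": [ipaddress.ip_network("203.0.113.0/24")],
    "cust-eth1": [ipaddress.ip_network("198.51.100.0/25")],
}

def ingress_permits(interface: str, src: str) -> bool:
    """BCP 38-style check: accept only sources assigned to this interface."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in ASSIGNED.get(interface, []))

print(ingress_permits("cust-eth0", "203.0.113.7"))  # True: legitimate source
print(ingress_permits("cust-eth0", "192.0.2.1"))    # False: spoofed, dropped
```

Ingress filtering operates at the granularity of whole customer prefixes; the SAVI work discussed below pushes the same idea down to individual hosts on a link.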

Key areas of routing infrastructure security work are being pursued in the SAVI (Source Address Validation Improvement) and SIDR/RPSEC (Secure InterDomain Routing/Routing Protocol Security) working groups.

SAVI seeks to define a finer-grained mechanism for source IP address validation than ingress filtering. As described in its charter, “Partial solutions exist to prevent nodes from spoofing the IP source address of another node in the same IP link (e.g., the ‘IP source guard’), but are proprietary. The purpose of the […] Working Group is to standardize mechanisms that prevent nodes attached to the same IP link from spoofing each other’s IP addresses.” The potential for mischief with such within-network (“on link”) spoofing should not be discounted: compromised hosts could provide packets masquerading as critical infrastructure responses, such as spoofing gateway Address Resolution Protocol (ARP) packets and injecting false Dynamic Host Configuration Protocol (DHCP) responses, among others. Generally, a finer granularity for validating source IP addresses, and for acting on validation outcomes, would be helpful in a number of situations. As part of network management policy options, depending on the situation, it might be desirable to block spoofed packets or merely log packets that appear to be spoofed. Therefore, the SAVI work is an important piece in the puzzle of preventing and mitigating unwanted traffic.
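The on-link validation and block-versus-log policy choice described above can be sketched in a few lines. This is a toy model, not the SAVI specification: a switch learns which IP address is bound to which port (for instance, by snooping DHCP assignments), then handles on-link packets whose source address does not match the sending port's binding according to local policy. The port names, addresses, and policy strings are all illustrative.

```python
# Toy SAVI-style binding table: port -> IP address learned for that port
# (e.g., from a snooped DHCP assignment). Purely illustrative.
bindings = {
    "port1": "192.0.2.10",
    "port2": "192.0.2.11",
}

def savi_action(port: str, src_ip: str, policy: str = "block") -> str:
    """Return the action for an on-link packet: forward it, or apply
    the configured policy (block or log) when the source is spoofed."""
    if bindings.get(port) == src_ip:
        return "forward"
    return "block" if policy == "block" else "log"

print(savi_action("port1", "192.0.2.10"))         # forward: binding matches
print(savi_action("port2", "192.0.2.10"))         # block: port2 spoofing port1's address
print(savi_action("port2", "192.0.2.10", "log"))  # log-only policy instead
```

The `policy` parameter captures the point made above: depending on the situation, an operator may want to drop spoofed packets outright or merely record them.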

RPSEC was chartered to document the security requirements for routing systems and, in particular, to produce a document on Border Gateway Protocol (BGP) security requirements. Complementarily, the scope of work in the SIDR working group is to formulate an extensible architecture for an interdomain routing security framework and to develop security mechanisms that fulfil requirements agreed on by the RPSEC working group. The first order of business for SIDR is to develop an architecture and framework for a repository to allow formal validation of routing activities. This will take the form of an accessible database of formally verifiable descriptions of who has the right to use particular IP addresses or to announce routes for a given autonomous system. Deploying this will require cooperation among the Regional Internet Registries, network operators, and others. However, it is the necessary foundation for secure interdomain routing, source address verification (beyond individual network boundaries), and other important routing security mechanisms.
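The kind of formal validation such a repository enables can be sketched with a toy origin check: given verifiable records of which autonomous system may originate routes for which prefix, classify each announcement accordingly. The table below stands in for the repository the text describes; the prefixes and AS numbers are illustrative, and real origin validation involves cryptographically signed objects rather than an in-memory list.

```python
import ipaddress

# Toy authorization table, standing in for a repository of formally
# verifiable records: (authorized prefix, origin AS, max prefix length).
authorizations = [
    (ipaddress.ip_network("192.0.2.0/24"), 64500, 24),
]

def validate_origin(prefix: str, origin_as: int) -> str:
    """Classify an announcement as 'valid', 'invalid', or 'not-found'."""
    announced = ipaddress.ip_network(prefix)
    covered = False
    for auth_prefix, auth_as, max_len in authorizations:
        if announced.subnet_of(auth_prefix):
            covered = True
            if origin_as == auth_as and announced.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

print(validate_origin("192.0.2.0/24", 64500))     # valid: matches the record
print(validate_origin("192.0.2.0/24", 64666))     # invalid: wrong origin AS
print(validate_origin("198.51.100.0/24", 64500))  # not-found: no covering record
```

The three-way outcome matters operationally: "invalid" announcements are candidate hijacks, while "not-found" simply means the repository has no record yet, which is the expected state during incremental deployment.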

These working groups are active, and clearly, their output will be useful in addressing some of the vulnerabilities identified by the IAB workshop. SAVI results could certainly help mitigate issues in address spoofing at all levels. Securing the routing infrastructure would make it considerably harder to inject bogus routes into the global routing fabric. It is hard to overestimate the importance of the eventual win from developing and deploying these technologies: route hijacking, both deliberate and accidental, has happened on a global scale, and the ramifications were felt up through “Layer 9” (global politics).

Another area of longstanding infrastructure security development is, of course, DNSSEC (DNS Security). While the technology has been available for quite a while, there is now some movement toward deployment. Some ccTLDs, such as .se, have deployed it. One gTLD, .org, has announced plans to deploy it. And there are discussions about whether, or how, to sign the root itself, a critical step toward ensuring the feasibility of a complete DNSSEC infrastructure and permitting authentication of DNS results. While DNSSEC itself does not prevent spam or phishing, it is a critical piece of infrastructure that provides the foundation for reliable, trustworthy services that will address those issues more directly.
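Why signing the root matters can be seen in a toy model of the DNSSEC chain of trust: each parent zone publishes a digest of its child zone's key (a stand-in for a DS record), and a validator trusts only the root's key a priori. The zone names and keys below are illustrative; real DNSSEC involves signed resource record sets, key rollover, and much more.

```python
import hashlib

def digest(key: str) -> str:
    """Stand-in for a DS record: a digest of the child zone's key."""
    return hashlib.sha256(key.encode()).hexdigest()

# Toy zone data: each zone has a key; parents vouch for children by
# publishing a digest of the child's key. All names/keys are made up.
zones = {
    ".":            {"key": "root-key"},
    "org.":         {"key": "org-key"},
    "example.org.": {"key": "example-key"},
}
zones["."]["ds_for_child"] = {"org.": digest("org-key")}
zones["org."]["ds_for_child"] = {"example.org.": digest("example-key")}

def chain_validates(path: list) -> bool:
    """Walk parent -> child from the root, checking each published
    digest against the child's actual key."""
    for parent, child in zip(path, path[1:]):
        expected = zones[parent].get("ds_for_child", {}).get(child)
        if expected != digest(zones[child]["key"]):
            return False
    return True

print(chain_validates([".", "org.", "example.org."]))  # True: intact chain
zones["example.org."]["key"] = "attacker-key"          # tamper with the zone key
print(chain_validates([".", "org.", "example.org."]))  # False: chain breaks
```

The model shows why an unsigned root is a gap: with no trusted anchor at the top of the path, no chain of digests below it can be verified end to end.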

While it is common to shrug and sigh at the inconvenience of unwanted traffic, it is an important area to address because it goes to the question of the Internet’s evolution. The Internet has been developed, deployed, and built out based on a model of voluntary adherence to open standards and cooperative activity. That makes for an infrastructure that is immune to mandated change, which is both a feature and a challenge. The Internet is certainly no longer treated as the research network it was originally; it has become an integral part of the fabric of day-to-day lives, businesses, and civil organization in all developed countries, and in many developing ones.

To preserve the characteristics of operation that facilitate innovation at the edges, we need to demonstrate that the Internet can, in fact, evolve to meet increased requirements for security, moving from implicit trust in its operators to reasonable expectations of trust in the infrastructure itself.

Three years ago, the IAB workshop identified some key areas of concern. As noted earlier, the IETF’s work has continued to develop proactive, viable infrastructure-securing technologies. The next, and perhaps most critical, step is to move toward global adoption and deployment of those technologies to address the network scourge that is unwanted traffic.