Security

Unwanted Traffic

By: Elwyn Davies

Date: December 7, 2007


The Internet carries a lot of unwanted traffic today. At its most fundamental, unwanted traffic is made up of packets that consume network and computing resources in ways that do not benefit the owners of the resources. To gain a better understanding of the driving forces behind such unwanted traffic and to assess existing countermeasures, the Internet Architecture Board (IAB) organised a workshop in March 2006 called Unwanted Internet Traffic. At the workshop, a number of experts – including operators, vendors, and researchers – exchanged experiences, views, and ideas on this important topic (the full report of the workshop was published in RFC 4948). This article presents the findings of the workshop and looks at some developments that have occurred since the workshop.

The Underground Network Economy

The most important message from the Unwanted Internet Traffic workshop was that the enormous volume of unwanted traffic is a symptom of a vast criminal underground economy that is a parasite on both open technology and the innovative culture of the Internet as it has developed over the past 20 years.

From Anarchy to Criminality

Early in the life of the Internet, unwanted traffic was largely an expensive nuisance. Much of it was generated by so-called script kiddies, who had no clear motive beyond demonstrating to their equally mindless peers their ability to cause mayhem. While the consequences for the networks and hosts that were targeted were generally immediate and catastrophic, often resulting in significant economic loss for the victims, the attackers profited little or, in most cases, not at all.

Over the past few years, the situation has altered dramatically. The anarchic hackers of the past have been harnessed or have been displaced by criminals who seek to use the Internet for illicit gain.

The underground network economy that has developed within the Internet mirrors the underground economy in the physical world: tools of the [criminal] trade are created and sold to other criminals; stolen information is fenced for use in further criminal activity; and routes are created through which the illicit proceeds can be laundered to enable the criminals to benefit from their activities.

The underground network economy has evolved quickly, changing from an initial barter system into a gigantic shopping mall for tools and information. This has led to a rapid shift in the nature of unwanted traffic and the ways in which the traffic affects the network. It is now a fully integrated and persistent subculture that sucks many billions of dollars out of the legitimate network economy by exploiting the commercial growth of e-business. It is no longer in the interests of these types of criminals to destroy or significantly damage the network; as with any parasite, the parties responsible are absolutely dependent on the continued existence and availability of the network to supply their income.

Subverting the Network

The marketplace for the underground network economy is typically hosted on IRC (Internet Relay Chat) servers that provide access to “stores” selling the tools needed to operate in the underground economy. Strong encryption software for e-mail and other communications tools is readily available, allowing deals to be closed with little risk of detection. Consequently, it is no longer necessary to be a skilled programmer to be a successful miscreant in the underground economy. Malware, bot code, and access to compromised hosts or Web servers can be bought off the shelf, and some of the profits can be used to finance new tools and to set up “dirty” Internet service providers to host IRC servers and fraudulent Web sites.

The network itself provides the means to turn the available tools and stolen information into real assets. In the simplest case, electronic funds transfer can be used to drain money from online bank accounts directly into short-lived accounts – often in another country, which makes it difficult to trace or recover the money. More-sophisticated schemes use stolen credit cards to purchase goods that are redirected or resold through money-laundering services that obscure the trail that leads to the beneficiary. The international nature of the Internet, the absence of audit trails, and the ease with which anonymity can be achieved are important features of the network, but they also facilitate misuse.

One of the key weapons used by criminals consists of compromised hosts, also known as bots or zombies. Networks of bots (botnets, for short) are created by exploiting security flaws in networked machines or by inducing naive users to unwittingly install backdoor remote-control capabilities on their machines. Remotely controlled bots can then be used either to capture valuable personal or financial information from the users of the machine or to generate further unwanted traffic, such as e-mail spam or distributed denial-of-service (DDoS) attacks that cannot easily be traced to their true origins. In most cases, bots do not cause major disruption to the hosting machine, either by obviously disrupting operations or by clogging the machine’s network connection with large amounts of unwanted traffic. The objective is usually to provide a resource that can be used by the miscreants for as long as possible. To make a medical analogy, unwanted traffic no longer creates an acute disease in the compromised host; rather, it creates chronic carriers that may go undiagnosed for a long time and that act as sources of infection that can perpetuate the problem.

A major reason that the underground economy is so successful is the ease with which botnets can be created. Miscreants view them as expendable resources, and they are rarely bothered by operators who may see what they’re doing. As long as their cash flow is not significantly impacted, miscreants simply move on to new venues when ISPs take action to clean up bots and protect their customers. However, taking out one of the IRC servers might provoke a severe and ruthless attack on the ISP, typically through the use of botnets to launch a DDoS attack targeting the ISP’s network. In this way, the attackers create an example that might intimidate other ISPs into leaving them alone.

Simplicity and Power versus Vulnerability and Ignorance

The end-to-end architecture of the Internet emphasises the flexibility of implementing new applications in the end system while keeping the network itself as simple as possible. The network neither enhances nor interferes with end system data flows. The success and adaptability of the Internet demonstrate the power of this model but can also make life easy for those who operate in the underground economy.

The concentration of capabilities in a large number of end hosts means there is an enormous field of complex systems available for launching an attack. Inevitably, complex systems are difficult to analyse and protect. Consequently, it is not surprising that the majority of hosts are to a greater or lesser extent vulnerable to compromise. Miscreants maximise the return on their investment by exploiting vulnerabilities in the most common platforms, such as Microsoft Windows; the volume of exploits reported is a measure of the system’s market penetration rather than its lack of security. Many of these complex systems are owned and controlled by ordinary people who come from all walks of life and who eagerly jump into the exciting online world but are rarely given the training to fully understand the implications of the systems they own. The operating systems and applications they are using are generally designed to hide the complexities of the system so that the users are not deterred from making use of the system. As a result, a large proportion of users fail to anticipate how such a great invention as the Internet can be readily abused, and they do not understand that their system can be compromised without their being aware.

It is therefore not surprising that the Internet now has a considerable number of compromised hosts where the owners are not aware that a compromise has happened. Although a large percentage of those machines are home PCs, evidence shows that corporate servers or backbone routers – even government firewalls – have also fallen victim to compromise.

Editor’s Note
This article is based on discussions at an IAB workshop held in March 2006. The full report of that workshop has been published as RFC 4948. Work to address the issues here is active and ongoing.

The IETF has the following working groups addressing some of the problems identified here:

  • Operational Security (OPSEC) WG
  • Routing Protocol Security (RPSEC) WG
  • Secure Inter-domain Routing (SIDR) WG

Much of this work extends well beyond the technical sphere of IETF specifications. See, for example, some efforts by the U.S. Federal Bureau of Investigation. A commentary on that effort is available from Arbor Networks, which is engaged in measuring, researching, and proposing paths forward. More information…

Further information and resources are available from Team Cymru Resources.

Running under the Radar

Although some of the consequences of the flood of unwanted traffic – such as spam e-mails and DDoS attacks – are all too visible, many other types of unwanted traffic are hard to detect and counter.

Hosts are now quietly subverted and linked to botnets while leaving their normal functionality and connectivity essentially unimpaired. Bots and the functions they perform are often hard to detect – especially since owners and operators are oblivious to their presence. And detection may well come too late, because the bot may have already carried out the intended (mal)function.

The presence of large numbers of quiet bots in compromised hosts is a particularly challenging problem for the security of the Internet. Not only does the resulting stolen (financial) information lead to enormous economic losses, but also there does not appear to be a quick fix for the problem. The fix needs to be applied at places that see little or no local benefit from the solution. For example, the owner of a machine infected with a bot may not care about fixing the problem if the bot has negligible impact on the way the machine performs for the owner. As long as the owner can keep playing online games, the owner may not be interested in applying a time-consuming and potentially technically complex fix, even though the public interest is endangered.

Simplicity at the core of the network and the nature of the routing system can also make life easier for attackers. IP is specifically designed to minimise the amount of state information needed in the data plane to forward traffic from one end to the other. The network core does not record audit trails for individual traffic streams unless special measures have been planned in advance, such as when the police request lawful interception of some particular traffic.

A major strength of the Internet is its ability to provide seamless interconnection among an effectively unlimited number of parties and with no constraints on where the parties are located geographically. The simplicity of the core combined with worldwide access means not only that there is essentially no limit on what a host can use the network to do, but also that there is no trace – after the event – of what a host may have done. Currently, there is virtually no effective tool available to provide either problem diagnosis or packet traceback. This makes tracking DDoS attacks and other generators of unwanted traffic launched from multiple compromised hosts labor-intensive, requiring sophisticated skills. Even if the compromised hosts and the controller of the botnet can be located, it is likely that more than one organisation has responsibility for the machines and networks involved, which makes investigation difficult. Compounding the problems associated with the high cost and the lack of incentive to report security attacks (see below) is the fact that attacks are rarely traced to their real roots.

The On-Ramp

The Internet is designed to be both friendly and flexible so that it does not constrain new applications that could be developed for and deployed in end systems. Such a design is, of course, a double-edged sword: capabilities that make it easy to develop useful new applications can be just as easily misused to create unwanted traffic. The aspects of Internet architecture that can be exploited to insinuate unwanted traffic onto the Internet are quite complex. Trying to ensure that the Internet remains open to innovation while denying access to unwanted traffic requires a deep understanding of the ways the Internet is intended to work and of the complex value judgments that need to be applied in order to balance the ease of use with the danger of misuse.

Known Vulnerabilities

According to a survey conducted by Arbor Networks, the first two vulnerabilities discussed here are currently believed to be the most critical for the Internet. Other possibilities certainly exist, and the ones that are most commonly exploited shift in the continuing tussle between miscreants and security experts.

Lying about Traffic Source Addresses. In the past, many attacks on networks using unwanted traffic relied on injecting packets with a forged IP source address. Receivers might then be deceived about the source of questionable packets and might therefore accept packets they would not have accepted if the packets’ true source were known, or they may direct return traffic to the forged source address, making them part of a DDoS attack (reflection attack). This process is called address spoofing. The prevalence of botnets that can launch various attacks using the real address of the bot means that address spoofing is no longer as important a technique as it used to be, but many attacks – especially reflection attacks – still use spoofed addresses.

Hijacking Inter-Domain Routing. Attacks can be launched on the Border Gateway Protocol (BGP), which routes Internet traffic between administrative domains. Various attacks can lead to traffic that gets misrouted, but a particularly insidious attack injects routes for IP addresses that are not in genuine use. Because the existence of these routes provides a measure of acceptability for packets sourced from the bogus IP addresses, attackers can use these addresses to source spam messages. Since the additional routes do not affect normal packet delivery and since careful selection of the address prefix used can hide the bogus route among genuine ones, the bogus routes often have little chance of being noticed.
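To make the idea concrete, the following Python sketch shows one simplified way of checking an announced prefix against a registry of authorised origin ASes, in the spirit of prefix-origin validation. The registry contents, AS numbers, and function names are assumptions chosen for illustration only, not a description of any deployed system.

    import ipaddress

    # Hypothetical registry of prefixes each origin AS is authorised to announce
    # (e.g. built from routing registry data); the entries here are illustrative.
    AUTHORISED = {
        64500: [ipaddress.ip_network("192.0.2.0/24")],
        64501: [ipaddress.ip_network("198.51.100.0/24")],
    }

    def check_announcement(prefix, origin_as):
        """Classify a BGP announcement against the registry (simplified)."""
        net = ipaddress.ip_network(prefix)
        for allowed in AUTHORISED.get(origin_as, []):
            if net == allowed or net.subnet_of(allowed):
                return "valid"
        # Unknown origin AS, or a prefix outside its authorised space:
        # possibly a hijack or an announcement of unused ("bogus") address space.
        return "suspect"

    print(check_announcement("192.0.2.0/24", 64500))    # valid
    print(check_announcement("203.0.113.0/24", 64500))  # suspect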

Misusing Web Protocols. The HTTP (Hypertext Transfer Protocol) used for accessing Web servers is now frequently used as a general-purpose transport protocol for applications that have little or nothing to do with the World Wide Web. The reason is that one of the ways attackers identify vulnerable systems is to perform a port scan. The standard transport protocols – UDP and TCP – used in the Internet identify communication end points on a host with a 16-bit port number. Targeted systems are challenged by trying to start a communication using every possible UDP and TCP port number in turn. If the communication can be started, it may give the attacker a wedge with which to pry open the security on the system. System managers react by closing down all unused ports to incoming communications, especially at firewalls. This has, in turn, led to difficulties for new applications that use previously unused ports and that need to have packets traverse firewalls. Application designers have responded by reusing the HTTP communication channel, which can be pretty much relied on to be open in any firewall. However, transporting everything over HTTP does not block attacks; it simply moves the vulnerability from one place to another, and the miscreants follow.
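As a rough illustration of the probing described above, a minimal TCP connect scan simply attempts a connection to each port in a range and records the ports that answer. The Python sketch below is illustrative only (the timeout and port range are arbitrary choices) and should be pointed only at a host you administer.

    import socket

    def scan_tcp_ports(host, ports):
        """Minimal TCP connect scan: a port is 'open' if the handshake completes."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.5)
                try:
                    s.connect((host, port))
                    open_ports.append(port)
                except OSError:
                    pass  # closed, filtered, or unreachable
        return open_ports

    # Probe a small range; a real scan would work through all 65535 port numbers.
    print(scan_tcp_ports("127.0.0.1", range(1, 1025)))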

Everyone Comes from Everywhere. On the Internet it used to be possible to get some indication of the authenticity of traffic coming from a specific sender based, for example, on the number of hops between routers that had been traversed. Each arriving packet contains a Time to Live (TTL) count, and packets that have followed the same route from a static source would have the same original TTL value decremented by the same amount, resulting in an almost constant value of TTL on arrival. A change in the TTL value for a source without a corresponding change in routing could be interpreted as meaning that the traffic with a different TTL was potentially bogus. More recently, hosts have become mobile, and a change in TTL value may simply indicate that the host has moved, with the roaming putting more or fewer hops between the source and the destination. Similarly, multihoming of a network can mean that two or more different values for the TTL are equally valid. Thus, changes in TTL value can no longer be seen as indications that traffic has been subverted, even if the underlying routing is unchanged.
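The arithmetic behind this heuristic is straightforward; the sketch below assumes a set of common initial TTL values and an arbitrary tolerance, both of which are illustrative rather than part of any standard.

    # Common initial TTL values used by different operating systems (an assumption).
    COMMON_INITIAL_TTLS = (64, 128, 255)

    def estimated_hops(arrival_ttl):
        """Estimate hop count from the smallest common initial TTL >= arrival TTL."""
        initial = min(t for t in COMMON_INITIAL_TTLS if t >= arrival_ttl)
        return initial - arrival_ttl

    def ttl_looks_consistent(arrival_ttl, expected_hops, tolerance=2):
        """The old heuristic: flag traffic whose apparent hop count drifts from the
        baseline.  Mobility and multihoming make this unreliable, as noted above."""
        return abs(estimated_hops(arrival_ttl) - expected_hops) <= tolerance

    print(estimated_hops(57))                          # 7 hops if the sender started at 64
    print(ttl_looks_consistent(57, expected_hops=7))   # True
    print(ttl_looks_consistent(44, expected_hops=7))   # False -> once treated as suspicious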

Difficulties Authenticating Identities. As deployed today, authentication of users and machines attaching to networks is far too complex for users to apply effectively. Consider a scenario in which a customer’s handset is initially on a corporate wireless network. If that customer steps out of the corporate building, the handset may get connected to the corporate network through a GPRS cellular telephone network. The handset may then roam to a wireless LAN when the user enters a public area with a hotspot. The authentication mechanisms are usually tied to the type of data link layer used; the mechanisms use different credentials for each type, with different semantics; and there is little or no linkage between the authentication databases used with the different technologies or with the policy databases that control what a user may do when attached to a network. Consequently, we need authentication tools that unify and simplify this authentication infrastructure and that can cope with cases in which the underlying data link layer technology changes quickly – possibly during a single application session – to ensure that users and applications are not surprised when operations that are allowed at one moment fail a little later, once the attachment point has changed.

The Scale of the Problem

Unwanted traffic is a major problem for network owners and operators today both because of the volume and because of the ubiquitous adverse impact of the traffic on normal operations. The workshop did not look in any detail at the actual volumes of traffic: a look at almost any e-mail in-box is evidence enough that the volumes of spam alone are very large. This section looks briefly at how specific types of network are affected.

Everywhere Is Affected

There are a variety of types of unwanted traffic on the Internet today. The IAB workshop concentrated on DDoS and spam. The impact of unwanted traffic depends on the nature of the network domain through which it is flowing, but it affects almost every part of the network adversely.

The global nature of the Internet and the ease of ubiquitous connectivity allow miscreants to originate unwanted traffic from almost anywhere in the network and to target victims who are equally widely distributed. Attackers are interested in finding targets that offer maximal returns with minimal efforts. Regions with lots of high-speed, high-bandwidth user connections but poorly managed end hosts are ideal targets for originating DDoS traffic.

Effects on Specific Domains

Backbone Providers. Backbone providers are primarily in the commodity business of packet forwarding. Since they do not support end users directly, spam and malware are not major concerns. Sometimes backbone routers become compromised, but this is not currently a major problem. Thus the impact of unwanted traffic is measured chiefly through the effect of DDoS traffic on network availability.

Backbone networks are generally well provisioned with high-capacity links and are therefore not normally affected by DDoS attacks. A 5 Gbps attack that would challenge most access networks can usually be absorbed without noticeable impact. On the other hand, the fact that the backbone can handle this traffic amplifies the effect on the backbone’s access customers. A multihomed customer is highly likely to suffer from aggregated DDoS traffic arriving from all directions through its multiplicity of connections.

Access Providers. From the access providers’ viewpoint, the most severe impact of unwanted traffic is on their customer support load. Access providers have to deal directly with end users. Residential customers in particular see the access provider as their IT help desk, and the competitive nature of the business means that a single call can possibly wipe out any profits the provider might have made from the customer.

Enterprise Networks. Enterprises perceive many different categories of unwanted traffic. Apart from accidentally created traffic resulting from misconfiguration, a large part of the deliberately created unwanted traffic is usually just a background nuisance for enterprises because such traffic absorbs bandwidth, computing, and storage resources. Spam and peer-to-peer traffic that is not related to company business are good examples. Some of the remaining unwanted traffic may have an unknown purpose, but the big problems are caused by what is often a small volume of malicious traffic, such as traffic that spreads malware. The damage that results from undetected malicious traffic can be very costly and can take a lot of highly skilled effort to remedy.

Today, malicious traffic is often stealthy and can be obscured by encryption or can masquerade as legitimate traffic. Existing detection tools may be ineffective against this kind of traffic, and as with bots, stealth worms may open backdoors on hosts but remain dormant for long periods without causing any noticeable detrimental effects. This kind of traffic has the potential to be the gravest threat to an enterprise.

On the other hand, an enterprise may become the target of a DDoS attack, often focused on its customer-facing Web servers. Such an attack can transform unwanted traffic from a background nuisance into a critical constraint on the enterprise’s ability to do business for the duration of the attack. For civilian businesses, this risks loss of customer confidence and has longer-term implications for the business; for infrastructure and government services, there can be political or terrorist motivations intended to affect the stability of the state. Detecting such an attack and dealing with it as soon as possible can be vital to the survival of the enterprise: advance planning is key to managing a DDoS attack because there is little time to react once an attack starts, and the traffic has to be suppressed before it concentrates on the target resources, which may mean having tools installed by the service providers feeding the enterprise.

Unwanted Traffic and Internet Infrastructure Services

The Internet needs certain infrastructure services – such as provision of the Domain Name System (DNS) – that are potentially vulnerable to DDoS attacks. Participants at the workshop heard reports of increasingly significant DDoS attacks on the servers that handle the root of the domain hierarchy as well as the .com and .net top-level domains.

Those attacks lead to disruption of critical services, and the situation is likely to get worse because the daily peaks of DNS usage have been growing at a much faster rate than the number of Internet users. This trend is expected to continue. The increasing load on the DNS infrastructure has led to an increase in complexity that potentially creates larger targets for attack.

Defenses: Available but Relatively Ineffective

The Internet is not totally defenseless against the attacks from the underground economy. It is unfortunate that, for a variety of reasons, many of the defenses are not as effective as they might be. Many of the reasons are economic and political rather than technical, including lack of resources, a perception that the benefits of deployment are felt by organisations other than those that have to bear the costs, and the need for coordination between competing organisations to achieve the best results.

Analysis of the reasons for the ineffectiveness of the Internet’s defenses is critical to the design of future effective approaches to the unwanted-traffic problem.

Problems for Today’s Defenses

Although some techniques are available to protect against the known vulnerabilities, a number of inadequacies exist in the tools themselves. More critically, some of the tools that vendors and standards organisations have produced do not get used at all, the remainder are not deployed widely enough, and education of users and operators in the secure usage and operation of the Internet is similarly inadequate.

Generally, operators do not have adequate tools for diagnosing network problems. Current approaches rely primarily on the skills and experience of operators carrying out time-consuming manual procedures. Better, automated tools would help; the same is true for tools that mitigate attacks.

Lack of Incentives for Countering Unwanted Traffic

A common theme that runs through the analysis of how unwanted traffic affects networks outside the enterprise is the lack of incentives for network operators to deploy security measures. That lack is due mainly to the low return on investment from what are essentially preventive measures.

The workshop discussion of the underground economy highlighted an unwillingness to report fraud because of commercial sensitivity. That sensitivity also applies to the reporting of security incidents by network operators, who fear that their reputations – or the reputations of their customers – would be damaged. Network reputation is key to gaining new customers, and so minimising the amount of publicity given to security incidents is important to service providers’ survival. As a result, investment in prevention is minimal, and mitigation work tends to be local so as to avoid releasing commercially sensitive information, thereby hamstringing efforts to coordinate responses to attacks or to track malicious activity.

Notwithstanding the inadequacies of the available techniques, the view of the IAB workshop was that a significant reduction in unwanted traffic could be achieved with the limited tools available if those tools were deployed extensively and operated correctly. Educating users to be more demanding and to lobby for judicious application of government regulation may help incentivise providers to deploy the tools.

Available Defensive Techniques

Countering DDoS in the Backbone. At the time of the workshop there were no effective diagnosis tools and only a limited supply of mitigation tools that could help backbone providers fight DDoS attacks. That situation has changed over the past two years, and many providers now offer managed DDoS security services that deliver cleaned traffic to attached customer or lower-level provider sites. These services are based on traffic pattern learning, which allows abnormal patterns that signal a DDoS attack to be recognised and filtered before they concentrate on the target. On the other hand, these solutions are designed to aid particular customers who are willing to pay for the extra service, and because of the perceived low return on investment, there is still little incentive for the backbone provider to deploy them for every connection.
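A highly simplified sketch of the kind of traffic pattern learning mentioned above might look like the following Python example. The use of an exponentially weighted moving average, the thresholds, and the class name are assumptions chosen purely for illustration; real managed services use far richer models.

    class RateBaseline:
        """Toy traffic-pattern learner: an exponentially weighted moving average
        of per-destination traffic rates; rates far above the learned baseline
        are flagged for scrubbing."""

        def __init__(self, alpha=0.05, threshold=10.0):
            self.alpha = alpha          # how quickly the baseline adapts
            self.threshold = threshold  # "abnormal" = this many times the baseline
            self.baseline = {}          # destination prefix -> learned bits/sec

        def observe(self, dest_prefix, bps):
            avg = self.baseline.get(dest_prefix, bps)
            anomalous = avg > 0 and bps > self.threshold * avg
            if not anomalous:           # only learn from apparently normal traffic
                self.baseline[dest_prefix] = (1 - self.alpha) * avg + self.alpha * bps
            return anomalous

    detector = RateBaseline()
    for sample in (100e6, 110e6, 95e6, 105e6):
        detector.observe("192.0.2.0/24", sample)    # learn the normal load (~100 Mb/s)
    print(detector.observe("192.0.2.0/24", 5e9))    # 5 Gb/s flood -> True (flag for filtering)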

Know Your Sources. The IETF documented current best practices for filtering out incoming traffic with spoofed-source addresses in BCP 38 (RFC 2827), “Network Ingress Filtering: Defeating Denial of Service Attacks Which Employ IP Source Address Spoofing.” Many routers support this type of filtering as well as the updated version for multihomed networks in BCP 84 (RFC 3704).

Network operators have not deployed these techniques universally – at least partially because of the lack of incentive resulting from the heavy management costs of maintaining the filtering and because of the need to ensure that legitimate traffic is not accidentally filtered out. Although source spoofing is no longer the indispensable tool of the underground economy that it once was, more widespread use of BCP 38 and 84 filtering can still make attacks using spoofed addresses unprofitable and facilitate traceback of attacks.
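In spirit, BCP 38 ingress filtering amounts to the following check at each customer-facing interface; the interface names and prefix assignments below are hypothetical, and the sketch ignores the multihoming refinements introduced in BCP 84.

    import ipaddress

    # Hypothetical mapping of ingress interfaces to the prefixes legitimately
    # assigned to the customer behind each interface (illustrative data).
    CUSTOMER_PREFIXES = {
        "eth0": [ipaddress.ip_network("203.0.113.0/24")],
        "eth1": [ipaddress.ip_network("198.51.100.0/25")],
    }

    def permit_ingress(interface, src_addr):
        """BCP 38-style check: drop packets whose source address could not
        legitimately originate behind this interface."""
        src = ipaddress.ip_address(src_addr)
        return any(src in prefix for prefix in CUSTOMER_PREFIXES.get(interface, []))

    print(permit_ingress("eth0", "203.0.113.7"))   # True  - genuine customer source
    print(permit_ingress("eth0", "192.0.2.55"))    # False - spoofed source, dropped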

Managing Access: Customer Behaviour. Access providers routinely offer free security software to customers in the hope of avoiding future help calls after a security break-in. Unfortunately, customers are often not educated about the need to install security software, and even when they are, they may lack the skills to correctly configure a complex system.

Customer behaviour in the face of security breaches is depressing:

  • All customers behave in essentially the same way.
  • Notifying customers that they have a problem has little effect on whether they take action to repair the breach.
  • Patching of breaches works in the same way as radioactive decay: a fixed proportion (about 40 percent) of the remaining vulnerable systems is patched every month after the patch becomes available (see the sketch after this list). In the large population of Internet hosts, this leaves a significant number that will be vulnerable for the rest of their working lives.
  • Lack of understanding often leads to compromised systems being replaced rather than repaired, and the same ignorance often results in reinfection during installation of the replacement.
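The “radioactive decay” pattern described in the list is simple to work out; the snippet below assumes only the roughly 40 percent monthly patch rate quoted above.

    # If roughly 40% of the still-vulnerable systems are patched each month, the
    # fraction remaining unpatched after n months is (1 - 0.40) ** n.
    PATCH_RATE = 0.40

    def still_vulnerable(months):
        return (1 - PATCH_RATE) ** months

    for months in (1, 3, 6, 12):
        print(f"after {months:2d} months: {still_vulnerable(months):.1%} still vulnerable")
    # Even after a year roughly 0.2% remain unpatched, which in a very large host
    # population is still a substantial absolute number of exploitable machines.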

Maintaining Profitability in Enterprises. Enterprises, particularly large ones, are more willing to investigate security breaches than backbone or access providers are, because such breaches can directly impact the enterprise’s operations and profitability. However, enterprise network operators are very wary of security solutions that generate false-positive alerts, because such alerts can be very costly to the enterprise if parts of the network have to be shut down unnecessarily. For this reason, most prefer prevention solutions to detection solutions and are often willing to accept some missed alerts rather than significant numbers of false positives.

Enterprises are motivated by potential losses to spend money on security tools. Consequently, a thriving market has emerged to meet the demand. Unfortunately, the tools offered provide mostly reactive solutions, such as regularly updated virus scanner databases for countering newly emerging vulnerability exploits, which leads to an ongoing arms race between security exploits and patching solutions. Workshop participants expressed concerns that this was not a sustainable situation because it does not enable us to get ahead of the attackers.

Over-engineering the Infrastructure. At present, the only effective mitigation strategy for DDoS attacks on critical infrastructure services is over-engineering. There is some concern that the runaway growth of demand, especially for DNS services, is eroding the safety margins. The expected widespread deployment of IPv6 and of the new DNS security extensions (DNSSEC) in the near future will bring new and potentially flawed software into widespread use that could be abused to generate new DDoS attacks.

Law and Regulation Playing Catch-up

In human society, legal systems provide protection from and deterrence for criminals. Laws and regulations aim to penalise criminal conduct after the fact, but if the likelihood of detection is low, the deterrent effect is also minimal. At present, the development of legal systems aimed at cyberspace crime is lagging behind the development of the crime that the legal systems are intended to deter, and the likelihood of detection of the real criminals is low.

Some of the reasons for the ineffectiveness and slow development of the law of cyberspace include:

  • The international scope of the problem. The Internet spans the globe, and crimes masterminded in one national jurisdiction may be executed by machines in one or more other countries, with victims in yet other jurisdictions. While some countries, particularly in the developed world, criminalise computer fraud and abuse, regulate unauthorised use of government and other critical infrastructure, and prohibit access to confidential information on protected computers, the laws are not uniform, which makes it difficult to prosecute criminals for offences carried out from other jurisdictions. There is also little political incentive to pursue criminals when the victims are not in the same national jurisdiction. Although there is a coalition between countries on collecting evidence of cybercrime worldwide, there is no rigorous way to trace unwanted traffic or to measure the consequences of cybercrime across national borders.
  • Pinning down the responsible organisation. A single episode of unwanted traffic and the botnets that are responsible for much of the traffic can involve many different organisations, such as owners of hosts, enterprise networks, and service providers of various kinds. Many of these organisations would see themselves as innocent parties, and others, such as the owners of compromised hosts, see no incentive to take action. This makes it extremely difficult either to regulate effectively in advance, making life difficult for the criminals, or to make any organisation responsible for cleaning up after an attack has been detected.
  • Getting the legal definitions right. Lawmakers are generally unfamiliar with the new world of cyberspace, and therefore they often lack the technical understanding necessary to specify laws precisely and in such a way that they will actually target undesirable acts without limiting legitimate use of the network. As in many areas where there are active innovation and financial incentive, the underground economy will always be seeking to push the limits by using techniques that are borderline legal and conceal evidence through complexity. The lawmakers are inevitably playing catch-up in cyberspace.
  • Quantifying the damage. Investigative authorities are already stretched, and so, active legal action tends to be restricted to cases where the harm caused exceeds a fairly high threshold. In the case of unwanted traffic, this generally means either significant damage to national infrastructure or a large, quantifiable monetary loss. Unfortunately, either (1) it is often difficult to quantify the loss, or, when financial institutions are involved, (2) there is a reluctance to admit the scale of the losses for fear of ongoing commercial damage. Consequently, much cybercrime is either not reported to the authorities or not investigated.
  • Defining unwanted traffic. Creating capabilities to limit unwanted traffic can have unwanted side effects. It needs only a shift in the definition of unwanted to move from constraining the underground economy to facilitating censorship and limiting open access. Countries already differ over what is defined as unwanted traffic; and traffic that would be seen as wholly legitimate in many countries may result in criminal prosecutions elsewhere. There is a trade-off between having audit trails to facilitate forensic analysis and providing the means to enforce censorship. Building monitoring capabilities into the network will surely result in stronger pressure from legislators, requiring that operators actually carry out monitoring.

The workshop also emphasised that, while an effective legal system is necessary to create effective deterrence for and sanctions against the parasites, it is by no means sufficient on its own. It can work only in conjunction with effective user education as well as technical solutions to unwanted traffic prevention and detection. Only a well-informed and motivated user community can collectively establish a defense against unwanted traffic in cyberspace.

Consequences

The consequences of the large volumes of unwanted traffic on the Internet today are highly detrimental. The health of the network presents a picture that is far from rosy:

  • There are no auditing systems to trace back to the sources of attacks.

  • There are big economic incentives and a rich environment to exploit.
  • There is no specific party to carry responsibility.
  • There are problems of underdeployment of the limited defensive tools that are available.
  • There are no well-established legal regulations to punish offenders.

The combination of these factors inevitably leads to ever-increasing types and volumes of unwanted traffic. However, the real threats are not the bots or DDoS attacks but the parasitic criminals behind them. Unwanted traffic is no longer aiming only for maximal disruption; in many cases, it is now a means to illicit ends, and its specific purpose is to generate financial gains for the miscreants. Their crimes cause huge economic losses, counted in multiple billions of dollars and growing.

IETF Journal editor Mirjam Kühne and friends attend the IETF 70 plenary session.
Photo Credit: Peter Lötberg, with permission

The Internet community needs to increase its awareness of the problem of unwanted traffic and take action to make the network less friendly to this type of traffic. And it needs to do so without significantly reducing the flexibility of the network that has been the key factor in the economic success of the Internet.

All Internet stakeholders can potentially contribute to the reduction of unwanted traffic. At a high level, actions should include the following:

  • Research into specific problems resulting from unwanted traffic, involving:
    • Sponsoring and funding agencies that prioritise this kind of research
    • Network operators, equipment vendors, and users who can identify the most important problems that require research effort and who can make sure that researchers are aware of them
    • Standards organisations, which should help coordinate communication between researchers and the rest of the community to identify the fundamental problems and standardise any solutions that may be found.
  • Development of a uniform global legal framework that will facilitate successful legal pursuit of the miscreants in the underground network economy across national borders. This work needs to be informed by the best possible technical expertise to ensure that it leaves Internet flexibility intact so far as is possible.
  • Appropriate regulation to require that network operators take action to minimise the effects of unwanted traffic and that they share information that will lead to mitigation of attacks and will drive miscreants out of business
  • Increased deployment of available tools, possibly aided by incentivisation through regulation or customer demand
  • Vendors that provide more-appropriate default security settings in equipment so that end hosts are less vulnerable to subversion from the moment they are deployed and without the need for sophisticated configuration by users
  • Vitally, improved education of users to make them more aware of the risks to their systems, to make them aware of the ways those risks can be mitigated, and to mobilise them so they’ll demand action from network operators when action is needed to support network security in both enterprises and homes
Many thanks to Lixia Zhang, Loa Andersson, and Danny McPherson for their feedback and review.

Above all, the Internet community needs to get ahead of the miscreants. At present, almost all activity for countering unwanted traffic is reactive, by ex post facto identification of malware and retroactive patching of security holes. Recently, there have been improvements in the use of traffic pattern analysis to identify attacks as they happen, but future work needs to be intelligence led, and it must concentrate on eliminating opportunities for miscreants before such opportunities are deployed.