Security

The Perfect Attack

By: Frederico Neves, João Damas

Date: December 7, 2007


Attacks of all types have existed on the Internet for a long time. They have targeted individual users, servers, client machines or applications, and the infrastructure itself. They have achieved their goals with varying degrees of success, and those goals have not always been apparent to observers. While early attacks may have been motivated by clever coders seeking attention, over time the reasons behind attacks have become more varied. The most common motivation now appears to be economic gain, though not necessarily gain that is legal in any jurisdiction.

In this article we examine one type of attack: one that may not be the most directly profitable for the attackers but that has proved especially threatening to potential victims. What is especially insidious about this type of attack is that it is often launched to attract the attention of the attackers’ potential customers, serving as a demonstration of skill. Indeed, there is broad agreement within the Internet security community that drumming up business is a key motivator behind attacks on root servers, since such attacks tend to make headlines even when the damage is minimal.

It is extremely difficult, if not impossible, to defend oneself against this type of attack because the traffic it generates appears nearly identical to legitimate traffic; any direct defence that blocks the attack traffic also blocks legitimate users and thereby becomes a component of the attacker’s success.

The Perfect Attack

The attack on Internet infrastructure that we describe here makes use of the Domain Name System (DNS), turning some of the DNS’s essential features to the profit of the attacker or the attacker’s customer.

The DNS is designed to use the UDP protocol of the TCP/IP suite as its main transport. UDP is a perfect match for DNS because the short question/answer exchange involved in most DNS queries is usually completed with only one packet from client to server and one packet from server to client. This is one of the characteristics that make the DNS especially scalable, as there is usually none of the overhead of session establishment during a query/answer interaction. Because an authoritative server keeps no state about its clients, it can answer a very high rate of incoming queries. Recursive servers – those that perform queries on behalf of their clients to walk the DNS tree and find the requested information – do have to maintain state while they issue the various queries that may be required to obtain the answer the client originally asked for. Even with this added burden, a small number of recursive servers can handle a large population of clients, because only a small portion of those clients will be performing queries at any given time. Recursive servers also implement the caching mechanism described in the DNS RFCs and can therefore reduce the amount of external traffic required, as long as results from previous queries are readily at hand. This caching mechanism has worked very well over the years.
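To make the exchange concrete, here is a minimal sketch in Python that builds a DNS query for an A record by hand and completes the whole transaction with one UDP datagram in each direction. The resolver address is a placeholder from the documentation range; substitute a recursive resolver you are actually allowed to query.

```python
import socket
import struct

RESOLVER = "192.0.2.53"  # placeholder address; use your own recursive resolver

def build_query(name, qtype=1, qclass=1):
    """Build a minimal DNS query: 12-byte header plus one question (A/IN)."""
    header = struct.pack(">HHHHHH",
                         0x1234,   # transaction ID
                         0x0100,   # flags: standard query, recursion desired
                         1,        # QDCOUNT: one question
                         0, 0, 0)  # no answer/authority/additional records
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"  # zero-length root label terminates the name
    return header + qname + struct.pack(">HH", qtype, qclass)

# One UDP datagram out, one back: no connection setup, no server-side state.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(build_query("www.example.com"), (RESOLVER, 53))
response, server = sock.recvfrom(512)  # 512 bytes: the classic DNS/UDP limit
print(f"{len(response)}-byte answer in a single packet from {server}")
```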

On the other hand, the connectionless, stateless character of the DNS and its preferred UDP transport is also its Achilles’ heel when it comes to differentiating the traffic arriving at a server. A client need only send a single packet to a server to trigger a response, with the corresponding work being performed on the server side. The effect on the server and the network is therefore much more severe than that of the single-packet SYN attacks against TCP stacks that were popular some years ago.

In addition, even though IP packets carry both a destination and a source address in their header, the source address is generally examined only when the packet arrives at its final destination, and then used only to address the reply sent by the destination server, if any is to be sent.
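The consequence is easy to demonstrate with any stateless UDP service. The toy responder sketched below (hypothetical port and payload, and deliberately not a DNS server) answers whichever address the arriving packet claims as its source, because that claim is the only information it has about the client.

```python
import socket

# A toy UDP responder: like a DNS server, it has no session or handshake,
# so its only notion of "client" is the source address on each packet.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))  # hypothetical port chosen for illustration

while True:
    data, claimed_source = sock.recvfrom(4096)
    # If the source address was forged, this reply goes to the victim,
    # not to the host that actually sent the packet.
    sock.sendto(b"reply to " + repr(claimed_source).encode(), claimed_source)
```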


The combination of these characteristics opens up the possibility for what are now called reflector attacks. In a reflector attack, a series of compromised hosts sends correctly formed DNS queries to recursive resolvers they can reach over the network, but the queries are crafted so that the source IP address of the UDP packet is not that of the actual sending host but that of a victim. When such a packet arrives at a recursive resolver, the answer is sent not to the host that sent the original packet but to the host with the forged source (now destination) address. By using a number of these recursive resolvers spread around the network, an attacker can concentrate the individual resolvers’ streams of traffic onto the victim, generating traffic levels that are unbearable for the target host or the networks it lives in and effectively causing the service to collapse.
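Some rough arithmetic shows why this concentration works so well. The figures below are illustrative assumptions rather than measurements: a small query that elicits a much larger response gives the attacker a multiplier on the bandwidth of the compromised hosts, and the per-reflector contribution can be kept low enough to pass unnoticed everywhere except at the victim.

```python
# Illustrative figures only; real sizes depend on the query and zone contents.
query_bytes    = 60    # small, spoofed DNS query sent by a compromised host
response_bytes = 4000  # large response (EDNS0 allows well beyond 512 bytes)
amplification  = response_bytes / query_bytes
print(f"amplification factor: ~{amplification:.0f}x")  # ~67x

# Each reflector contributes modestly; the victim sees the sum of all of them.
reflectors          = 100  # open recursive resolvers used as reflectors
per_reflector_mbps  = 2    # low enough to go unnoticed at each resolver
victim_traffic_mbps = reflectors * per_reflector_mbps
print(f"traffic converging on the victim: {victim_traffic_mbps} Mbit/s")  # 200
```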

The number of hosts used in the attack and their bandwidth vary with what the attackers have at their disposal: sometimes a few well-connected machines on campus LANs with good Internet access, other times botnets of hosts behind domestic broadband links. In all cases, careful distribution of the traffic among originating hosts and recursive resolvers can keep traffic levels undetectable until the traffic gets close to the victim’s network and becomes focused on a single point.

The victim is always in a difficult position because it is usually the network itself that gets saturated. Even if the network administrators block incoming traffic at their border routers, it is probably too late: by that point their incoming links are already full.

The attack becomes a perfect attack when the victim is itself an authoritative name server. The server administrator is then faced with the problem of not being able to distinguish between real queries and attack traffic. Each of the recursive resolvers being used as reflectors is likely to also provide service for a community, and simply blocking traffic from one of the reflectors will render the service unavailable for the entire community served by it.

Mitigating these attacks usually requires collaboration between the organisations responsible for the victim servers and the ISPs that carry the traffic, who together attempt to trace the traffic back to its origin and find the command and control centre that coordinates the attacking hosts. So far, the usual reaction has been to increase capacity in order to improve the chances of surviving such an attack.

While these attacks may take a number of forms, only a few measures can be mentioned here that at least help close the door on some of the possibilities. The first and easiest measure is for the administrators of recursive resolvers to configure their servers so that they provide service only for their intended audience and not for the entire Internet. It is still quite common today to see a recursive resolver that will answer any machine on the Internet – a holdover from older, gentler times. The IETF is working on a recommendation for these administrators in the hope of seeing some improvement. The recommendation is also aimed at DNS software vendors, asking them to change their default settings so that service is provided by default only to a relevant population – for example, machines using the same IP prefixes as the server’s interfaces. It is good to see vendors already taking action in this area.
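In terms of logic, the check this recommendation asks for amounts to matching the source address of each query against the prefixes the resolver is meant to serve before any recursive work is done. The following sketch shows that logic in Python; the prefix list is of course a made-up example, and each operator would supply their own.

```python
from ipaddress import ip_address, ip_network

# Hypothetical service population: the operator's own prefixes.
ALLOWED_PREFIXES = [ip_network("192.0.2.0/24"), ip_network("2001:db8::/32")]

def may_recurse(source_addr: str) -> bool:
    """Answer recursive queries only for the intended audience."""
    addr = ip_address(source_addr)
    return any(addr in prefix for prefix in ALLOWED_PREFIXES)

assert may_recurse("192.0.2.7")        # a local client: served
assert not may_recurse("203.0.113.9")  # anyone else: refused, not reflected
```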

The second and more difficult option is for ISPs to check the source IP address of packets in their networks and weed out the ones that shouldn’t be there. This can itself be tricky.

ISPs do have knobs in their routing configurations, referred to as policy routing, that can inspect the source address of packets for a variety of reasons, but these are mostly used for traffic categorisation – for instance, sending traffic from certain source addresses or UDP/TCP ports over connections with more tightly controlled timing characteristics than others.

A less frequent practice is the use of router capabilities that check whether the source addresses of packets fall within the set of addresses that should be seen coming in via a given router interface. In principle, only the networks behind that interface should be sending packets to it with their addresses as sources. This is generally the case for customer networks behind their ISP’s access routers, or for enterprises sending traffic through their office or campus routers to their upstream providers. The picture gets harder when ISPs with multiple peers are involved. When multiple paths are available for traffic exchange, asymmetric traffic is a definite possibility: any such network is likely to see traffic intended for one destination exit via one path and the response enter via an entirely different path. This can be due to engineering decisions at the ISP in question or to similar decisions made elsewhere in the network. Whatever the case, it is a feature that provides one of the pillars of the Internet’s resilience and therefore should not be seen as a problem.
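Conceptually, the strict form of this check (commonly known as unicast reverse path forwarding) looks up the packet’s source address in the router’s own forwarding table and accepts the packet only if the best route back to that source points out the interface the packet arrived on. A simplified Python sketch with a made-up forwarding table follows; it is exactly this check that the asymmetric-routing cases just described can defeat, which is why it works best at the customer-facing edge.

```python
from ipaddress import ip_address, ip_network

# Hypothetical forwarding table: longest-prefix match decides which
# interface a destination (or, here, a source) is reached through.
FIB = {
    ip_network("192.0.2.0/24"):    "customer-A",
    ip_network("198.51.100.0/24"): "customer-B",
    ip_network("0.0.0.0/0"):       "upstream",
}

def best_interface(addr):
    """Longest-prefix match against the forwarding table."""
    matches = [(net, ifc) for net, ifc in FIB.items() if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

def strict_urpf_accept(source: str, arriving_interface: str) -> bool:
    """Accept only if the route back to the source uses the same interface."""
    return best_interface(ip_address(source)) == arriving_interface

assert strict_urpf_accept("192.0.2.10", "customer-A")        # genuine source
assert not strict_urpf_accept("198.51.100.5", "customer-A")  # spoofed source
```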

Encouraging this sort of check at the internal edges of networks, where the core of the ISP network faces its customers, has the best chance of success. Most current router software for ISPs already includes features that allow the check to be performed without complex configuration, based on dynamic data rather than on static and therefore harder-to-maintain configuration.

As can be seen, the involvement and cooperation of all parties are required for a complete solution to this sort of problem. While many will consider that kind of coordination utopian, it is also one of the basic features of the Internet: the interconnected mesh of disparate networks out in the world.