
DNS reflection / amplification attacks: security economics, nudge theory, and perverse incentives.

11th Apr 2013 | 18:14

In recent months DNS amplification attacks have grown to become a serious problem, the most recent peak being the 300 Gbit/s Spamhaus DDoS attack which received widespread publicity partly due to effective PR by CloudFlare and a series of articles in the New York Times (one, two, three).

Amplification attacks are not specific to the DNS: any service that responds to a single datagram with a greater number and/or size of reply datagrams can be used to magnify the size of an attack. Other examples are ICMP (as in the smurf attacks that caused problems about 15 years ago) and SNMP (which has not yet been abused on a large scale).
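To put rough numbers on it (the packet sizes below are illustrative, not measurements), the bandwidth amplification factor is just the ratio of response bytes to request bytes:

    # Rough bandwidth amplification factor of a request/response service.
    # Sizes are illustrative: a ~64-byte DNS query with EDNS0 can elicit
    # a multi-kilobyte response, e.g. for ANY queries on signed zones.
    def amplification_factor(request_bytes: int, response_bytes: int) -> float:
        return response_bytes / request_bytes

    print(amplification_factor(64, 3000))  # ~47x: each spoofed byte hits ~47x harder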

The other key ingredient is the attacker's ability to spoof the source address on packets, in order to direct the amplified response towards the ultimate victim instead of back to the attacker. The attacks can be defeated either by preventing amplification or by preventing spoofed source addresses.

Smurf attacks were stopped by disabling ICMP to broadcast addresses, both in routers (dropping ICMP to local broadcast addresses) and hosts (only responding to directly-addressed ICMP). This fix was possible since there is very little legitimate use for broadcast ICMP. The fix was successfully deployed mainly because vendor defaults changed and equipment was upgraded. Nudge theory in action.

If you can't simply turn off an amplifier, you may be able to restrict it to authorized users, either by IP address (as with recursive DNS servers) or by application-level credentials (such as SNMP communities), or both. It is easier to police any abuse if the potential abusers are local. Note that if you are restricting UDP services by IP address, you also need to deploy spoofed source address filtering to prevent remote attacks which have both amplifier and victim on your network. There are still a lot of vendors shipping recursive DNS servers that are open by default; this is improving slowly.
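The address check itself is trivial; the following sketch (with invented prefixes) shows the kind of test a recursive server applies before serving a client:

    import ipaddress

    # Hypothetical networks allowed to use this recursive DNS server.
    ALLOWED = [ipaddress.ip_network(n) for n in ("192.0.2.0/24", "2001:db8::/32")]

    def may_recurse(client: str) -> bool:
        """Serve recursion only to clients inside our own networks."""
        addr = ipaddress.ip_address(client)
        return any(addr in net for net in ALLOWED)

    print(may_recurse("192.0.2.7"))    # True: local client
    print(may_recurse("203.0.113.9"))  # False: refuse, don't amplify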

But some amplifiers, such as authoritative DNS servers, are hard to close because they exist to serve anonymous clients across the Internet. It may be possible to quash the abuse by suitably clever rate-limiting which, if you are lucky, can be as easy to use as DNS RRL; without sufficiently advanced technology you have to rely on diligent expert operators.

There is a large variety of potential amplifiers which each require specific mitigation; but all these attacks depend on spoofed source addresses, so they can all be stopped by preventing spoofing. This has been recommended for over ten years (see BCP 38 and BCP 84) but it still has not been deployed widely enough to stop the attacks. The problem is that there are not many direct incentives to do so: there is the reputation for having a well-run network, and perhaps a reduced risk of being sued or prosecuted - though the risk is nearly negligible even if you don't filter. Malicious traffic does not usually cause operational problems for the originating network in the way it does for the victims and often also the amplifiers. There is a lack of indirect incentives too: routers do not filter spoofed packets by default.
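The filtering rule BCP 38 asks for is conceptually simple: at the customer-facing edge, drop any packet whose source address is not in the prefixes assigned to that customer. A sketch, with hypothetical interfaces and prefixes:

    import ipaddress

    # Hypothetical mapping from customer-facing interface to assigned prefixes.
    ASSIGNED = {
        "cust1": [ipaddress.ip_network("198.51.100.0/24")],
        "cust2": [ipaddress.ip_network("203.0.113.0/25")],
    }

    def ingress_permitted(interface: str, src: str) -> bool:
        """BCP 38 ingress filter: accept a packet only if its source
        address belongs to the customer attached to this interface."""
        addr = ipaddress.ip_address(src)
        return any(addr in net for net in ASSIGNED.get(interface, []))

    print(ingress_permitted("cust1", "198.51.100.10"))  # True: genuine source
    print(ingress_permitted("cust1", "192.0.2.1"))      # False: spoofed, drop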

There are a number of disincentives. There is a risk of accidental disruption due to more complicated configuration. Some networks view transparency as more desirable than policing the security of their customers. And many networks do not want to risk losing money if the filters cause problems for their customers.

As well as unhelpful externalities there are perverse incentives: source address spoofing has to be policed by an edge network provider that is acting as an agent of the attacker - perhaps unwittingly, but they are still being paid to provide insecure connectivity. Corrupt network service providers even have a positive incentive to let criminals evade spoofing filters. The networks that feel the pain are unable to fix the problem.

More speculatively, if we can't realistically guarantee security near the edge, it might be possible to police spoofing throughout the network. For this to be possible, we need a comprehensive registry of which addresses are allocated to which networks (a secure whois), and a solid idea of which paths can be used to reach each network (secure BGP). This might be enough to configure useful packet filters, though it will have similar scalability problems to address-based packet forwarding.
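As a sketch of the idea (all registry data invented), such a filter reduces to checking that a packet's source address is registered to a network from whose direction it could plausibly arrive:

    import ipaddress

    # Hypothetical secure registry: which network (AS) holds which prefixes.
    REGISTRY = {
        64500: [ipaddress.ip_network("192.0.2.0/24")],
        64501: [ipaddress.ip_network("198.51.100.0/24")],
    }

    def source_plausible(src: str, neighbour_as: int) -> bool:
        """Accept a packet only if its source address is registered to the
        network it appears to have come from; secure whois plus secure BGP
        would be needed to build and trust a table like this at scale."""
        addr = ipaddress.ip_address(src)
        return any(addr in net for net in REGISTRY.get(neighbour_as, []))

    print(source_plausible("192.0.2.55", 64500))  # True
    print(source_plausible("192.0.2.55", 64501))  # False: not their address space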

So we will probably never be able to eliminate amplification attacks, though we ought to be able to reduce them to a tolerable level. To do so we will have to reduce both datagram amplifiers and source address spoofing as much as possible, but neither of these tasks will ever be complete.


Comments (7)

from: Pete Stevens [ex-parrot.com]
date: 11th Apr 2013 18:14 (UTC)

Experience from running a hosting company: the probability of having an open resolver scales inversely with the amount of money paid for the service. Consequently the people with the smallest bandwidth quotas got the largest excess bills, and/or were very unhappy when we rate-limited them to 10Mbps until they fixed it.


The Bellinghman

from: bellinghman
date: 11th Apr 2013 22:15 (UTC)

Is it possible to prevent spoofing+amplification in new protocols by insisting on an extra exchange step? So something like:

Client: transmits initial packet to server
Server: returns single packet containing a key which encodes the client IP
Client: sends request packet with key and actual request
Server: replies with multiple packets if source IP matches IP encoded in key

Yes, it increases latency, but my initial assumption is that the extra exchange should be required only once per session.
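One way the key could work (a sketch; the HMAC construction and sizes here are my invention, not any particular protocol):

    import hashlib, hmac, os

    SECRET = os.urandom(16)  # per-server secret, rotated periodically

    def make_key(client_ip: str) -> bytes:
        """The single small reply carries a MAC over the client address,
        so the server stays stateless and only the true holder of that
        address ever receives a usable key."""
        return hmac.new(SECRET, client_ip.encode(), hashlib.sha256).digest()[:8]

    def check_key(client_ip: str, key: bytes) -> bool:
        return hmac.compare_digest(make_key(client_ip), key)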


Tony Finch

from: fanf
date: 12th Apr 2013 00:05 (UTC)

Yes. This is how TCP works :-)

It makes sense to establish a session if you expect to be transferring a significant amount of data - more than a few packets. That is rare in the DNS. It is possible to force a fallback to TCP in the DNS without amplification, but it massively hurts performance: it triples the latency, and it requires servers to keep a lot more state in RAM for a lot more time. A DNS protocol exchange normally requires one round trip:

Client: sends query
Server: sends reply

With fallback to TCP there are three round trips before the reply, after which there are extra cleanup steps:

Client: sends query
Server: sends minimal reply (so that it does not amplify) with the "truncated" bit set
Client: SYN
Server: SYN ACK
Client: ACK, repeat query
Server: full reply
Server: FIN
Client: FIN ACK
Server: ACK
Server keeps state for TIME_WAIT

TCP has a bad feature: after closing a session there is a timeout during which one end has to keep state in order to prevent interference between distinct sessions that happen to have the same endpoint addresses and ports. In TCP this state has to be kept at the endpoint that initiates the close; in many application protocols this is the server, but it would be better to distribute the timeout state to the clients.

DNS RRL is cunning: only if a client hits the rate limit does it send minimal truncated replies. Thus only clients that are unlucky enough to be mixed up in an attack have to do the high-latency fallback to TCP. It is usually the case that when someone suggests a good attack mitigation strategy for DNS amplification, RRL already includes a better version of it :-)
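A toy version of that behaviour (nothing like as sophisticated as real RRL, which classifies responses and adds leak/slip ratios) keeps a token bucket per client prefix and, when the bucket is empty, substitutes a minimal truncated reply:

    import time

    RATE, BURST = 5.0, 10.0  # full replies per second per prefix (illustrative)
    buckets = {}             # client prefix -> (tokens, last update time)

    def respond(prefix: str, full_reply: bytes, truncated_reply: bytes) -> bytes:
        """Toy response rate limiter: under the limit, answer normally; over
        it, send a tiny reply with the TC bit set so genuine clients retry
        over TCP while a spoofed flood gets no amplification."""
        now = time.monotonic()
        tokens, last = buckets.get(prefix, (BURST, now))
        tokens = min(BURST, tokens + (now - last) * RATE)
        if tokens >= 1.0:
            buckets[prefix] = (tokens - 1.0, now)
            return full_reply
        buckets[prefix] = (tokens, now)
        return truncated_reply  # minimal, TC=1: forces TCP fallback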

The DNS is a very weird protocol: basically nothing else combines relatively small responses with caching, so that full resolvers do not need to interact with authoritative servers very often. It is also lucky (apart from misconfigured firewalls) that the protocol includes fallback to TCP. Other datagram protocols (including TCP setup!) have been designed to avoid amplification, partly because one-shot interactions are usually not worth optimising, but also because spoofing has long been known to be a problem. So for twenty years protocol designers have mostly been avoiding DNS-like designs.


The Bellinghman

from: bellinghman
date: 12th Apr 2013 08:21 (UTC)

Ah, thanks. (I'm glad to see my uneducated intuition isn't too ignorant.)


Andrew

from: nonameyet
date: 13th Apr 2013 08:12 (UTC)

The 6 March issue of New Scientist (http://www.newscientist.com/article/mg21729075.800-chinas-nextgeneration-internet-is-a-worldbeater.html) had an article about how China's internet is ahead of the west because of its progress with (a) IPv6 and (b) SAVA, the Source Address Validation Architecture.

The article appears to be based on a Royal Society paper (http://rsta.royalsocietypublishing.org/content/371/1987/20120387) which mentions RFC 5210.

This all sounds like they are trying to address spoofing too.



from: cozminsky
date: 3rd May 2013 02:02 (UTC)

Could the RIRs be used as a tool to enforce BCP 38/84? E.g. being audited for compliance with these practices on production networks becomes a requirement for being given IP address space, with a specific date in the future after which non-compliant networks' address space would be yanked.


Tony Finch

from: fanf
date: 3rd May 2013 11:29 (UTC)

I don't think the RIRs have much real ability to punish ISPs for breaking the rules.
