21st Feb 2013 | 13:00

Dan Bernstein recently published slides for a talk criticizing DNSSEC. The talk is framed as a description of the fictitious protocol HTTPSEC, which is sort of what you would get if you applied the DNSSEC architecture to HTTP. This seems to be a rhetorical device to make DNSSEC look stupid, which is rather annoying because it sometimes makes his points harder to understand, and if his arguments are strong they shouldn't need help from extra sarcasm.

The analogy serves to exaggerate the apparent foolishness because there are big differences between HTTP and the DNS. HTTP deals with objects thousands of times larger than the DNS, and (in simple cases) a web site occupies a whole directory tree whereas a DNS zone is just a file. Signing a file isn't a big deal; signing a filesystem is troublesome.

There are also differences in the way the protocols rely on third parties. Intermediate caches are ubiquitous in the DNS, and relatively rare in HTTP. The DNS has always relied a lot on third party secondary authoritative servers; by analogy HTTP has third party content delivery networks, but these are a relatively recent innovation and the https security model was not designed with them in mind.

The DNS's reliance on third parties (users relying on their ISP's recursive caches; master servers relying on off-site slaves) is a key part of the DNSSEC threat model. It is designed to preserve the integrity and authenticity of the data even if these intermediaries are not reliable. That is, DNSSEC is based on data security rather than channel security.

I like to use email to explain this distinction. When I connect to my IMAP server over TLS, I am using a secure channel: I know I am talking to my server and that no-one can falsify the data I receive. But the email I download over this channel could have reached the server from anywhere, and it can contain all sorts of fraudulent messages. If, however, I get a message signed with PGP or S/MIME, I can be sure the data itself is authentic.

DJB uses the bad analogy with HTTP to mock DNSSEC, describing the rational consequences of its threat model as "not a good approach" and "stupid". I would prefer to see an argument that tackles the reasons for DNSSEC's apparently distasteful design. For example, DJB prefers an architecture where authoritative servers have private keys used for online crypto. So if you want outsourced secondary service you have to make a rather difficult trust trade-off. It becomes even harder when you consider the distributed anycast servers used by the root and TLDs: a lot of the current installations cannot be upgraded to the level of physical security that would be required for such highly trusted private keys. And there is the very delicate political relationship between ICANN and the root server operators.

So the design of DNSSEC is based on an assessment that the current DNS has a lot of outsourcing to third parties that we would prefer not to have to trust, but at the same time we do not want to fix this trust problem by changing the commercial and political framework around the protocol. You might legitimately argue that this assessment is wrong, but DJB does not do so.

What follows is a summary of DJB's arguments, translated back to DNSSEC as best I can, with my commentary. The PDF has 180 pages because of the way the content is gradually revealed, but there are fewer than 40 substantive slides.

Paragraphs starting with a number are my summary of the key points from that page of the PDF. Paragraphs marked with a circle are me expanding or clarifying DJB's points. Paragraphs marked with a square are my corrections and counter-arguments.

  1. HTTPSEC motivation
  2. Standard security goals
  3. HTTPSEC "HTTP Security"
    responses signed with PGP
    • DNSSEC uses its own signature format: RRSIG records.

  4. Signature verification
    chain of trust
    • Internet Central Headquarters -> ICANN.

  5. Root key management
    • The description on this slide is enormously simplified, which is fair because DNSSEC root key management involves a lot of paranoid bureaucracy. But it gets some of these tedious details slightly wrong; fortunately this has no bearing on the argument.

    • Access to the DNSSEC root private key HSM requires three out of seven Crypto Officers. There are also seven Recovery Key Share officers, five of whom can reconstruct the root private key. And there are separate people who control physical access to the HSM, and people who are there to watch everything going according to plan.

    • Root zone signatures last a week, but the root zone is modified several times a day and each modification requires signing, using a private key (the ZSK) that is more easily accessed than the root key (the KSK), which only comes out every three months when the public keys are signed.
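
The m-of-n structure of the Recovery Key Shares is classic Shamir secret sharing. As a toy illustration only (a sketch over a prime field; the real ceremony happens inside an HSM and is far more elaborate), a 5-of-7 split might look like this:

```python
import random

# Toy Shamir secret sharing over a prime field (illustrative only;
# the real root-key ceremony uses an HSM, not Python).
PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

def split(secret, n=7, k=5):
    """Split `secret` into n shares, any k of which recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation of the polynomial at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def recover(shares):
    """Lagrange interpolation at x=0 recovers the secret from >= k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(123456789, n=7, k=5)
assert recover(shares[:5]) == 123456789   # any five shares suffice
assert recover(shares[2:7]) == 123456789
```

Fewer than five shares reveal nothing about the secret, which is the property the ceremony relies on.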

  6. Verifying the chain of trust
  7. HTTPSEC performance
    Precomputed signatures and no per-query crypto.
    • This design decision is not just about performance. It is also driven by the threat model.

  8. Clients don't share the work of verifying signatures
    Crypto primitives chosen for performance
    Many options
    • Another consideration is the size of the signatures: smaller is better when it needs to fit into a UDP packet, and when the signatures are so much bigger than the data they cover.

    • Elliptic curve signatures are now an excellent choice for their small size and good performance, but they are relatively new and have been under a patent cloud until fairly recently. So DNSSEC mostly uses good old RSA which has been free since the patent expired in 2000. If crypto algorithm agility works, DNSSEC will be able to move to something better than RSA, though it will probably take a long time.

    • Compare TLS crypto algorithm upgrades.
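
The size argument is easy to quantify. Back-of-the-envelope, using raw signature values and ignoring record overhead:

```python
# Rough on-the-wire signature sizes, showing why elliptic curves appeal
# when answers must fit in a UDP packet and signatures dwarf the data.
rsa_2048 = 2048 // 8         # an RSA signature is one modulus-sized integer
ecdsa_p256 = 2 * (256 // 8)  # an ECDSA P-256 signature is two 32-byte integers
assert rsa_2048 == 256
assert ecdsa_p256 == 64      # a quarter of the size
```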

  9. Breakable choices such as 640-bit RSA
    Staggering complexity
    • Any fixed choice of crypto primitive is going to be broken at some point, so there must be some way to upgrade the crypto over time. DNSSEC relies on signatures which in turn rely on hashes, and hash algorithms have generally had fairly short lifetimes.

    • Compare SSL/TLS's history of weak crypto. Both protocols date back to the mid 1990s.

    • The complexity of DNSSEC is more to do with the awkward parts of the DNS, such as wildcards and aliases, and not so much the fact of crypto algorithm agility.

  10. HTTPSEC does not provide confidentiality
    • Yes this is a bit of a sore point with DNSSEC. But observe that in pretty much all cases, immediately after you make a DNS query you use the result to connect to a server, and this reveals the contents of the DNS reply to a snooper. On the other hand there are going to be uses of the DNS which are not so straightforward, and this will become more common as more people use the security properties of DNSSEC to put more interesting data in the DNS which isn't just related to IP connectivity.

  11. HTTPSEC data model
    signatures alongside data
    • This slide makes DNSSEC signing sound a lot more fiddly than it actually is. HTTP deals with sprawling directory trees whereas most DNS master files are quite small, e.g. 33Mbytes for a large-ish signed zone with 51k records. DNSSEC tools deal with zones as a whole and don't make the admin worry about individual signatures.

  12. HTTPSEC purists say "answers should always be static"
    • In practice DNSSEC tools fully support dynamic modification of zones, e.g. the .com zone is updated several times a minute. In many cases it is not particularly hard to add DNSSEC signing as a publication stage between an existing DNS management system and the public authoritative servers, and it often doesn't require any big changes to that system.

    • Static data is supported so that it is possible to have offline keys like the root KSK, but that does not prevent dynamic data. (For a funny example, see the DNSSEC reverse polish calculator.) In a system that requires dynamic signatures, static data and offline keys are not possible.

  13. Expiry times stop attackers replaying old signatures
    Frequent resigning is an administrative disaster
    • This is true for some of the early unforgiving DNSSEC tools, but there has been a lot of improvement in usability and reliability in the last couple of years.

  14. HTTPSEC suicide examples
    • These look like they are based on real DNSSEC cockups.

    • Many problems have come from the US government DNSSEC deployment requirement of 2008, so many .gov sites set it up using the early tools with insufficient expertise. It has not been a very good advertisement for the technology.

  15. Nonexistent data - how not to do it
  16. NSEC records for authenticated denial of existence
  17. NSEC allows zone enumeration
  18. DNS data is public
    an extreme notion and a public-relations problem
    • The other problem with NSEC is that it imposes a large overhead during the early years of DNSSEC deployment, when TLDs mostly consist of insecure delegations. Every update requires fiddling with NSEC records and signatures even though this provides no security benefit. NSEC3 opt-out greatly reduces this problem.
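
Zone enumeration via NSEC is mechanical: each NSEC record names the next owner in canonical order, so a resolver can walk the chain with one query per name. A toy sketch, where `lookup_nsec` is a hypothetical stand-in for a real DNS query:

```python
# Toy illustration of NSEC zone walking: every denial-of-existence
# answer reveals the next name in the zone, so the whole zone leaks.
ZONE = ["example.", "a.example.", "mail.example.", "www.example."]  # canonical order

def lookup_nsec(name):
    """Stand-in for a DNS query: return the (owner, next) NSEC pair for `name`."""
    for i, owner in enumerate(ZONE):
        if owner == name:
            return owner, ZONE[(i + 1) % len(ZONE)]  # chain wraps at the end
    raise KeyError(name)

def walk(apex):
    """Enumerate every name in the zone by following NSEC 'next' pointers."""
    names, cur = [], apex
    while True:
        owner, nxt = lookup_nsec(cur)
        names.append(owner)
        if nxt == apex:          # chain wrapped back to the apex: done
            return names
        cur = nxt

assert walk("example.") == ZONE
```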

  19. NSEC3 hashed denial of existence
  20. NSEC3 does not completely prevent attackers from enumerating a zone
    • This is true but in practice most sites that want to keep DNS data private use hidden zones that are only accessible on their internal networks.

    • Alternatively, if your name server does on-demand signing rather than using pre-generated signatures, you can use dynamic minimally covering NSEC records or empty NSEC3 records.

    • So there are ways to deal with the zone privacy problem if static NSEC3 isn't strong enough for you.
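
For the curious, the NSEC3 hash itself is simple: an iterated, salted SHA-1 over the wire-format owner name, encoded in base32hex (RFC 5155). A minimal sketch:

```python
import base64, hashlib

def wire_name(name):
    """DNS wire format of a lowercased, fully-qualified name."""
    out = b""
    for label in name.lower().rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

# base32 -> base32hex alphabet translation (RFC 4648 "extended hex")
_B32_TO_HEX = bytes.maketrans(
    b"ABCDEFGHIJKLMNOPQRSTUVWXYZ234567",
    b"0123456789ABCDEFGHIJKLMNOPQRSTUV")

def nsec3_hash(name, salt=b"", iterations=0):
    """RFC 5155 NSEC3 hash: iterated, salted SHA-1 of the wire-format name."""
    digest = hashlib.sha1(wire_name(name) + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32encode(digest).translate(_B32_TO_HEX).decode().lower()

# Test vector from RFC 5155 Appendix A: salt AABBCCDD, 12 iterations.
assert nsec3_hash("example", bytes.fromhex("AABBCCDD"), 12) == \
    "0p9mhaveqvm6t7vbl5lop2u3t2rp3tom"
```

The low, fixed iteration count and public salt are why hashing slows enumeration down rather than preventing it.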

  21. DNS uses UDP
  22. DNS can be used for amplification attacks
    • A lot of other protocols have this problem too. Yes, DNSSEC makes it particularly bad. DNS software vendors are implementing response rate limiting which eliminates the amplification effect of most attacks. Dealing with spam and criminality is all rather ugly.

    • A better fix would be for network providers to implement ingress filtering (RFC 2827, BCP 38), but sadly this seems to be an impossible task, so higher-level protocols have to mitigate vulnerabilities in the network.
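
The amplification arithmetic is simple and ugly. With illustrative sizes (assumptions, not measurements):

```python
# Back-of-the-envelope DNS amplification: the attacker spoofs the
# victim's source address, and the victim receives the large reply.
# Sizes here are illustrative assumptions, not measurements.
query = 64        # small UDP query ("ANY example.com" with EDNS0), bytes
reply = 3000      # large DNSSEC-signed response, bytes
amplification = reply / query
assert amplification > 40   # tens-of-times bandwidth multiplication

# To aim 10 Gbit/s at a victim, the attacker then needs only:
attacker_bw = 10e9 / amplification
assert attacker_bw < 250e6  # well under 250 Mbit/s of spoofed queries
```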

  23. DNSSEC provides no protection against denial of service attacks
    • This quote comes from RFC 4033 section 4. I think (though it isn't entirely clear) that what the authors had in mind was the fact that attackers can corrupt network traffic or stop it, and DNSSEC can do nothing to prevent this. See for example section 4.7 of RFC 4035 which discusses how resolvers might mitigate this kind of DoS attack. So the quote isn't really about reflection attacks.

    • (Other protocols have similar problems; for instance, TLS kills the whole connection when it encounters corruption, so it relies on the difficulty of breaking TCP connections which are not normally hardened with crypto - though see the TCP MD5 signature option in RFC 2385.)

  24. The worst part of HTTPSEC

    The data signed by HTTPSEC doesn’t actually include the web pages that the browser shows to the user.

    HTTPSEC signs only routing information: specifically, 30x HTTP redirects.
    • I can't easily understand this slide because the analogy between DNS and HTTP breaks down. According to the analogy, HTTP redirects are DNS referrals, and web pages are leaf RRsets. But DNSSEC does sign leaf RRsets, so the slide can't be talking about that.

    • Perhaps it is being more literal and it is talking about actual web pages. The DNSSEC answer is that you use records such as SSHFP or TLSA to link the DNSSEC authentication chain to your application protocol.

    • I asked DJB about this on Twitter, and he confirmed the latter interpretation. But he complained that the transition from signed DNSSEC data to an encrypted application channel destroys the advantages of signed DNSSEC data, because of the different security models behind signed data and encrypted channels.

    • But in this situation we are using DNSSEC as a PKI. The X.509 PKI is also based on statically signed data (the server certificate) which is used to authenticate a secure channel.
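
A TLSA record (RFC 6698) is just such a piece of statically signed data: it binds a certificate, or a digest of one, to a name and port. A hedged sketch of constructing the record data, using a placeholder blob where real DER-encoded certificate bytes would go:

```python
import hashlib

def tlsa_rdata(cert_der, usage=3, selector=0, matching=1):
    """Presentation form of TLSA record data (RFC 6698). Defaults:
    usage 3 (DANE-EE), selector 0 (full certificate), matching 1 (SHA-256)."""
    digest = hashlib.sha256(cert_der).hexdigest()
    return f"{usage} {selector} {matching} {digest}"

# Placeholder bytes stand in for a real DER certificate.
fake_cert = b"not-a-real-certificate"
record = "_443._tcp.example.com. IN TLSA " + tlsa_rdata(fake_cert)
assert record.split()[3:6] == ["3", "0", "1"]
```

The record itself is then signed with the zone's DNSSEC keys, which is what links the DNS chain of trust to the TLS handshake.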

  25. redirect example
  26. redirect example
  27. redirect example
  28. If final web page is signed, what is the security benefit of signing the redirects? Attacker can’t forge the page.
    • The answer to this question lies in how you come to trust the signature on the web page. At the moment we rely on X.509, which is inadequate in a lot of ways. DNSSEC is a new PKI which avoids some of the structural problems in the X.509 PKI. This is the reason I think it is so important.

    • The X.509 PKI was designed to follow the structure of the X.500 directory. When it got re-used for SSL and S/MIME it became decoupled from its original name space. Because of this, any CA can authenticate any name, so every name is only as strong as the weakest CA.

    • DNSSEC follows the structure of the Internet's name space. Its signing authorities are the same as DNS zone authorities, and they can only affect their subdomains. A British .uk name cannot be harmed by the actions of the Libyan .ly authorities.

    • What other global PKIs are there? PGP? Bueller?

  29. Deployment is hard
    • If you look at countries like Sweden, the Czech Republic, the Netherlands, and Brazil, there is a lot more DNSSEC deployment than elsewhere. They have used financial incentives (domain registration discounts for signed domains) to make it more popular. Is DNSSEC worth this effort? See above.

    • It's amusing to consider the relative popularity of DNSSEC and PGP and compare their usage models.

  30. HTTPS is good
    • But it relies on the X.509 PKIX which is a disaster. Peter Gutmann wrote a series of articles with many examples of why: I, II, III; and the subsequent mailing list discussion is also worth reading.

  31. The following quotes are straw-man arguments for why an https-style security model isn't appropriate for DNSSEC.
  32. “HTTPS requires keys to be constantly online.”
    • So does DNSSEC in most setups. You can fairly easily make DNSSEC keys less exposed than a TLS private key, using a hidden master. This is a fairly normal non-sec DNS setup so it's nice that DNSSEC can continue to use this structure to get better security.

  33. “HTTPS requires servers to use per-query crypto.”
    • So does NSEC3.

  34. “HTTPS protects only the channel, not the data. It doesn’t provide end-to-end security.”
    Huh? What does this mean?
    • See the discussion in the introduction about the DNSSEC threat model and the next few notes.

  35. Why is the site owner putting his data on an untrusted server?
    • Redundancy, availability, diversity, scale. The DNS has always had third-party secondary authoritative name servers. HTTP also does so: content delivery networks. The difference is that with DNSSEC your outsourced authoritative servers can only harm you by ceasing to provide service: they cannot provide false service; HTTP content delivery networks can mess up your data as much as they like, before serving it "securely" to your users with a certificate bearing your name.

  36. “HTTPS destroys the caching layer. This Matters.”
    Yeah, sure it does. Film at 11: Internet Destroyed By HTTPS.
    • Isn't it nicer to get an answer in 3ms instead of 103ms?

    • Many networks do not provide direct access to DNS authoritative servers: you have to use their caches, and their caches do not provide anything like the web proxy HTTP CONNECT method - or at least they are not designed to provide anything like that. A similar facility in the DNS would have to be an underhanded crypto-blob-over-DNS tunnel hack: a sort of anarchist squatter's approach to protocol architecture.

    • To be fair, a lot of DNS middleboxes have crappy DNSSEC-oblivious implementations, and DNSSEC does not cope with them at all well. Any security upgrade to the DNS probably can't be done without upgrading everything.

  37. The DNS security mess
  38. referral example
  39. HTTPSEC was all a horrible dream analogy

So to conclude, what DJB calls the worst part of DNSSEC - that it secures the DNS and has flexible cryptographic coupling to other protocols - is actually its best part. It is a new global PKI, and with sufficient popularity it will be better than the old one.

I think it is sad that someone so prominent and widely respected is discouraging people from deploying and improving DNSSEC. It would be more constructive to (say) add his rather nice Curve25519 algorithm to DNSSEC.

If you enjoyed reading this article, you might also like to read Dan Kaminsky's review of DJB's talk at 27C3 just over two years ago.


Comments (6)

Ian Eiloart

Abysmal slides

from: IanEiloart
date: 21st Feb 2013 16:18 (UTC)

What an abysmal set of slides. At least the discouragement is so obscure as to be virtually impenetrable!


Colm MacCrthaigh

Some points ...

from: colmmacc
date: 21st Feb 2013 17:33 (UTC)

Firstly - great post, with a lot of good detail. For disclosure: my own personal opinion on DNSSEC lies somewhere between DJB's and yours. I do think that DNSSEC has design errors, but I also think that those errors can be overcome, and that they should not hold back the benefits of DNSSEC.

I think it's fair to say that DNSSEC is about maintaining the integrity of DNS responses. The lack of checksums in the original DNS protocol is a large oversight: a simple bit-flip due to cosmic rays can cause an invalid DNS response to be cached for days, denying service. Any addition of integrity data is to be welcomed, even just for that reason.

Of course, attackers can use collision attacks and MITM attacks to do even worse things that break integrity. For simple collision attacks there are probably more effective mitigations today. Case-randomisation of query names adds sufficient entropy to regular queries as to make collisions impractical - and nearly all DNS servers echo case correctly. [*]
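
The case-randomisation trick described above is easy to sketch: the resolver randomizes the 0x20 (case) bit of each letter in the query name and accepts a reply only if the name comes back byte-for-byte identical, so a blind spoofer must guess the case pattern as well as the query ID and source port. A toy illustration, not a resolver implementation:

```python
import random

def randomize_case(qname, rng):
    """Flip the 0x20 (case) bit of each letter at random, per query."""
    return "".join(
        c.upper() if c.isalpha() and rng.getrandbits(1) else c.lower()
        for c in qname)

rng = random.Random(2013)          # per-query randomness (seeded for the demo)
sent = randomize_case("www.example.com", rng)
assert sent.lower() == "www.example.com"   # same name, merely re-cased

# A blind spoofer must now also guess one case bit per letter:
letters = sum(c.isalpha() for c in "www.example.com")
assert letters == 13               # 2**13 extra guesses on top of ID and port
```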

For true man-in-the-middle attacks, e.g. the tampering a government or ISP may do, DNSSEC can certainly help detect it. In points 20 and 24 you mention the root server keys, but DJB referred to "an american corporation" in his slides, which I took to mean Verisign or other maintainers of TLDs. For DNSSEC we must trust the root zone operators and the intermediary zone operators (e.g. ".com") - but is there really any reason to? Even with the laborious procedures that are used on the root zone, we must still trust that the HSMs have not been backdoored with clever side-channels; but that style of attack is how the NSA does business (http://en.wikipedia.org/wiki/Crypto_AG#Back-doored_machines). Still, it's hard to use a root key or TLD key to form a successful attack that will go undetected (because the attacker must change the downstream key, and that is detectable). It means that DNSSEC is probably about as secure from government interference as classic CAs are, but no better. (For example, could the Chinese authorities abuse the key for .cn?)

On 160: NSEC3 doesn't imply per-query crypto. I think there are two other significant points DJB made which you don't mention, though:

  • A significant portion of the world's DNS is not served from files - but is dynamic. Some "dynamic" answers are pre-computable, but some are not. Signing DNS responses on the fly is simply not tenable for even a modest DDOS - it represents 100s of times as much computation as is ordinarily necessary to answer a DNS query. Implementing DNSSEC - with on-the-fly-signing - would represent a very very serious availability risk for many operators. Many of these organisations are moving to pre-computed pre-signable response patterns instead - but it's something that could have been made much simpler with better design in DNSSEC.

  • DDOS amplification is a very serious concern - the bandwidth multiplication problems are well known, but the mitigations are not great. DNSSEC makes a very hard problem even worse. The current "RRL" effort is itself an availability risk (what happens when someone spoofs comcast's resolver IP?), and so mitigations tend to be more involved and take significant justification.

[*] Minor rant: If even 1% of the money and energy invested in DNSSEC by governments were instead directed to maintaining open up-to-date lists of which authoritative DNS servers echo case correctly, and - for what it's worth - which DNS resolvers are associated with a working set of IPv6 clients, then much much more progress would be made on both collision-prevention and IPv6 adoption. DNS resolvers could make random-case queries on a case-by-case basis, and authoritative DNS servers could return AAAA only when it's unlikely to cause user brokenness. These kinds of simple pragmatic "works now, solves a problem" solutions work well for technologically clueful organisations with sufficient resources, while the DNS "community" (to generalise) would think them impure and in the case of the selective IPv6 responses, "lies".


Tony Finch

Re: Some points ...

from: fanf
date: 21st Feb 2013 19:39 (UTC)

Thanks for the kind comments.

The root zone is a peculiar situation. I would be surprised if there are any other zones where the KSK and ZSK are the responsibility of different organizations - that is, ICANN and Verisign respectively. Although Verisign has the technical ability to put anything it wants into the root zone, there are some very stringent contractual and procedural controls over updates: basically, Verisign is required to do what ICANN tells it to, and changes have to be vetted by the US DoC (which understandably upsets a lot of people).

The amount of trust we have to put in the root is actually quite constrained. Because resolvers and validators are aware of zone cuts and starts of authority, the root zone cannot steal names in subdomains without stealing the entire TLD in which the name falls. This is not a particularly easy thing to do, and probably impossible to do stealthily. The same is true for zone cuts further from the root, though to a lesser extent.

The Chinese authorities control the authority servers and keys for .cn so I'm not sure what it would mean for them to abuse the keys. But whatever they do, they can't screw up a .uk domain etc. which is a massive improvement in security compared to PKIX.

Re NSEC3, AIUI to return a negative reply the authority server has to hash the query name to identify which NSEC3 records to return. Hence the NSEC3PARAM record, which tells slave servers the relevant parameters for the zone's NSEC3 chain.

Regarding dynamic data, see slide 57, and if you want some more realistic examples of dynamic signers have a look at Phreebird and PowerDNS.

RRL is specifically designed NOT to be an availability risk. If someone spoofs Comcast's resolver IP then it still has about a 90% chance of receiving a minimal truncated reply in response to one of its three query attempts, at which point it will retry over TCP and get an answer.

Regarding AAAA breakage, I think the big sites have stopped using IPv6 whitelists, which implies to me that the problem is much smaller now than it was a couple of years ago.

Regarding QNAME 0x20 randomization, there is at least one resolver which uses the technique (Nominum CNS, perhaps?) because I see it in my logs. Paul Vixie listed its problems on a mailing list some time ago but I can't find the article right now. The consensus seems to be that port randomization is a good enough stop-gap until DNSSEC is adopted.

Edited at 2013-02-21 07:42 pm (UTC)


Re: Some points ...

from: Pete Stevens [ex-parrot.com]
date: 22nd Feb 2013 00:22 (UTC)

We run IPv6 nameservers and hosting on our shared servers. Since Google / Facebook implemented v6 by default we now reply to the very occasional 'IPv6 breaks my connectivity' with 'this isn't experimental any more, fuck off it's your problem' which we couldn't really do two years ago.

So that barrier to IPv6 has now gone. DNSSEC is still not yet implemented but having spent a good fraction of January cleaning up peoples open resolvers that were being used for DNS amplification and still receiving >20Mbps of DNS traffic because the DoS kiddies don't catch on to patches very quickly I'm not enthusiastic about anything that makes a DoS easier.


from: mdw [distorted.org.uk]
date: 22nd Feb 2013 00:52 (UTC)

How depressing.

I'm extremely aware of DNSsec's failings, but its authentication of the data rather than the communication is most definitely not one of them. Indeed, it's more or less the entire point, and it's a very sensible one.

I'm actually pretty happy with DNSsec operationally. I run validating resolvers on all of my servers; and I sign my main zones.

On the subject of validating resolvers, I'm slightly surprised that djb missed an opportunity for some valid gripes about the poorly secured gap between validating recursing resolvers (which can usually provide TSIG-authenticated answers) and dumb stub resolvers (which usually don't have anything like this capability, even in static configurations, and this is an obvious disaster when you bring in DHCP). But running local instances of unbound seems a plausible compromise, even if it is a bit of a memory hog (for a Raspberry Pi).

In the matter of signing authoritative data, I'm very happy. I sign my static zones nightly, out of line from the nameserver proper, as part of my general policy of rapid expiry rather than revocation, which DNSsec (sensibly) doesn't even try to handle anyway. I learn about and fix failures rapidly, and little harm is done.

I'm not quite sure why djb made so much about the lack of secrecy in DNS. The idea that DNS records might be sort-of secret seemed absurd to me in the 1990s, before I even heard of DNSsec, and still seems to now.

The comparison with HTTPS is particularly irksome, since SSL/TLS is pretty much wrong in every detail: its cryptography is old-fashioned and has been needlessly patching vulnerabilities rather than adopting new good ideas; its key-management was always abysmal in ways which were always obvious to me but for some reason are only slowly becoming well-known; and it plays -- and has always played -- badly operationally (e.g., its failure to deal with named virtual hosts prior to SNI, and its unnecessarily heavy loading of the server rather than clients).

Really, DNSsec has its problems, and some of them aren't at all pretty. But, amazingly, djb has largely missed them and attacked its good points instead. Rather odd.

