Tony Finch

Wednesday 15th October 2014

POP, IMAP, SMTP, and the POODLE SSLv3.0 vulnerability.

A lot of my day has been spent on the POODLE vulnerability. For details see the original paper, commentary by Daniel Franke, Adam Langley, Robert Graham, and the web page of stats and recommendations.

One thing I have been investigating is to what extent mail software uses SSLv3. The best stats we have come from our message submission server, which logs TLS versions and cipher suites and (when possible) User-Agent and X-Mailer headers. (The logging from our POP and IMAP servers is not so good, partly because we don't request or log user agent declarations, and even if we did most clients wouldn't provide them.)

Nearly 100 of our users are using SSLv3, which is about 0.5% of them. The main culprits seem to be Airmail, Evolution, and most of all Android. Airmail is a modern Mac MUA, so in that case I guess it is a bug or misuse of the TLS API. For Evolution my guess is that it has a terrible setup user interface (all MUAs have terrible setup user interfaces) and users are choosing "SSL 3.0" rather than "TLS 1.0" because the number is bigger. In the case of Android I don't have details of version numbers because Android mail software doesn't include user-agent headers (unlike practically everything else), but I suspect old unsupported smart-ish phones running bad Java are to blame.

I haven't decided exactly what we will do to these users yet. However we have the advantage that POODLE seems to be a lot less bad for non-web TLS clients.

The POODLE padding oracle attack requires a certain amount of control over the plaintext which the attacker is trying to decrypt. Specifically:

  1. The plaintext plus MAC has to be an exact multiple of the cipher block size;
  2. It must be possible to move the secret (cookie or password) embedded in the plaintext by a byte at a time to scan it past a block boundary.
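These two conditions can be sketched concretely. Here is an illustrative Python fragment showing how an attacker would choose padding for the request path and body; the block size, MAC length, and offsets are made-up numbers for the sake of the sketch, not taken from a real session:

```python
# Illustrative sketch of the POODLE alignment conditions.
# All lengths here are hypothetical.

BLOCK = 16    # CBC cipher block size
MAC_LEN = 20  # MAC appended to the plaintext before padding

def align_request(prefix_len, cookie_len, target_index):
    """Choose path and body padding so that:
       1. plaintext + MAC fills an exact number of blocks, and
       2. the targeted cookie byte is the last byte of some block."""
    # Condition 2: extra path bytes shift the cookie until the
    # target byte ends a block.
    path_pad = 0
    while (prefix_len + path_pad + target_index) % BLOCK != BLOCK - 1:
        path_pad += 1
    # Condition 1: extra body bytes make the whole record block-aligned.
    body_pad = 0
    length = prefix_len + path_pad + cookie_len + body_pad + MAC_LEN
    while length % BLOCK != 0:
        body_pad += 1
        length += 1
    return path_pad, body_pad

path_pad, body_pad = align_request(prefix_len=37, cookie_len=32, target_index=5)
assert (37 + path_pad + 5) % BLOCK == BLOCK - 1
assert (37 + path_pad + 32 + body_pad + 20) % BLOCK == 0
```

By re-running this with target_index 0, 1, 2, ... the attacker scans each secret byte past a block boundary in turn.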

In the web situation, the attacker can use JavaScript served from anywhere to make repeated POST requests to an arbitrary target host. The JS can manipulate the body of the POST to control the overall length of the request, and can manipulate the request path to control the position of the cookie in the headers.

In the mail situation (POP, IMAP, SMTP), the attacker can make the client retry requests repeatedly by breaking the connection, but they cannot control the size or framing of the client's authentication command.

So I think we have the option of not worrying too much if forced upgrades turn out to be too painful, though I would prefer not to go that route - it makes me feel uncomfortably complacent.

(3 comments | Leave a comment)

Monday 14th July 2014

Data structures and algorithms

"Bad programmers worry about the code. Good programmers worry about data structures and their relationships." - Linus Torvalds

"If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming." - Rob Pike

"Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious." - Fred Brooks
(7 comments | Leave a comment)

Wednesday 14th May 2014

Dilbert feeds

I just noticed that Dilbert had its 25th anniversary last month. I have created an atom feed of old strips to mark this event. To avoid leap year pain the feed contains strips from the same date 24 years ago, rather than 25 years ago. See dilbert_zoom for current strips and dilbert_24 for the old ones.
(Leave a comment)

Tuesday 25th March 2014

Update to SSHFP tutorial

I have updated yesterday's article on how to get SSHFP records, DNSSEC, and VerifyHostKeyDNS=yes to work.

I have re-ordered the sections to avoid interrupting the flow of the instructions with chunks of background discussion.

I have also added a section discussing the improved usability vs weakened security of the RRSET_FORCE_EDNS0 patch in Debian and Ubuntu.

(Leave a comment)

Monday 24th March 2014

SSHFP tutorial: how to get SSHFP records, DNSSEC, and VerifyHostKeyDNS=yes to work.

One of the great promises of DNSSEC is to provide a new public key infrastructure for authenticating Internet services. If you are a relatively technical person you can try out this brave new future now with ssh.

There are a couple of advantages to getting SSHFP host authentication to work. Firstly you get easier-to-use security, since you no longer need to rely on manual host authentication, or better security for the brave or foolhardy who trust leap-of-faith authentication. Secondly, it becomes feasible to do host key rollovers, since you only need to update the DNS - the host's key is no longer wired into thousands of known_hosts files. (You can probably also get the latter benefit using ssh certificate authentication, but why set up another PKI if you already have one?)

In principle it should be easy to get this working but there are a surprising number of traps and pitfalls. So in this article I am going to try to explain all the whys and wherefores, which unfortunately means it is going to be long, but I will try to make it easy to navigate. In the initial version of this article I am just going to describe what the software does by default, but I am happy to add specific details and changes made by particular operating systems.

The outline of what you need to do on the server is:

  • Sign your DNS zone. I will not cover that in this article.
  • Publish SSHFP records in the DNS

The client side is more involved. There are two versions, depending on whether ssh has been compiled to use ldns or not. Run ldd $(which ssh) to see if it is linked with libldns.

  • Without ldns:
    • Install a validating resolver (BIND or Unbound)
    • Configure the stub resolver /etc/resolv.conf
    • Configure ssh
  • With ldns:
    • Install unbound-anchor
    • Configure the stub resolver /etc/resolv.conf
    • Configure ssh

Publish SSHFP records in the DNS

Generating SSHFP records is quite straightforward:

    demo:~# cd /etc/ssh
    demo:/etc/ssh# ssh-keygen -r $(hostname)
    demo IN SSHFP 1 1 21da0404294d07b940a1df0e2d7c07116f1494f9
    demo IN SSHFP 1 2 3293d4c839bfbea1f2d79ab1b22f0c9e0adbdaeec80fa1c0879dcf084b72e206
    demo IN SSHFP 2 1 af673b7beddd724d68ce6b2bb8be733a4d073cc0
    demo IN SSHFP 2 2 953f24d775f64ff21f52f9cbcbad9e981303c7987a1474df59cbbc4a9af83f6b
    demo IN SSHFP 3 1 f8539cfa09247eb6821c645970b2aee2c5506a61
    demo IN SSHFP 3 2 9cf9ace240c8f8052f0a6a5df1dea4ed003c0f5ecb441fa2c863034fddd37dc9

Put these records in your zone file, or you can convert them into an nsupdate script with a bit of seddery:

    ssh-keygen -r $(hostname -f) |
        sed 's/^/update add /;s/ IN / 3600 IN /;/ SSHFP . 1 /d;'

The output of ssh-keygen -r includes hashes in both SHA1 and SHA256 format (the shorter and longer hashes). You can discard the SHA1 hashes.
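The fingerprint field is just a digest of the raw public key blob, i.e. the decoded base64 data from the host's .pub file or a known_hosts entry. A hedged Python sketch (the blob below is a made-up placeholder, not a real key; fingerprint types 1 and 2 are SHA-1 and SHA-256 per RFC 4255 and RFC 6594):

```python
import base64, hashlib

# Hypothetical key blob: in practice, take the base64 field from
# /etc/ssh/ssh_host_*_key.pub or a known_hosts entry.
blob_b64 = "AAAAB3NzaC1yc2EAAAADAQABAAAAgQC7vbqajDhA"

blob = base64.b64decode(blob_b64)

# SSHFP fingerprint type 1 = SHA-1, type 2 = SHA-256.
sha1_fp = hashlib.sha1(blob).hexdigest()
sha256_fp = hashlib.sha256(blob).hexdigest()

print(sha1_fp)    # compare against the "SSHFP <alg> 1 ..." record
print(sha256_fp)  # compare against the "SSHFP <alg> 2 ..." record
```

This is handy for checking that the records you published actually match the keys the server is using.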

It includes hashes for the different host key authentication algorithms:

  • 1: ssh-rsa
  • 2: ssh-dss
  • 3: ecdsa

I believe ecdsa covers all three key sizes for which OpenSSH has separate algorithm names: ecdsa-sha2-nistp256, ecdsa-sha2-nistp384, ecdsa-sha2-nistp521.

OpenSSH supports other host key authentication algorithms, e.g. "ed25519", but unfortunately they cannot be authenticated using SSHFP records because they do not have algorithm numbers allocated.

The problem is actually worse than that, because most of the extra algorithms are the same as the three listed above, but with added support for certificate authentication. The ssh client is able to fall back from certificate authentication to plain key authentication, but a bug in this fallback logic breaks SSHFP authentication.

So I recommend that if you want to use SSHFP authentication your server should only have host keys of the three basic algorithms listed above.

Install a validating resolver

To be safe against active network interception attacks you need to do DNSSEC validation on the same machine as your ssh client. If you don't do this, you can still use SSHFP records to provide a marginal safety improvement for leap-of-faith users. In this case I recommend using VerifyHostKeyDNS=ask to reinforce to the user that they ought to be doing proper manual host authentication.

If ssh is not compiled to use ldns then you need to run a local validating resolver, either BIND or unbound.

If ssh is compiled to use ldns, it can do its own validation, and you do not need to install BIND or Unbound.

Run ldd $(which ssh) to see if it is linked with libldns.

Install a validating resolver - BIND

The following configuration will make named run as a local validating recursive server. It just takes the defaults for everything, apart from turning on validation. It automatically uses BIND's built-in copy of the root trust anchor.


    options {
        dnssec-validation auto;
        dnssec-lookaside auto;
    };

Install a validating resolver - Unbound

Unbound comes with a utility unbound-anchor which sets up the root trust anchor for use by the unbound daemon. You can then configure unbound as follows, which takes the defaults for everything apart from turning on validation using the trust anchor managed by unbound-anchor.


    server:
        auto-trust-anchor-file: "/var/lib/unbound/root.key"

Install a validating resolver - dnssec-trigger

If your machine moves around a lot to dodgy WiFi hot spots and hotel Internet connections, you may find that the nasty middleboxes break your ability to validate DNSSEC. In that case you can use dnssec-trigger, which is a wrapper around Unbound which knows how to update its configuration when you connect to different networks, and which can work around braindamaged DNS proxies.

Configure the stub resolver - without ldns

If ssh is compiled without ldns, you need to add the following line to /etc/resolv.conf; beware your system's automatic resolver configuration software, which might be difficult to persuade to leave resolv.conf alone.

    options edns0

For testing purposes you can add RES_OPTIONS=edns0 to ssh's environment.

On some systems (including Debian and Ubuntu), ssh is patched to force EDNS0 on, so that you do not need to set this option. See the section on RRSET_FORCE_EDNS0 below for further discussion.

Configure the stub resolver - with ldns

If ssh is compiled with ldns, you need to run unbound-anchor to maintain a root trust anchor, and add something like the following line to /etc/resolv.conf

    anchor /var/lib/unbound/root.key

Run ldd $(which ssh) to see if it is linked with libldns.

Configure ssh

After you have done all of the above, you can add the following to your ssh configuration, either /etc/ssh/ssh_config or ~/.ssh/config

    VerifyHostKeyDNS yes

Then when you connect to a host for the first time, it should go straight to the Password: prompt, without asking for manual host authentication.

If you are not using certificate authentication, you might also want to disable the certificate algorithms explicitly. This is because ssh prefers them, and if you connect to a host that offers a more preferred algorithm, ssh will try that and ignore the DNS. This is not very satisfactory; hopefully it will improve when the bug is fixed.

    HostKeyAlgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,ssh-dss


Check that your resolver is validating and getting a secure result for your host. Run the following command and check for "ad" in the flags. If it is not there then either your resolver is not validating, or /etc/resolv.conf is not pointing at the validating resolver.

    $ dig +dnssec <hostname> sshfp | grep flags
    ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
    ; EDNS: version: 0, flags: do; udp: 4096

See if ssh is seeing the AD bit. Use ssh -v and look for messages about secure or insecure fingerprints in the DNS. If you are getting secure answers via dig but ssh is not, perhaps you are missing "options edns0" from /etc/resolv.conf.

    debug1: found 6 secure fingerprints in DNS
    debug1: matching host key fingerprint found in DNS

Try using specific host key algorithms, to see if ssh is trying to authenticate a key which does not have an SSHFP record of the corresponding algorithm.

    $ ssh -o HostKeyAlgorithms=ssh-rsa <hostname>


What I use is essentially:


    options {
        dnssec-validation auto;
        dnssec-lookaside auto;
    };


    options edns0


    VerifyHostKeyDNS yes

Background on DNSSEC and non-validating stub resolvers

When ssh is not compiled to use ldns, it has to trust the recursive DNS server to validate SSHFP records, and it trusts that the connection to the recursive server is secure. To find out if an SSHFP record is securely validated, ssh looks at the AD bit in the DNS response header - AD stands for "authenticated data".

A resolver will not set the AD bit based on the security status of the answer unless the client asks for it. There are two ways to do that. The simple way (from the perspective of the DNS protocol) is to set the AD bit in the query, which gets you the AD bit in the reply without other side-effects. Unfortunately the standard resolver API makes it very hard to do this, so it is only simple in theory.

The other way is to add an EDNS0 OPT record to the query with the DO bit set - DO stands for "DNSSEC OK". This has a number of side-effects: EDNS0 allows large UDP packets which provide the extra space needed by DNSSEC, and DO makes the server send back the extra records required by DNSSEC such as RRSIGs.
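The wire-format difference between the two methods can be sketched in a few lines of Python (illustrative constants only; a real client would use a resolver library):

```python
import struct

# Method 1: set the AD bit (0x0020) in the flags word of the DNS header.
qid = 0x1234
flags = 0x0100 | 0x0020                  # RD (recursion desired) + AD
header = struct.pack("!HHHHHH", qid, flags, 1, 0, 0, 0)

# Method 2: append an EDNS0 OPT pseudo-record with the DO bit set.
# The OPT record reuses the class field for the UDP payload size and
# the TTL field for extended flags; DO is the top bit of the low 16
# bits of that field.
opt = (b"\x00"                           # owner name: the root
       + struct.pack("!HHIH",
                     41,                 # RR type 41 = OPT
                     4096,               # "class" = max UDP payload size
                     0x00008000,         # "TTL" = DO bit set
                     0))                 # zero-length RDATA

assert header[2:4] == b"\x01\x20"        # AD visible in the header flags
assert opt[7:9] == b"\x80\x00"           # DO visible in the OPT TTL
```

The sketch shows why DO drags EDNS0 along with it: the bit has nowhere to live except inside the OPT record.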

Adding "options edns0" to /etc/resolv.conf only tells the stub resolver to add the EDNS0 OPT record - it does not enable DNSSEC. However ssh itself observes whether EDNS0 is turned on, and if so also turns on the DO bit.

Regarding "options edns0" vs RRSET_FORCE_EDNS0

At first it might seem annoying that ssh makes you add "options edns0" to /etc/resolv.conf before it will ask for DNSSEC results. In fact on some systems, ssh is patched to add a DNS API flag called RRSET_FORCE_EDNS0 which forces EDNS0 and DO on, so that you do not need to explicitly configure the stub resolver. However although this seems more convenient, it is less safe.

If you are using the standard portable OpenSSH then you can safely set VerifyHostKeyDNS=yes, provided your stub resolver is configured correctly. The rule you must follow is to only add "options edns0" if /etc/resolv.conf is pointing at a local validating resolver. SSH is effectively treating "options edns0" as a signal that it can trust the resolver. If you keep this rule you can change your resolver configuration without having to reconfigure ssh too; it will automatically fall back to VerifyHostKeyDNS=ask when appropriate.

If you are using a version of ssh with the RRSET_FORCE_EDNS0 patch (such as Debian and Ubuntu) then it is sometimes NOT SAFE to set VerifyHostKeyDNS=yes. With this patch ssh has no way to tell if the resolver is trustworthy or if it should fall back to VerifyHostKeyDNS=ask; it will blindly trust a remote validating resolver, which leaves you vulnerable to MitM attacks. On these systems, if you reconfigure your resolver, you may also have to reconfigure ssh in order to remain safe.

Towards the end of February there was a discussion on an IETF list about stub resolvers and DNSSEC which revolved around exactly this question of how an app can tell if it is safe to trust the AD bit from the recursive DNS server.

One proposal was for the stub resolver to strip the AD bit in replies from untrusted servers, which (if it were implemented) would allow ssh to use the RRSET_FORCE_EDNS0 patch safely. However this proposal means you have to tell the resolver if the server is trusted, which might undo the patch's improved convenience. There are ways to avoid that, such as automatically trusting resolvers running on the local host, and perhaps having a separate configuration file listing trusted resolvers, e.g. those reachable over IPSEC.

(1 comment | Leave a comment)

Wednesday 19th February 2014

Relative frequency of initial letters of TLDs

Compare Wikipedia's table of the relative frequencies of initial letters in English.

$ dig axfr . |
  perl -ne '
	next unless /^(([a-z])[a-z0-9-]+)[.][ \t].*/;
	$label{$1} = 1; $letter{$2}++; $total++;
	END {
		for my $x (sort keys %letter) {
			my $p = 100.0*$letter{$x}/$total;
			printf "<tr><td>$x</td>
				<td align=right>%5.2f</td>
				<td><span style=\"
					display: inline-block;
					background-color: gray;
					width: %d;\"></span></td></tr>\n",
			    $p, 32*$p;
		}
	}'
a 5.24 
b 5.96 
d 2.45 
e 3.28 
f 2.43 
g 5.03 
h 1.78 
i 3.38 
j 1.14 
k 2.71 
l 3.74 
m 6.69 
n 4.26 
o 0.72 
p 5.68 
q 0.65 
r 2.81 
s 6.15 
t 6.02 
u 1.91 
v 2.66 
w 1.94 
y 0.41 
z 0.75 
(6 comments | Leave a comment)

Wednesday 29th January 2014

Diffing dynamic raw zone files in git with BIND 9.10

On my toy nameserver my master zones are configured with a directory for each zone. In this directory is a "conf" file which is included by the nameserver's main configuration file; a "master" file containing the zone data in the raw binary format; a "journal" file recording changes to the zone from dynamic UPDATEs and re-signing; and DNSSEC and TSIG keys. The "conf" file looks something like this:

    zone {
        type master;
        file "/zd/";
        journal "/zd/";
        key-directory "/zd/";
        masterfile-format raw;
        auto-dnssec maintain;
        update-policy local;
    };

I must have been having a fit of excessive tidiness when I was setting this up, because although it looks quite neat, the unusual file names cause some irritation - particularly for the journal. Some of the other BIND tools assume that journal filenames are the same as the master file with a .jnl extension.

I keep the name server configuration in git. This is a bit awkward because the configuration contains precious secrets (DNSSEC private keys), and the zone files are constantly-changing binary data. But it is useful for recording manual changes, since the zone files don't have comments explaining their contents. I don't make any effort to record the re-signing churn, though I commit it when making other changes.

To reduce the awkwardness I configured git to convert zone files to plain text when diffing them, so I had a more useful view of the repository. There are three parts to setting this up.

  • Tell git that the zone files require a special diff driver, which I gave the name "bind-raw".

    All the zone files in the repository are called "master", so in the .gitattributes file at the top of the repository I have the line

        master diff=bind-raw
  • The diff driver is part of the repository configuration. (The implementation of the driver is a command, so it isn't safe for it to be set up automatically when a repository is cloned, as happens with .gitattributes settings; it has to be configured separately.) So add the following lines to .git/config
        [diff "bind-raw"]
    	textconv = etc/raw-to-text
  • The final part is the raw-to-text script, which lives in the repository.

This is where journal file names get irritating. You can convert a raw master file into the standard text format with named-compilezone, which has a -j option to read the zone's journal, but this assumes that the journal has the default file name with the .jnl extension. So it doesn't quite work in my setup.

(It also doesn't quite work on the University's name servers which have a directory for master files and a directory for journal files.)

So in September 2012 I patched BIND to add a -J option to named-compilezone for specifying the journal file name. I have a number of other small patches to my installation of BIND, and this one was very simple, so in it went. (It would have been much more sensible to change my nameserver configuration to go with the flow...)

The patch allowed me to write the raw-to-text script as follows. The script runs named-compilezone twice: the first time with a bogus zone name, which causes named-compilezone to choke with an error message. This helpfully contains the real zone name, which the script extracts then uses to invoke named-compilezone correctly.

    file="$1"  # git textconv passes the path of the file to convert
    command="named-compilezone -f raw -F text -J journal -o /dev/stdout"
    zone="$($command . "$file" 2>&1)"
    zone="${zone#*: }"
    $command "$zone" "$file" 2>/dev/null

I submitted the -J patch to the ISC and got a favourable response from Evan Hunt. At that time (16 months ago) BIND was at version 9.9.2; since this option was a new feature (and unimportant) it was added to the 9.10 branch. Wind forward 14 months to November 2013 and the first alpha release of 9.10 came out, with the -J option, so I was able to retire my patch. Woo! There will be a beta release of 9.10 in a few weeks.

In truth, you don't need BIND 9.10 to use this git trick, if you use the default journal file names. The key thing to make it simple is to give all your master files the same name, so that you don't have to list them all in your .gitattributes.

(Leave a comment)

Tuesday 3rd December 2013

A weird BIND DNSSEC resolution bug, with a fix.

The central recursive DNS servers in Cambridge act as stealth slaves for most of our local zones, and we recommend this configuration for other local DNS resolvers. This has the slightly odd effect that the status bits in answers have AD (authenticated data) set for most DNSSEC signed zones, except for our local ones which have AA (authoritative answer) set. This is not a very big deal since client hosts should do their own DNSSEC validation and ignore any AD bits they get over the wire.

It is a bit more of a problem for the toy nameserver I run on my workstation. As well as being my validating resolver, it is also the master for my personal zones, and it slaves some of the Cambridge zones. This mixed recursive / authoritative setup is not really following modern best practices, but it's OK when I am the only user, and it makes experimental playing around easier. Still, I wanted it to validate answers from its authoritative zones, especially because there's no security on the slave zone transfers.

I had been procrastinating this change because I thought the result would be complicated and ugly. But last week one of the BIND developers, Mark Andrews, posted a description of how to validate slaved zones to the dns-operations list, and it turned out to be reasonably OK - no need to mess around with special TSIG keys to get queries from one view to another.

The basic idea is to have one view that handles recursive queries and which validates all its answers, and another view that holds the authoritative zones and which only answers non-recursive queries. The recursive view has "static-stub" zone configurations mirroring all of the zones in the authoritative view, to redirect queries to the local copies.

Here's a simplified version of the configuration I tried out. To make it less annoying to maintain, I wrote a script to automatically generate the static-stub configurations from the authoritative zones.

  view rec {
    match-recursive-only yes;
    zone         { type static-stub; server-addresses { ::1; }; };
    zone { type static-stub; server-addresses { ::1; }; };
  };

  view auth {
    recursion no;
    allow-recursion { none; };
    zone         { type slave; file "cam";  masters { ucam; }; };
    zone { type slave; file "priv"; masters { ucam; }; };
  };

This seemed to work fine, until I tried to resolve names in - then I got a server failure. In my logs was the following (which I have slightly abbreviated):

  client ::1#55687 view rec: query: IN A +E (::1)
  client ::1#60344 view auth: query: IN A -ED (::1)
  client ::1#54319 view auth: query: IN DS -ED (::1)
  resolver: DNS format error from ::1#53 resolving
    Name (SOA) not subdomain of zone -- invalid response
  lame-servers: error (FORMERR) resolving '': ::1#53
  lame-servers: error (no valid DS) resolving '': ::1#53
  query-errors: client ::1#55687 view rec:
    query failed (SERVFAIL) for at query.c:7435

You can see the original recursive query that I made, then the resolver querying the authoritative view to get the answer and validate it. The situation here is that is an unsigned zone, so a DNSSEC validator has to check its delegation in the parent zone and get a proof that there is no DS record, to confirm that it is OK for to be unsigned. Something is going wrong with BIND's attempt to get this proof of nonexistence.

When BIND gets a non-answer it has to classify it as a referral to another zone or an authoritative negative answer, as described in RFC 2308 section 2.2. It is quite strict in its sanity checks, in particular it checks that the SOA record refers to the expected zone. This check often discovers problems with misconfigured DNS load balancers which are given a delegation for but which think their zone is, leading them to hand out malformed negative responses to AAAA queries.
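The shape of that sanity check is easy to sketch. A hypothetical Python version (BIND's real code is C and handles many more cases): given the zone the resolver believes it is querying, the SOA owner in a negative answer must sit at or below that zone, otherwise the response is rejected as malformed.

```python
def in_zone(zone, owner):
    """True if 'owner' equals 'zone' or is a subdomain of it.
       Label-wise comparison, so 'scam.ac.uk' does not match 'cam.ac.uk'."""
    z = zone.lower().rstrip(".").split(".")
    o = owner.lower().rstrip(".").split(".")
    return len(o) >= len(z) and o[len(o) - len(z):] == z

# A well-formed negative answer: the SOA owner is the zone itself.
assert in_zone("cam.ac.uk", "cam.ac.uk")

# The failure in the log above: the resolver expected an answer from
# the child zone, but the SOA it got named the parent instead.
assert not in_zone("private.cam.ac.uk", "cam.ac.uk")
```

The misconfigured-load-balancer case fails the same test in the other direction: the SOA names a zone below the delegation the resolver was following.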

This negative answer SOA sanity check is what failed in the above log extract. Very strange - the resolver seems to be looking for the DS record in the zone, not the zone, so when it gets an answer from the zone it all goes wrong. Why is it looking in the wrong place?

In fact the same problem occurs for the zone itself, but in this case the bug turns out to be benign:

  client ::1#16276 view rec: query: IN A +E (::1)
  client ::1#65502 view auth: query: IN A -ED (::1)
  client ::1#61409 view auth: query: IN DNSKEY -ED (::1)
  client ::1#51380 view auth: query: IN DS -ED (::1)
  security: client ::1#51380 view auth: query (cache) '' denied
  lame-servers: error (chase DS servers) resolving '': ::1#53

You can see my original recursive query, and the resolver querying the authoritative view to get the answer and validate it. But it sends the DS query to itself, not to the name servers for the zone. When this query fails, BIND re-tries by working down the delegation chain from the root, and this succeeds so the overall query and validation works despite tripping up.

This bug is not specific to the weird two-view setup. If I revert to my old configuration, without views, and just slaving and, I can trigger the benign version of the bug by directly querying for the DS record:

  client ::1#30447 ( query: IN DS +E (::1)
  lame-servers: error (chase DS servers) resolving '':

In this case the resolver sent the upstream DS query to one of the authoritative servers for, and got a negative response from the zone apex per RFC 4035 section This did not fail the SOA sanity check but it did trigger the fall-back walk down the delegation chain.

In the simple slave setup, queries for do not fail because they are answered from authoritative data without going through the resolver. If you change the zone configurations from slave to stub or static-stub then the resolver is used to answer queries for names in those zones, and so queries for explode messily as BIND tries really hard (128 times!) to get a DS record from all the available name servers but keeps checking the wrong zone.

I spent some time debugging this on Friday evening, which mainly involved adding lots of logging statements to BIND's resolver to work out what it thought it was doing. Much confusion and headscratching and eventually understanding.

BIND has some functions called findzonecut() which take an option to determine whether it wants the child zone or the parent zone. This works OK for dns_db_findzonecut() which looks in the cache, but dns_view_findzonecut() gets it wrong. This function works out whether to look for the name in a locally-configured zone, and if so which one, or otherwise in the cache, or otherwise work down from the root hints. In the case of a locally-configured zone it ignores the option and always returns the child side of the zone cut. This causes the resolver to look for DS records in the wrong place, hence all the breakage described above.

I worked out a patch to fix this DS record resolution problem, and I have sent details of the bug and my fix to And I now have a name server that correctly validates its authoritative zones :-)

(1 comment | Leave a comment)

Wednesday 13th November 2013

Temporum: secure time: a paranoid fantasy

Imagine that...

Secure NTP is an easy-to-use and universally deployed protocol extension...

The NTP pool is dedicated to providing accurate time from anyone to everyone, securely...

NIST, creators of some of the world's best clocks and keepers of official time for the USA, decide that the NTP pool is an excellent project which they would like to help. They donate machines and install them around the world and dedicate them to providing time as part of the NTP pool. Their generous funding allows them to become a large and particularly well-connected proportion of the pool.

In fact NIST is a sock-puppet of the NSA. Their time servers are modified so that they are as truthful and as accurate as possible to everyone, except those whom the US government decides it does not like.

The NSA has set up a system dedicated to replay attacks. They cause occasional minor outages and screwups in various cryptographic systems - certificate authorities, DNS registries - which seem to be brief and benign when they happen, but no-one notices that the bogus invalid certificates and DNS records all have validity periods covering a particular point in time.

Now the NSA can perform a targeted attack, in which they persuade the victim to reboot, perhaps out of desperation because nothing works and they don't understand denial-of-service attacks. The victim's machine reboots, and it tries to get the time from the NTP pool. The NIST sock-puppet servers all lie to it. The victim's machine believes the time is in the NSA replay attack window. It trustingly fetches some crucial "software update" from a specially-provisioned malware server, which both its DNSSEC and X.509 PKIs say is absolutely kosher. It becomes comprehensively pwned by the NSA.

How can we provide the time in a way that is secure against this attack?

(Previously, previously)

(15 comments | Leave a comment)

Monday 11th November 2013

Security considerations for temporum: quorate secure time

The security of temporum is based on the idea that you can convince yourself that several different sources agree on what the time is, with the emphasis on different. Where are the weaknesses in the way it determines if sources are different?

The starting point for temporum is a list of host names to try. It is OK if lots of them fail (e.g. because your device has been switched off on a shelf for years) provided you have a good chance of eventually getting a quorum.

The list of host names is very large, and temporum selects candidates from the list at random. This makes it hard for an attacker to target the particular infrastructure that temporum might use. I hope your device is able to produce decent random numbers immediately after booting!

The list of host names is statically configured. This is important to thwart Sybil attacks: you don't want an attacker to convince you to try a list of apparently-different host names which are all under the attacker's control. Question: can the host list be made dynamic without making it vulnerable?

Hostnames are turned into IP addresses using the DNS. Temporum uses the TLS X.509 PKI to give some assurance that the DNS returned the correct result, about which more below. The DNS isn't security-critical, but if it worries you perhaps temporum could be configured with a list of IP addresses instead - but maybe that will make the device-on-shelf less likely to boot successfully.

Temporum does not compare the IP addresses of "different" host names. This might become a problem once TLS SNI makes large-scale virtual hosting easier. More subtly, there is a risk that temporum happens to query lots of servers that are hosted on the same infrastructure. This can be mitigated by being careful about selecting which host names to include in the list - no more than a few each of Blogspot, Tumblr, Livejournal, GoDaddy vhosts, etc. More than one of each is OK since it helps with on-shelf robustness.

The TLS security model hopes that X.509 certification authorities will only hand out certificates for host names to the organizations that run the hosts. This is a forlorn hope: CAs have had their infrastructure completely compromised; they have handed out intermediate signing certificates to uncontrolled third parties; they are controlled by nation states that treat our information security with contempt.

In the context of temporum, we can reduce this problem by checking that the quorum hosts are authenticated by diverse CAs. Then an attacker would have to compromise multiple CAs to convince us of an incorrect time. Question: are there enough different CAs used by popular sites that temporum can quickly find a usable set?
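A sketch of what this quorum check might look like (the record fields, function names, and one-second agreement tolerance are all my own invention, not temporum's actual interface): count samples that agree on the clock offset, but only let them count towards the quorum if their certificates chain to distinct CAs.

```python
from collections import namedtuple

# Hypothetical sample record: a measured clock offset plus the CA that
# issued the certificate which authenticated the host we asked.
Sample = namedtuple("Sample", "host offset issuer_ca")

def quorate(samples, quorum, tolerance=1.0):
    """Return an agreed offset if samples from at least `quorum`
    *distinct* CAs agree to within `tolerance` seconds, else None."""
    for s in samples:
        agreeing = [t for t in samples if abs(t.offset - s.offset) <= tolerance]
        # an attacker must compromise this many different CAs to win
        if len({t.issuer_ca for t in agreeing}) >= quorum:
            return s.offset
    return None
```

With this shape, two Blogspot vhosts behind the same CA only ever contribute one vote, which is the point of demanding CA diversity.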

(Leave a comment)

Saturday 9th November 2013

nsdiff 1.47

I have done a little bit of work on nsdiff recently.

You can now explicitly manage your DNSKEY RRset, instead of leaving it to named. This is helpful when you are transferring a zone from one operator to another: you need to include the other operator's zone signing key in your DNSKEY RRset to ensure that validation works across the transfer.

There is now support for bump-in-the-wire signing, where nsdiff transfers the new version of the zone from a back-end hidden master server and pushes the updates to a signing server which feeds the public authoritative servers.

Get nsdiff from

(Edit: I decided to simplify the -u option so updated from version 1.46 to 1.47.)

(Previously, previously, previously, previously, previously.)

(Leave a comment)

Tuesday 29th October 2013

Temporum: Quorate secure time

There are essentially two ways to find out what the time is: ask an authoritative source and trust the answer, or ask several more or less unreliable sources and see what they agree on. NTP is based on the latter principle, but since the protocol isn't secured, a client also has to trust the network not to turn NTP responses into lies.

NTP's lack of security causes a bootstrapping problem. Many security protocols rely on accurate time to avoid replay attacks. So nearly the first thing a networked device needs to do on startup is get the time, so that it can then properly verify what it gets from the network - DNSSEC signatures, TLS certificates, software updates, etc. This is particularly challenging for cost-constrained devices that do not have a battery-backed real time clock and so start up in 1970.

When I say NTP isn't secured, I mean that the protocol has security features but they have not been deployed. I have tried to understand NTP security, but I have not found a description of how to configure it for the bootstrap case. What I want is for a minimally configured client to be able to communicate with some time servers and get responses with reasonable authenticity and integrity. Extra bonus points for a clear description of which of NTP's half dozen identity verification schemes is useful for what, and which ones are incompatible with NATs and rely on the client knowing its external IP address.

In the absence of usable security from NTP, Jacob Appelbaum of the Tor project has written a program called tlsdate. In TLS, the ClientHello and ServerHello messages include a random nonce which includes a Unix time_t value as a prefix. So you can use any TLS server as a secure replacement for the old port 37 time service.
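Pulling the timestamp out of the Random field is trivial once you have the ServerHello bytes (obtaining them is the hard part, which this sketch skips):

```python
import struct

def time_from_hello_random(random32: bytes) -> int:
    """The TLS Random field is a 4-byte big-endian Unix time followed
    by 28 random bytes (RFC 5246 section 7.4.1.2)."""
    (gmt_unix_time,) = struct.unpack("!I", random32[:4])
    return gmt_unix_time
```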

Unlike NTP, tlsdate gets time from a single trusted source. It would be much better if it were able to consult multiple servers for their opinions of the time: it would be more robust if a server is down or has the wrong time, and it would be more secure in case a server is compromised. There is also the possibility of using multiple samples spread over a second or two to obtain a more accurate time than the one second resolution of TLS's gmt_unix_time field.

The essential idea is to find a quorum of servers that agree on the time. An adversary or a technical failure would have to break at least that many servers for you to get the wrong time.

In statistical terms, you take a number of samples and find the mode, the most common time value, and keep taking samples until the frequency at the mode is greater than the quorum.

But even though time values are discrete, the high school approach to finding the mode isn't going to work because in many cases we won't be able to take all the necessary samples close enough together in time. So it is better to measure the time offset between a server and the client at each sample, and treat these as a continuous distribution.

The key technique is kernel density estimation. The mode is the point of peak density in the distribution estimated from the samples. The kernel is a function that is used to spread out each sample; the estimated distribution comes from summing the spread-out samples.
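A minimal sketch of the estimated density, using a Gaussian kernel to spread out each offset sample (the function name and normalization are my own choices):

```python
import math

def estimated_density(x, offsets, bandwidth):
    # Sum a Gaussian kernel centred on each sample offset; the mode of
    # the estimate is wherever this sum peaks.
    norm = bandwidth * math.sqrt(2 * math.pi) * len(offsets)
    return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
               for s in offsets) / norm
```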

NTP's clock select algorithm is basically kernel density estimation with a uniform kernel.

NTP's other algorithms are based on lengthy observations of the network conditions between the client and its servers, whereas we are more concerned with getting a quick result from many servers. So perhaps we can use a simpler, better-known algorithm to find the mode. It looks like the mean shift algorithm is a good candidate.

For the mean shift algorithm to work well, I think it makes sense to use a smooth kernel such as the Gaussian. (I like exponentials.) The bandwidth of the kernel should probably be one second (the precision of the timestamp) plus the round trip time.
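In one dimension the mean shift iteration is very short. This is a sketch, not temporum's actual code; the starting point and convergence tolerance are arbitrary choices of mine:

```python
import math

def mean_shift_mode(offsets, bandwidth, start, tol=1e-9, max_iter=1000):
    """Climb to the nearest peak of the kernel density estimate by
    repeatedly moving to the kernel-weighted mean of the samples."""
    x = start
    for _ in range(max_iter):
        weights = [math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                   for s in offsets]
        shifted = sum(w * s for w, s in zip(weights, offsets)) / sum(weights)
        if abs(shifted - x) < tol:
            break
        x = shifted
    return x
```

A far-out sample (a server with a badly wrong clock) gets a vanishingly small Gaussian weight, so it barely perturbs the mode.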

Now it's time to break out the editor and write some code... I think I'll call it "temporum" because that rhymes with "quorum" and it means "of times" (plural). Get temporum from my git server.

(2 comments | Leave a comment)

Wednesday 23rd October 2013

My circular commute

We moved to new offices a month ago and I have settled in to the new route to work. Nico's nursery was a bit out of the way when I was working in the city centre, but now it is about a mile in the wrong direction.

But this is not so bad, since I have decided on a -Ofun route [1] which is almost entirely on cycle paths and very quiet roads, and the main on-road section has good cycle lanes (at least by UK standards). There is lots of park land, a bit of open country, and it goes through the world heritage site in the city centre :-) And it's fairly easy to stop off on the way if I need to get supplies.

[1] optimize for fun!

My route to work on gmap-pedometer

I don't have to wrangle children on the way home, so I take the direct route past the Institute of Astronomy and Greenwich House (where Rachel previously worked and where the Royal Greenwich Observatory was wound down).

My route home on gmap-pedometer

So far it has been pleasantly free of the adrenaline spikes I get from seeing murderous and/or suicidally stupid behaviour. Much better than going along Chesterton Lane and Madingley Road!

(7 comments | Leave a comment)

Tuesday 8th October 2013

Maintaining a local patch set with git

We often need to patch the software that we run in order to fix bugs quickly rather than wait for an official release, or to add functionality that we need. In many cases we have to maintain a locally-developed patch for a significant length of time, across multiple upstream releases, either because it is not yet ready for incorporation into a stable upstream version, or because it is too specific to our setup so will not be suitable for passing upstream without significant extra work.

I have been experimenting with a git workflow in which I have a feature branch per patch. (Usually there is only one patch for each change we make.) To move them on to a new feature release, I tag the feature branch heads (to preserve history), rebase them onto the new release version, and octopus merge them to create a new deployment version. This is rather unsatisfactory, because there is a lot of tedious per-branch work, and I would prefer to have branches recording the development of our patches rather than a series of tags.

Here is a git workflow suggested by Ian Jackson which I am trying out instead. I don't yet have much experience with it; I am writing it down now as a form of documentation.

There are three branches:

  • upstream, which is where public releases live
  • working, which is where development happens
  • deployment, which is what we run

Which branch corresponds to upstream may change over time, for instance when we move from one stable version to the next one.

The working branch exists on the developer's workstation and is not normally published. There might be multiple working branches for work-in-progress. They get rebased a lot.

Starting from an upstream version, a working branch will have a number of mature patches. The developer works on top of these in commit-early-commit-often mode, without worrying about order of changes or cleanliness. Every so often we use git rebase --interactive to tidy up the patch set. Often we'll use the "squash" command to combine new commits with the mature patches that they amend. Sometimes it will be rebased onto a new upstream version.

When the working branch is ready, we use the commands below to update the deployment branch. The aim is to make it look like updates from the working branch are repeatedly merged into the deployment branch. This is so that we can push updated versions of the patch set to a server without having to use --force, and pulling updates into a checked out version is just a fast-forward. However this isn't a normal merge since the tree at the head of deployment always matches the most recent good version of working. (This is similar to what stg publish does.) Diagrammatically,

    1.1
     | \
     |  `A---B-- 1.1-patched
     |    \       |
     |     \      |
     |      `C-- 1.1-revised
     |            |
    2.0           |
     | \          |
     |  `-C--D-- 2.0-patched
     |            |
    3.1           |
     | \          |
     |  `-C--E-- 3.1-patched
     |            |
  upstream        |
             deployment

The horizontal-ish lines are different rebased versions of the patch set. Letters represent patches and numbers represent version tags. The tags on the deployment branch are for the install scripts so I probably won't need one on every update.

Ideally we would be able to do this with the following commands:

    $ git checkout deployment
    $ git merge -s theirs working

However there is an "ours" merge strategy but not a "theirs" merge strategy. Johannes Sixt described how to simulate git merge -s theirs in a post to the git mailing list in 2010. So the commands are:

    $ git checkout deployment
    $ git merge --no-commit -s ours working
    $ git read-tree -m -u working
    $ git commit -m "Update to $(git describe working)"

Mark Wooding suggested the following more plumbing-based version, which unlike the above does not involve switching to the deployment branch.

    $ d="$(git rev-parse deployment)"
    $ w="$(git rev-parse working)"
    $ m="Update deployment to $(git describe working)"
    $ c="$(echo "$m" | git commit-tree -p $d -p $w working^{tree})"
    $ git update-ref -m "$m" deployment $c $d
    $ unset c d w

Now to go and turn this into a script...

(Leave a comment)

Sunday 6th October 2013

Bacon and cabbage

Bacon and brassicas are best friends. Here's a very simple recipe which is popular in the Finch household.


  • A Savoy cabbage
  • Two large or three small onions
  • A few cloves of garlic
  • A 200g pack of bacon
  • Oil or butter for frying
  • Soured cream


  • Chop the onion
  • Slice the cabbage to make strips about 1cm wide
  • Cut up the bacon to make lardons


Get a large pan that is big enough to hold all the cabbage. Heat the fat, press in the garlic, then bung in the onion and bacon. Fry over a moderate heat until the bacon is cooked.

Add the cabbage. Stir to mix everything together and keep stirring so it cooks evenly. As the cabbage cooks down and becomes more manageable you can put the heat right up to brown it slightly. Keep stir-frying until the thick ribby parts of the cabbage are as soft as you like, usually several minutes. (I haven't timed it since I taste to decide when it is done...)

I serve this as a main dish with just a few dollops of sour cream on top and plenty of black pepper.

(Leave a comment)

Wednesday 14th August 2013

Subverting BIND's SRTT algorithm: derandomizing NS selection

This morning I saw a link on Twitter to a paper that was presented at the USENIX Workshop on Offensive Technologies this week. Although it sounds superficially interesting, the vulnerability is (a) not new and (b) not much of a vulnerability.

The starting point of the paper is query randomization to make cache poisoning harder. They cite RFC 5452, saying:

The most common values which the resolver randomizes are DNS transaction ID (TXID), UDP source port and the IP address of the queried name server.
In this work we present a newly discovered vulnerability in BIND which allows an attacker to determine (derandomize) the IP address of the name server a BIND resolver queries. The attack reduces the amount of information a blind attacker must guess to successfully poison BIND's cache.

The problem is that this exact vulnerability is described in RFC 5452 section 4.4:

Many zones have two or three authoritative nameservers, which make matching the source address of the authentic response very likely with even a naive choice having a double digit success rate. Most recursing nameservers store relative performance indications of authoritative nameservers, which may make it easier to predict which nameserver would originally be queried -- the one most likely to respond the quickest.

This vulnerability reduces the amount of randomness in the query by about one bit, and so it is fairly trivial. If you care that much about query randomness you should implement the dns0x20 hack to randomize the case of the query name.
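For reference, the dns0x20 trick just flips the case of each letter of the query name at random, adding roughly one bit of entropy per letter. A sketch (not BIND's implementation):

```python
import random

def randomize_case(qname: str, rng=random) -> str:
    # 0x20 is the ASCII case bit; flip it at random in each letter.
    # DNS servers echo the query name's case back in the response, so
    # a reply with the wrong case pattern can be rejected as spoofed.
    return "".join(c.upper() if rng.random() < 0.5 else c.lower()
                   for c in qname)
```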

Of course the real defence against cache poisoning attacks is DNSSEC.

The actual technique they use to exploit the vulnerability is new and quite clever. The real impact is that BIND's SRTT algorithm sometimes calculates bogus values, when an NS RRset has a mixture of lame and working nameservers and overlaps with other NS RRsets. Bogus SRTT values can slightly increase average query latency.

It seems to me to be a stretch to claim this is a security bug rather than a performance bug.

(Leave a comment)

Monday 17th June 2013

Dominoes and dice patterns

Nico has a box of dominoes, and playing with it often consists of me trying to arrange them into nice patterns, and him trying to shuffle them. The box contains the usual 28 dominoes, but has coloured Paddington Bear pictures instead of spots. The dominoes can fit into the box in four stacks of seven; in this arrangement there are eight squares visible but the dominoes only have seven different pictures. There isn't a particularly satisfying choice of which four dominoes get to show their faces.

Traditional dominoes use the same six arrangements of spots as dice, plus blank. They are based on a 3x3 grid, in which the middle spot is present in odd numbers and absent in even numbers, and opposing pairs of spots are added starting with the diagonals. This extends nicely from zero to nine:
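The construction can be written down mechanically; here is a sketch (the coordinate scheme and names are my own) giving spot positions on the 3x3 grid for zero to nine:

```python
# Opposing pairs of spots on a 3x3 grid, diagonals first,
# then the edge midpoints.
PAIRS = [((0, 0), (2, 2)), ((2, 0), (0, 2)),
         ((1, 0), (1, 2)), ((0, 1), (2, 1))]
CENTRE = (1, 1)

def spots(n):
    """Spot positions for n in 0..9: the centre spot when n is odd,
    plus n//2 opposing pairs."""
    positions = [CENTRE] if n % 2 else []
    for a, b in PAIRS[: n // 2]:
        positions += [a, b]
    return positions
```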

I could solve my four-pile problem with a set of 36 dominoes with faces numbered 1 to 8 (which I think is prettier than 0 to 7), or I could make five piles showing squares numbered 0 to 9 if I had a "double-nine" set of 55 dominoes.

Another way to arrange the spots is hexagonally, which also allows you to use a translation from binary to unary. The middle dot represents bit 2^0; two opposing dots represent bit 2^1; and the other four dots in the hexagon represent bit 2^2:

I think this is even more pretty :-) It can also make nice octahedral dice, and the hexagon patterns will fit in the faces particularly well if the corners are rounded off.

ETA: Following the discussion in the comments, I have come up with an extended layout that works up to 31 spots. It fits fairly well in a square, but loses some of the hexagonal symmetry. It is based on the observation that three overlapping hexagonal rings contain 16 spots (3 * 6 - 2 overlapping spots). No great insight that shows how to extend it further, I am afraid. See dice-spot.html which includes the code to draw the diagrams.

(11 comments | Leave a comment)

Sunday 2nd June 2013

Guilty pleasures

Corned beef / bully beef

Not the American thing we call salt beef (which is excellent and one of my non-guilty pleasures) but the stuff that comes in cans from Fray Bentos in South America. Cheap meat and many food miles, but yummy salty fatty.

I tend to have enthusiasms for a particular food, and will eat a lot of it for a few weeks until I tire of it and move on to something else. Recently I have been making sort of ersatz Reuben sandwiches, with bully beef, emmental, sauerkraut, and mustard, often toasted gently in a frying pan to melt the filling.

Heinz Sandwich Spread

Finely diced vegetables in salad cream. Sweet and sour and crunchy. A good alternative to a pickle like Branston, especially before they did a sandwich version.

Salad cream is another of those dubious cost-reduced foods, like mayo but with less egg and oil, more vinegar, and added water and sugar. A stronger sweeter and more vinegary flavour.

(8 comments | Leave a comment)

Thursday 16th May 2013

Mixfix parsing / chain-associative operators

Earlier this evening I was reading about parsing user-defined mixfix operators in Agda.

Mixfix operators are actually quite common though they are usually called something else. The Danielsson and Norell paper describes operators using a notation in which underscores are placeholders for the operands, so (using examples from C) we have

  • postfix operators
    • _ ++
    • _ .member
  • prefix operators
    • ! _
    • & _
  • infix operators
    • _ + _
    • _ == _

There are also circumfix operators, of which the most common example is (_) which boringly does nothing. Mathematical notation has some more fun examples, such as ceiling ⎡_⎤ and floor ⎣_⎦.

Mixfix operators have a combination of fixities. C has a few examples:

  • _ [ _ ]
  • _ ? _ : _
You can also regard function call syntax as a variable-arity mixfix operator :-)

The clever part of the paper is how it handles precedence in a way that is reasonably easy for a programmer to understand when defining operators, and which allows for composing libraries which might have overlapping sets of operator definitions.

One thing that their mixfix parser does not get funky about is associativity: it supports the usual left-, right-, and non-associative operators. One of my favourite syntactic features is chained relational operators, as found in BCPL and Python. (For fun I added the feature to Lua - see this and the following two patches.) You can write an expression like

    a OP b OP c OP d

which is equivalent to

    a OP b && b OP c && c OP d

except that each operand is evaluated at most once. (Though unfortunately BCPL may evaluate inner operands twice.) This is not just short-cutting variable-arity comparison because the operators can differ.

So I wonder, are there other examples of chain-associative operators? They might have a different underlying reduction operator instead of &&, perhaps, which would imply different short-cut behaviour.

Perhaps an answer would come to mind if I understood more of the category-theory algebraic functional programming stuff like bananas and lenses and what have you...

(12 comments | Leave a comment)

Friday 3rd May 2013

Two compelling applications for universal surveillance

Face recognition is pretty good now: you can upload a picture to Facebook and it will automatically identify who is in the frame. Combine this with Google Glass and the idea of lifelogging: what can these technologies do as prosthetic memories?

I am at a party and I meet someone I am sure I have met before, but I can't remember. I could try talking to them until I get enough context to remind myself of their name, or I could just wink to take a photo, which is uploaded and annotated, and then I know their name, employer, marital status, and I have a picture of our last photographed encounter. Oh yes, now I remember! And we have saved a few minutes of embarrassing protocol negotiation and impedance matching.

But why wink? If I am lifelogging, everything I do gets uploaded. So when I see someone, I can be automatically reminded to say happy birthday, or that I need to give them the thing I offered to lend them. Contextual social cues!

How much embarrassment we could avoid! How much sooner we could get to the fun part of the conversation!

(This post was inspired by a pub conversation with Ian Jackson and Mark Wooding; Simon Tatham suggested the auto-reminder feature.)
(8 comments | Leave a comment)

Thursday 11th April 2013

DNS reflection / amplification attacks: security economics, nudge theory, and perverse incentives.

In recent months DNS amplification attacks have grown to become a serious problem, the most recent peak being the 300 Gbit/s Spamhaus DDoS attack which received widespread publicity partly due to effective PR by CloudFlare and a series of articles in the New York Times (one two three).

Amplification attacks are not specific to the DNS: any service that responds to a single datagram with a greater number and/or size of reply datagrams can be used to magnify the size of an attack. Other examples are ICMP (as in the smurf attacks that caused problems about 15 years ago) and SNMP (which has not yet been abused on a large scale).

The other key ingredient is the attacker's ability to spoof the source address on packets, in order to direct the amplified response towards the ultimate victim instead of back to the attacker. The attacks can be defeated either by preventing amplification or by preventing spoofed source addresses.

Smurf attacks were stopped by disabling ICMP to broadcast addresses, both in routers (dropping ICMP to local broadcast addresses) and hosts (only responding to directly-addressed ICMP). This fix was possible since there is very little legitimate use for broadcast ICMP. The fix was successfully deployed mainly because vendor defaults changed and equipment was upgraded. Nudge theory in action.

If you can't simply turn off an amplifier, you may be able to restrict it to authorized users, either by IP address (as in recursive DNS servers) and/or by application level credentials (such as SNMP communities). It is easier to police any abuse if the potential abusers are local. Note that if you are restricting UDP services by IP address, you also need to deploy spoofed source address filtering to prevent remote attacks which have both amplifier and victim on your network. There are still a lot of vendors shipping recursive DNS servers that are open by default; this is improving slowly.

But some amplifiers, such as authoritative DNS servers, are hard to close because they exist to serve anonymous clients across the Internet. It may be possible to quash the abuse by suitably clever rate-limiting which, if you are lucky, can be as easy to use as DNS RRL; without sufficiently advanced technology you have to rely on diligent expert operators.

There is a large variety of potential amplifiers which each require specific mitigation; but all these attacks depend on spoofed source addresses, so they can all be stopped by preventing spoofing. This has been recommended for over ten years (see BCP 38 and BCP 84) but it still has not been deployed widely enough to stop the attacks. The problem is that there are not many direct incentives to do so: there is the reputation for having a well-run network, and perhaps a reduced risk of being sued or prosecuted - though the risk is nearly negligible even if you don't filter. Malicious traffic does not usually cause operational problems for the originating network in the way it does for the victims and often also the amplifiers. There is a lack of indirect incentives too: routers do not filter spoofed packets by default.

There are a number of disincentives. There is a risk of accidental disruption due to more complicated configuration. Some networks view transparency to be more desirable than policing the security of their customers. And many networks do not want to risk losing money if the filters cause problems for their customers.

As well as unhelpful externalities there are perverse incentives: source address spoofing has to be policed by an edge network provider that is acting as an agent of the attacker - perhaps unwittingly, but they are still being paid to provide insecure connectivity. There is a positive incentive to corrupt network service providers that allow criminals to evade spoofing filters. The networks that feel the pain are unable to fix the problem.

More speculatively, if we can't realistically guarantee security near the edge, it might be possible to police spoofing throughout the network. In order for this to be possible, we need a comprehensive registry of which addresses are allocated to which networks (a secure whois), and a solid idea of which paths can be used to reach each network (secure BGP). This might be enough to configure useful packet filters, though it will have similar scalability problems as address-based packet forwarding.

So we will probably never be able to eliminate amplification attacks, though we ought to be able to reduce them to a tolerable level. To do so we will have to reduce both datagram amplifiers and source address spoofing as much as possible, but neither of these tasks will ever be complete.

(7 comments | Leave a comment)

Thursday 14th March 2013

It is hard to test a DNSSEC root trust anchor rollover

ICANN are running a consultation on their plans for replacing the DNSSEC root key. I wondered if there was any way to be more confident that the new key is properly trusted before the old key is retired. My vague idea was to have a test TLD (along the lines of the internationalized test domains) whose delegation is only signed by the new root key, not the normal zone-signing keys used for the production TLDs. However this won't provide a meaningful test: the prospective root key becomes trusted because it is signed by the old root key, just like the zone-signing keys that sign the rest of the root zone. So my test TLD would ultimately be validated by the old root key; you can't use a trick like this to find out what will happen when the old key is removed.

So it looks like people who run validating resolvers will have to use some out-of-band diagnostics to verify that their software is tracking the key rollover correctly. In the case of BIND, I think the only way to do this currently is to cat the managed-keys.bind pseudo-zone, and compare this with the root DNSKEY RRset. Not user-friendly.

(Leave a comment)

Wednesday 6th March 2013

DoS-resistant password hashing

In the office this morning we had a discussion about password hashing, in particular what we could do to improve the security of web servers around the university. (You might remember the NullCrew attacks on Cambridge University a few months ago.)

One of the points of the standard password storage advice (which boils down to "use scrypt, or bcrypt, or PBKDF2, in decreasing order of preference") is that a decent iteration count should make a password hash take tens of milliseconds on a fast computer. This means your server is incapable of servicing more than a few dozen login attempts per second, and so your login form becomes quite a tempting denial of service target.

One way to mitigate this is to make the client perform a proof-of-work test that is more expensive than the password hash which brings you a little closer to fairness in the arms race. On Twitter, Dan Sheppard tersely suggested some refinements:

Improve client-side puzzles: bypassable with uid-tied unexpiring cookie. If supplied & not recently & not blacklisted, relax puzzles. Under DOS require this cookie to go into 1st queue, otherwise into 2nd with DOS. 5s at first login on new machine w/ spinner is ok.

But for real DoS resistance you need the client to be doing a lot more work than the server (something like 3 to 10 binary orders of magnitude), not roughly the same amount. So I wondered, can you move some of the work of verifying a password from the server to the client? And require that an attacker with a copy of the password file should have to do both the client's and server's work for each trial decrypt.

I thought you could use two hash functions, "hard" and "easy". ("Easy" should still be a slow-ish password hash.) The server sends some parameters (salt and work factor) to the client, which executes the "hard" function to crypt the password, and then sends password and crypt to the server. The server uses these as the input for the "easy" function, and checks this matches the stored crypt. The server stores only the first salt and the final output of the process, along with the work factors.
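A sketch of the idea, with PBKDF2 standing in for both the "hard" and "easy" functions (the iteration counts, single shared salt, and function names are all invented for illustration, and the work factors would be far higher in practice):

```python
import hashlib
import hmac
import os

def client_crypt(password: bytes, salt: bytes, work: int = 1_000_000) -> bytes:
    # The "hard" KDF, run on the client, so the server's cost per
    # login attempt stays low.
    return hashlib.pbkdf2_hmac("sha256", password, salt, work)

def server_crypt(hard: bytes, salt: bytes, work: int = 10_000) -> bytes:
    # The "easy" KDF: still a slow-ish password hash, but cheap
    # relative to the client's work.
    return hashlib.pbkdf2_hmac("sha256", hard, salt, work)

def enrol(password: bytes, hard_work: int = 1_000_000, easy_work: int = 10_000):
    # The server stores only the salt, the work factors, and the
    # final crypt - never the intermediate "hard" output.
    salt = os.urandom(16)
    hard = client_crypt(password, salt, hard_work)
    return salt, server_crypt(hard, salt, easy_work)

def check(stored: bytes, hard: bytes, salt: bytes, easy_work: int = 10_000) -> bool:
    # An attacker with the password file must redo both KDFs per guess.
    return hmac.compare_digest(stored, server_crypt(hard, salt, easy_work))
```

The client sends only the output of `client_crypt`; an offline attacker with the stored crypt has to pay for the hard KDF on every trial, while the server's per-login cost is just the easy one.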

What might be the problems with this idea? (I will not be surprised or upset if it turns out to be completely laughable to cryptographers!)

The output of the "hard" KDF is password-equivalent, so it needs to be well protected. I'm assuming it is treated like an ephemeral key. Don't be tempted to put it in a cookie, for example.

This protocol is not a proof of work like what is described in the Stack Exchange question I linked to above: in my idea the challenge from the server to the client is always the same. (Which is why the result of the work is password-equivalent.) But this is OK for a threat model that is satisfied by basic password authentication.

The "easy" function needs to be resistant to attacks that aim to recover the salt. As I understand it, this is not a normal requirement for password hashes (the salt is usually considered public) so an unusual construction might be needed.

Any other problems? Is this a completely foolish idea?

Of course this is a mostly useless idea, since there are much better authentication protocols. In particular, SRP (Secure Remote Password authentication and session key establishment) does all the work of hashing the password and salt on the client - observe in the summary where Hash(salt, password) happens.

What other password authentication protocols should be mentioned in this context?

(17 comments | Leave a comment)

Wednesday 27th February 2013

ccTLD registry web server survey

There has been some amusement on Twitter about a blog post from Microsoft offering their security auditing services to ccTLDs. This is laughable from the DNS point of view, but fairly plausible if you consider a registry's web front end. So I thought I would do a little survey. I got a list of ccTLDs from my local copy of the root zone, and looked up their web sites in Wikipedia, then asked each one in turn what web server software it uses. Frederic Cambus suggested on Twitter that I should get the registry URLs from the IANA root zone database instead, and this turned out to be much better because of the more consistent formatting.

    Server              Count
    No Server              17
    Apache verbose         39
    Apache (Unix)          21
    Apache (CentOS)        20
    Apache (Debian)        13
    Apache (Ubuntu)        12
    Apache (FreeBSD)        7
    Apache (Fedora)         4
    Apache (Win32)          2
    Nginx / Varnish         1
    GlassFish v3            1
    VNNIC IDN Proxy 2       1
    Zope / Plone            1
    # NB: parts of this pipeline were garbled in transcription and have
    # been reconstructed; the IANA URL pattern is inferred from the text.
    dig axfr . |
    sed '/^\([A-Za-z][A-Za-z]\)[.].*/!d;
         s||\1 http://www.iana.org/domains/root/db/\1.html|' |
    sort -u |
    while read t w
    do
        r=$(curl -s $w |
            sed '/.*<b>URL for registration services:<.b> <a href="\([^"]*\)".*/!d;
                 s//\1/')
        case $r in
        '') echo "<tr><th>$t</th><td></td><td></td></tr>"
            ;;
        *)  s=$(curl -s -I $r | sed '/^Server: \([^^M]*\).*/!d;s//\1/')
            echo "<tr><th>$t</th><td>$r</td><td>$s</td></tr>"
            ;;
        esac
    done
(1 comment | Leave a comment)

Thursday 21st February 2013


Dan Bernstein recently published slides for a talk criticizing DNSSEC. The talk is framed as a description of the fictitious protocol HTTPSEC, which is sort of what you would get if you applied the DNSSEC architecture to HTTP. This seems to be a rhetorical device to make DNSSEC look stupid, which is rather annoying because it sometimes makes his points harder to understand, and if his arguments are strong they shouldn't need help from extra sarcasm.

The analogy serves to exaggerate the apparent foolishness because there are big differences between HTTP and the DNS. HTTP deals with objects thousands of times larger than the DNS, and (in simple cases) a web site occupies a whole directory tree whereas a DNS zone is just a file. Signing a file isn't a big deal; signing a filesystem is troublesome.

There are also differences in the way the protocols rely on third parties. Intermediate caches are ubiquitous in the DNS, and relatively rare in HTTP. The DNS has always relied a lot on third party secondary authoritative servers; by analogy HTTP has third party content delivery networks, but these are a relatively recent innovation and the https security model was not designed with them in mind.

The DNS's reliance on third parties (users relying on their ISP's recursive caches; master servers relying on off-site slaves) is a key part of the DNSSEC threat model. It is designed to preserve the integrity and authenticity of the data even if these intermediaries are not reliable. That is, DNSSEC is based on data security rather than channel security.

I like to use email to explain this distinction. When I connect to my IMAP server over TLS, I am using a secure channel: I know I am talking to my server and that no-one can falsify the data I receive. But the email I download over this channel could have reached the server from anywhere, and it can contain all sorts of fraudulent messages. But if I get a message signed with PGP or S/MIME I can be sure that data is securely authentic.

DJB uses the bad analogy with HTTP to mock DNSSEC, describing the rational consequences of its threat model as "not a good approach" and "stupid". I would prefer to see an argument that tackles the reasons for DNSSEC's apparently distasteful design. For example, DJB prefers an architecture where authoritative servers have private keys used for online crypto. So if you want outsourced secondary service you have to make a rather difficult trust trade-off. It becomes even harder when you consider the distributed anycast servers used by the root and TLDs: a lot of the current installations cannot be upgraded to the level of physical security that would be required for such highly trusted private keys. And there is the very delicate political relationship between ICANN and the root server operators.

So the design of DNSSEC is based on an assessment that the current DNS has a lot of outsourcing to third parties that we would prefer not to have to trust, but at the same time we do not want to fix this trust problem by changing the commercial and political framework around the protocol. You might legitimately argue that this assessment is wrong, but DJB does not do so.

What follows is a summary of DJB's arguments, translated back to DNSSEC as best I can, with my commentary. The PDF has 180 pages because of the way the content is gradually revealed, but there are fewer than 40 substantive slides.

Paragraphs starting with a number are my summary of the key points from that page of the PDF. Paragraphs marked with a circle are me expanding or clarifying DJB's points. Paragraphs marked with a square are my corrections and counter-arguments.

  1. HTTPSEC motivation
  2. Standard security goals
  3. HTTPSEC "HTTP Security"
    responses signed with PGP
    • DNSSEC uses its own signature format: RRSIG records.

  4. Signature verification
    chain of trust
    • Internet Central Headquarters -> ICANN.

  5. Root key management
    • The description on this slide is enormously simplified, which is fair because DNSSEC root key management involves a lot of paranoid bureaucracy. But it gets some of these tedious details slightly wrong; fortunately this has no bearing on the argument.

    • Access to the DNSSEC root private key HSM requires three out of seven Crypto Officers. There are also seven Recovery Key Share officers, five of whom can reconstruct the root private key. And there are separate people who control physical access to the HSM, and people who are there to watch everything going according to plan.

    • Root zone signatures last a week, but the root zone is modified several times a day and each modification requires signing, using a private key (ZSK) that is more easily accessed than the root key (KSK) which only comes out every three months when the public keys are signed.

  6. Verifying the chain of trust
  7. HTTPSEC performance
    Precomputed signatures and no per-query crypto.
    • This design decision is not just about performance. It is also driven by the threat model.

  8. Clients don't share the work of verifying signatures
    Crypto primitives chosen for performance
    Many options
    • Another consideration is the size of the signatures: smaller is better when it needs to fit into a UDP packet, and when the signatures are so much bigger than the data they cover.

    • Elliptic curve signatures are now an excellent choice for their small size and good performance, but they are relatively new and have been under a patent cloud until fairly recently. So DNSSEC mostly uses good old RSA which has been free since the patent expired in 2000. If crypto algorithm agility works, DNSSEC will be able to move to something better than RSA, though it will probably take a long time.

    • Compare TLS crypto algorithm upgrades.
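The signature-size point is easy to put rough numbers on. This is back-of-envelope arithmetic using the standard raw signature sizes for these algorithms (DNS wire-format RRSIG records add a little overhead on top):

```shell
# An RSA signature is as long as the modulus; an ECDSA P-256
# signature is two 32-byte integers (r and s).
rsa2048_sig=256
ecdsa_p256_sig=64
echo "RSA-2048 signatures are $((rsa2048_sig / ecdsa_p256_sig))x bigger than P-256"
# prints: RSA-2048 signatures are 4x bigger than P-256
```

A factor of four matters a lot when several signatures have to fit in one UDP response.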

  9. Breakable choices such as 640-bit RSA
    Staggering complexity
    • Any fixed choice of crypto primitive is going to be broken at some point, so there must be some way to upgrade the crypto over time. DNSSEC relies on signatures which in turn rely on hashes, and hash algorithms have generally had fairly short lifetimes.

    • Compare SSL/TLS's history of weak crypto. Both protocols date back to the mid 1990s.

    • The complexity of DNSSEC is more to do with the awkward parts of the DNS, such as wildcards and aliases, and not so much the fact of crypto algorithm agility.

  10. HTTPSEC does not provide confidentiality
    • Yes, this is a bit of a sore point with DNSSEC. But observe that in pretty much all cases, immediately after you make a DNS query you use the result to connect to a server, which reveals the contents of the DNS reply to a snooper. On the other hand there are going to be uses of the DNS which are not so straightforward, and these will become more common as more people use the security properties of DNSSEC to put more interesting data in the DNS which isn't just related to IP connectivity.

  11. HTTPSEC data model
    signatures alongside data
    • This slide makes DNSSEC signing sound a lot more fiddly than it actually is. HTTP deals with sprawling directory trees whereas most DNS master files are quite small, e.g. 33Mbytes for a large-ish signed zone with 51k records. DNSSEC tools deal with zones as a whole and don't make the admin worry about individual signatures.

  12. HTTPSEC purists say "answers should always be static"
    • In practice DNSSEC tools fully support dynamic modification of zones, e.g. the .com zone is updated several times a minute. In many cases it is not particularly hard to add DNSSEC signing as a publication stage between an existing DNS management system and the public authoritative servers, and it often doesn't require any big changes to that system.

    • Static data is supported so that it is possible to have offline keys like the root KSK, but that does not prevent dynamic data. (For a funny example, see the DNSSEC reverse polish calculator.) In a system that requires dynamic signatures, static data and offline keys are not possible.

  13. Expiry times stop attackers replaying old signatures
    Frequent resigning is an administrative disaster
    • This is true for some of the early unforgiving DNSSEC tools, but there has been a lot of improvement in usability and reliability in the last couple of years.

  14. HTTPSEC suicide examples
    • These look like they are based on real DNSSEC cockups.

    • Many problems have come from the US government DNSSEC deployment requirement of 2008, so many .gov sites set it up using the early tools with insufficient expertise. It has not been a very good advertisement for the technology.

  15. Nonexistent data - how not to do it
  16. NSEC records for authenticated denial of existence
  17. NSEC allows zone enumeration
  18. DNS data is public
    an extreme notion and a public-relations problem
    • The other problem with NSEC is that it imposes a large overhead during the early years of DNSSEC deployment where TLDs mostly consist of insecure delegations. Every update requires fiddling with NSEC records and signatures even though this provides no security benefits. NSEC3 opt-out greatly reduces this problem.

  19. NSEC3 hashed denial of existence
  20. NSEC3 does not completely prevent attackers from enumerating a zone
    • This is true but in practice most sites that want to keep DNS data private use hidden zones that are only accessible on their internal networks.

    • Alternatively, if your name server does on-demand signing rather than using pre-generated signatures, you can use dynamic minimally covering NSEC records or empty NSEC3 records.

    • So there are ways to deal with the zone privacy problem if static NSEC3 isn't strong enough for you.

  21. DNS uses UDP
  22. DNS can be used for amplification attacks
    • A lot of other protocols have this problem too. Yes, DNSSEC makes it particularly bad. DNS software vendors are implementing response rate limiting which eliminates the amplification effect of most attacks. Dealing with spam and criminality is all rather ugly.

    • A better fix would be for network providers to implement ingress filtering (RFC 2827, BCP 38), but sadly this seems to be an impossible task, so higher-level protocols have to mitigate vulnerabilities in the network.
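To put a rough number on the amplification effect, here is the arithmetic with illustrative packet sizes (the sizes are assumptions for the sake of the example, not measurements):

```shell
# A small spoofed UDP query that elicits a large signed response
# multiplies the attacker's bandwidth at the victim.
query_bytes=64       # assumed: a small query with an EDNS0 OPT record
response_bytes=3200  # assumed: a large DNSSEC response within the EDNS0 buffer
echo "amplification factor: $((response_bytes / query_bytes))x"
# prints: amplification factor: 50x
```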

  23. DNSSEC provides no protection against denial of service attacks
    • This quote comes from RFC 4033 section 4. I think (though it isn't entirely clear) that what the authors had in mind was the fact that attackers can corrupt network traffic or stop it, and DNSSEC can do nothing to prevent this. See for example section 4.7 of RFC 4035 which discusses how resolvers might mitigate this kind of DoS attack. So the quote isn't really about reflection attacks.

    • (Other protocols have similar problems; for instance, TLS kills the whole connection when it encounters corruption, so it relies on the difficulty of breaking TCP connections which are not normally hardened with crypto - though see the TCP MD5 signature option in RFC 2385.)

  24. The worst part of HTTPSEC

    The data signed by HTTPSEC doesn’t actually include the web pages that the browser shows to the user.

    HTTPSEC signs only routing information: specifically, 30x HTTP redirects.
    • I can't easily understand this slide because the analogy between DNS and HTTP breaks down. According to the analogy, HTTP redirects are DNS referrals, and web pages are leaf RRsets. But DNSSEC does sign leaf RRsets, so the slide can't be talking about that.

    • Perhaps it is being more literal and it is talking about actual web pages. The DNSSEC answer is that you use records such as SSHFP or TLSA to link the DNSSEC authentication chain to your application protocol.

    • I asked DJB about this on Twitter, and he confirmed the latter interpretation. But he complained that the transition from signed DNSSEC data to an encrypted application channel destroys the advantages of signed DNSSEC data, because of the different security models behind signed data and encrypted channels.

    • But in this situation we are using DNSSEC as a PKI. The X.509 PKI is also based on statically signed data (the server certificate) which is used to authenticate a secure channel.

  25. redirect example
  26. redirect example
  27. redirect example
  28. If final web page is signed, what is the security benefit of signing the redirects? Attacker can’t forge the page.
    • The answer to this question is how you trust the signature on the web page. At the moment we rely on X.509 which is inadequate in a lot of ways. DNSSEC is a new PKI which avoids some of the structural problems in the X.509 PKI. This is the reason I think it is so important.

    • The X.509 PKI was designed to follow the structure of the X.500 directory. When it got re-used for SSL and S/MIME it became decoupled from its original name space. Because of this, any CA can authenticate any name, so every name is only as strong as the weakest CA.

    • DNSSEC follows the structure of the Internet's name space. Its signing authorities are the same as DNS zone authorities, and they can only affect their subdomains. A British .uk name cannot be harmed by the actions of the Libyan .ly authorities.

    • What other global PKIs are there? PGP? Bueller?

  29. Deployment is hard
    • If you look at countries like Sweden, the Czech Republic, the Netherlands, and Brazil, there is a lot more DNSSEC deployment than elsewhere. They have used financial incentives (domain registration discounts for signed domains) to make it more popular. Is DNSSEC worth this effort? See above.

    • It's amusing to consider the relative popularity of DNSSEC and PGP and compare their usage models.

  30. HTTPS is good
    • But it relies on the X.509 PKIX which is a disaster. Peter Gutmann wrote a series of articles with many examples of why: I, II, III; and the subsequent mailing list discussion is also worth reading.

  31. The following quotes are straw-man arguments for why an https-style security model isn't appropriate for DNSSEC.
  32. “HTTPS requires keys to be constantly online.”
    • So does DNSSEC in most setups. You can fairly easily make DNSSEC keys less exposed than a TLS private key, using a hidden master. This is a fairly normal non-sec DNS setup so it's nice that DNSSEC can continue to use this structure to get better security.

  33. “HTTPS requires servers to use per-query crypto.”
    • So does NSEC3.

  34. “HTTPS protects only the channel, not the data. It doesn’t provide end-to-end security.”
    Huh? What does this mean?
    • See the discussion in the introduction about the DNSSEC threat model and the next few notes.

  35. Why is the site owner putting his data on an untrusted server?
    • Redundancy, availability, diversity, scale. The DNS has always had third-party secondary authoritative name servers. HTTP also does so: content delivery networks. The difference is that with DNSSEC your outsourced authoritative servers can only harm you by ceasing to provide service: they cannot provide false service; HTTP content delivery networks can mess up your data as much as they like, before serving it "securely" to your users with a certificate bearing your name.

  36. “HTTPS destroys the caching layer. This Matters.”
    Yeah, sure it does. Film at 11: Internet Destroyed By HTTPS.
    • Isn't it nicer to get an answer in 3ms instead of 103ms?

    • Many networks do not provide direct access to DNS authoritative servers: you have to use their caches, and their caches do not provide anything like the web proxy HTTP CONNECT method - or at least they are not designed to provide anything like that. A similar facility in the DNS would have to be an underhanded crypto-blob-over-DNS tunnel hack: a sort of anarchist squatter's approach to protocol architecture.

    • To be fair, a lot of DNS middleboxes have crappy DNSSEC-oblivious implementations, and DNSSEC does not cope with them at all well. Any security upgrade to the DNS probably can't be done without upgrading everything.

  37. The DNS security mess
  38. referral example
  39. HTTPSEC was all a horrible dream analogy

So to conclude, what DJB calls the worst part of DNSSEC - that it secures the DNS and has flexible cryptographic coupling to other protocols - is actually its best part. It is a new global PKI, and with sufficient popularity it will be better than the old one.

I think it is sad that someone so prominent and widely respected is discouraging people from deploying and improving DNSSEC. It would be more constructive to (say) add his rather nice Curve25519 algorithm to DNSSEC.

If you enjoyed reading this article, you might also like to read Dan Kaminsky's review of DJB's talk at 27C3 just over two years ago.


Wednesday 30th January 2013

More Oxbridge college name comparisons

So I said you'd have to do this comparison yourselves, but I can't resist making another list. To go with Janet's list of Oxford college domain names, my list of Cambridge college domain names, and my list of colleges with similar names in the two universities, here is a list of colleges with similar domain names:

Corpus Christi College
Jesus College
Magdalen(e) College
Pembroke College
(The) Queen(s'/'s) College
St Cath(a/e)rine's College
St John's College
Trinity College
Wolfson College

A few other comparisons occur to me.

Cambridge seems to have more colleges whose names are commonly abbreviated relative to the college's usual name (ignoring their sometimes lengthy full formal names): Caius, Catz, Corpus, Emma, Fitz, Lucy Cav, Sidney / Catz, Corpus, the House, LMH, Univ. (Does "New" count when divorced from "College"?)

Cambridge is more godly than Oxford: Christ's, Corpus, Emmanuel, Jesus, Trinity College and Hall / Christ Church, Corpus, Jesus, Trinity. (Does St Cross count?)

Oxford is more saintly: Magdalene, Peterhouse, St Catharine's, St Edmund's, St John's, (plus Corpus, Jesus, King's, Queens', according to their full names) / Magdalen, St Anne’s, St Antony’s, St Benet’s, St Catherine’s, St Cross, St Edmund, St Hilda’s, St Hugh’s, St John’s, St Peter’s, St Stephen’s, (plus Lincoln, New, according to their full names).


Tuesday 29th January 2013

Cambridge college domain names

My friend Janet recently posted an article about the peculiarities of Oxford college domain names. Cambridge has a similar amount of semi-random hysterical raisin difference between its college domain names, so I thought I would enumerate them in a similar way to Janet.

Like Oxford, we have two kinds of colleges, but our kinds are different. We have colleges of the University and colleges in the Cambridge theological federation. The latter are not formally part of the University in the way that the other colleges are, but they are closely affiliated with the University Faculty of Divinity, and the University Computing Service is their ISP. There are also a number of non-collegiate institutions in the theological federation. I've included MBIT in the stats because it's interesting even though it's very borderline - a semi-college, let's say.

The Computing Service's governing syndicate allows colleges (and University departments) to have long domain names if their abbreviated version is considered by them to be too ugly. It's surprising how few of them have taken this up.

The following summary numbers count a few colleges more than once, if they have more than one domain name, or if they are a hall, etc.

University colleges: 31
Theological colleges: 4.5
Straightforward (by Janet's criteria): 15.5
Initials: 0.5
First three letters: 7
First four letters: 5
Middle three letters: 2
Some other abbreviation: 5
Long + short: 3
Halls with hall: 3
Halls without hall: 2

It is perhaps amusing to observe where the Oxbridge colleges with similar names differ in their domain labels in either place - see my next article. I have previously written about colleges with more-or-less similar names in Oxford and Cambridge, in which I list many of the less obvious formal expansions of the colleges' common names.

So, the Colleges of the University of Cambridge (and their domain names):

Christ's College
Churchill College
Clare College
Clare Hall
Corpus Christi College
Darwin College
Downing College
Emmanuel College
Fitzwilliam College
Girton College
Gonville & Caius College
Homerton College
Hughes Hall
Jesus College
King's College
Lucy Cavendish College
Magdalene College
Murray Edwards College (formerly known as New Hall)
Newnham College
Pembroke College
Queens' College
Robinson College
Selwyn College
Sidney Sussex College
St Catharine's College
St Edmund's College
St John's College
Trinity College
Trinity Hall
Wolfson College

And the theological colleges:

Margaret Beaufort Institute for Theology
Ridley Hall
Wesley House
Westcott House
Westminster College

Tuesday 4th December 2012

Distributed (micro-) blogging / how many markets does your protocol support?

I have been idly thinking about a distributed Twitter. The blogging technology we already have does a lot of what we want: your stream of updates ought to be just an Atom feed, and you can subscribe to and aggregate the Atom feeds of the people you follow. What does this lack compared to Twitter?

  • A nice user interface. Surely just a small matter of programming.
  • Quick updates. For this use pubsubhubbub.
  • Protected feeds. I'm going to ignore this problem and hope the crypto fairy comes and waves her magic wand to make it go away.
  • Notifications that someone you don't know has mentioned you or replied to you.

The last one is crucial, because open communication between strangers is a key feature of Twitter. But if you allow strangers to contact you then you are vulnerable to spam. A centralized system has the advantage of concentrating both information about how spammers behave and the expertise to analyse and counteract them. In a distributed system spam becomes everyone's problem, and gives everyone an awkward dilemma between preserving privacy and collecting data for spam fighting.

An alternative approach, since feeds are generally public, is to view this as a search problem. That is, you rely on a third party to collect together all the feeds it can, cull the spammers, and inform you of items of interest to you - mentions, replies, tags, etc. This is a slightly centralized system, but a search provider is not involved in communication between people who know each other, and search is open to competition in a way that most social networking services are not.

The system as a whole then has (roughly) three layers: clients that can update, collect, and display Atom feeds; servers that host Atom feeds; and search services that index the servers. All this tied together with pubsubhubbub and HTTP. In a successful system each of these layers should be a competitive market with multiple implementations and service providers.

This three tier structure is reminiscent of the web. But a lot of Internet applications have only a two tier structure. This led me to think about what kinds of systems have different numbers of markets.

Zero markets

These are proprietary systems entirely controlled by a single vendor. For instance, Skype, AOL (in the mid-1990s), many end-user applications. A lot of web applications fall into this category, where the client software is downloaded on demand to run in the browser - treating the web (servers and browsers) as a substrate for the application rather than a part of it.

One market

A proprietary system with a published API. Operating systems typically fall into this category, and programmable application software. A characteristic of web 2.0 is to provide APIs for web services.

Social networking typically falls somewhere between zero and one on this scale, depending on how complete and well supported their API is. Twitter was a one market system for years, but is now restricting its developers to less than that. Google's services are usually somewhere between zero and one, often closer to zero.

Two markets

An open system, with multiple implementations of the provider side of the interface as well as the consumer side. Many Internet applications are in this category: telnet, mail, usenet, IRC, the web, etc. etc.

Unix / POSIX is another good example. Perhaps some operating systems are slightly more open than a pure one-market system: NeXTSTEP has a clone, GNUstep, but it was never popular enough to become an open system and then Mac OS X raced away leaving it behind. Wine is playing permanent catch-up with Microsoft Windows.

Many programming languages are two-market systems: Ada, C, Fortran, Haskell, JavaScript, Pascal. Lua, Python, Ruby, and Tcl to some extent - they have reference implementations and clones rather than an independent open specification. Java has multiple implementations, even though the spec is proprietary. Some successful languages are still only one market systems, such as Perl and Visual Basic.

Three markets

The key feature here seems to be that a two market system doesn't provide enough connectivity; the third tier links the second tier providers together. This is not layering in the sense of the OSI model: for example, many two-tier Internet applications rely on the DNS for server-to-server connections, but this is a sublayer, not implemented as part of the application itself. In the web and in my distributed (micro-) blogging examples, the search tier operates as both client and server of the application protocol.

Linux can be viewed as a three-tier POSIX implementation. Linux's second tier is much more fragmented than traditional Unix; so Linux distributions provide a third tier that ties it together into a coherent system.

Perhaps the Internet itself is in this category. The IP stack in user PCs and application servers are consumers of Internet connectivity; Internet access providers and server hosting providers are the second tier; and the third tier is the backbone network providers. This is obviously a massive oversimplification - many ISPs can't be easily classified in this manner. (But the same is also true of two-market systems: for example, webmail providers act in both the client and server tiers of Internet mail.)


The DNS has an elaborate multi-tiered structure: stub resolvers, recursive resolvers, authoritative servers, registrars, registries, and the root. This is partly due to its hierarchical structure, but the registrar / registry split is somewhat artificial (though it is similar to manufacturer / dealer and franchising arrangements).

Though perhaps it can also be viewed as a three tier system. The user tier includes resolvers, DNS update clients, zone master file editors, and whois clients for querying registries. (Grumpy aside: unfortunately editing the DNS is usually done with proprietary web user interfaces and non-standard web APIs.) The middle tier comprises authoritative servers and registrars, not just because these services are often bundled, but also because you can't publish a zone without getting it registered. The third tier comprises the registries and the root zone, which provide an index of the authoritative servers. (The interface between the second and third tiers is EPP rather than DNS.)


I have found this classification an interesting exercise because a lot of the discussion about protocols that I have seen has been about client / server versus peer-to-peer, so I was interested to spot a three-tier pattern that seems to be quite successful. (I haven't actually mentioned peer-to-peer systems much in this article; they seem to have a similar classification except with one less tier.) This is almost in the opposite direction to the flattened anarchism of cypherpunk designs; even so it seems three-tier systems often give users a lot of control over how much they have to trust powerful third parties. Unfortunately the top tier is often a tough market to crack…


Monday 3rd December 2012


Last week I was inspired by a tweet about a chilli quesadilla, and since then I have cooked the following about three times for Rachel and me.

Filling (enough for four quesadillas):

  • An onion, cut in half and sliced to make strips;
  • Two sweet peppers, cut into strips
  • A couple of cloves of garlic
  • Two fresh chillis, deseeded and sliced
  • Diced chorizo (Aldi sell little packs, two of which is about right)
  • Olive oil
Heat the oil in a pan then bung the lot in and fry until the onion and pepper are cooked - stop before the onion and peppers get too soft. Since Rachel isn't such a fan of spicy food, I keep the chillis on one side and add them when I assemble my quesadillas.

To make a quesadilla, take a plain wheat flour tortilla wrap and fold in half and unfold to make a crease. Cover each half with a single layer of cheese slices (I like Emmental). Put the tortilla in a dry frying pan and put on a moderate heat. Put some filling on one half, and when the cheese has started to melt, fold over the other half and press so the cheese glues in the filling. Cook for a while until the underside is gently browned (less than a minute, I think) then flip over to brown the other side.



Friday 30th November 2012

Can't send mail from an Apple Mac via a BT Internet connection.

This morning we got a problem report from a user who couldn't send mail, because our message submission servers were rejecting their MUA's EHLO command because the host name was syntactically invalid. The symptoms were a lot like this complaint on the BT forums except our user was using Thunderbird. In particular the hostname had the same form, unknown-11:22:33:44:55:66.home, where the part in the middle looked like an embedded ethernet address including the colons that triggered the syntax error.

I wondered where this bogus hostname was coming from so I asked the user to look at their Mac's various hostname settings. I had a look in our logs to see if other people were having the same problem. Yes, a dozen or two, and all of them were using BT Internet, and all of the ethernet addresses in their hostnames were allocated to Apple.

A lot of our BT users have hostnames like unknown-11-22-33-44-55-66.home where the colons in the ethernet address have been replaced by hyphens. But it seems that some versions of the BT hub have broken hostname allocation that fails to use the correct syntax. You can change the settings on the hub to allocate a hostname with valid syntax but you have to do this separately for each computer that connects to your LAN. I believe (but I haven't checked) that the hub implements a fake .home TLD so that DNS "works" for computers on the LAN.

When a Mac OS computer connects to a network it likes to automatically adjust its hostname to match the network's idea of its name, so on a LAN managed by a broken BT hub it ends up with a bogus hostname containing colons. You can force the Mac to use a sensible hostname either globally or (as in those instructions) just in a particular location.

Most MUAs construct their EHLO commands using the computer's hostname. Most mail servers reject EHLO commands with invalid syntax (though underscores are often given a pass). So, if you try to send mail from a Mac on a BT connection then it is likely to fail in this way.
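The syntax check in question boils down to something like this sketch - a simplification of the hostname grammar (dot-separated labels of letters, digits, and hyphens), not the exact check any particular MTA implements:

```shell
# Rough hostname syntax check: dot-separated labels of letters, digits,
# and hyphens, with no label starting or ending in a hyphen. This is a
# simplified sketch, not any real MTA's parser.
valid_helo() {
    printf '%s\n' "$1" |
    grep -Eq '^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?(\.[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?)*$'
}

valid_helo 'unknown-11-22-33-44-55-66.home' && echo 'hyphens: accepted'
valid_helo 'unknown-11:22:33:44:55:66.home' || echo 'colons: rejected'
```

The hyphenated form that most BT hubs hand out passes; the colon form from the broken hubs fails, and the MUA's EHLO gets rejected.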

This is a stupid and depressing bug, and the workarounds available to the user are all rather horrible. It would be a waste of time to ask the affected users to apply their own workarounds and complain to BT - and it seems most of them are oblivious to the failure.

So I decided to tweak the configuration on our message submission servers to ignore the syntax of the client hostname. In Exim you can do this using helo_accept_junk_hosts = * preceded by a comment explaining why BT Hubs suck.
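A sketch of the relevant fragment of Exim main configuration (the comment wording is illustrative):

```
# The BT Home Hub hands out DHCP hostnames containing colons, which
# Macs then adopt, so their EHLO names fail syntax checks. Accept
# junk HELO/EHLO names from all clients rather than lose their mail.
helo_accept_junk_hosts = *
```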


Tuesday 25th September 2012

Large-scale IP-based virtual hosting

Yesterday there was a thread on the NANOG list about IPv6 addressing for web sites. There is a great opportunity here to switch back to having an IP address per site, like we did with IPv4 in the mid-1990s :-) Of course the web is a bit bigger now, and there is more large-scale hosting going on, but the idea of configuring a large CIDR block of addresses on a server is not unprecedented - except back then we were dealing with /18s rather than /64s.

Demon's Homepages service predated widespread browser support for name-based virtual hosting. In fact it provided one of the download sites for IE3, which was the first version that sent Host: headers. So Homepages was based on IPv4 virtual hosting and had IIRC 98,304 IP addresses allocated to it in three CIDR blocks. It was a single web server, in front of which were a few reverse proxy caches that took most of the load, and that also had all the IP addresses. Every cache would accept connections on all the IP addresses, and the load was spread between them by configuring which address ranges were routed to which cache.

The original version of Homepages ran on Irix, and used a cunning firewall configuration to accept connections to all the addresses without stupid numbers of virtual interfaces. Back then there were not many firewall packages that could do this, so when it moved to BSD (first NetBSD then FreeBSD) we used a "NETALIAS" kernel hack which allowed us to use ifconfig to bring up a CIDR block in one go.

Sadly I have never updated the NETALIAS code to support IPv6. But I wondered if any firewalls had caught up with Irix in the last 15 years. It turns out the answer is yes, and the key feature to look for is support for transparent proxying. On FreeBSD you want the ipfw fwd rule. On Linux you want the TPROXY feature. You can do a similar thing with OpenBSD pf, though it requires the server to use a special API to query the firewall, rather than just using getsockname().

On Demon's web servers we stuffed the IP address into the filesystem path name to find the document root, or used various evil hacks to map the IP address to a canonical virtual server host name before stuffing the latter in the path. mod_vhost_alias is very oriented around IPv4 addresses and host names, so it is probably not great for IPv6; mod_rewrite is a better choice if you want to break addresses up into a hierarchical layout. But perhaps it is not that ugly to run a name server which is authoritative for the reverse range used by the web server, and map the address to a hostname with UseCanonicalName DNS.
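As an illustration only (the path layout and rule details are hypothetical), a mod_rewrite sketch that splits an IPv4 server address into a directory hierarchy might look something like this:

```
# hypothetical: map connections to 10.1.2.3 onto /www/10/1/2/3/
RewriteEngine on
RewriteCond %{SERVER_ADDR} ^(\d+)\.(\d+)\.(\d+)\.(\d+)$
RewriteRule ^/(.*)$ /www/%1/%2/%3/%4/$1 [L]
```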

(Leave a comment)

Monday 24th September 2012

New TLDs and RFC 1535

There has been a lengthy discussion on the DNS operations list about new gTLDs with A, AAAA, or MX records at the zone apex, which allow dotless fully-qualified hostnames and mail domains, such as dk and va. This brings back the old spectre of RFC 1535 and unpredictable name resolution.

One particular example that I care about is local unqualified mail domains. We use Exim's widen_domains feature, which (in our configuration) tries appending our local domain suffixes to the mail domain in turn if DNS resolution fails. This means that local mail domains such as @careers, @farm, @foundation, @law, @wiki and so forth are newly vulnerable to breakage by changes elsewhere in the DNS. This is not really a new problem (there were amusing clashes with existing TLDs in the past) but the scale will become much larger.

So I'm wondering what to do. I could change widen_domains to work a bit more like qualify_single (except without depending on the behaviour of the resolver library), that is to try local names in preference to global ones when there are no dots in the mail domain. Or I could make a break with tradition and drop support for unqualified names - but I would have to wean a few dozen people (including myself) off the habit.


(7 comments | Leave a comment)

Friday 17th August 2012

More random errors

Following yesterday's chrooted /dev/random problem I thought I would try to improve BIND's error logging in this situation.

BIND has a function dst__openssl_toresult() which converts an OpenSSL error into an ISC_R_ error code. In the existing code it doesn't do much, just translating ERR_R_MALLOC_FAILURE into ISC_R_NOMEMORY. I added a case to convert ECDSA_R_RANDOM_NUMBER_GENERATION_FAILED into ISC_R_NOENTROPY. So now when I have a misconfigured /dev/random and I try to sign a zone with ECDSA, named logs "sign_apex:add_sigs: out of entropy" instead of "sign failure". This would probably have been enough to clue me in a lot faster yesterday.

After running with this patch for a while, I got a DNSSEC validation failure trying to resolve one particular domain. Weirdly, no-one else was seeing the same problem, so I tried backing out my patch and it worked again. How on earth could ECDSA error handling affect RSA validation?!

There is an ambiguity in the DNSSEC specs to do with canonicalization of signer names in RRSIG records. In the .co TLD, the signer name is upper case, for instance:

    co.  RRSIG  DNSKEY 8 1 518400 20120908022542 20120809021825 2044 CO. (...)
    CO.  RRSIG  DNSKEY 8 1 518400 20120908022542 20120809021825 33228 CO. (...)
When it encounters one of these signatures, named logs this:
    17-Aug-2012 18:42:48.028 general: info:
        sucessfully validated after lower casing signer 'CO'
This message is triggered when named tries to validate the record verbatim, fails, retries in lower case, and succeeds:
	if (ret == DST_R_VERIFYFAILURE && !downcase) {
		downcase = ISC_TRUE;
		goto again;
	}
The DST_R_VERIFYFAILURE code comes from BIND's RSA code, and is passed through dst__openssl_toresult() in case the failure was caused by memory allocation problems. The original error from OpenSSL is RSA_R_BAD_SIGNATURE, which dst__openssl_toresult() ought not to recognise so it should pass on the DST_R_VERIFYFAILURE code unchanged. But OpenSSL's reason codes are not unique:
    openssl/rsa.h:   #define RSA_R_BAD_SIGNATURE                     104
    openssl/ecdsa.h: #define ECDSA_R_RANDOM_NUMBER_GENERATION_FAILED 104
You also have to check the OpenSSL library code (ERR_LIB_ECDSA in this case) to disambiguate its error codes. Sheesh.

Because of this, BIND's validation code was getting ISC_R_NOENTROPY, so it did not do its lower case retry but simply failed to validate.

A little added joy for a postscript: Russian GOST signing also requires random numbers, so it would benefit from the same error logging fix as ECDSA. However OpenSSL's GOST module is dynamically loaded, so it doesn't have a static library code that you can easily check. As an alternative you can do a string comparison against its name, but this is a bit much for a lightweight error code conversion routine. Amusingly, the ECDSA module lacks a library name string (which is a bug) so you can't use similar error identification code for both GOST and ECDSA ...

I believe the next release, BIND 9.9.2, will include this error logging improvement for ECDSA, though not for GOST.

(Leave a comment)

Thursday 16th August 2012

FreeBSD device nodes and chroots

I run a toy nameserver on my FreeBSD workstation, mainly for experimenting with DNSSEC. I run it in a chroot, which I had set up in the traditional way using mknod. This appeared to work OK.

This week I have been trying out BIND 9.9.2b1, which introduces support for ECDSA signatures. One of its promising attributes is that an ECDSAP256SHA256 signature is about half the size of a 1024 bit RSASHA1 signature:

    3600 IN RRSIG SOA 5 3 3600 (
            20120915135936 20120816125936 35583
            /JknR85zQNZ2YOYJ4QL9HIy1uEuoirpa1fDfr8I= )
    3600 IN RRSIG SOA 13 3 3600 (
            20120915135936 20120816125936 36584
            GerFX/C+Pcqecx1rXsZLwWGDJvQ7q7ch8fryt+ImuA== )

When I first tried to sign a zone with ECDSA, named logged a rather uninformative pair of error messages:

    15-Aug-2012 19:56:31.969 general: error: 
                             update_sigs:add_sigs -> sign failure
    15-Aug-2012 19:56:31.970 general: error:
                             sign_apex:update_sigs -> sign failure

I was unable to reproduce this error with a much simpler name server configuration on another machine. I hacked around to trace where the error came from, and it turned out to be inside OpenSSL's ECDSA_do_sign() routine.

One of the notable differences between RSA and DSA is that DSA requires random numbers to create a signature. BIND requires random numbers for lots of other things at run time, including for hard-to-guess query IDs and source port numbers. So I had thought the /dev/random setup in my chroot was working. However, while debugging my ECDSA problem I noticed BIND logging an error at startup:

    Aug 15 20:07:56 black named[13450]: 
                    could not open entropy source /dev/random: unexpected error
    Aug 15 20:07:56 black named[13450]: 
                    using pre-chroot entropy source /dev/random

I then had to learn the correct way to set up device nodes in a chroot on FreeBSD:

	# $T is the chroot directory
	mount -t devfs devfs $T/dev
	# the default ruleset is immutable, so create a new one
	devfs -m $T/dev ruleset 1
	# only a small selection of devices should be visible
	devfs -m $T/dev rule add path random unhide
	devfs -m $T/dev rule add path urandom unhide
	# make it so
	devfs -m $T/dev rule applyset

Having fixed that, BIND cheerfully signed the zone with ECDSA. Evidently BIND is clever enough to recover from a badly set up chroot, but OpenSSL isn't, presumably because it does not try to open /dev/random until the last moment.

(5 comments | Leave a comment)

Thursday 2nd August 2012

Desirable properties of names and Zooko's Triangle

In Zooko's original essay he described the trade-off as "Decentralized, Secure, Human-Meaningful: Choose Two". The problem with this is that these properties are not independent: a central naming authority can rescind or re-assign a name, which is effectively an attack on the association between the name and its referent. So not decentralised implies not secure.

Paul "ciphergoth" Crowley talks about Zooko's Tetrahedron which is a four-way tradeoff between "Memorable, Decentralized, Specific, Transferable". In his essay he gives a weak meaning to "Specific", such that security requires decentralisation and specificity but is not implied by them. However it seems that the Tahoe-LAFS developers use "Specific" with the same meaning that others give to "Secure" - which leads to the same lack of independence as Zooko's version.

In Marc Stiegler's essay on petnames the trichotomy is described as "Memorable, Global, Securely Unique" which can each be chosen independently. This is the version I prefer, though I might use different terms for the concepts - perhaps "User-friendly, Transferable, Secure".

(Leave a comment)

Sunday 15th July 2012

Salads for someone who doesn't like salads

Rachel (@rmc28) doesn't like most salads: she doesn't like vinegary dressings, and she doesn't like peppery salad leaves. But recently we have been finding some salads that she does like.

Based on a suggestion from her mother Ruth, she has become quite fond of a combination of baby spinach, feta, walnuts, and cherry tomatoes - sometimes with romaine hearts and/or avocado.

On a summery day a few weeks ago I made a seasonal lunch with steamed new potatoes, tuna and sweetcorn mayo, and spinach leaves. I made a dressing from three teaspoons each of mustard and honey, juice of a lemon and a lime, about two or three times that amount of olive oil, and a bit of salt and pepper. (I might not have remembered the quantities correctly - adjust to your taste! - and I made far too much for two.) This turned out to be a great success.

Earlier this week I made a chunky salad of diced cucumber, celery, tomatoes, feta, and grated carrot, with the honey/mustard/lemon dressing. This went quite well with pork pies.

If you follow my link log you might have seen an article about oil in salad dressings which says it helps the body absorb fat-soluble nutrients. So it isn't just tasty, it's nutritious too!

(3 comments | Leave a comment)

Wednesday 27th June 2012

Pi time

Yesterday evening I finally did something with my Raspberry Pi.

I bought a few things to go with it: One is a "starter kit" which contains a holder for the RasPi and a breadboard and a few components. I thought this would be a good low-investment way to try out a bit of electronic fiddling. The other is a radio clock receiver. I'm not actually sure if it's an MSF or DCF77 receiver since the supplier seems a bit confused.

The clock receiver needed the ferrite aerial and some jumper leads soldered to the board. Ian Jackson kindly agreed to let me use his workshop and provided helpful advice. The last time I did any soldering was about 20 years ago, so I was quite pleased that I didn't make a complete hash of it. I took pictures of the bottom of the board and the top so you can judge for yourselves.

After returning home I got out the RasPi and plugged a few components into the GPIO pins. I then fired it up and started working out how to drive them. The path of least faff and greatest learning seemed to be to type in the example C driver code from the wiki. Before very long I turned on an LED. Woo!

After a bit more fiddling I got the clock receiver connected and I had a program that echoed its output signal to the LED. There's actually a tiny status LED on the receiver, so the two flash in unison. Unfortunately they aren't flashing a pulse-per-second signal: the receiver is only receiving noise, with maybe a PPS buried in there somewhere. Bums.

(2 comments | Leave a comment)

Tuesday 12th June 2012

The qmail ANY query bugs

The main interop problem with DNSSEC is that it makes large DNS packets a lot more common. This leads to problems with misconfigured firewalls, and with qmail.

When delivering a message, qmail makes an ANY query on the recipient domain. This is not, as some people have speculated, a "clever" attempt to get the MX or fallback A and AAAA records in one go - and in fact if any MTA tried to do that then it wouldn't avoid queries or save time. If a DNS cache already has any records for a domain, an ANY query won't make its resolver fetch the other types. So if there are A records but no MX records in an ANY response, the MTA cannot assume that it should use the fallback-to-A implicit MX logic. It has to make an MX query to verify if MX records exist, so trying the ANY query has not actually reduced the number of queries. The code ends up more complicated and slower than straightforwardly making MX+A+AAAA queries as RFC 5321 specifies.

So what are qmail's ANY queries for? There is exactly one point where it makes this query, which is when it is doing domain canonicalization of the envelope of outgoing messages. This is as specified by RFC 1123 section 5.2.2. However this requirement is obsolete and modern MTAs don't do it. You could fix qmail's ANY query bugs by just deleting the canonicalization code.

There are two bugs in the implementation which turn this unnecessary feature into an interoperability problem.

Originally qmail made a CNAME query in order to look up the canonical version of a domain, but this caused interop problems with BIND 4. This was replaced with an ANY query, which had fewer interop problems but is still wrong. Both of these queries are wrong because they don't trigger alias processing, so if there is a CNAME chain the response will not actually yield the canonical name. Because of this qmail has code that makes a series of queries to follow CNAME chains. If instead qmail made the correct query - an MX query (or an A query; it doesn't matter which) - the response would include all the CNAME RRs that qmail wants to know about, and it would not need its inefficient CNAME chain handling code.

The other problem is that qmail uses a small DNS packet buffer, and does not resize and retry if a response is truncated. ANY queries make it much more likely for truncated-response failures to happen. The simplest fix is just to change the buffer size from 512 to 65536 (which is the maximum DNS message size) and let the virtual memory system do lazy memory allocation. This one-line patch is enough to fix qmail's DNSSEC problems, but it doesn't fix its CNAME chain problem. (Edit: but see the comments for a patch that does fix it by disabling the canonicalization code.)

--- dns.c~      1998-06-15 11:53:16.000000000 +0100
+++ dns.c       2013-01-10 12:33:56.000000000 +0000
@@ -21,7 +21,7 @@
 static unsigned short getshort(c) unsigned char *c;
 { unsigned short u; u = c[0]; return (u << 8) + c[1]; }

-static union { HEADER hdr; unsigned char buf[PACKETSZ]; } response;
+static union { HEADER hdr; unsigned char buf[65536]; } response;
 static int responselen;
 static unsigned char *responseend;
 static unsigned char *responsepos;
(13 comments | Leave a comment)

Friday 8th June 2012

Rate limiting DNS

We have a problem that our authoritative name servers are being used in a traffic amplification attack, along the lines of this SANS ISC report, except we have been sustaining 1000 - 3000 queries per second for nearly a week.

We need a proper solution to DNS amplification attacks. It is not enough to focus on the surface features of the current attack (recursion desired, QTYPE=*) because they are easy for attackers to change. For instance, an ANY query for one of our domains returns 2KB of data, and an MX query (with the DNSSEC DO bit set) returns 1.5KB - not much less.

Some kind of per-client rate limit seems to be in order: legit recursive DNS servers should not be making high volumes of queries to our authoritative servers. Here is an outline of a rate limiting scheme thought up by Ian Jackson, Mark Wooding, and me. It uses integer arithmetic, requires a fixed amount of memory, and has a small per-query overhead.

Set up a Bloom filter, sized to give a suitable false positive rate. (There are about ten million recursive DNS servers on the Internet.) Each bucket contains an integer (rather than a bit) large enough to hold a reasonable per-interval packet count.

When a query arrives, hash its source network (/24 for IPv4, /64 for IPv6) with some random data which you obtained at startup. The hash function should be optimised for speed rather than strength. You need two independent hash values, h1 and h2. Increment the buckets in the Bloom filter with indexes h1 + i*h2 (mod the filter size) for 0 <= i < n. The client's rate is the minimum of the counts in these buckets. If this rate is too large, refuse the query.

Periodically, scan the Bloom filter and multiply each bucket by some number less than one. This ages out old data. (Or should it be a subtraction for leaky-bucket rate limiting?).

That's it. This has probably been invented before, but I haven't searched the literature yet. If anyone has any pointers I'd be grateful :-)

Edit: Paul Vixie and Vernon Schryver have produced a response rate limiting patch for BIND which deals with this problem rather sooner and more thoroughly :-)

(8 comments | Leave a comment)

Blaming the spam victim

A couple of weeks ago I read a blog post by Terry Zink titled "spammers ruining it for everyone" which annoyed me. If I understand it correctly, Terry is a senior anti-spam person at Microsoft FrontBridge, dealing with their hosted Exchange service - nothing to do with Hotmail.

What annoyed me about his article was that it was blaming the victim. The spam recipients' natural reaction - to block mail from the site that spammed them - was, according to Terry, wrong. It wasn't Microsoft's fault that they spammed these victims: it was an "incident" with one of their customers. Perfectly normal, rather difficult to deal with, but Microsoft are not spammers so it is completely unfair to blame them and cause all this difficulty for their other customers.

Now I hesitate to say the following, because the juxtaposition belittles problems that are much more serious than spam. But I would not be so aware of the victim-blaming pattern of argument if I had not paid attention to the bad consequences that happen when complaints are easily dismissed by "oh, it was just a bit of fun" (no, it was sexual assault) or "sorry, mate, I didn't see you" (whoops! vehicular homicide). The second stage of blaming the victim is "you should dress more modestly" or "you should have bright lights and high-viz clothes" or "you should have better spam filters". Never mind the fact that the person responsible should not have allowed the bad thing to happen in the first place.

I had a discussion about Terry's blog post with some friends after the pub this evening. One of us was arguing in support of Microsoft's position - and more generally: he seemed to say it is wrong to blame a group for the bad behaviour of its members. Instead everyone should assess each individual they deal with separately, regardless of the reputation of others in the same group. The rest of us argued that you should encourage good people to improve the behaviour of their groups and avoid bad ones. Of course this counter-argument only works when the people suffering collateral damage have enough agency to improve or move - and that is the case for Microsoft's email services.

When we argued that people in a position of responsibility need to police bad behaviour, he brought up the vexed question of censorship and universal service obligations. Really this kind of argument is just a distraction unless the so-called censor actually has a monopoly on communications. If there is a market of comms providers (as there is for email) and you want signal rather than noise then you have to moderate bad behaviour - and even if you are being too harsh in your assessment that noise is unwanted, you aren't censoring it by making it go elsewhere.

Being a service provider is a moral quicksand. Your aim is to do a good job for your customers, but this normal human imperative to be helpful is sorely tried when one of your customers turns out to be despicable - and not everyone can stand their ground. It is even harder if everyone around you acts as if bad behaviour is OK.

(7 comments | Leave a comment)

Thursday 7th June 2012

DNSSEC and indirection

What can DNSSEC do, apart from provide full employment for DNS nerds? Most of the work so far has been on securing the DNS itself, which is important but not particularly sexy. Looking back at a DNSSEC article I wrote a year ago, the main changes in the last year have been the very unsexy business of rolling out and improving DNSSEC support. A lot of this work is still some distance from end users - TLD and registrar support, better software tools. I recently registered a domain name with Gandi and had it up and running DNSSEC very quickly - but I had to host it on my own servers since Gandi's hosting services don't support DNSSEC. So there's still a long way to go. Better large-scale DNS management software such as AtomiaDNS should help with deployment at hosting providers.

Infrastructure is interesting to the extent that you can build interesting things on it. Perhaps the most important thing about DNSSEC is that it is a new, very cheap and scalable, public key infrastructure. The IETF DANE working group has been working on a specification for TLSA resource records which use the DNSSEC PKI to constrain the X.509 PKI in order to reduce the problems caused by insecure certification authorities. This spec is now in the last stages of the IETF process before being passed to the RFC editor for publication.

The next job is to tie this into the many application protocols that use TLS. For instance I have submitted a draft describing how to use DNSSEC to secure SMTP between MTAs, and Matt Miller has one on DNSSEC and XMPP. This ought to be straightforward, but there's a knotty question about how to deal with indirection records such as MX and SRV (and to a lesser extent CNAME and DNAME).

The present situation is that our security protocols ignore any indirection in the DNS, because it can't be trusted. So when you connect to a domain's XMPP servers, their certificates have to authenticate them as that domain rather than as the targets of the respective SRV records. This is rather inconvenient for hosting providers, which either need certificates that can authenticate huge numbers of names, or protocols or extensions that allow the client to tell the server which name it is expecting.

DNSSEC can change this. A signed SRV or MX record is a trustworthy delegation of responsibility from a domain to its hosting provider. When it finds a signed indirection, a client only needs to check the target server host name against the certificate it presents. Service providers only need certificates for their server host names, not for every customer domain. My draft for email and Matt Miller's for XMPP allow clients and servers to trust the DNS in this way.

Unfortunately getting from where we are to this shiny future is going to make correct configuration harder, along the lines of Google's cockup that I reported yesterday.

(Leave a comment)

DNSSEC lookaside validation stats

DNSSEC lookaside validation can provide a chain of trust to signed zones whose parents have not yet deployed DNSSEC. DLV validators usually use a registry maintained by the ISC, dlv.isc.org. The DLV spec uses NSEC records to identify empty spans of the registry zone, so a validator does not need to query the registry for names that fall in these spans. A side-effect of this is that you can use the NSEC records to get a list of all the entries in the registry.

  # note the hacky difference in trailing dots
  # to make starting and ending conditions different
  n=dlv.isc.org
  while [ $n != dlv.isc.org. ]
  do
    r=$(dig +short nsec $n)
    n=${r%% *}
    echo $n
  done
There are only 2664 entries in the DLV at the moment, so it is tiny compared to the number of properly delegated DNSSEC zones. The most popular TLDs in the DLV are:

 495 arpa
 436 com
 254 de
 253 org
 237 net
  64 eu
  61 uk
  57 info
  50 hu
  42 nl
  37 fr
  36 ro
  36 ch
  32 cz
  30 ru
  29 us
  24 jp
  23 br
  22 biz
  20 it
  20 co
  20 at
  19 be
  18 name
  18 au
(4 comments | Leave a comment)

Wednesday 6th June 2012

Security error in GMail's TLS setup

RFC 6186 specifies how to use SRV records to auto-configure an MUA given an email address. There is an error in the setup of GMail's RFC 6186 SRV records which means that a correctly implemented client will fail to connect, reporting a certificate mismatch / security failure.

The SRV records are:

    _submission._tcp.gmail.com. 84664 IN SRV  5 0 587 smtp.gmail.com.
    _imaps._tcp.gmail.com.      84628 IN SRV  5 0 993 imap.gmail.com.
    _pop3s._tcp.gmail.com.      86400 IN SRV 20 0 995 pop.gmail.com.

These servers all present certificates that only authenticate the corresponding SRV target host name - they do not contain any subjectAltName fields for gmail.com. They also do not support Server Name Indication to vary their certificate according to the name expected by the client. The gmail.com zone is not signed with DNSSEC.

RFC 6186 says that certificate verification follows the procedure set out in RFC 6125. The relevant part is RFC 6125 section 6.2.1. In particular,

Each reference identifier in the list SHOULD be based on the source domain and SHOULD NOT be based on a derived domain (e.g., a host name or domain name discovered through DNS resolution of the source domain).
That is, the only acceptable reference identifier for an RFC 6186 client that has been given a gmail.com address is gmail.com. This does not match the identifiers in any of the certificates.

This error is not a security vulnerability in itself, though it can lead to vulnerabilities when combined with human factors, such as:

  • Users of RFC 6186 clients might turn off certificate verification for gmail.com in order to make connections work.
  • Authors of RFC 6186 clients might copy the same mistake for interop purposes and write code that matches certificates against insecurely derived identifiers (SRV target hostnames in non-DNSSEC zones).

(4 comments | Leave a comment)

Wednesday 9th May 2012

Transparently auditable automatic vote counting

Electronic voting is impossible to implement in a secure manner, by which I mean there must be a way to audit the result after the fact in a way that is independent of the electronics. If you cannot perform an independent audit then hidden or compromised mechanisms in the electronics can lie without detection.

This audit requirement essentially means you need to make a paper record of each vote that is verified by the voter. That is, you need ballot papers.

Given you need ballot papers, you might as well keep the traditional method of voting (no voting machines) and only use automation for counting. Ideally the process of counting should be transparent, so that observers can see it proceeding correctly. It is not good enough for the ballot papers to disappear into a black box and a result pop out the other end.

It is possible to do this with mechanical collaters and counters - the kind of device that was used for data processing on punched cards. That way you can see the papers being split into a stack for each candidate, just as happens in the manual process. Observers can riffle through the stacks to verify correctness.

To count votes, why not weigh the stacks with precision scales?

So I wonder if you could make a ballot collating machine cheap enough and reliable enough that it is more cost-effective than manual counting. Could you perhaps use off-the-shelf printer/scanner/copier equipment?

(24 comments | Leave a comment)

Wednesday 2nd May 2012

A couple of interesting networking papers

I read a couple of interesting papers last night, both from last month's USENIX symposium on networked systems design and implementation.

The first (linked to by Wes Felter) is "Jellyfish: Networking Data Centers Randomly" which very thoroughly shows that a random topology outperforms a structured topology, and is easier to grow incrementally. The challenge is for higher layers to make effective use of a Jellyfish network; key technologies probably include OpenFlow at layer 3 and Multipath TCP at layer 4.

The second is "How Hard Can It Be? Designing and Implementing a Deployable Multipath TCP". This is about the pragmatics of MPTCP - previous papers have described the principles behind its load balancing and congestion control. A few aspects of the paper stood out. Firstly, they gathered a lot of data on the real-world behaviour of TCP-mangling middleboxes in order to work out what changes MPTCP could get away with, and what mechanisms it needed for falling back to traditional TCP. They very frequently found that port 80 was much more likely to be mangled than other ports. They also did a lot of work to mitigate the problems caused by buffer bloat (though they don't use that term) to the extent that an MPTCP connection over a combination of WiFi and 3G has lower latency than a traditional TCP connection over WiFi!

(By coincidence both of these papers have a vague Cambridge connection. The Jellyfish paper cites Béla Bollobás's results on random graphs. The Multipath TCP group at UCL included Damon Wischik who was at Cambridge when I was; his brother Lucian was a fellow ucam.chatterer.)

(Leave a comment)

Wednesday 18th April 2012

Staff seminar on version control systems

This morning Jon Warbrick and I gave a pair of talks to our colleagues about version control systems. Jon did an introduction and overview of the topic aimed at people who aren't particularly technical. I followed with descriptions of the particular version control systems we are using in the Computing Service, organized historically. You can see my slides and my notes.

(2 comments | Leave a comment)

Friday 13th April 2012

Political engagement

I have been perplexed recently by the way my political engagement responds to external stimuli.

Several months ago Rachel persuaded me to go to the Lib Dem spring conference, on the basis of her enjoyment of the autumn conference. I said OK, expecting I could treat it like a fan convention and spend the weekend shooting the shit with like-minded people over a few beers. Unfortunately it coincided with the peak of the argument over the NHS bill, probably this government's most important and controversial bit of legislation so far.

I ended up spending the conference utterly disengaged from any meaningful discussion, and not really understanding why. I care deeply about the NHS! I have strong opinions about it! But I felt unable to get involved.

So I have been surprised by my reaction to the CCDP government snooping proposal: much more engaged and active in the arguments against it. Why?

The most obvious thing is that I know more about the technicalities. I can join in wonkish arguments in a way I couldn't over the NHS.

The second thing is that I am surrounded by like-minded people who also want to defend civil liberties and kill CCDP. My MP understands and cares about these kinds of things and has influence over them, being on the home affairs select committee.

So yes, it's much easier for me to get involved and feel like I'm helping to improve the situation. But it doesn't explain the feeling of futility I had about the NHS.

I have been struck by the force of the moral arguments around government snooping. This is quite well illustrated by the Swedish wiretapping law. Their security services made the same argument as ours: "we have permission to invade some people's privacy within this limited technical context, so we should have the ability to invade everyone's privacy without restriction". Discussing the technicalities of what snooping is feasible or sensible is wrong because it implicitly acknowledges that it is morally acceptable to violate privacy and legal process in this way, though we'll only do it a little bit, honest.

The advantage of arguing from the basis of firm political principles came across to me most effectively from one of the older local activists, I think because she is the least like a cypherpunk of anyone I have encountered in this context.

This is all very jolly when you have a shared political framework, and everyone agrees it is reasonable to argue that an individual's rights to privacy and freedom from arbitrary forced search outweigh the MI5 security blanket fear of fear. But what if you don't share such a framework?

Looking back, it seems to me that the arguments over the NHS bill were wonking about the technicalities of how, or how much, or where to outsource health provision. A lot of discussion about improving outcomes, but there was no chance of credibly standing up to say, we should take a public-sector approach to fixing the weaknesses in the NHS.

So I think useful political engagement comes when you agree with the general consensus, and know about the details, and want to help fine-tune; or when you feel the consensus is about to head off-course but can be pushed back into line. That is the realm of mainstream politics.

But if you think the consensus is wrong (drug prohibition! market speculation! crisis austerity!) you have to campaign from the side-lines until the consensus shifts. Fine if some group exists to pursue such a campaign. However, political disengagement, or disaffection, or even extremism come when people disagree with the consensus at a fundamental level and there is no party up there arguing their point of view and representing their opinion.

(1 comment | Leave a comment)

Wednesday 4th April 2012

UK communications monitoring / advance notification to Ofcom

If you follow my link log (also dotaturls) you'll see several links recently about the Home Office's "communications capabilities development programme". This is the rebranding of the "interception modernisation programme" which started under the previous government. The key difference between them is replacing a centralized database of communications metadata with a distributed one, hosted by ISPs.

There are technical questions about what data can reasonably be obtained, especially when the media keep using Skype as an example of what the government want to track, though it isn't technically feasible for ISPs to do so. Much communication is mediated by American businesses (Google, Microsoft, Twitter, Facebook) over encrypted connections so access to those logs would require co-operation from the US. (I suppose that's what the "special relationship" is for.) Despite these technicalities, there are also oddities in the legal coverage, at least in the current law.

For example, the EU communications data retention directive requires public communications providers to retain logs for a year, in case law enforcement agencies want access to them. This does not apply to private services, such as businesses or universities. At Cambridge the central email services keep logs for four weeks. We occasionally (less than once a year) get a request to take a snapshot of a particular account which we retain on DVD in a safe until a warrant requires us to hand it over.

Heading off on a tangent, I was reading bits of legislation about obligations on non-public communications providers when I found section 33 of the communications act 2003:

33 Advance notification to OFCOM
  (1) A person shall not—
    (a) provide a designated electronic communications network,
    (b) provide a designated electronic communications service, or
    (c) make available a designated associated facility,
  unless, before beginning to provide it or to make it available, he has given a notification to OFCOM of his intention to provide that network or service, or to make that facility available.

The definition of electronic communications networks and services is so broad that it covers home networks and personal web sites, for which it would be insane to require advance notification, so I wondered what subset of these Ofcom had designated.

I gather that the overall shape of the Communications Act 2003 was in response to an EU directive that required a technology-neutral regime, hence consolidating Oftel and the Radio Authority into one organisation. However the Ofcom web site is still somewhat divided into technology silos, so there's a section for radio licensing and a section for telephony. Another effect of the 2003 act was to liberalise the telephony licensing regime, replacing it with a General Authorisation regime in which no licence is required though providers must follow the regulations. On the surface these so-called General Conditions are couched in technology-neutral terms, but in fact they are specific to telephony (e.g. condition 16 requires support for DTMF dialling). On the other hand, Condition 1 seems to apply to Internet peering within the EU.

Having failed to find anything on the Ofcom website about how section 33 of the act applies to internet services, I decided to make an FOI request asking them to explain what kinds of networks, services, and facilities are designated as requiring advance notification.

(Leave a comment)

Tuesday 27th March 2012


I've been shaving with a traditional double-edge safety razor for about 20 months now. It's good. There are a couple of reasons I switched.

The main one is the insultingly exploitative business model of the shaving gear manufacturers. Since the late 1960s they have been ratcheting up the complexity and cost of razors in order to extract more profit from their customers. It's a classic example of patent-driven innovation: they have to keep coming up with new gimmicks that they can monopolize, and use bulshytt to convince people to buy these "better" razors instead of choosing on the basis of objective value or quality. This process has been obviously ridiculous practically since the introduction of cartridge razors, and has long since passed the stage of blatant self-parody.

When Gillette introduced the Mach 3, I stayed with the Sensor; a few years later I decided to see if a Boots own-brand Contour clone was any good despite being a lot cheaper. It turned out that any difference in the razors was dwarfed by variance in my shaving technique, so I switched to the cheaper one. There are now other options in the "less eye-watering than Gillette" segment of the market, such as the King of Shaves Azor or the Dollar Shave Club, but they still buy into the Trac LXXVI bulshytt.

The secondary reason was that although Boots own-brand razors do the job, they are a bit crappy and ugly. I had a vague desire for something more elegant which involved chucking less plastic in the bin. Partly based on satisfied reports from Tom, I invested £40 in a new old-fashioned razor and some consumables.

The Edwin Jagger DE89L is a lovely object. It is made in Sheffield from chrome-plated brass, and has a nice heft: it weighs 76g which is more than four times as much as my old plastic razor. It has a wonderful economy of design (Occam would approve) with only three parts each of which serves multiple functions. The handle is threaded to screw the blade clamp together; the bottom of the clamp includes a safety guard; and the top of the clamp acts as a guide to the angle of the blade against the skin. It's just the right shape for safely shaving under my nose.

ETA: For blades I'm currently using Feather Japanese blades, mainly on the basis of hearsay and prejudice, er, I mean their reputation for quality and sharpness. I don't think it's possible to make a meaningful comparison without fitting different blades to identical razors and using them both during the same shave, repeatedly. (See above about variability of technique.) And I only have one DE razor at the moment.

I've also switched from using a shaving oil to a shaving cream and badger brush. Taylor of Old Bond St, Court Hairdressers are awfully posh but a 150ml bowl of their shaving cream costs less than £7 and you only need one or two ml for a shave. More fun to use and easier than oil to clean up afterwards.

So now my morning shaves are still inexpensive but much more luxurious. Very satisfying :-)

(9 comments | Leave a comment)

Tuesday 28th February 2012

Path names in a rootless DNS

Names in the DNS always appear as "fully qualified domain names" in queries, answers, and name server configurations, specifying the complete path from the root to the leaf. A surprisingly small change would be enough to make query names relative rather than absolute, and this change would have interesting and far-reaching consequences.

The first change (and the key) is to the resolution algorithm. When given a referral, instead of repeating the same question at the replacement name servers, trim off the leading labels of the query name, leaving everything up to and including the leftmost label of the delegation NS records.
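This modified referral step can be sketched as follows. The helper name and the label-list representation are my own illustration (leaf label first, so `www.dept.example` becomes `["www", "dept", "example"]`); the trimming rule is the one described above.

```python
def follow_referral(qname_labels, delegation_label):
    """On a referral, trim the query name so resolution restarts relative
    to the delegated zone: keep everything up to and including the
    leftmost label of the delegation NS records, and drop the labels the
    parent zones have already consumed.

    qname_labels: query name as a list of labels, leaf first
    delegation_label: the apex label named by the referral
    """
    # Scan from the right (nearest the old apex), since those labels are
    # the ones that have been resolved so far.
    for i in range(len(qname_labels) - 1, -1, -1):
        if qname_labels[i] == delegation_label:
            return qname_labels[: i + 1]
    raise ValueError("referral does not match any label in the query name")

# Resolving www.dept.example: a referral to the zone with apex label
# "dept" trims the query down to the relative name www.dept.
print(follow_referral(["www", "dept", "example"], "dept"))  # ['www', 'dept']
```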

Authoritative servers will have to distinguish zones by just their apex label, because that's all that is available in incoming queries. This means that, unlike at present, a nameserver will not be able to serve different zones that share the same apex label.

This modification means that names now trace paths in a graph, rather than being hierarchical addresses. The graph can be cyclic, for example, if zone A has a delegation to zone B which in turn has a delegation back to A, then names can have an arbitrarily long sequence of A.B.A.B.A cycles round the loop.
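A toy illustration of this cyclic property (hypothetical single-letter zone labels, just to show how walks round a delegation loop produce ever-longer names for the same zone):

```python
def path_name(labels):
    """Join a delegation path (leaf first) into a dotted name."""
    return ".".join(labels)

# Zone "a" delegates to "b" and vice versa: every extra trip round the
# loop is another valid, longer name reaching the same zone.
names = [path_name(["a", "b"] * k + ["a"]) for k in range(3)]
print(names)  # ['a', 'a.b.a', 'a.b.a.b.a']
```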

How does resolution start in this setting, when there is no root? You (or your ISP) would configure your recursive name server with one or more well-known starting zones, which would function rather like top-level domains.

The key difference between this arrangement and the root zone is that it allows diversity and openness. The decision about which zones are starting points for resolution is dispersed to name server vendors and operators (not concentrated in ICANN and the US DOC) and they need not all choose the same set. They can include extra starting zones that are popular with their users, or omit ones that they disapprove of.

Unlike the hierarchical DNS, you can still resolve names in a zone even if it isn't in your starting set. It will be normal for zones to have delegations from multiple parents, ensuring that everyone can reach a name by relying on redundant links instead of global consistency. So the berlin zone might be generally available as a starting point / TLD in Germany, but if you are in Britain you might have to refer to it by a longer path through one of your own starting zones.

Instead of a political beauty contest, to establish a new TLD you would probably start by obtaining the same label as a subdomain of many existing TLDs, to establish a brand and presence in your target markets. Then as your sales and marketing efforts make your zone more popular you can negotiate with ISPs and resolver vendors to promote your zone to a TLD instead. I expect this will force DNS registry business models to be more realistic.

Users may be able to augment their ISP's choice of TLDs by configuring extra search paths in their stub resolvers. However this is likely to lead to exciting RFC 1535 ambiguity problems.
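The RFC 1535 ambiguity can be seen by sketching how a stub resolver expands a relative name against a search list (the zone labels here are invented for illustration):

```python
def candidates(name, search_list):
    """Generate the fully expanded names a stub resolver would try,
    in order, for a relative name and a configured search list."""
    return [name + "." + suffix for suffix in search_list]

# If two entries in the combined ISP/user search list both contain a
# "berlin" zone, which one a query reaches depends only on search
# order -- the classic RFC 1535 ambiguity.
print(candidates("mail.berlin", ["de", "uk"]))
# ['mail.berlin.de', 'mail.berlin.uk']
```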

In some respects the relationship between vendors and rootless TLDs is a bit like the situation for X.509 certification authorities. ISPs will have to judge whether DNS registries are operating competently and ethically, instead of relying on ICANN to enforce their regulations.

Trust anchor management cannot rely on policies decided by a central authority, and it will need to cope with a greater failure rate due to the much larger and more diverse population of resolution starting points. Perhaps RFC 5011 automated DNSSEC trust anchor management would be sufficient. Alternatively it might be possible to make use of a zone's redundant delegations as witnesses to changes of key along the lines of a proposal I wrote up last year.

These thoughts are partly inspired by the Unmanaged Internet Architecture's user-relative personal names. And bang paths (in the opposite order) were used to refer to machines in the UUCP network. Some other background is Zooko's Triangle and Clay Shirky's essay on domain names. The PetName system described by Mark Miller is also interesting, and similar in some ways to UIA names.

The rootless DNS doesn't quite reach all the corners of Zooko's triangle. The names are as human-meaningful as a crowded namespace can allow. Names are only global to the extent that network effects promote zones as popular TLDs worldwide - but you can work around this by providing alternate names. Names are secure to the extent that you trust the intermediaries described by the path - and if that doesn't satisfy you, you can promote important names to be trust anchors in your setup.

(13 comments | Leave a comment)