

Tony Finch

Friday 5th February 2016

DNS DoS mitigation by patching BIND to support draft-ietf-dnsop-refuse-any

Last weekend one of our authoritative name servers (authdns1.csx.cam.ac.uk) suffered a series of DoS attacks which made it rather unhappy. Over the last week I have developed a patch for BIND to make it handle these attacks better.

The attack traffic

On authdns1 we provide off-site secondary name service to a number of other universities and academic institutions; the attack targeted imperial.ac.uk.

For years we have had a number of defence mechanisms on our DNS servers. The main one is response rate limiting, which is designed to reduce the damage done by DNS reflection / amplification attacks.

However, our recent attacks were different. As in most reflection / amplification attacks, we were getting a lot of QTYPE=ANY queries; but unlike in those attacks the queries were not spoofed, and were coming to us from a lot of recursive DNS servers. (A large part of the volume came from Google Public DNS; I suspect that is just because of their size and popularity.)

My guess is that it was a reflection / amplification attack, but we were not being used as the amplifier; instead, a lot of open resolvers were being used to amplify, and they in turn were making queries upstream to us. (Consumer routers are often open resolvers, but usually forward to their ISP's resolvers or to public resolvers such as Google's, and those query us in turn.)

What made it worse

Because from our point of view the queries were coming from real resolvers, RRL was completely ineffective. But some other configuration settings made the attacks cause more damage than they might otherwise have done.

I have configured our authoritative servers to avoid sending large UDP packets which get fragmented at the IP layer. IP fragments often get dropped and this can cause problems with DNS resolution. So I have set

    max-udp-size 1420;
    minimal-responses yes;

The first setting limits the size of outgoing UDP responses to an MTU which is very likely to work (the ethernet MTU minus some slop for tunnels). The second setting reduces the amount of information that the server tries to put in the packet, so that responses are less likely to be truncated by the lower UDP size limit, and clients do not have to retry over TCP.

This works OK for normal queries; for instance a cam.ac.uk IN MX query gets a svelte 216 byte response from our authoritative servers but a chubby 2047 byte response from our recursive servers which do not have these settings.

But ANY queries blow straight past the UDP size limit: the attack queries for imperial.ac.uk IN ANY got obese 3930 byte responses.

The effect was that the recursive clients retried their queries over TCP, and consumed the server's entire TCP connection quota. (Sadly BIND's TCP handling is not up to the standard of good web servers, so it's quite easy to nadger it in this way.)


We might have coped a lot better if we could have served all the attack traffic over UDP. Fortunately there was some pertinent discussion in the IETF DNSOP working group in March last year which resulted in draft-ietf-dnsop-refuse-any, "providing minimal-sized responses to DNS queries with QTYPE=ANY".

This document was instigated by Cloudflare, who have a DNS server architecture which makes it unusually difficult to produce traditional comprehensive responses to ANY queries. Their approach is instead to send just one synthetic record in response, like

    cloudflare.net.  HINFO  ( "Please stop asking for ANY"
                              "See draft-jabley-dnsop-refuse-any" )

In the discussion, Evan Hunt (one of the BIND developers) suggested an alternative approach suitable for traditional name servers. They can reply to an ANY query by picking one arbitrary RRset to put in the answer, instead of all of the RRsets they have to hand.

The draft says you can use either of these approaches. They both allow an authoritative server to make the recursive server go away happy that it got an answer, without breaking odd applications like qmail that foolishly rely on ANY queries.
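Evan Hunt's pick-one-RRset idea is simple enough to sketch in a few lines. This is a toy model, not BIND's implementation; the zone data and function names are made up for illustration.

```python
def answer(rrsets, qtype):
    """rrsets: dict mapping RR type -> list of rdata strings."""
    if qtype != "ANY":
        return {qtype: rrsets.get(qtype, [])}
    if not rrsets:
        return {}
    # For ANY, answer with just one arbitrary RRset instead of all of
    # them, so the response stays small enough for a UDP packet.
    rtype = min(rrsets)  # "arbitrary" here means first in sort order
    return {rtype: rrsets[rtype]}

# illustrative zone data, not real records
zone = {"A": ["192.0.2.1"], "MX": ["0 mx.example."], "TXT": ["hello"]}
print(answer(zone, "MX"))   # the requested RRset, as usual
print(answer(zone, "ANY"))  # a single arbitrary RRset, not all three
```

The point is that the ANY answer is still a genuine answer from the zone, so resolvers cache something real; it is just much smaller than the traditional comprehensive response.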

I did a few small experiments at the time to demonstrate that it really would work OK in the real world (unlike some of the earlier proposals) and they are both pretty neat solutions (unlike some of the earlier proposals).

Attack mitigation

So draft-ietf-dnsop-refuse-any is an excellent way to reduce the damage caused by the attacks, since it allows us to return small UDP responses which reduce the downstream amplification and avoid pushing the intermediate recursive servers on to TCP. But BIND did not have this feature.

I did a very quick hack on Tuesday to strip down ANY responses, and I deployed it to our authoritative DNS servers on Wednesday morning for swift mitigation. But it was immediately clear that I had put my patch in completely the wrong part of BIND, so it would need substantial re-working before it could be more widely useful.

I managed to get back to the patch on Thursday. The right place to put the logic was in the fearsome query_find() which is the top-level query handling function and nearly 2400 lines long! I finished the first draft of the revised patch that afternoon (using none of the code I wrote on Tuesday), and I spent Friday afternoon debugging and improving it.

The result is this patch which adds a minimal-qtype-any option. I'm currently running it on my toy nameserver, and I plan to deploy it to our production servers next week to replace the rough hack.

I have also submitted it to the ISC; hopefully something like it will be included in a future version of BIND.


Monday 25th January 2016

A rant about whois

I have been fiddling around with FreeBSD's whois client. Since I have become responsible for Cambridge's Internet registrations, it's helpful to have a whois client which isn't annoying.

Sadly, whois is an unspeakably crappy protocol. In fact it's barely even a protocol, more like a set of vague suggestions. Ugh.

The first problem...

... is to work out which server to send your whois query to. There are a number of techniques, most of which are necessary and none of which are sufficient.

  1. Rely on a knowledgable user to specify the server.

    Happily we can do better than just this, but the feature has to be available for special queries.

  2. Have a built-in curated mapping from query patterns to servers.

    This is the approach used by Debian's whois client. Sadly in the era of vast numbers of new gTLDs, this requires software updates a couple of times a week.

  3. Use whois-servers.net which publishes the mapping in the DNS.

    This is a brilliant service, particularly good for the wild and wacky two-letter country-class TLDs. Unfortunately it has also failed to keep up with the new gTLDs, even though it only needs a small amount of extra automation to do so.

  4. Try whois.nic.TLD which is the standard required for new gTLDs.

    In practice a combination of (3) and (4) is extremely effective for domain name whois lookups.

  5. Follow referrals from a server with broad but shallow data to one with narrower and deeper data.

    Referrals are necessary for domain queries in "thin" registries, in which the TLD's registry does not contain all the details about registrants (domain owners), but instead refers queries to the registrar (i.e. reseller).

    They are also necessary for IP address lookups, for which ARIN's database contains registrations in North America, plus referrals to the other regional Internet registries for IP address registrations in other parts of the world.

Back in May I added (4) to FreeBSD's whois to fix its support for new gTLDs, and I added a bit more of (2).

One motivation for the latter was for looking up ac.uk domains: (5) doesn't work because Nominet's .uk whois server doesn't provide referrals to JANET's whois server; and (3) is a bit awkward, because although there is an entry for ac.uk.whois-servers.net you have to have some idea of when it makes sense to try DNS queries for 2LDs. (whois-servers.net would be easier to use if it had a wildcard for each entry.)

The other motivation for extending the curated server list was to teach it about more NIC handle formats, such as -RIPE and -NICAT handles; and the same mechanism is useful for special-case domains.

Last week I added support for AS numbers, moving them from (1) to (2). After doing that I continued to fiddle around, and soon realised that it is possible to dispense with (4) and (3) and a large chunk of (2), by relying more on (5). The IANA whois server knows about most things you might look up with whois - domain names, IP addresses, AS numbers - and can refer you to the right server.

This allowed me to throw away a lot of query syntax analysis and trial-and-error DNS lookups. Very satisfying.
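The IANA-referral trick is easy to sketch. This is a rough model of the idea, not FreeBSD's C code: a plain RFC 3912 whois exchange, plus a scan for IANA's "refer:" line. The demo call at the bottom is commented out because it needs network access.

```python
import socket

def parse_refer(response):
    """Pull the 'refer:' line out of a whois.iana.org response."""
    for line in response.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "refer":
            return value.strip()
    return None

def iana_lookup(query, server="whois.iana.org"):
    # Plain whois: connect to port 43, send the query, read until EOF.
    with socket.create_connection((server, 43), timeout=10) as s:
        s.sendall(query.encode() + b"\r\n")
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

# Typical use (needs network access):
#   referral = parse_refer(iana_lookup("192.0.2.1"))
```

One IANA query replaces the curated mapping, the whois-servers.net lookups, and most of the query syntax analysis, because IANA can point at the right server for domains, addresses, and AS numbers alike.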

(I'm not sure if this excellently comprehensive data is a new feature of IANA's whois server, or if I just failed to notice it before...)

The second problem...

... is that the output from whois servers is only vaguely machine-readable.

For example, FreeBSD's whois now knows about 4 different referral formats, two of which occur with varying spacing and casing from different servers. (I've removed support for one amazingly ugly and happily obsolete referral format.)

My code just looks for a match for any referral format without trying to be knowledgable about which servers use which syntax.
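A single forgiving regex can cover several referral formats at once. The formats below are illustrative examples of the kind of thing servers emit (ARIN-style ReferralServer URLs, Verisign-style "Whois Server:", IANA-style "refer:"), not the exact set FreeBSD's whois matches.

```python
import re

# Match a handful of referral styles, tolerating varied spacing and case.
REFERRAL = re.compile(
    r"(?:Whois Server|ReferralServer|refer|whois)"  # key variants
    r"\s*:\s*"
    r"(?:r?whois://)?"                              # optional URL scheme
    r"(?P<host>[a-zA-Z0-9.-]+)",
    re.IGNORECASE)

def find_referral(response):
    m = REFERRAL.search(response)
    return m.group("host") if m else None

print(find_referral("ReferralServer: whois://whois.ripe.net"))
# whois.ripe.net
```

Matching any format anywhere in the output avoids having to know which server speaks which dialect, at the cost of occasionally being fooled by rubric that happens to look like a referral.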

The output from whois is basically a set of key: value pairs, but often these will belong to multiple separate objects (such as a domain name or a person or a net block); servers differ about whether blank lines separate objects or are just for pretty-printing a single object. I'm not sure if there's anything that can be done about this without huge amounts of tedious work.
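For servers that do use blank lines as object separators (RIPE-style output, roughly), parsing is straightforward; the sketch below assumes that layout, which is exactly the assumption that fails on other servers.

```python
def parse_objects(text):
    """Split key: value output into blank-line-separated objects."""
    objects, current = [], {}
    for line in text.splitlines():
        if not line.strip():
            if current:           # blank line ends the current object
                objects.append(current)
                current = {}
            continue
        key, sep, value = line.partition(":")
        if sep:                   # keys can repeat within one object
            current.setdefault(key.strip(), []).append(value.strip())
    if current:
        objects.append(current)
    return objects

sample = """\
domain:   example.org
status:   active

person:   Jane Doe
address:  Cambridge
"""
print(parse_objects(sample))
```

Run against a server that uses blank lines merely for pretty-printing, the same code happily splits one object into several, which is the heart of the problem.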

And servers often emit a lot of rubric such as terms and conditions or hints and tips, which might or might not have comment markers. FreeBSD's whois has a small amount of rudimentary rubric-trimming code which works in a lot of the most annoying cases.

The third problem...

... is that the syntax of whois queries is enormously variable. What is worse, some servers require some non-standard complication to get useful output.

If you query Verisign for microsoft.com the server does fuzzy matching and returns a list of dozens of spammy name server names. To get a useful answer you need to ask for domain microsoft.com.

ARIN also returns an unhelpfully terse list if a query matches multiple objects, e.g. a net block and its first subnet. To make it return full details for all matches (like RIPE's whois server) you need to prefix the query with a +.

And for .dk the verbosity option is --show-handles.

The best one is DENIC, which requires a different query syntax depending on whether the domain name is a non-ASCII internationalized domain name, or a plain ASCII domain (which might be a punycode-encoded internationalized domain name). Good grief, can't it just give a useful answer without hand-holding?


That's quite a lot of bullshit for a small program to cope with, and it's really only scratching the surface. Debian's whois implementation has attacked this mess with a lot more sustained diligence, but I still prefer FreeBSD's because of its better support for new gTLDs.


Friday 22nd January 2016

Outside Winter Insulation 7-layer clothing model

After being ejected from the pub this evening we had a brief discussion about the correspondence between winter clothing and the ISO Open Systems Interconnection 7 layer model. This is entirely normal.

My coat is OK except it is a bit too large, so even if I bung up the neck with a woolly scarf, it isn't snug enough around the hips to keep the chilly wind out. So I had multiple layers on underneath. My Scottish friend was wearing a t-shirt and a leather jacket – and actually did up the jacket; I am not sure whether that was because of the cold or just for politeness.

So the ISO OSI 7 layer model is:

  1. physical
  2. link
  3. network
  4. transport
  5. presentation and session or
  6. maybe session and presentation
  7. application

The Internet model is:

  1. wires or glass or microwaves or avian carriers
  2. ethernet or wifi or mpls or vpn or mille feuille or
    how the fuck do we make IP work over this shit
  3. IP
  4. TCP and stuff
  5. dunno
  6. what
  7. useful things and pictures of cats

And, no, I wasn't actually wearing OSI depth of clothing; it was more like:

  1. skin
  2. t-shirt
  3. rugby shirt
  4. fleece
    You know, OK for outside "transport" most of the year.
  5. ...
  6. ...    Coat. It has layers but I don't care.
  7. ...

Saturday 2nd January 2016

Hammerspoon hooks for better screen lock security on Mac OS X

A couple of months ago we had a brief discussion on an ops list at work about security of personal ssh private keys.

I've been using ssh-agent since I was working at Demon in the late 1990s. I prefer to lock my screen rather than logging out, to maintain context from day to day - mainly my emacs buffers and window layout. So to be secure I need to delete the keys from the ssh-agent when the screen is locked.

Since 1999 I have been using Xautolock and Xlock with a little script which automatically runs ssh-add -D to delete my keys from the ssh-agent shortly after the screen is locked. This setup works well and hasn't needed any changes since 2002.

But when I'm at home I'm using a Mac, and I have not had similar automation. Having to type ssh-add -D manually is very tiresome and because I am lazy I don't do it as often as I should.

At the beginning of December I found out about Hammerspoon, which is a Mac OS X application that provides lots of hooks into the operating system that you can script with Lua. The Hammerspoon getting started guide provides a good intro to the kinds of things you can do with it. (Hammerspoon is a fork of an earlier program called Mjolnir, ho ho.)

It wasn't until earlier this week that I had a brainwave when I realised that I might be able to use Hammerspoon to automatically run ssh-add -D at appropriate times for me. The key is the hs.caffeinate.watcher module, with which you can get a Lua function to run when sleep or wake events occur.

This is almost what I want: I configure my Macs to send the displays to sleep on a short timer and lock them soon after, and I have a hot corner to force them to sleep immediately. Very similar to my X11 screen lock setup if I use Hammerspoon to invoke an ssh-add -D script.

But there are other events that should also cause ssh keys to be deleted from the agent: switching users, switching to the login screen, and (for wasteful people) screen-saver activation. So I had a go at hacking Hammerspoon to add the functionality I wanted, and ended up with a pull request that adds several extra event types to hs.caffeinate.watcher.

At the end of this article is an example Hammerspoon script which uses these new event types. It should work with unpatched Hammerspoon as well, though only the sleep events will have any effect.

There are a few caveats.

I wanted to hook an event corresponding to when the screen is locked after the screensaver starts or the screen sleeps, with the delay that the user specified in the System Preferences. The NSDistributedNotificationCenter event "com.apple.screenIsLocked" sounds like it ought to be what I want, but it actually gets triggered as soon as the screensaver starts or the screen sleeps, without any delay. So a more polished hook script will have to re-implement that feature in a similar way to my X11 screen lock script.

I wondered if locking the screen corresponds to locking the keychain under the hood. I experimented with SecKeychainAddCallback but it was not triggered when the screen locks :-( So I think it might make sense to invoke security lock-keychain as well as ssh-add -D.

And I don't yet have a sensible user interface for re-adding the keys when the screen is unlocked. I can get Terminal.app to run ssh -t my-workstation ssh-add but that is really quite ugly :-)

Anyway, here is a script which you can invoke from ~/.hammerspoon/init.lua using dofile(). You probably also want to run hs.logger.defaultLogLevel = 'info'.

    -- hammerspoon script
    -- run key security scripts when the screen is locked and unlocked

    -- NOTE this requires a patched Hammerspoon for the
    -- session, screen saver, and screen lock hooks.

    local pow = hs.caffeinate.watcher

    local log = hs.logger.new("ssh-lock")

    local function ok2str(ok)
        if ok then return "ok" else return "fail" end
    end

    local function on_pow(event)
        local name = "?"
        for key,val in pairs(pow) do
            if event == val then name = key end
        end
        log.f("caffeinate event %d => %s", event, name)
        if event == pow.screensDidWake
        or event == pow.sessionDidBecomeActive
        or event == pow.screensaverDidStop
        then
            local ok, st, n = os.execute("${HOME}/bin/unlock_keys")
            log.f("unlock_keys => %s %s %d", ok2str(ok), st, n)
        end
        if event == pow.screensDidSleep
        or event == pow.systemWillSleep
        or event == pow.systemWillPowerOff
        or event == pow.sessionDidResignActive
        or event == pow.screensDidLock
        then
            local ok, st, n = os.execute("${HOME}/bin/lock_keys")
            log.f("lock_keys => %s %s %d", ok2str(ok), st, n)
        end
    end

    pow.new(on_pow):start()



Friday 1st January 2016

SFO / San Francisco / Such a Fucking idiOt

On a flight to San Francisco in 2000 or 2001 I had a chat with a British woman sitting next to me. She was a similar age to me, 20-something, also aiming to try her hand at the Silly Valley thing, but, you know, MONTHS behind me. She asked me if I knew of a Days Inn or something like that. There was (is) one in the Tenderloin district literally round the corner from my flat, but that part of the city was such a shithole I was embarrassed to say I lived there. And then (I kid you not) I managed to leave my box of tea behind on the plane, and I lost touch with my seat-mate trying to retrieve it. What a chump.

Another time, I was on a taxi from Cambridge to Heathrow (ridiculous wasteful expense) when I realised my passport had fallen out of my back pocket while I was sitting on my heels during a party. I missed my flight, had to go back to Cambridge to recover my passport, and my employer's travel agent put me on another flight the next day. I felt like a fool; I'm amazed my employer handled that so reasonably (not to mention the other ways I took advantage of them).

My sojourn in San Francisco was not a success. I was amazingly lucky to catch the tail end of the dot-com boom in 2000 but I burned out badly less than a year later. I was stupid in so many ways.

I failed because I overestimated my own capabilities, and I underestimated the importance of my friends. It's enormously difficult to establish a social network in a new place from scratch. I was lucky working for Demon Internet in London (1997-2000) and for Covalent in San Francisco (2000-2001) that in both cases my colleagues were a social bunch. But I was often back to Cambridge for parties with my mates, and there's a big difference between 60 miles and 6000 miles.

And, honestly, I was too arrogant to ask my colleagues for feedback and support. (I'm still crap at that.)


I recovered from the breakdown. Though it took a long time, I moved back closer to my friends, spent my savings writing code for fun, and in the end got a job which has kept body and soul together (and better) for 13 years.

My failure was painful and difficult, but I learned valuable lessons about myself, and it WAS NOT (in the end) a disaster.

This year has brought that time back to me in interesting ways.

A friend of ours went back to work in South America, in a place she knew and loved, in a job that was made for her. But the place had changed - the old friends were no longer there - and the job wasn't as happy as expected. She was back here much sooner than planned. But our mutual friends told her about my crash and burn and recovery, and this helped her recover.

Another friend did the dot-com thing with a much greater success than me: it took him a lot more than a year to burn out. His was a more controlled flight into terrain than mine, but similarly abrupt. However he already knew about my past, and he says he also took strength from my story.

It is enormously touching to know that my friends have seen my failure, seen that it wasn't a catastrophe, and that helped them to get back on their feet.

So, happy new year, and know that if things don't work out as you hoped, if you fucked it up, it isn't the end of the world. Keep talking to the people you love and keep doing your thing.


Thursday 3rd December 2015


I have just pushed a new release of unifdef.

This version has improvements to the expression evaluator and to in-place source file modifications.

I have also made some attempts at improving win32 support, but my ability to test that is severely limited.

Tuesday 17th November 2015

C preprocessor expressions

With reference to the C standard, describe the syntax of controlling expressions in C preprocessor conditional directives.

I've started work on a "C preprocessor partial evaluator". My aim is to re-do unifdef with better infrastructure than 1980s line-at-a-time stdio, so that it's easier to implement a more complete C preprocessor. The downside, of course, is that the infrastructure (Lua and Lpeg) is more than ten times bigger than the whole of unifdef.

The main feature I want is macro expansion; the second feature I want is #if expression simplification. The latter leads to the question above: exactly what is allowed in a C preprocessor conditional directive controlling expression? This turns out to be more tricky than I expected.

What actually triggered the question was that I "know" that sizeof doesn't work in preprocessor expressions because "obviously" the preprocessor doesn't know about details of the target architecture, but I couldn't find where in the standard it says so.

My reference is ISO JTC1 SC22 WG14 document n1570 which is very close to the final committee draft of ISO 9899:2011, the C11 standard.

Preprocessor expressions are specified in section 6.10.1 "Conditional inclusion". Paragraph 1 says:

The expression that controls conditional inclusion shall be an integer constant expression except that: identifiers (including those lexically identical to keywords) are interpreted as described below;166) and it may contain [defined] unary operator expressions [...]

166) Because the controlling constant expression is evaluated during translation phase 4, all identifiers either are or are not macro names — there simply are no keywords, enumeration constants, etc.

The crucial part that I missed is the parenthetical "including [identifiers] lexically identical to keywords" - this applies to the sizeof keyword, as footnote 166 obliquely explains.

... A brief digression on "translation phases". These are specified in section 5.1.1.2, which lists 8 phases. Now, if you have done an undergraduate course in compilers or read the dragon book, you might expect this list to include things like lexing, parsing, symbol tables, something about translation to and optimization of object code, and something about linking separately compiled units. And it does, sort of. But whereas compilers are heavily weighted towards the middle and back end, C standard translation phases focus on lexical and preprocessor matters, to a ridiculous extent. I find this imbalance quite funny (in a rather dry way) - after such a lengthy and detailed build-up, the last two items in the list are almost, "and then a miracle occurs", especially the last sentence in phase 7 which is a bit LOL WTF.

6. Adjacent string literal tokens are concatenated.

7. White-space characters separating tokens are no longer significant. Each preprocessing token is converted into a token. The resulting tokens are syntactically and semantically analyzed and translated as a translation unit.

8. All external object and function references are resolved. Library components are linked to satisfy external references to functions and objects not defined in the current translation. All such translator output is collected into a program image which contains information needed for execution in its execution environment.

So, anyway, what footnote 166 is saying is that in the preprocessor there is no such thing as a keyword - the preprocessor has a sketchy lexer that produces "preprocessing-token"s (as specified in section 6.4) which are a simplified subset of the compiler's "token"s which mostly don't turn up until translation phase 7.

Paragraph 1 said identifiers are interpreted as described below, which refers to this sentence in section 6.10.1 paragraph 4:

After all replacements due to macro expansion and the defined unary operator have been performed, all remaining identifiers (including those lexically identical to keywords) are replaced with the pp-number 0, and then each preprocessing token is converted into a token. The resulting tokens compose the controlling constant expression which is evaluated according to the rules of 6.6.

This means that if you try to use a keyword (such as sizeof) in a preprocessor expression, it gets replaced by zero and (usually) turns into a syntax error. And this is why compilers produce less-than-straightforward error messages like error: missing binary operator before token "(" if you try to use sizeof.

Smash-keyword-to-zero has another big implication which is a bit more subtle. Section 6.6 specifies constant expressions, and paragraphs 3 and 6 are particularly relevant to the preprocessor.

3 Constant expressions shall not contain assignment, increment, decrement, function-call, or comma operators, except when they are contained within a subexpression that is not evaluated.

It is normal for real preprocessor expression parsers to implement a simplified subset of the C expression syntax which simply lacks support for these forbidden operators. So, if you put sizeof(int) in a preprocessor expression, that gets turned into 0(0) before it is evaluated, and you get an error about a missing binary operator. If you write something similar where the compiler expects an integer constant expression, you will get errors complaining that integers are not functions or that function calls are not allowed in integer constant expressions.

6 An integer constant expression shall have integer type and shall only have operands that are integer constants, enumeration constants, character constants, sizeof expressions whose results are integer constants, _Alignof expressions, and floating constants that are the immediate operands of casts.

Re-read this sentence from the point of view of the preprocessor, after identifiers and keywords have been smashed to zero. There aren't any enumeration constants, because they are identifiers, which have already been smashed to zero. Similarly there aren't any sizeof or _Alignof expressions. And there can't be any casts because you can't write a type without at least one identifier. (One situation where smash-keyword-to-zero does not cause a syntax error is an expression like (unsigned)-1 and I bet that turns up in real-world preprocessor expressions.) And since there can't be any casts, there can't be any floating constants.
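The smash-to-zero rule is easy to model. This toy ignores macro expansion and the defined operator (which happen first), and uses Python's eval as a stand-in for a real constant-expression evaluator, but it shows both behaviours: sizeof becomes a syntax error, while a cast quietly evaluates.

```python
import re

def smash(expr):
    # 6.10.1p4: replace every remaining identifier (keywords included)
    # with the pp-number 0.
    return re.sub(r"\b[A-Za-z_][A-Za-z_0-9]*\b", "0", expr)

print(smash("sizeof(int)"))   # -> "0(0)", a syntax error when evaluated
print(smash("(unsigned)-1"))  # -> "(0)-1", which quietly evaluates
print(eval(smash("(unsigned)-1")))
```

This is why the error message for sizeof in an #if complains about a missing binary operator rather than about sizeof itself: by the time the expression is parsed, the keyword is long gone.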

And therefore the preprocessor does not need any floating point support at all.

I am slightly surprised that such a fundamental simplification requires such a long chain of reasoning to obtain it from the standard. Perhaps (like my original question about sizeof) I have overlooked the relevant text.

Finally, my thanks to Mark Wooding and Brian Mastenbrook for pointing me at the crucial words in the standard.


Thursday 12th November 2015

LOC records in ac.uk

Since September 1995 we have had a TXT record on cam.ac.uk saying "The University of Cambridge, England". Yesterday I replaced it with a LOC record. The TXT space is getting crowded with things like Google and Microsoft domain authentication tokens and SPF records.

When I rebuilt our DNS servers with automatic failover earlier this year, I used the TXT record for a very basic server health check. I wanted some other ultra-stable record for this purpose which is why I added the LOC record.

What other LOC records are there under ac.uk, I wonder?

Anyone used to be able to AXFR the ac.uk zone but that hasn't been possible for a few years now. But there is a public FOI response containing a list of ac.uk domains from 2011 which is good enough.

A bit of hackery with adns and 20 seconds later I have:

    abertay.ac.uk.        LOC 56 46  4.000 N 2 57  5.000 W 50.00m   1m 10000m 10m
    cam.ac.uk.            LOC 52 12 19.000 N 0  7  5.000 E 18.00m 10000m 100m 100m
    carshalton.ac.uk.     LOC 51 22 17.596 N 0  9 58.698 W 33.00m  10m   100m 10m
    marywardcentre.ac.uk. LOC 51 31 15.000 N 0  7 19.000 W  0.00m   1m 10000m 10m
    midchesh.ac.uk.       LOC 53 14 58.200 N 2 32 15.190 E 47.00m   1m 10000m 10m
    psc.ac.uk.            LOC 51  4 17.000 N 1 19 19.000 W 70.00m 200m   100m 10m
    rdg.ac.uk.            LOC 51 26 25.800 N 0 56 46.700 W 87.00m   1m 10000m 10m
    reading.ac.uk.        LOC 51 26 25.800 N 0 56 46.700 W 87.00m   1m 10000m 10m
    ulcc.ac.uk.           LOC 51 31 16.000 N 0  7 40.000 W 93.00m   1m 10000m 10m
    wessexsfc.ac.uk.      LOC 51  4 17.000 N 1 19 19.000 W 70.00m 200m   100m 10m
    wilberforce.ac.uk.    LOC 53 46 28.000 N 0 16 42.000 W  0.00m   1m 10000m 10m

A LOC record encodes latitude, longitude, altitude, diameter, horizontal precision, and vertical precision.

The cam.ac.uk LOC record is supposed to indicate the location of the church of St Mary the Great, which has been the nominal centre of Cambridge since a system of milestones was set up in 1725. The precincts of the University are officially the area within three miles of Great St Mary's, which corresponds to a diameter a bit less than 10km. The 100m vertical precision is enough to accommodate the 80m chimney at Addenbrooke's.

There are a couple of anomalies in the other LOC records.

abertay.ac.uk indicates a random spot in the highlands, not the centre of Dundee as it should.

midchesh.ac.uk should be W not E.

A lot of the records use the default size of 1m diameter and precision of 10km horizontal and 10m vertical.

You can copy and paste the lat/long into Google Maps to see where they land :-)
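The conversion from LOC-style degrees/minutes/seconds to the decimal degrees that mapping sites prefer is a one-liner; here it is applied to the cam.ac.uk record above.

```python
def dms(deg, mins, secs, hemi):
    """Degrees/minutes/seconds plus hemisphere -> decimal degrees."""
    sign = -1 if hemi in ("S", "W") else 1
    return sign * (deg + mins / 60 + secs / 3600)

# cam.ac.uk: 52 12 19.000 N, 0 7 5.000 E
lat = dms(52, 12, 19.000, "N")
lon = dms(0, 7, 5.000, "E")
print(f"{lat:.5f}, {lon:.5f}")  # 52.20528, 0.11806
```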


Wednesday 11th November 2015

Chemical element symbols that are also ISO 3166 country code abbreviations

Ag Silver	Antigua and Barbuda
Al Aluminum	Albania
Am Americium	Armenia
Ar Argon	Argentina
As Arsenic	American Samoa
At Astatine	Austria
Au Gold		Australia
Ba Barium	Bosnia and Herzegovina
Be Beryllium	Belgium
Bh Bohrium	Bahrain
Bi Bismuth	Burundi
Br Bromine	Brazil
Ca Calcium	Canada
Cd Cadmium	Democratic Republic of the Congo
Cf Californium	Central African Republic
Cl Chlorine	Chile
Cm Curium	Cameroon
Cn Copernicium	China
Co Cobalt	Colombia
Cr Chromium	Costa Rica
Cs Cesium	Serbia and Montenegro
Cu Copper	Cuba
Er Erbium	Eritrea
Es Einsteinium	Spain
Eu Europium	Europe
Fm Fermium	Federated States of Micronesia
Fr Francium	France
Ga Gallium	Gabon
Gd Gadolinium	Grenada
Ge Germanium	Georgia
In Indium	India
Ir Iridium	Iran
Kr Krypton	South Korea
La Lanthanum	Laos
Li Lithium	Liechtenstein
Lr Lawrencium	Liberia
Lu Lutetium	Luxembourg
Lv Livermorium	Latvia
Md Mendelevium	Moldova
Mg Magnesium	Madagascar
Mn Manganese	Mongolia
Mo Molybdenum	Macau
Mt Meitnerium	Malta
Na Sodium	Namibia
Ne Neon		Niger
Ni Nickel	Nicaragua
No Nobelium	Norway
Np Neptunium	Nepal
Os Osmium	Oman
Pa Protactinium	Panama
Pm Promethium	Saint Pierre and Miquelon
Pr Praseodymium	Puerto Rico
Pt Platinum	Portugal
Re Rhenium	Reunion
Ru Ruthenium	Russia
Sb Antimony	Solomon Islands
Sc Scandium	Seychelles
Se Selenium	Sweden
Sg Seaborgium	Singapore
Si Silicon	Slovenia
Sm Samarium	San Marino
Sn Tin		Senegal
Sr Strontium	Suriname
Tc Technetium	Turks and Caicos Islands
Th Thorium	Thailand
Tm Thulium	Turkmenistan
Elemental domain names that exist:
Oh, let's do US states as well:
Al Aluminum     Alabama
Ar Argon        Arkansas
Ca Calcium      California
Co Cobalt       Colorado
Fl Flerovium    Florida
Ga Gallium      Georgia
In Indium       Indiana
La Lanthanum    Louisiana
Md Mendelevium  Maryland
Mn Manganese    Minnesota
Mo Molybdenum   Missouri
Mt Meitnerium   Montana
Nd Neodymium    North Dakota
Ne Neon         Nebraska
Pa Protactinium Pennsylvania
Sc Scandium     South Carolina
(17 comments | Leave a comment)

Wednesday 21st October 2015

Cutting a zone with DNSSEC

This week we will be delegating newton.cam.ac.uk (the Isaac Newton Institute's domain) to the Faculty of Mathematics, who have been running their own DNS since the very earliest days of Internet connectivity in Cambridge.

Unlike most new delegations, the newton.cam.ac.uk domain already exists and has a lot of records, so we have to keep them working during the process. And for added fun, cam.ac.uk is signed with DNSSEC, so we can't play fast and loose.

In the absence of DNSSEC, it is mostly OK to set up the new zone, get all the relevant name servers secondarying it, and then introduce the zone cut. During the rollout, some servers will be serving the domain from the old records in the parent zone, and other servers will serve the domain from the new child zone, which occludes the old records in its parent.

But this won't work with DNSSEC because validators are aware of zone cuts, and they check that delegations across cuts are consistent with the answers they have received. So with DNSSEC, the process you have to follow is fairly tightly constrained to be basically the opposite of the above.

The first step is to set up the new zone on name servers that are completely disjoint from those of the parent zone. This ensures that a resolver cannot prematurely get any answers from the new zone - they have to follow a delegation from the parent to find the name servers for the new zone. In the case of newton.cam.ac.uk, we are lucky that the Maths name servers satisfy this requirement.

The second step is to introduce the delegation into the parent zone. Ideally this should propagate to all the authoritative servers promptly, using NOTIFY and IXFR.
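A sketch of what step two adds to the parent zone; the server names, key tag, and digest here are illustrative placeholders, not the real Maths servers or keys:

```
; records added to the cam.ac.uk zone to create the delegation
; (names and DS values are invented for illustration)
newton.cam.ac.uk.   IN NS  ns0.maths.example.
newton.cam.ac.uk.   IN NS  ns1.maths.example.
; DS record so the signed child chains from the signed parent
newton.cam.ac.uk.   IN DS  12345 8 2 ( <SHA-256 digest of the child KSK> )
```

The NS records must name the disjoint child servers, and the DS record is what makes validators treat the child zone as the authoritative source below the cut.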

(I am a bit concerned about DNSSEC software which does validation as a separate process after normal iterative resolution, which is most of it. While the delegation is propagating it is possible to find the delegation when resolving, but get a missing delegation when validating. If the validator is persistent at re-querying for the delegation chain it should be able to recover from this; but quick propagation minimizes the problem.)

After the delegation is present on all the authoritative servers, and old data has timed out of caches, the new child zone can (if necessary) be added to the parent zone's name servers. In our case the central cam.ac.uk name servers and off-site secondaries also serve the Maths zones, so this step normalizes the setup for newton.cam.ac.uk.

(4 comments | Leave a comment)

Monday 19th October 2015

never mind the quadbits, feel the width!

benchmarking wider-fanout versions of qp tries

QP tries are faster than crit-bit tries because they extract up to four bits of information out of the key per branch, whereas crit-bit tries only get one bit per branch. If branch nodes have more children, then the trie should be shallower on average, which should mean lookups are faster.

So if we extract even more bits of information from the key per branch, it should be even better, right? But, there are downsides to wider fanout.

I originally chose to index by nibbles for two reasons: they are easy to pull out of the key, and the bitmap easily fits in a word with room for a key index.

If we index keys by 5-bit chunks we pay a penalty for more complicated indexing.
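The more complicated indexing can be seen in a sketch of pulling the i-th 5-bit chunk out of a key; this is a hypothetical helper, not the actual qp trie code:

```c
#include <stddef.h>

/* Extract the i-th 5-bit chunk of a key. Chunks can span byte
 * boundaries, so we assemble two adjacent bytes and shift; this
 * assumes the key is NUL-terminated so reading the byte after the
 * chunk is safe. */
static unsigned chunk5(const unsigned char *key, size_t i) {
    size_t bit = i * 5;
    unsigned word = ((unsigned)key[bit / 8] << 8) | key[bit / 8 + 1];
    return (word >> (16 - 5 - bit % 8)) & 0x1f;
}
```

Compare the 4-bit case, where a chunk is always exactly one half of one byte and needs only a shift and a mask.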

If we index keys by 6-bit chunks, trie nodes have to be bigger to fit a bigger bitmap, so we pay a memory overhead penalty.

How do these costs compare to the benefits of wider branches?

I have implemented 5-bit and 6-bit versions of qp tries and benchmarked them. For those interested in the extremely nerdy details, the full version of "never mind the quadbits" is on the qp trie home page.

The tl;dr is that 5-bit qp tries are fairly unequivocally better than 4-bit qp tries. The 6-bit results are less clear-cut: 6-bit tries are not faster than 5-bit on my laptop and desktop (both Core 2 Duo), but on a Xeon server that I tried they are faster despite using more memory.

(Leave a comment)

Sunday 11th October 2015

prefetching tries

The inner loop in qp trie lookups is roughly

    while(t->isbranch) {
        __builtin_prefetch(t->twigs);           // start fetching the twig array early
        b = 1 << key[t->index];                 // simplified
        if((t->bitmap & b) == 0) return(NULL);
        t = t->twigs + popcount(t->bitmap & b-1);
    }
The efficiency of this loop depends on how quickly we can get from one indirection down the trie to the next. There is quite a lot of work in the loop, enough to slow it down significantly compared to the crit-bit search loop. Although qp tries are half the depth of crit-bit tries on average, they don't run twice as fast. The prefetch compensates in a big way: without it, qp tries are about 10% faster; with it they are about 30% faster.

I adjusted the code above to emphasize that in one iteration of the loop it accesses two locations: the key, which it is traversing linearly with small skips, so access is fast; and the tree node t, whose location jumps around all over the place, so access is slow. The body of the loop calculates the next location of t, but we know at the start that it is going to be some smallish offset from t->twigs, so the prefetch is very effective at overlapping calculation and memory latency.

It was entirely accidental that prefetching works well for qp tries. I was trying not to waste space, so the thought process was roughly, a leaf has to be two words:

    struct Tleaf { const char *key; void *value; };

Leaves should be embedded in the twig array, to avoid a wasteful indirection, so branches have to be the same size as leaves.

    union Tnode { struct Tleaf leaf; struct Tbranch branch; };

A branch has to have a pointer to its twigs, so there is space in the other word for the metadata: bitmap, index, flags. (The limited space in one word is partly why qp tries test a nibble at a time.) Putting the metadata about the twigs next to the pointer to the twigs is the key thing that makes prefetching work.

One of the inspirations of qp tries was Phil Bagwell's hash array mapped tries. HAMTs use the same popcount trick, but instead of using the PATRICIA method of skipping redundant branch nodes, they hash the key and use the hash as the trie index. The hashes should very rarely collide, so redundant branches should also be rare. Like qp tries, HAMTs put the twig metadata (just a bitmap in their case) next to the twig pointer, so they are friendly to prefetching.

So, if you are designing a tree structure, put the metadata for choosing which child is next adjacent to the node's pointer in its parent, not inside the node itself. That allows you to overlap the computation of choosing which child is next with the memory latency for fetching the child pointers.

(6 comments | Leave a comment)

Wednesday 7th October 2015

crit-bit tries without allocation

Crit-bit tries have fixed-size branch nodes and a constant overhead per leaf, which means they can be used as an embedded lookup structure. Embedded lookup structures do not need any extra memory allocation; it is enough to allocate the objects that are to be indexed by the lookup structure.

An embedded lookup structure is a data structure in which the internal pointers used to search for an object (such as branch nodes) are embedded within the objects you are searching through. Each object can be a member of at most one of any particular kind of lookup structure, though an object can simultaneously be a member of several different kinds of lookup structure.

The BSD <sys/queue.h> macros are embedded linked lists. They are used frequently in the kernel, for instance in the network stack to chain mbuf packet buffers together. Each mbuf can be a member of a list and a tailq. There is also a <sys/tree.h> which is used by OpenSSH's privilege separation memory manager. Embedded red-black trees also appear in jemalloc.
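The embedded style is easy to see with <sys/queue.h> itself; here is a minimal sketch (the widget type is invented for illustration):

```c
#include <sys/queue.h>

/* An object that embeds its own list linkage, sys/queue.h style:
 * no separate node allocation is needed to put a widget on a list. */
struct widget {
    int id;
    LIST_ENTRY(widget) link;   /* the embedded forward/back pointers */
};

LIST_HEAD(widget_list, widget);

/* Walk the list by following the embedded pointers. */
static int sum_ids(struct widget_list *head) {
    int sum = 0;
    struct widget *w;
    LIST_FOREACH(w, head, link)
        sum += w->id;
    return sum;
}
```

Inserting a widget is just `LIST_INSERT_HEAD(&head, &w, link)`; the linkage lives inside the widget, which is exactly the property we want for embedded crit-bit branches.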

embedded crit-bit branch node structure

DJB's crit-bit branch nodes require three words: bit index, left child, and right child; embedded crit-bit branches are the same with an additional parent pointer.

    struct branch {
        uint index;
        void *twig[2];
        void **parent;
    };
The "twig" child pointers are tagged to indicate whether they point to a branch node or a leaf. The parent pointer normally points to the relevant child pointer inside the parent node; it can also point at the trie's root pointer, which means there has to be exactly one root pointer in a fixed place.

(An aside about how I have been counting overhead: DJB does not include the leaf string pointer as part of the overhead of his crit-bit tries, and I have followed his lead by not counting the leaf key and value pointers in my crit-bit and qp tries. So by this logic, although an embedded branch adds four words to an object, it only counts as three words of overhead. Perhaps it would be more honest to count the total size of the data structure.)

using embedded crit-bit tries

For most purposes, embedded crit-bit tries work the same as external crit-bit tries.

When searching for an object, there is a final check that the search key matches the leaf. This check needs to know where to find the search key inside the leaf object - it should not assume the key is at the start.

When inserting a new object, you need to add a branch node to the trie. For external crit-bit tries this new branch is allocated; for embedded crit-bit tries you use the branch embedded in the new leaf object.

deleting objects from embedded crit-bit tries

This is where the fun happens. There are four objects of interest:

  • The doomed leaf object to be deleted;

  • The victim branch object which needs to remain in the trie, although it is embedded in the doomed leaf object;

  • The parent branch object pointing at the leaf, which will be unlinked from the trie;

  • The bystander leaf object in which the parent branch is embedded, which remains in the trie.

The plan is that after unlinking the parent branch from the trie, you rescue the victim branch from the doomed leaf object by moving it into the place vacated by the parent branch. You use the parent pointer in the victim branch to update the twig (or root) pointer to follow the move.

Note that you need to beware of the case where the parent branch happens to be embedded in the doomed leaf object.
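The rescue might look like this sketch in C, reusing the branch structure above plus hypothetical pointer-tagging helpers; it handles the general case only, not the special case just noted:

```c
#include <stdint.h>

typedef unsigned int uint;

struct branch {
    uint index;
    void *twig[2];
    void **parent;   /* points at the twig (or root) slot that holds us */
};

/* Tag helpers (hypothetical): the low pointer bit marks a branch. */
static void *tag(struct branch *b)   { return (void *)((uintptr_t)b | 1u); }
static int   isbranch(void *p)       { return (int)((uintptr_t)p & 1u); }
static struct branch *untag(void *p) { return (struct branch *)((uintptr_t)p & ~(uintptr_t)1u); }

/* Move the victim branch (embedded in the doomed leaf) into the slot
 * vacated by the unlinked parent branch, then fix every pointer that
 * referred to the old location. */
static void rescue(struct branch *vacated, struct branch *victim) {
    *vacated = *victim;                /* relocate the branch node */
    *vacated->parent = tag(vacated);   /* its old twig/root slot now points here */
    for (int i = 0; i < 2; i++)        /* child branches' parent pointers moved too */
        if (isbranch(vacated->twig[i]))
            untag(vacated->twig[i])->parent = &vacated->twig[i];
}
```

Note the loop at the end: the victim's children kept parent pointers into the victim's old twig slots, so they must be repointed at the new location as well.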

exercise for the reader

Are the parent pointers necessary?

Is the movement of branches constrained enough that we will always encounter a leaf's embedded branch in the course of searching for that leaf? If so, we can eliminate the parent pointers and save a word of overhead.


I have not implemented this idea, but following Simon Tatham's encouragement I have written this description in the hope that it inspires someone else.

(7 comments | Leave a comment)

Sunday 4th October 2015

qp tries: smaller and faster than crit-bit tries

tl;dr: I have developed a data structure called a "qp trie", based on the crit-bit trie. Some simple benchmarks say qp tries have about 1/3 less memory overhead and are about 10% to 30% faster than crit-bit tries.

"qp trie" is short for "quadbit popcount patricia trie". (Nothing to do with cutie cupid dolls or Japanese mayonnaise!)

Get the code from http://dotat.at/prog/qp/.


Crit-bit tries are an elegant space-optimised variant of PATRICIA tries. Dan Bernstein has a well-known description of crit-bit tries, and Adam Langley has nicely annotated DJB's crit-bit implementation.

What struck me was that crit-bit tries require quite a lot of indirections to perform a lookup. I wondered if it would be possible to test multiple bits at a branch point to reduce the depth of the trie, and make the size of the branch adapt to the trie's density to keep memory usage low. My initial attempt (over two years ago) was vaguely promising but too complicated, and I gave up on it.

A few weeks ago I read about Phil Bagwell's hash array mapped trie (HAMT) which he described in two papers, "fast and space efficient trie searches", and "ideal hash trees". The part that struck me was the popcount trick he uses to eliminate unused pointers in branch nodes. (This is also described in "Hacker's Delight" by Hank Warren, in the "applications" subsection of chapter 5-1 "Counting 1 bits", which evidently did not strike me in the same way when I read it!)

You can use popcount() to implement a sparse array of length N containing M < N members, using a bitmap of length N and a packed vector of M elements. Member i is present in the array if bit i of the bitmap is set, so M == popcount(bitmap). The index of member i in the packed vector is the popcount of the bits preceding bit i.

    mask = 1 << i;
    if(bitmap & mask)
        member = vector[popcount(bitmap & mask-1)];
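A runnable version of the trick, assuming a compiler with the __builtin_popcount intrinsic (GCC/Clang):

```c
#include <stdint.h>

/* Sparse array lookup via the popcount trick: the bitmap says which
 * members exist, and the popcount of the lower bits gives the index
 * into the packed vector. N <= 16 here, matching a 16-bit bitmap. */
static int sparse_get(uint16_t bitmap, const int *vector,
                      unsigned i, int *out)
{
    uint16_t mask = (uint16_t)(1u << i);
    if (!(bitmap & mask))
        return 0;  /* member i is absent */
    *out = vector[__builtin_popcount(bitmap & (mask - 1))];
    return 1;
}
```

For example, with bits 1, 4, and 7 set, member 4 is the second element of the packed vector, because exactly one lower bit (bit 1) is set.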

qp tries

If we are increasing the fanout of crit-bit tries, how much should we increase it by, that is, how many bits should we test at once? In a HAMT the bitmap is a word, 32 or 64 bits, using 5 or 6 bits from the key at a time. But it's a bit fiddly to extract bit-fields from a string when they span bytes.

So I decided to use a quadbit at a time (i.e. a nibble or half-byte) which implies a 16 bit popcount bitmap. We can use the other 48 bits of a 64 bit word to identify the index of the nibble that this branch is testing. A branch needs a second word to contain the pointer to the packed array of "twigs" (my silly term for sub-tries).

It is convenient for a branch to be two words, because that is the same as the space required for the key+value pair that you want to store at each leaf. So each slot in the array of twigs can contain either another branch or a leaf, and we can use a flag bit in the bottom of a pointer to tell them apart.
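One plausible C packing of that two-word branch; the field order and exact widths are my guess, not the published qp trie code, and the branch/leaf tag lives in the low bit of the twig pointers as described:

```c
#include <stdint.h>

union Tnode;  /* forward declaration */

/* Two-word branch: 16-bit popcount bitmap plus the nibble index
 * packed into one word, and a pointer to the twig array in the other. */
struct Tbranch {
    union Tnode *twigs;    /* packed array of sub-tries */
    uint64_t bitmap : 16,  /* which nibble values have twigs */
             index  : 48;  /* which nibble of the key this branch tests */
};

/* Two-word leaf: key pointer and value pointer. */
struct Tleaf {
    const char *key;
    void *value;
};

/* Both arms are the same size, so a twig slot can hold either. */
union Tnode {
    struct Tleaf leaf;
    struct Tbranch branch;
};
```

Because `sizeof(struct Tbranch) == sizeof(struct Tleaf)`, the twig array can mix branches and leaves freely, with the tag bit in each twig pointer saying which is which.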

Here's the qp trie containing the keys "foo", "bar", "baz". (Note there is only one possible trie for a given set of keys.)

[ 0044 | 1 | twigs ] -> [ 0404 | 5 | twigs ] -> [ value | "bar" ]
                        [    value | "foo" ]    [ value | "baz" ]

The root node is a branch. It is testing nibble 1 (the least significant half of byte 0), and it has twigs for nibbles containing 2 ('b' == 0x62) or 6 ('f' == 0x66). (Note 1 << 2 == 0x0004 and 1 << 6 == 0x0040.)

The first twig is also a branch, testing nibble 5 (the least significant half of byte 2), and it has twigs for nibbles containing 2 ('r' == 0x72) or 10 ('z' == 0x7a). Its twigs are both leaves, for "bar" and "baz". (Pointers to the string keys are stored in the leaves - we don't copy the keys inline.)

The other twig of the root branch is the leaf for "foo".
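The bitmaps in the diagram can be checked with a couple of lines of C:

```c
/* bitmap bit for the low nibble (nibble 1, 3, 5, ...) of a key byte */
static unsigned nibblebit(unsigned char c) { return 1u << (c & 0xf); }
```

Here `nibblebit('b') | nibblebit('f')` is 0x0044 and `nibblebit('r') | nibblebit('z')` is 0x0404, matching the two branch nodes above.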

If we add a key "qux" the trie will grow another twig on the root branch.

[ 0046 | 1 | twigs ] -> [    value | "qux" ]
                        [ 0404 | 5 | twigs ] -> [ value | "bar" ]
                        [    value | "foo" ]    [ value | "baz" ]

This layout is very compact. In the worst case, where each branch has only two twigs, a qp trie has the same overhead as a crit-bit trie, two words (16 bytes) per leaf. In the best case, where each branch is full with 16 twigs, the overhead is one byte per leaf.

When storing 236,000 keys from /usr/share/dict/words the overhead is 1.44 words per leaf, and when storing a vocabulary of 54,000 keys extracted from the BIND9 source, the overhead is 1.12 words per leaf.

For comparison, if you have a parsimonious hash table which stores just a hash code, key, and value pointer in each slot, and which has 90% occupancy, its overhead is 1.33 words per item.

In the best case, a qp trie can be a quarter of the depth of a crit-bit trie. In practice it is about half the depth. For our example data sets, the average depth of a crit-bit trie is 26.5 branches, and a qp trie is 12.5 for dict/words or 11.1 for the BIND9 words.

My benchmarks show qp tries are about 10% faster than crit-bit tries. However I do not have a machine with both a popcount instruction and a compiler that supports it; also, LLVM fails to optimise popcount for a 16 bit word size, and GCC compiles it as a subroutine call. So there's scope for improvement.

crit-bit tries revisited

DJB's published crit-bit trie code only stores a set of keys, without any associated values. It's possible to add support for associated values without increasing the overhead.

In DJB's code, branch nodes have three words: a bit index, and two pointers to child nodes. Each child pointer has a flag in its least significant bit indicating whether it points to another branch, or points to a key string.

[ branch ] -> [ 3      ]
              [ branch ] -> [ 5      ]
              [ "qux"  ]    [ branch ] -> [ 20    ]
                            [ "foo"  ]    [ "bar" ]
                                          [ "baz" ]

It is hard to add associated values to this structure without increasing its overhead. If you simply replace each string pointer with a pointer to a key+value pair, the overhead is 50% greater: three words per entry in addition to the key+value pointers.

When I wanted to benchmark my qp trie implementation against crit-bit tries, I trimmed the qp trie code to make a crit-bit trie implementation. So my crit-bit implementation stores keys with associated values, but still has an overhead of only two words per item.

[ 3 twigs ] -> [ 5   twigs ] -> [ 20  twigs ] -> [ val "bar" ]
               [ val "qux" ]    [ val "foo" ]    [ val "baz" ]

Instead of viewing this as a trimmed-down qp trie, you can look at it as evolving from DJB's crit-bit tries. First, add two words to each node for the value pointers, which I have drawn by making the nodes wider:

[ branch ] ->      [ 3    ]
              [ x  branch ] ->      [ 5    ]
              [ val "qux" ]    [ x  branch ] ->       [ 20  ]
                               [ val "foo" ]    [ val "bar" ]
                                                [ val "baz" ]

The value pointers are empty (marked x) in branch nodes, which provides space to move the bit indexes up a level. One bit index from each child occupies each empty word. Moving the bit indexes takes away a word from every node, except for the root which becomes a word bigger.


This code was pretty fun to write, and I'm reasonably pleased with the results. The debugging was easier than I feared: most of my mistakes were simple (e.g. using the wrong variable, failing to handle a trivial case, muddling up getline()'s two length results) and clang -fsanitize=address was a mighty debugging tool.

My only big logic error was in Tnext(); I thought it was easy to find the key lexicographically following some arbitrary string not in the trie, but it is not. (This isn't a binary search tree!) You can easily find the keys with a given prefix, if you know in advance the length of the prefix. But, with my broken code, if you searched for an arbitrary string you could end up in a subtrie which was not the subtrie with the longest matching prefix. So now, if you want to delete a key while iterating, you have to find the next key before deleting the previous one.


I have this nice code, but I have no idea what practical use I might put it to!


I have done some simple tuning of the inner loops and qp tries are now about 30% faster than crit-bit tries.

(7 comments | Leave a comment)

Tuesday 22nd September 2015

DNAME for short-prefix classless in-addr.arpa delegation

RFC 2317 describes classless in-addr.arpa delegation. The in-addr.arpa reverse DNS name space allows for delegation at octet boundaries, /8 or /16 or /24, so DNS delegation dots match the dots in IP addresses. This worked OK for the old classful IP address architecture, but classless routing allows networks with longer prefixes, e.g. /26, which makes it tricky to delegate the reverse DNS to the user of the IP address space.

The RFC 2317 solution is a clever use of standard DNS features. If you have been delegated a /26, say, then in the parent reverse DNS zone 144.168.192.in-addr.arpa your ISP sets up CNAME records instead of PTR records for all 64 addresses in the subnet. These CNAME records point into some other zone controlled by you where the actual PTR records can be found.

RFC 2317 suggests subdomain names like 128/ but the / in the label is a bit alarming to people and tools that expect domain names to follow strict letter-digit-hyphen syntax. You can just as well use a subdomain name like 128- or even put the PTR records in a completely different branch of the DNS.
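A sketch of the RFC 2317 arrangement for a hypothetical /26 (all names here are illustrative); the ISP's /24 zone delegates a subdomain and aliases each PTR name into it:

```
$ORIGIN 144.168.192.in-addr.arpa.
; delegate the customer's quarter of the /24 as its own zone
128-191           NS     ns1.customer.example.
; alias each address's PTR name into that zone
$GENERATE 128-191 $      CNAME  $.128-191
```

So a query for 130.144.168.192.in-addr.arpa follows the CNAME to 130.128-191.144.168.192.in-addr.arpa, which the customer's own zone answers.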

For shorter prefixes it is still normal to delegate reverse DNS at multiple-of-8 boundaries. This can require an awkwardly large number of zones, especially if your prefix length is one more than a multiple of 8. For instance, our Computer Laboratory delegated half of their /16 to be used by the rest of the University, and we are preparing to use half of to provide address space for our wireless service. Both of these address space allocations would normally require 128 zones to be delegated.

Fortunately there is a way to reduce this to one zone, analogous to the RFC 2317 trick, but using a more modern DNS feature, the DNAME record (RFC 2672, obsoleted by RFC 6672). So RFC 2317 says, for long prefixes you replace a lot of PTR records with CNAME records pointing into another zone. Correspondingly, for short prefixes you replace a lot of delegations (NS and DS records) with DNAME records pointing into another zone.

We are using this technique in so that we only need to have one reverse DNS zone instead of 128 zones. The DNAME records point from (for instance) 255.232.128.in-addr.arpa to 255.232.128.in-addr.arpa.cam.ac.uk. Yes, we are using part of the "forward" DNS namespace to hold "reverse" DNS records! The apex of this zone is in-addr.arpa.cam.ac.uk so we can in principle consolidate any other reverse DNS address space into this same zone.

This works really nicely - DNAME support is sufficiently widespread that it mostly just works, with a few caveats mainly affecting outgoing mail servers.

For we are planning to use the DNAME trick again. However, because it is private address space we don't want to consolidate it into the public in-addr.arpa.cam.ac.uk zone. The options are to use in-addr.arpa.private.cam.ac.uk (which would allow us to consolidate if we choose) or 128-255.10.in-addr.arpa which would be more similar to usual RFC 2317 style.

Example zone file for short-prefix classless in-addr.arpa delegation:

    $ORIGIN 10.in-addr.arpa.
    $TTL 1h

    @         SOA   ns0.example.com. hostmaster.example.com. (
                    1 30m 15m 1w 1h )

              NS    ns1.example.com.
              NS    ns2.example.com.

    0-127     NS    ns1.example.com.
              NS    ns2.example.com.

    128-255   NS    ns1.example.com.
              NS    ns2.example.com.

    $GENERATE 0-127   $ DNAME $.0-127
    $GENERATE 128-255 $ DNAME $.128-255

With classless delegations like that your PTR records have names like

(I should probably write this up as an Internet-Draft but a quick blog post will do for now.)

(Leave a comment)

Friday 4th September 2015

Rachel update

I visited rmc28 this morning. She's having a hard time this week: a secondary infection has given her a high fever. (Something like this is apparently typical at this point in the treatment.) The antibiotics are playing havoc with her guts and her hair is starting to come out. She's very tired and dopey, finding it difficult to think or read or do very much at all. Rough.

Visitors are still welcome, though you'll probably find she's happy to listen but not very talkative.
(3 comments | Leave a comment)

Friday 21st August 2015

Fare thee well

At some point, 10 or 15 years ago, I got into the habit of saying goodbye to people by saying "stay well!"

I like it because it is cheerful, it closes a conversation without introducing a new topic, and it feels meaningful without being stilted (like "farewell") or rote (like "goodbye").


"Stay well" works nicely in a group of healthy people, but it is problematic with people who are ill.

Years ago, before "stay well!" was completely a habit, a colleague got prostate cancer. The treatment was long and brutal. I had to be careful when saying goodbye, but I didn't break the habit.

It is perhaps even worse with people who are chronically ill, because "stay well" (especially when I say it) has a casually privileged assumption that I am saying it to people who are already cheerfully well.

In the last week this phrase has got a new force for me. I really do mean "stay well" more than ever, but I wish I could express it without implying that you are already well or that it is your duty to be well.

(10 comments | Leave a comment)

Monday 17th August 2015


We have not been able to visit Rachel this weekend owing to an outbreak of vomiting virus. Nico had it on Friday evening, I had it late last night and Charles started a few hours after me.

Seems to have a 50-hour-ish incubation time, vomiting for not very long, then fever. Nico seems to have constipation now, possibly due to not drinking enough?

It's quite infectious and unpleasant so we are staying in. We have had lovely offers of help from lots of people but in this state I don't feel like we can organise much for a couple of days.

Since we can't visit Rachel we tried Skype briefly yesterday evening, though it was pretty crappy as usual for Skype.

Rachel was putting on a brave face on Friday and asked me to post these pictures:

(3 comments | Leave a comment)

Saturday 15th August 2015

Rachel's leukaemia

Rachel has been in hospital since Monday, and on Thursday they told her she has Leukaemia, fortunately a form that is usually curable.

She started chemotherapy on Thursday evening, and it is very rough, so she is not able to have visitors right now. We'll let you know when that changes, but we expect that the side-effects will get worse for a couple of weeks.

To keep her spirits up, she would greatly appreciate small photos of cute children/animals or beautiful landscapes. Send them by email to rmcf@cb4.eu or on Twitter to @rmc28, or you can send small postcards to Ward D6 at Addenbrookes.

Flowers are not allowed on the ward, and no food gifts please because nausea is a big problem. If you want to send a gift, something small and pretty and/or interestingly tactile is suitable.

Rachel is benefiting from services that rely on donations, so you might also be able to help by giving blood - for instance you can donate at Addenbrookes. Or you might donate to Macmillan Cancer Support.

And, if you have any niggling health problems, or something weird is happening or getting worse, do get it checked out. Rachel's leukaemia came on over a period of about three weeks and would have been fatal in a couple of months if untreated.

(12 comments | Leave a comment)

Tuesday 11th August 2015

What I am working on

I feel like https://www.youtube.com/watch?v=Zhoos1oY404

Most of the items below involve chunks of software development that are or will be open source. Too much of this, perhaps, is me saying "that's not right" and going in and fixing things or re-doing it properly...

Exchange 2013 recipient verification and LDAP, autoreplies to bounce messages, Exim upgrade, and logging

We have a couple of colleges deploying Exchange 2013, which means I need to implement a new way to do recipient verification for their domains. Until now, we have relied on internal mail servers rejecting invalid recipients when they get a RCPT TO command from our border mail relay. Our border relay rejects invalid recipients using Exim's verify=recipient/callout feature, so when we get a RCPT TO we try it on the target internal mail server and immediately return any error to the sender.

However Exchange 2013 does not reject invalid recipients until it has received the whole message; it rejects the entire message if any single recipient is invalid. This means if you have several users on a mailing list and one of them leaves, all of your users will stop receiving messages from the list and will eventually be unsubscribed.

The way to work around this problem is (instead of SMTP callout verification) to do LDAP queries against the Exchange server's Active Directory. To support this we need to update our Exim build to include LDAP support so we can add the necessary configuration.
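A heavily hedged sketch of what such an Exim ACL condition might look like; the AD hostname, base DN, named domain list, and credentials are all invented for illustration, and the exact filter would depend on how the Exchange directory is laid out:

```
# in acl_check_rcpt: reject recipients not found in Active Directory
deny    domains    = +exchange2013_domains
        message    = No such recipient
        !condition = ${lookup ldap \
          {ldap://ad.example.ac.uk/DC=example,DC=ac,DC=uk?mail?sub?\
          (proxyAddresses=smtp:${quote_ldap:$local_part}@${quote_ldap:$domain})} \
          {yes}{no}}
```

The important points are that the lookup happens at RCPT time, as the callout did, and that $local_part and $domain are escaped with quote_ldap before being interpolated into the filter.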

Another Exchange-related annoyance is that we (postmaster) get a LOT of auto-replies to bounce messages, because Exchange does not implement RFC 3834 properly and instead has a non-standard Microsoft proprietary header for suppressing automatic replies. I have a patch to add Microsoft X-Auto-Response-Suppress headers to Exim's bounce messages which I hope will reduce this annoyance.

When preparing an Exim upgrade that included these changes, I found that exim-4.86 included a logging change that makes the +incoming_interface option also log the outgoing interface. I thought I might be able to make this change more consistent with the existing logging options, but sadly I missed the release deadline. In the course of fixing this I found Exim was running out of space for logging options so I have done a massive refactoring to make them easily expandable.

gitweb improvements

For about a year we have been carrying some patches to gitweb which improve its handling of breadcrumbs, subdirectory restrictions, and categories. These patches are in use on our federated gitolite service, git.csx. They have not been incorporated upstream owing to lack of code review. But at last I have some useful feedback so with a bit more work I should be able to get the patches committed so I can run stock gitweb again.

Delegation updates, and superglue

We now subscribe to the ISC SNS service. As well as running f.root-servers.net, the ISC run a global anycast secondary name service used by a number of ccTLDs and other deserving organizations. Fully deploying the delegation change for our 125 public zones has been delayed an embarrassingly long time.

When faced with the tedious job of updating over 100 delegations, I think to myself, I know, I'll automate it! We have domain registrations with JANET (for ac.uk and some reverse zones), Nominet (other .uk), Gandi (non-uk), and RIPE (other reverse zones), and they all have radically different APIs: EPP (Nominet), XML-RPC (Gandi), REST (RIPE), and, um, CasperJS (JANET).

But the automation is not just for this job: I want to be able to automate DNSSEC KSK rollovers. My plan is to have some code that takes a zone's apex records and uses the relevant registrar API to ensure the delegation matches. In practice KSK rollovers may use a different DNSKEY RRset than the zone's published one, but the principle is to make the registrar interface uniformly dead simple by encapsulating the APIs and non-APIs.

I have some software called "superglue" which nearly does what I want, but it isn't quite finished and at least needs its user interface and internal interfaces made consistent and coherent before I feel happy suggesting that others might want to use it.

But I probably have enough working to actually make the delegation changes so I seriously need to go ahead and do that and tell the hostmasters of our delegated subdomains that they can use ISC SNS too.

Configuration management of secrets

Another DNSSEC-related difficulty is private key management - and not just DNSSEC, but also ssh (host keys), API credentials (see above), and other secrets.

What I want is something for storing encrypted secrets in git. I'm not entirely happy with existing solutions. Often they try to conceal whether my secrets are in the clear or not, whereas I want it to be blatantly obvious whether I am in the safe or risky state. Often they use home-brew crypto whereas I would be much happier with something widely-reviewed like gpg.

My current working solution is a git repo containing a half-arsed bit of shell and a makefile that manage a gpg-encrypted tarball containing a git repo full of secrets. As a background project I have about 1/3rd of a more refined "git privacy guard" based on the same basic principle, but I have not found time to work on it seriously since March. Which is slightly awkward because when finished it should make some of my other projects significantly easier.

DNS RPZ, and metazones

My newest project is to deploy a "DNS firewall", that is, blocking access to malicious domains. The aim is to provide some extra coverage for nasties that get through our spam filters or AV software. It will use BIND's DNS response policy zones feature, with a locally-maintained blacklist and whitelist, plus subscriptions to commercial blacklists.

The blocks will only apply to people who use our default recursive servers. We will also provide alternative unfiltered servers for those who need them. Both filtered and unfiltered servers will be provided by the same instances of BIND, using "views" to select the policy.
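A minimal named.conf sketch of that split might look like the following. All the names and addresses here are invented, and I'm assuming the view is selected by service address via match-destinations, which is one plausible arrangement:

```
acl filtered-service   { 192.0.2.53; };  // default recursive service address
acl unfiltered-service { 192.0.2.54; };  // opt-out service address

view "filtered" {
    match-destinations { filtered-service; };
    recursion yes;
    response-policy {
        zone "white.rpz.local" policy passthru; // local whitelist, checked first
        zone "black.rpz.local";                 // local blacklist
        zone "rpz.vendor.example";              // commercial subscription
    };
    // plus zone declarations for the policy zones themselves
};

view "unfiltered" {
    match-destinations { unfiltered-service; };
    recursion yes;
};
```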

This requires a relatively simple change to our BIND dynamic configuration update machinery, which ensures that all our DNS servers have copies of the right zones. At the moment we're using ssh to push updates, but I plan to eliminate this trust relationship leaving only DNS TSIG (which is used for zone transfers). The new setup will use nsnotifyd's simplified metazone support.

I am amused that nsnotifyd started off as a quick hack for a bit of fun but rapidly turned out to have many uses, and several users other than me!

Other activities

I frequently send little patches to the BIND developers. My most important patch (as in, running in production) which has not yet been committed upstream is automatic size limits for zone journal files.

Mail and DNS user support. Say no more.

IETF activities. I am listed as an author of the DANE SRV draft which is now in the RFC editor queue. (Though I have not had the tuits to work on DANE stuff for a long time now.)

Pending projects

Things that need doing but haven't reached the top of the list yet include:

  • DHCP server refresh: upgrade OS to the same version as the DNS servers and combine the DHCP Ansible setup into the main DNS Ansible setup.
  • Federated DHCP log access: so other University institutions that are using our DHCP service have some insight into what is happening on their networks.
  • Ansible management of the ipreg stuff on Jackdaw.
  • DNS records for mail authentication: SPF/DKIM/DANE. (We have getting on for 400 mail domains with numerous special cases so this is not entirely trivial.)
  • Divorce ipreg database from Jackdaw.
  • Overhaul ipreg user interface or replace it entirely. (Multi-year project.)

Thursday 23rd July 2015

nsdiff-1.70 now with added nsvi

I have released nsdiff-1.70 which is now available from the nsdiff home page. nsdiff creates an nsupdate script from the differences between two versions of a zone. We use it at work for pushing changes from our IP Register database into our DNSSEC signing server.

This release incorporates a couple of suggestions from Jordan Rieger of webnames.ca. The first relaxes domain name syntax in places where domain names do not have to be host names, e.g. in the SOA RNAME field, which is the email address of the person responsible for the zone. The second optionally allows case-insensitive comparison of records.
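Decoding an RNAME into a mailbox means treating the first unescaped dot as the '@', which is why host-name syntax rules are too strict for it. A rough illustration of my own, not nsdiff code:

```python
def rname_to_email(rname):
    """Convert an SOA RNAME such as 'hostmaster.example.org.' to an
    email address: the first unescaped dot separates the local part
    from the domain; '\\.' escapes a literal dot in the local part."""
    rname = rname.rstrip('.')
    local = []
    i = 0
    while i < len(rname):
        c = rname[i]
        if c == '\\' and i + 1 < len(rname):
            local.append(rname[i + 1])  # escaped character
            i += 2
        elif c == '.':
            return ''.join(local) + '@' + rname[i + 1:]
        else:
            local.append(c)
            i += 1
    return ''.join(local)
```

So 'hostmaster.example.org.' decodes to hostmaster@example.org, a mailbox whose local part need not follow host-name syntax.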

The other new feature is an nsvi command which makes it nice and easy to edit a dynamic zone. Why didn't I write this years ago? It was inspired by a suggestion from @jpmens and @Habbie on Twitter and fuelled by a few pints of Citra.


Thursday 2nd July 2015

nsnotifyd-1.1: prompt DNS zone transfers for stealth secondaries

nsnotifyd is my tiny DNS server that only handles DNS NOTIFY messages by running a command. (See my announcement of nsnotifyd and the nsnotifyd home page.)

At Cambridge we have a lot of stealth secondary name servers. We encourage admins who run resolvers to configure them in this way in order to resolve names in our private zones; it also reduces load on our central resolvers which used to be important. This is documented in our sample configuration for stealth nameservers on the CUDN.

The problem with this is that a stealth secondary can be slow to update its copy of a zone. It doesn't receive NOTIFY messages (because it is stealth) so it has to rely on the zone's SOA refresh and retry timing parameters. I have mitigated this somewhat by reducing our refresh timer from 4 hours to 30 minutes, but it might be nice to do better.

A similar problem came up in another scenario recently. I had a brief exchange with someone at JANET about DNS block lists and response policy zones in particular. RPZ block lists are distributed by standard zone transfers. If the RPZ users are stealth secondaries then they are not going to get updates in a very timely manner. (They might not be entirely stealth: RPZ vendors maintain ACLs listing their customers which they might also use for sending notifies.) JANET were concerned that if they provided an RPZ mirror it might exacerbate the staleness problem.

So I thought it might be reasonable to:

  • Analyze a BIND log to extract lists of zone transfer clients, which are presumably mostly stealth secondaries. (A little script called nsnotify-liststealth)
  • Write a tool called nsnotify-fanout to send notify messages to a list of targets.
  • And hook them up to nsnotifyd with a script called nsnotify2stealth.
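The first of those steps amounts to pattern-matching BIND's outgoing-transfer log lines. Here is a sketch of the idea; the log format shown is from memory, so treat it as an assumption rather than gospel:

```python
import re

# Assumed BIND xfer-out log line shape:
#   client 192.0.2.1#12345 (example.org): transfer of 'example.org/IN': AXFR started
XFER = re.compile(r"client ([0-9a-fA-F.:]+)#\d+.*transfer of '([^/']+)")

def stealth_secondaries(loglines):
    """Collect the set of zone-transfer client addresses for each zone."""
    found = {}
    for line in loglines:
        m = XFER.search(line)
        if m:
            found.setdefault(m.group(2), set()).add(m.group(1))
    return found
```

nsnotify-fanout then just has to send a NOTIFY for the zone to each address in the set.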

The result is that you can just configure your authoritative name server to send NOTIFYs to nsnotifyd, and it will automatically NOTIFY all of your stealth secondaries as soon as the zone changes.

This seems to work pretty well, but there is a caveat!

You will now get a massive thundering herd of zone transfers as soon as a zone changes. Previously your stealth secondaries would have tended to spread their load over the SOA refresh period. Not any more!

The ISC has a helpful page on tuning BIND for high zone transfer volume which you should read if you want to use nsnotify2stealth.


Monday 15th June 2015

nsnotifyd: handle DNS NOTIFY messages by running a command

About ten days ago, I was wondering how I could automatically and promptly record changes to a zone in git. The situation I had in mind was a special zone which was modified by nsupdate but hosted on a server whose DR plan is "rebuild from Git using Ansible". So I thought it would be a good idea to record updates in git so they would not be lost.

My initial idea was to use BIND's update-policy external feature, which allows you to hook into its dynamic update handling. However that would have problems with race conditions, since the update handler doesn't know how much time to allow for BIND to record the update.

So I thought it might be better to write a DNS NOTIFY handler, which gets told about updates just after they have been recorded. And I thought that writing a DNS server daemon would not be much harder than writing an update-policy external daemon.

@jpmens responded with enthusiasm, and some hours later I had something vaguely working.

What I have written is a special-purpose DNS server called nsnotifyd. It is actually a lot more general than just recording a zone's history in git. You can run any command as soon as a zone is changed - the script for recording changes in git is just one example.

Another example is running nsdiff to propagate changes from a hidden master to a DNSSEC signer. You can do the same job with BIND's inline-signing mode, but maybe you need to insert some evil zone mangling into the process, say...

Basically, anywhere you currently have a cron job which is monitoring updates to DNS zones, you might want to run it under nsnotifyd instead, so your script runs as soon as the zone changes instead of running at fixed intervals.

Since a few people expressed an interest in this program, I have written documentation and packaged it up, so you can download it from the nsnotifyd home page, or from one of several git repository servers.

(Epilogue: I realised halfway through this little project that I had a better way of managing my special zone than updating it directly with nsupdate. Oh well, I can still use nsnotifyd to drive @diffroot!)

Monday 27th April 2015

FizzBuzz with higher-order cpp macros and ELF linker sets

Here are a couple of fun ways to reduce repetitive code in C. To illustrate them, I'll implement FizzBuzz with the constraint that I must mention Fizz and Buzz only once in each implementation.

The generic context is that I want to declare some functions, and each function has an object containing some metadata. In this case the function is a predicate like "divisible by three" and the metadata is "Fizz". In a more realistic situation the function might be a driver entry point and the metadata might be a description of the hardware it supports.

Both implementations of FizzBuzz fit into the following generic FizzBuzz framework, which knows the general form of the game but not the specific rules about when to utter silly words instead of numbers.

    #include <stdbool.h>
    #include <stdio.h>
    #include <err.h>
    // predicate metadata
    typedef struct pred {
        bool (*pred)(int);
        const char *name;
    } pred;
    // some other file declares a table of predicates
    #include PTBL
    static bool putsafe(const char *s) {
        return s != NULL && fputs(s, stdout) != EOF;
    }
    int main(void) {
        for (int i = 0; true; i++) {
            bool done = false;
            // if a predicate is true, print its name
            for (pred *p = ptbl; p < ptbl_end; p++)
                done |= putsafe(p->pred(i) ? p->name : NULL);
            // if no predicate was true, print the number
            if (printf(done ? "\n" : "%d\n", i) < 0)
                err(1, "printf");
        }
    }

To compile this code you need to define the PTBL macro to be the (quoted) name of a file that implements a FizzBuzz predicate table.

Higher-order cpp macros

A higher-order macro is a macro which takes a macro as an argument. This can be useful if you want the macro to do different things each time it is invoked.

For FizzBuzz we use a higher-order macro to tersely define all the predicates in one place. What this macro actually does depends on the macro name p that we pass to it.

    #define predicates(p) \
        p(Fizz, i % 3 == 0) \
        p(Buzz, i % 5 == 0)

We can then define a function-defining macro, and pass it to our higher-order macro to define all the predicate functions.

    #define pred_fun(name, test) \
        static bool name(int i) { return test; }
    predicates(pred_fun)

And we can define a macro to declare a metadata table entry (using the C preprocessor stringification operator), and pass it to our higher-order macro to fill in the whole metadata table.

    #define pred_ent(name, test) { name, #name },
    pred ptbl[] = { predicates(pred_ent) };

For the purposes of the main program we also need to declare the end of the table.

    #define ptbl_end (ptbl + sizeof(ptbl)/sizeof(*ptbl))

Higher-order macros can get unwieldy, especially if each item in the list is large. An alternative is to use a higher-order include file, where instead of passing a macro to another macro, you #define a macro with a particular name, #include a file of macro invocations, then #undef the special macro. This saves you from having to end dozens of lines with backslash continuations.

ELF linker sets

The linker takes collections of definitions, separates them into different sections (e.g. code and data), and concatenates each section into a contiguous block of memory. The effect is that although you can interleave code and data in your C source file, the linker disentangles the little pieces into one code section and one data section.

You can also define your own sections, if you like, by using gcc declaration attributes, so the linker will gather the declarations together in your binary regardless of how spread out they were in the source. The FreeBSD kernel calls these "linker sets" and uses them extensively to construct lists of drivers, initialization actions, etc.

This allows us to declare the metadata for a FizzBuzz predicate alongside its function definition, and use the linker to gather all the metadata into the array expected by the main program. The key part of the macro below is the __attribute__((section("pred"))).

    #define predicate(name, test) \
        static bool name(int i) { return test; } \
        pred pred_##name __attribute__((section("pred"))) \
            = { name, #name }

With that convenience macro we can define our predicates in whatever order or in whatever files we want.

    predicate(Fizz, i % 3 == 0);
    predicate(Buzz, i % 5 == 0);

To access the metadata, the linker helpfully defines some symbols identifying the start and end of the section, which we can pass to our main program.

    extern pred __start_pred[], __stop_pred[];
    #define ptbl    __start_pred
    #define ptbl_end __stop_pred

Source code

git clone https://github.com/fanf2/dry-fizz-buzz


Tuesday 24th February 2015

DNSQPS: an alarming shell script

I haven't got round to setting up proper performance monitoring for our DNS servers yet, so I have been making fairly ad-hoc queries against the BIND statistics channel and pulling out numbers with jq.

Last week I changed our new DNS setup for more frequent DNS updates. As part of this change I reduced the TTL on all our records from one day to one hour. The obvious question was, how would this affect the query rate on our servers?

So I wrote a simple monitoring script. The first version did,

    while sleep 1
    do fetch-and-print-stats
    done

But the fetch-and-print-stats part took a significant fraction of a second, so the queries-per-second numbers were rather bogus.

A better way to do this is to run `sleep` in the background, while you fetch-and-print-stats in the foreground. Then you can wait for the sleep to finish and loop back to the start. The loop should take almost exactly a second to run (provided fetch-and-print-stats takes less than a second). This is pretty similar to an alarm()/wait() sequence in C. (Actually no, that's bollocks.)

My dnsqps script also abuses `eval` a lot to get a shonky Bourne shell version of associative arrays for the per-server counters. Yummy.

So now I was able to get queries-per-second numbers from my servers, what was the effect of dropping the TTLs? Well, as far as I can tell from eyeballing, nothing. Zilch. No visible change in query rate. I expected at least some kind of clear increase, but no.

The current version of my dnsqps script is:

    while :
    do
        sleep 1 & # set an alarm
        for s in "$@"
        do
            total=$(curl --silent http://$s:853/json/v1/server |
                    jq -r '.opcodes.QUERY')
            eval inc='$((' $total - tot$s '))'
            eval tot$s=$total
            printf ' %5d %s' $inc $s
        done
        printf '\n'
        wait # for the alarm
    done

Monday 16th February 2015

DNS server rollout report

Last week I rolled out my new DNS servers. It was reasonably successful - a few snags but no showstoppers.

Authoritative DNS rollout playbook

I have already written about scripting the recursive DNS rollout. I also used Ansible for the authoritative DNS rollout. I set up the authdns VMs with different IP addresses and hostnames (which I will continue to use for staging/testing purposes); the rollout process was:

  • Stop the Solaris Zone on the old servers using my zoneadm Ansible module;
  • Log into the staging server and add the live IP addresses;
  • Log into the live server and delete the staging IP addresses;
  • Update the hostname.

There are a couple of tricks with this process.

You need to send a gratuitous ARP to get the switches to update their forwarding tables quickly when you move an IP address. Solaris does this automatically but Linux does not, so I used an explicit arping -U command. On Debian/Ubuntu you need the iputils-arping package to get a version of arping which can send gratuitous ARPs. (The arping package is not the one you want; thanks to Peter Maydell for helping me find the right one!)

If you remove a "primary" IPv4 address from an interface on Linux, it also deletes all the other IPv4 addresses on the same subnet. This is not helpful when you are renumbering a machine. To avoid this problem you need to set sysctl net.ipv4.conf.eth0.promote_secondaries=1.

Pre-rollout configuration checking

The BIND configuration on my new DNS servers is rather different to the old ones, so I needed to be careful that I had not made any mistakes in my rewrite. Apart from re-reading configurations several times, I used a couple of tools to help me check.


I used bzl, the BIND zone list tool by JP Mens, to get the list of configured zones from each of my servers. This helped to verify that all the differences were intentional.

The new authdns servers both host the same set of zones, which is the union of the zones hosted on the old authdns servers. The new servers have identical configs; the old ones did not.

The new recdns servers differ from the old ones mainly because I have been a bit paranoid about avoiding queries for martian IP address space, so I have lots of empty reverse zones.


I used my tool nsdiff to verify that the new DNS build scripts produce the same zone files as the old ones. (Except for the HINFO records, which the new scripts omit.)

(This is not quite an independent check, because nsdiff is part of the new DNS build scripts.)


On Monday I sent out the DNS server upgrade announcement, with some wording improvements suggested by my colleagues Bob Dowling and Helen Sargan.

It was rather arrogant of me to give the expected outage times without any allowance for failure. In the end I managed to hit 50% of the targets.

The order of rollout had to be recursive servers first, since I did not want to swap the old authoritative servers out from under the old recursive servers. The new recursive servers get their zones from the new hidden master, whereas the old recursive servers get them from the authoritative servers.

The last server to be switched was authdns0, because that was the old master server, and I didn't want to take it down without being fairly sure I would not have to roll back.

ARP again

The difference in running time between my recdns and authdns scripts bothered me, so I investigated and discovered that IPv4 was partially broken. Rob Bricheno helped by getting the router's view of what was going on. One of my new Linux boxes was ARPing for a testdns IP address, even after I had deconfigured it!

I fixed it by rebooting, after which it continued to behave correctly through a few rollout / backout test runs. My guess is that the problem was caused when I was getting gratuitous ARPs working - maybe I erroneously added a static ARP entry.

After that all switchovers took about 5 - 15 seconds. Nice.

Status checks

I wrote a couple of scripts for checking rollout status and progress. wheredns tells me where each of our service addresses is running (old or new); pingdns repeatedly polls a server. I used pingdns to monitor when service was lost and when it returned during the rollout process.

Step 1: recdns1

On Tuesday shortly after 18:00, I switched over recdns1. This is our busier recursive server, running at about 1500 - 2000 queries per second during the day.

This rollout went without a hitch, yay!

Afterwards I needed to reduce the logging because it was rather too noisy. The logging on the old servers was rather too minimal for my tastes, but I turned up the verbosity a bit too far in my new configuration.

Step 2a: recdns0

On Wednesday morning shortly after 08:00, I switched over recdns0. It is a bit less busy, running about 1000 - 1500 qps.

This did not go so well. For some reason Ansible appeared to hang when connecting to the new recdns cluster to push the updated keepalived configuration.

Unfortunately my back-out scripts were not designed to cope with a partial rollout, so I had to restart the old Solaris Zone manually, and recdns0 was unavailable for a minute or two.

Mysteriously, Ansible connected quickly outside the context of my rollout scripts, so I tried the rollout again and it failed in the same way.

As a last try, I ran the rollout steps manually, which worked OK although I don't type as fast as Ansible runs a playbook.

So in all there was about 5 minutes downtime.

I'm not sure what went wrong; perhaps I just needed to be a bit more patient...

Step 2b: authdns1

After doing recdns0 I switched over authdns1. This was a bit less stressy since it isn't directly user-facing. However it was also a bit messy.

The problem this time was me forgetting to uncomment authdns1 in the Ansible inventory (its list of hosts). Actually, I should not have needed to uncomment it manually - I should have scripted it. The silly thing is that I had the testdns servers in the inventory for testing the authdns rollout scripts; the testdns servers had been causing me some benign irritation (connection failures) when running ansible over the previous week or so. I should not have ignored this irritation; I should have automated it away, like I did with the recdns rollout script.

Anyway, after a partial rollout and manual rollback, it took me a few ansible-playbook --check runs to work out why Ansible was saying "host not found". The problem was due to the Jinja expansion in the following remote command, where the "to" variable was set to "authdns1.csx.cam.ac.uk" which was not in the inventory.

    ip addr add {{hostvars[to].ipv6}}/64 dev eth0

You can reproduce this with a command like,

    ansible -m debug -a 'msg={{hostvars["funted"]}}' all

After fixing that, by uncommenting the right line in the inventory, the rollout worked OK.

The other post-rollout fix was to ensure all the secondary zones had transferred OK. I had not managed to get all of our masters to add my staging servers to their ACLs, but this was not too hard to sort out using the BIND 9.10 JSON statistics server and the lovely jq command:

    curl http://authdns1.csx.cam.ac.uk:853/json |
    jq -r '.views[].zones[] | select(.serial == 4294967295) | .name' |
    xargs -n1 rndc -s authdns1.csx.cam.ac.uk refresh

After that, I needed to reduce the logging again, because the authdns servers get a whole different kind of noise in the logs!

Lurking bug: rp_filter

One mistake sneaked out of the woodwork on Wednesday, with fortunately small impact.

My colleague Rob Bricheno reported that client machines on the same subnet as recdns1 were not able to talk to recdns0. I could see the queries arriving with tcpdump, but they were being dropped somewhere in the kernel.

Malcolm Scott helpfully suggested that this was due to Linux reverse path filtering on the new recdns servers, which are multihomed on both subnets. Peter Benie advised me of the correct setting,

    sysctl net.ipv4.conf.em1.rp_filter=2

Step 3: authdns0

On Thursday evening shortly after 18:00, I did the final switch-over of authdns0, the old master.

This went fine, yay! (Actually, more like 40s than the expected 15s, but I was patient, and it was OK.)

There was a minor problem that I forgot to turn off the old DNS update cron job, so it bitched at us a few times overnight when it failed to send updates to its master server. Poor lonely cron job.

One more thing

Over the weekend my email servers complained that some of their zones had not been refreshed recently. This was because four of our RFC 1918 private reverse DNS zones had not been updated since before the switch-over.

There is a slight difference in the cron job timings on the old and new setups: previously updates happened at 59 minutes past the hour, now they happen at 53 minutes past (same as the DNS port number, for fun and mnemonics). Both setups use Unix time serial numbers, so they were roughly in sync, but due to the cron schedule the old servers had a serial number about 300 higher.

BIND on my mail servers was refusing to refresh the zone because it had copies of the zones from the old servers with a higher serial number than the new servers.
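The refusal is a consequence of DNS serial-number arithmetic (RFC 1982), under which a copy whose serial is a little higher counts as newer. A sketch for illustration:

```python
def serial_gt(a, b):
    """RFC 1982 serial-number comparison for 32-bit DNS serials:
    is a 'greater than' b in sequence-space arithmetic?"""
    return (a > b and a - b < 2**31) or (a < b and b - a > 2**31)

# The old servers' Unix-time serials ran about 300 ahead of the new
# ones, so a secondary holding the old copy refuses the new one:
old, new = 1423958700, 1423958400
assert serial_gt(old, new)  # old copy looks newer; no refresh happens
```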

I did a sneaky nsupdate add and delete on the relevant zones to update their serial numbers and everything is happy again.

To conclude

They say a clever person can get themselves out of situations a wise person would not have got into in the first place. I think the main wisdom to take away from this is not to ignore minor niggles, and to write rollout/rollback scripts that can work forwards or backwards after being interrupted at any point. I won against the niggles on the ARP problem, but lost against them on the authdns inventory SNAFU.

But in the end it pretty much worked, with only a few minutes downtime and only one person affected by a bug. So on the whole I feel a bit like Mat Ricardo.


Friday 30th January 2015

Recursive DNS rollout plan - and backout plan!

The last couple of weeks have been a bit slow, being busy with email and DNS support, an unwell child, and surprise 0day. But on Wednesday I managed to clear the decks so that on Thursday I could get down to some serious rollout planning.

My aim is to do a forklift upgrade of our DNS servers - a tier 1 service - with negligible downtime, and with a backout plan in case of fuckups.

Solaris Zones

Our old DNS service is based on Solaris Zones. The nice thing about this is that I can quickly and safely halt a zone - which stops the software and unconfigures the network interface - and if the replacement does not work I can restart the zone - which brings up the interfaces and the software.

Even better, the old servers have a couple of test zones which I can bounce up and down without a care. These give me enormous freedom to test my migration scripts without worrying about breaking things and with a high degree of confidence that my tests are very similar to the real thing.

Testability gives you confidence, and confidence gives you productivity.

Before I started setting up our new recursive DNS servers, I ran zoneadm -z testdns* halt on the old servers so that I could use the testdns addresses for developing and testing our keepalived setup. So I had the testdns zones in reserve for developing and testing the rollout/backout scripts.

Rollout plans

The authoritative and recursive parts of the new setup are quite different, so they require different rollout plans.

On the authoritative side we will have a virtual machine for each service address. I have not designed the new authoritative servers for any server-level or network-level high availability, since the DNS protocol should be able to cope well enough. This is similar in principle to our existing Solaris Zones setup. The vague rollout plan is to set up new authdns servers on standby addresses, then renumber them to take over from the old servers. This article is not about the authdns rollout plan.

On the recursive side, there are four physical servers any of which can host any of the recdns or testdns addresses, managed by keepalived. The vague rollout plan is to disable a zone on the old servers then enable its service address on the keepalived cluster.

Ansible - configuration vs orchestration

So far I have been using Ansible in a simple way as a configuration management system, treating it as a fairly declarative language for stating what the configuration of my servers should be, and then being able to run the playbooks to find out and/or fix where reality differs from intention.

But Ansible can also do orchestration: scripting a co-ordinated sequence of actions across disparate sets of servers. Just what I need for my rollout plans!

When to write an Ansible module

The first thing I needed was a good way to drive zoneadm from Ansible. I have found that using Ansible as a glorified shell script driver is pretty unsatisfactory, because its shell and command modules are too general to provide proper support for its idempotence and check-mode features. Rather than messing around with shell commands, it is much more satisfactory (in terms of reward/effort) to write a custom module.

My zoneadm module does the bare minimum: it runs zoneadm list -pi to get the current state of the machine's zones, checks if the target state matches the current state, and if not it runs zoneadm boot or zoneadm halt as required. It can only handle zone states that are "installed" or "running". 60 lines of uncomplicated Python, nice.
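The guts of such a module are just parsing the `zoneadm list -pi` output and picking a verb. A condensed, hypothetical sketch (the real module also does the AnsibleModule plumbing and check-mode reporting):

```python
def parse_zones(output):
    """Parse `zoneadm list -pi` machine-readable output: colon-separated
    fields, zoneid:zonename:state:zonepath:uuid:brand:ip-type."""
    zones = {}
    for line in output.strip().splitlines():
        fields = line.split(':')
        zones[fields[1]] = fields[2]  # zonename -> state
    return zones

def action_for(current, target):
    """Which zoneadm subcommand (if any) moves a zone to the target state?"""
    if current == target:
        return None  # idempotence: nothing to do
    return 'boot' if target == 'running' else 'halt'
```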

Start stupid and expect to fail

After I had a good way to wrangle zones it was time to do a quick hack to see if a trial rollout would work. I wrote the following playbook, which does three things: move the testdns1 zone from running to installed, change the Ansible configuration to enable testdns1 on the keepalived cluster, then push the new keepalived configuration to the cluster.

    - hosts: helen2.csi.cam.ac.uk
      tasks:
        - zoneadm: name=testdns1 state=installed
    - hosts: localhost
      tasks:
        - command: bin/vrrp_toggle rollout testdns1
    - hosts: rec
      roles:
        - keepalived

This is quick and dirty, hardcoded all the way, except for the vrrp_toggle command which is the main reality check.

The vrrp_toggle script just changes the value of an Ansible variable called vrrp_enable which lists which VRRP instances should be included in the keepalived configuration. The keepalived configuration is generated from a Jinja2 template, and each vrrp_instance (testdns1 etc.) is emitted if the instance name is not commented out of the vrrp_enable list.
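The template logic might look roughly like this - a hedged sketch with invented variable names, not our actual template:

```
{# emit each vrrp_instance only if its name appears in vrrp_enable #}
{% for inst in vrrp_instances %}
{% if inst.name in vrrp_enable %}
vrrp_instance {{ inst.name }} {
    state BACKUP
    interface em1
    virtual_router_id {{ inst.vrid }}
    priority {{ inst.priority }}
    virtual_ipaddress {
        {{ inst.ipv4 }}
        {{ inst.ipv6 }}
    }
}
{% endif %}
{% endfor %}
```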


Ansible does not re-read variables if you change them in the middle of a playbook like this. Good. That is the right thing to do.

The other way in which this playbook is stupid is there are actually 8 of them: 2 recdns plus 2 testdns, rollout and backout. Writing them individually is begging for typos; repeated code that is similar but systematically different is one of the most common ways to introduce bugs.

Learn from failure

So the right thing to do is tweak the variable then run the playbook. And note the vrrp_toggle command arguments describe almost everything you need to know to generate the playbook! (The only thing missing is the mapping from instance name (like testdns1) to parent host (like helen2).)

So I changed the vrrp_toggle script into a rec-rollout / rec-backout script, which tweaks the vrrp_enable variable and generates the appropriate playbook. The playbook consists of just two tasks, whose order depends on whether we are doing rollout or backout, and which have a few straightforward place-holder substitutions.

The nice thing about this kind of templating is that if you screw it up (like I did at first), usually a large proportion of the cases fail, probably including your test cases; whereas with clone-and-hack there will be a nasty surprise in a case you didn't test.

Consistent and quick rollouts

In the playbook I quoted above I am using my keepalived role, so I can be absolutely sure that my rollout/backout plan remains consistent with my configuration management setup. Nice!

However the keepalived role does several configuration tasks, most of which are not necessary in this situation. In fact all I need to do is copy across the templated configuration file and tell keepalived to reload it if the file has changed.

Ansible tags are for just this kind of optimization. I added a line to my keepalived.conf task:

    tags: quick

Only one task needed tagging because the keepalived.conf task has a handler to tell keepalived to reload its configuration when that changes, which is the other important action. So now I can run my rollout/backout playbooks with a --tags quick argument, so only the quick tasks (and if necessary their handlers) are run.
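
Put together, the tagged task has roughly this shape (an illustrative Ansible task; the file names and handler name are assumptions, not copied from my role):

```yaml
- name: install keepalived configuration
  template:
    src: keepalived.conf.j2
    dest: /etc/keepalived/keepalived.conf
  notify: reload keepalived
  tags: quick
```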


Once I had got all that working, I was able to easily flip testdns0 and testdns1 back and forth between the old and new setups. Each switchover takes about ten seconds, which is not bad - it is less than a typical DNS lookup timeout.

There are a couple more improvements to make before I do the rollout for real. I should improve the molly guard to make better use of ansible-playbook --check. And I should pre-populate the new servers' caches with the Alexa Top 1,000,000 list to reduce post-rollout latency. (If you have a similar UK-centric popular domains list, please tell me so I can feed that to the servers as well!)


Saturday 24th January 2015

New release of nsdiff and nspatch version 1.55

I have released version 1.55 of nsdiff, which creates an nsupdate script from differences between DNS zone files.

There are not many changes to nsdiff itself: the only notable change is support for non-standard port numbers.

The important new thing is nspatch, which is an error-checking wrapper around `nsdiff | nsupdate`. To be friendly when running from cron, nspatch only produces output when it fails. It can also retry an update if it happens to lose a race against concurrent updates e.g. due to DNSSEC signing activity.
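
That cron-friendly behaviour amounts to a retry loop that stays quiet unless the last attempt fails. A generic sketch (this is the shape of the behaviour, not nspatch's actual code):

```shell
#!/bin/sh
# quiet_retry N CMD...: run CMD up to N times; stay silent on
# success, print the last attempt's output only on overall failure
quiet_retry() {
    tries=$1; shift
    out=$(mktemp)
    while [ "$tries" -gt 0 ]; do
        if "$@" >"$out" 2>&1
        then rm -f "$out"; return 0
        fi
        tries=$((tries - 1))
    done
    cat "$out" >&2
    rm -f "$out"
    return 1
}

# e.g. quiet_retry 2 sh -c 'nsdiff example.org zones/example.org | nsupdate'
```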

You can read the documentation and download the source from the nsdiff home page.

My Mac insists that I should call it nstiff...


Saturday 17th January 2015

BIND patches as a byproduct of setting up new DNS servers

On Friday evening I reached a BIG milestone in my project to replace Cambridge University's DNS servers. I finished porting and rewriting the dynamic name server configuration and zone data update scripts, and I was - at last! - able to get the new servers up to pretty much full functionality, pulling lists of zones and their contents from the IP Register database and the managed zone service, and with DNSSEC signing on the new hidden master.

There is still some final cleanup and robustifying to do, and checks to make sure I haven't missed anything. And I have to work out the exact process I will follow to put the new system into live service with minimum risk and disruption. But the end is tantalizingly within reach!

In the last couple of weeks I have also got several small patches into BIND.

  • Jan 7: documentation for named -L

    This was a follow-up to a patch I submitted in April last year. The named -L option specifies a log file to use at startup for recording the BIND version banners and other startup information. Previously this information would always go to syslog regardless of your logging configuration.

    This feature will be in BIND 9.11.

  • Jan 8: typo in comment

    Trivial :-)

  • Jan 12: teach nsdiff to AXFR from non-standard ports

    Not a BIND patch, but one of my own companion utilities. Our managed zone service runs a name server on a non-standard port, and our new setup will use nsdiff | nsupdate to implement bump-in-the-wire signing for the MZS.

  • Jan 13: document default DNSKEY TTL

    Took me a while to work out where that value came from. Submitted on Jan 4. Included in 9.10 ARM.

  • Jan 13: automatically tune max-journal-size

    Our old DNS build scripts have a couple of mechanisms for tuning BIND's max-journal-size setting. By default a zone's incremental update journal will grow without bound, which is not helpful. Having to set the parameter by hand is annoying, especially since it should be simple to automatically tune the limit based on the size of the zone.

    Rather than re-implementing some annoying plumbing for yet another setting, I thought I would try to automate it away. I have submitted this patch as RT#38324. In response I was told there is also RT#36279 which sounds like a request for this feature, and RT#25274 which sounds like another implementation of my patch. Based on the ticket number it dates from 2011.

    I hope this gets into 9.11, or something like it. I suppose that rather than maintaining this patch I could do something equivalent in my build scripts...

  • Jan 14: doc: ignore and clean up isc-notes-html.xsl

    I found some cruft in a supposedly-clean source tree.

    This one actually got committed under my name, which I think is a first for me and BIND :-) (RT#38330)

  • Jan 14: close new zone file before renaming, for win32 compatibility
  • Jan 14: use a safe temporary new zone file name

    These two arose from a problem report on the bind-users list. The conversation moved to private mail which I find a bit annoying - I tend to think it is more helpful for other users if problems are fixed in public.

    But it turned out that BIND's error logging in this area is basically negligible, even when you turn on debug logging :-( But the Windows Process Explorer is able to monitor filesystem events, and it reported a 'SHARING VIOLATION' and 'NAME NOT FOUND'. This gave me the clue that it was a POSIX vs Windows portability bug.

    So in the end this problem was more interesting than I expected.

  • Jan 16: critical: ratelimiter.c:151: REQUIRE(ev->ev_sender == ((void *)0)) failed

    My build scripts are designed so that Ansible sets up the name servers with a static configuration which contains everything except for the zone {} clauses. The zone configuration is provisioned by the dynamic reconfiguration scripts. Ansible runs are triggered manually; dynamic reconfiguration runs from cron.

    I discovered a number of problems with bootstrapping from a bare server with no zones to a fully-populated server with all the zones and their contents on the new hidden master.

    The process is basically,

    • if there are any missing master files, initialise them as minimal zone files
    • write zone configuration file and run rndc reconfig
    • run nsdiff | nsupdate for every zone to fill them with the correct contents
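
The first step can be sketched like this (every field value below is an illustrative assumption, not what the real scripts write):

```shell
#!/bin/sh
# minimal_zone ZONE: emit a minimal master file, just enough for
# named to load the zone before nsdiff | nsupdate fills in the
# real contents (all field values here are illustrative)
minimal_zone() {
    zone=$1
    printf '%s\n' \
        '$TTL 3600' \
        "$zone. IN SOA localhost. hostmaster.$zone. 1 3600 600 86400 600" \
        "$zone. IN NS localhost."
}

# e.g. [ -f "zones/$zone" ] || minimal_zone "$zone" > "zones/$zone"
```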

    When bootstrapping, the master server would load 123 new zones, then shortly after the nsdiff | nsupdate process started, named crashed with the assertion failure quoted above.

    Mark Andrews replied overnight with the linked patch (he lives in Australia) which fixed the problem. Yay!

    The other bootstrapping problem was to do with BIND's zone integrity checks. nsdiff is not very clever about the order in which it emits changes; in particular it does not ensure that hostnames exist before any NS or MX or SRV records are created to point to them. You can turn off most of the integrity checks, but not the NS record checks.

    This causes trouble for us when bootstrapping the cam.ac.uk zone, which is the only zone we have with in-zone NS records. It also has lots of delegations which can trip the checks.

    My solution is to create a special bootstrap version of the zone, which contains the apex and delegation records (which are built from configuration stored in git) but not the bulk of the zone contents from the IP Register database. The zone can then be successfully loaded in two stages, first `nsdiff cam.ac.uk DB.bootstrap | nsupdate -l` then `nsdiff cam.ac.uk zones/cam.ac.uk | nsupdate -l`.

    Bootstrapping isn't something I expect to do very often, but I want to be sure it is easy to rebuild all the servers from scratch, including the hidden master, in case of major OS upgrades, VM vs hardware changes, disasters, etc.

    No more special snowflake servers!


Friday 9th January 2015

Recursive DNS server failover with keepalived --vrrp

I have got keepalived working on my recursive DNS servers, handling failover for testdns0.csi.cam.ac.uk and testdns1.csi.cam.ac.uk. I am quite pleased with the way it works.

It was difficult to get started because keepalived's documentation is TERRIBLE. More effort has been spent explaining how it is put together than explaining how to get it to work. The keepalived.conf man page is a barely-commented example configuration file which does not describe all the options. Some of the options are only mentioned in the examples in /usr/share/doc/keepalived/samples. Bah!

Edit: See the comments to find the real documentation!

The vital clue came from Graeme Fowler who told me about keepalived's vrrp_script feature which is "documented" in keepalived.conf.vrrp.localcheck which I never would have found without Graeme's help.


Keepalived is designed to run on a pair of load-balancing routers in front of a cluster of servers. It has two main parts. Its Linux Virtual Server daemon runs health checks on the back-end servers and configures the kernel's load balancing router as appropriate. The LVS stuff handles failover of the back-end servers. The other part of keepalived is its VRRP daemon which handles failover of the load-balancing routers themselves.

My DNS servers do not need the LVS load-balancing stuff, but they do need some kind of health check for named. I am running keepalived in VRRP-only mode and using its vrrp_script feature for health checks.

There is an SMTP client in keepalived which can notify you of state changes. It is too noisy for me, because I get messages from every server when anything changes. You can also tell keepalived to run scripts on state changes, so I am using that for notifications.

VRRP configuration

All my servers are configured as VRRP BACKUPs, and there is no MASTER. According to the VRRP RFC, the master is supposed to be the machine which owns the IP addresses. In my setup, no particular machine owns the service addresses.

I am using authentication mainly for additional protection against screwups (e.g. VRID collisions). VRRP password authentication doesn't provide any security: any attacker has to be on the local link so they can just sniff the password off the wire.

I am slightly surprised that it works when I set both IPv4 and IPv6 addresses on the same VRRP instance. The VRRP spec says you have to have separate vrouters for IPv4 and IPv6. Perhaps it works because keepalived doesn't implement real VRRP by default: it does not use a virtual MAC address but instead it just moves the virtual IP addresses and sends gratuitous ARPs to update the switches' forwarding tables. Keepalived has a use_vmac option but it seems rather fiddly to get working, so I am sticking with the default.

vrrp_instance testdns0 {
        virtual_router_id 210
        interface em1
        state BACKUP
        priority 50
        notify /etc/keepalived/notify
        authentication {
                auth_type PASS
                auth_pass XXXXXXXX
        }
        virtual_ipaddress {
                # the virtual service addresses (IPv4 and IPv6) go here
        }
        track_script {
                named_check_testdns0_1
                named_check_testdns0_2
                named_check_testdns0_3
                named_check_testdns0_4
        }
}

State change notifications

My notification script sends email when a server enters the MASTER state and takes over the IP addresses. It also sends email if the server dropped into the BACKUP state because named crashed.

    # this is /etc/keepalived/notify
    # keepalived passes: $1 = "GROUP" or "INSTANCE", $2 = name, $3 = new state
    instance=$2 state=$3
    case $state in
    (BACKUP)
        # do not notify if this server is working
        if /etc/keepalived/named_ok
        then exit 0
        else state=DEAD
        fi ;;
    esac
    exim -t <<EOF
    To: hostmaster@cam.ac.uk
    Subject: $instance $state on $(hostname)

    EOF

DNS server health checks and dynamic VRRP priorities

In the vrrp_instance snippet above, you can see that it specifies four vrrp_scripts to track. There is one vrrp_script for each possible priority, so that the four servers can have four different priorities for each vrrp_instance.

Each vrrp_script is specified using the Jinja macro below. (Four different vrrp_scripts for each of four different vrrp_instances is a lot of repetition!) The type argument is "recdns" or "testdns", the num is 0 or 1, and the prio is a number from 1 to 4.

Each script is run every "interval" seconds, and is allowed to run for up to "timeout" seconds. (My checking script should take at most 1 second.)

A positive "weight" setting is added to the vrrp_instance's priority to increase it when the script succeeds. (If the weight is negative it is added to the priority to decrease it when the script fails.)

    {%- macro named_check(type,num,prio) -%}
    vrrp_script named_check_{{type}}{{num}}_{{prio}} {
        script "/etc/keepalived/named_check {{type}} {{num}} {{prio}}"
        interval 1
        timeout 2
        weight {{ prio * 50 }}
    }
    {%- endmacro -%}

When keepalived runs the four tracking scripts for a vrrp_instance on one of my servers, at most one of the scripts will succeed. The priority is therefore adjusted to 250 for the server that should be live, 200 for its main backup, 150 and 100 on the other servers, and 50 on any server which is broken or out of service.
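
The arithmetic follows from the configuration above: the base priority is 50 and the one succeeding check adds its weight of prio * 50.

```shell
#!/bin/sh
# effective VRRP priority = base priority (50) + weight (prio * 50)
# of the single tracking script that succeeds on that server
base=50
for prio in 4 3 2 1; do
    echo "prio $prio -> priority $((base + prio * 50))"
done
# a server whose checks all fail stays at the base priority of 50
```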

The checking script finds the position of the host on which it is running in a configuration file which lists the servers in priority order. A server can be commented out to remove it from service. The priority order for testdns1 is the opposite of the order for testdns0. So the following contents of /etc/keepalived/priority.testdns specifies that testdns1 is running on recdns-cnh, testdns0 is on recdns-wcdc, recdns-rnb is disabled, and recdns-sby is a backup.


I can update this priority configuration file to change which machines are in service, without having to restart or reconfigure keepalived.

The health check script is:


    set -e

    type=$1 num=$2 check=$3

    # Look for the position of our hostname in the priority listing

    name=$(hostname --short)

    # -F = fixed string not regex
    # -x = match whole line
    # -n = print line number

    # A commented-out line will not match, so grep will fail
    # and set -e will make the whole script fail.

    grepout=$(grep -Fxn $name /etc/keepalived/priority.$type)

    # Strip off everything but the line number. Do this separately
    # so that grep's exit status is not lost in the pipeline.

    prio=$(echo $grepout | sed 's/:.*//')

    # for num=0 later is higher priority
    # for num=1 later is lower priority

    if [ $num = 1 ]
    then prio=$((5 - $prio))
    fi

    # If our priority matches what keepalived is asking about, then our
    # exit status depends on whether named is running, otherwise tell
    # keepalived we are not running at the priority it is checking.

    [ $check = $prio ] && /etc/keepalived/named_ok

The named_ok script just uses dig to verify that the server seems to be working OK. I originally queried for version.bind, but there are very strict rate limits on the server info view so it did not work very well! So now the script checks that this command produces the expected output:

    dig @localhost +time=1 +tries=1 +short cam.ac.uk in txt
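
The comparison can be sketched as a tiny helper (not the real named_ok script; the expected answer string is site-specific and not shown here):

```shell
#!/bin/sh
# dns_ok EXPECTED CMD...: succeed only if CMD succeeds and prints
# exactly EXPECTED; use it to wrap the dig command shown above
dns_ok() {
    expected=$1; shift
    actual=$("$@") || return 1
    [ "$actual" = "$expected" ]
}

# e.g. dns_ok "$expected_txt" dig @localhost +time=1 +tries=1 +short cam.ac.uk in txt
```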

Wednesday 7th January 2015

Network setup for Cambridge's new DNS servers

The SCCS-to-git project that I wrote about previously was the prelude to setting up new DNS servers with an entirely overhauled infrastructure.

The current setup which I am replacing uses Solaris Zones (like FreeBSD Jails or Linux Containers) to host the various name server instances on three physical boxes. The new setup will use Ubuntu virtual machines on our shared VM service (should I call it a "private cloud"?) for the authoritative servers. I am making a couple of changes to the authoritative setup: changing to a hidden master, and eliminating differences in which zones are served by each server.

I have obtained dedicated hardware for the recursive servers. Our main concern is that they should be able to boot and work with no dependencies on other services beyond power and networking, because basically all the other services rely on the recursive DNS servers. The machines are Dell R320s, each with one Xeon E5-2420 (6 hyperthreaded cores, 2.2GHz), 32 GB RAM, and a Dell-branded Intel 160GB SSD.

Failover for recursive DNS servers

The most important change to the recursive DNS service will be automatic failover. Whenever I need to loosen my bowels I just contemplate dealing with a failure of one of the current elderly machines, which involves a lengthy and delicate manual playbook described on our wiki...

Often when I mention DNS and failover, the immediate response is "Anycast?". We will not be doing anycast on the new servers, though that may change in the future. My current plan is to do failover with VRRP using keepalived. (Several people have told me they are successfully using keepalived, though its documentation is shockingly bad. I would like to know of any better alternatives.) There are a number of reasons for using VRRP rather than anycast:

  • The recursive DNS server addresses, recdns0 and recdns1, are on different subnets which are actually VLANs on the same physical network. (They have IPv6 as well as IPv4 addresses.) It is not feasible to change these addresses.
  • The 8 and 12 subnets are our general server subnets, used for a large proportion of our services, most of which use the recdns servers. So anycasting recdns[01] requires punching holes in the server network routing.
  • The server network routers do not provide proxy ARP and my colleagues in network systems do not want to change this. But our Cisco routers can't punch a /32 anycast hole in the server subnets without proxy ARP. So if we did do anycast we would also have to do VRRP to support failover for recdns clients on the server subnets.
  • The server network spans four sites, connected via our own city-wide fibre network. The sites are linked at layer 2: the same Ethernet VLANs are present at all four sites. So VRRP failover gives us pretty good resilience in the face of server, rack, or site failures.

VRRP will be a massive improvement over our current setup, and it should provide us a lot of the robustness that other places would normally need anycast for, but with significantly less complexity. And less complexity means less time before I can take the old machines out of service.

After the new setup is in place, it might make sense for us to revisit anycast. For instance, we could put recursive servers at other points of presence where our server network does not reach (e.g. the Addenbrooke's medical research site). But in practice there are not many situations when our server network is unreachable but the rest of the University data network is functioning, so it might not be worth it.

Configuration management

The old machines are special snowflake servers. The new setup is being managed by Ansible.

I first used Ansible in 2013 to set up the DHCP servers that were a crucial part of the network renumbering we did when moving our main office from the city centre to the West Cambridge site. I liked how easy it was to get started with Ansible. The way its --check mode prints a diff of remote config file changes is a killer feature for me. And it uses ssh rather than rolling its own crypto and host authentication like some other config management software.

I spent a lot of December working through the configuration of the new servers, starting with the hidden master and an authoritative server (a staging server which is a clone of the future live servers). It felt like quite a lot of elapsed time without much visible progress, though I was steadily knocking items off the list of things to get working.

The best bit was the last day before the xmas break. The new recdns hardware arrived on Monday 22nd, so I spent Tuesday racking them up and getting them running.

My Ansible setup already included most of the special cases required for the recdns servers, so I just uncommented their hostnames in the inventory file and told Ansible to run the playbook. It pretty much Just Worked, which was extremely pleasing :-) All that steady work paid off big time.

Multi-VLAN network setup

The main part of the recdns config which did not work was the network interface configuration, which was OK because I didn't expect it to work without fiddling.

The recdns servers are plugged into switch ports which present subnet 8 untagged (mainly to support initial bootstrap without requiring special setup of the machine's BIOS), and subnet 12 with VLAN tags (VLAN number 812). Each server has its own IPv4 and IPv6 addresses on subnet 8 and subnet 12.

The service addresses recdns0 (subnet 8) and recdns1 (subnet 12) will be additional (virtual) addresses which can be brought up on any of the four servers. They will usually be configured something like:

  • recdns-wcdc: VRRP master for recdns0
  • recdns-rnb: VRRP backup for recdns0
  • recdns-sby: VRRP backup for recdns1
  • recdns-cnh: VRRP master for recdns1

And in case of multi-site failures, the recdns1 servers will act as additional backups for the recdns0 servers and vice versa.

There were two problems with my initial untested configuration.

The known problem was that I was likely to need policy routing, to ensure that packets with a subnet 12 source address were sent out with VLAN 812 tags. This turned out to be true for IPv4, whereas IPv6 does the Right Thing by default.

The unknown problem was that the VLAN 812 interface came up only half-configured: it was using SLAAC for IPv6 instead of the static address that I specified. This took a while to debug. The clue to the solution came from running ifup with the -v flag to get it to print out what it was doing:

# ip link delete em1.812
# ifup -v em1.812

This showed that interface configuration was failing when it tried to set up the default route on that interface. Because there can be only one default route, and there was already one on the main subnet 8 interface. D'oh!

Having got ifup to run to completion I was able to verify that the subnet 12 routing worked for IPv6 but not for IPv4, pretty much as expected. With advice from my colleagues David McBride and Anton Altaparmakov I added the necessary runes to the configuration.

My final /etc/network/interfaces files on the recdns servers are generated from the following Jinja template:

# This file describes the network interfaces available on the system
# and how to activate them. For more information, see interfaces(5).

# NOTE: There must be only one "gateway" line because there can be
# only one default route. Interface configuration will fail part-way
# through when you bring up a second interface with a gateway
# specification.

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface, on subnet 8
auto em1

iface em1 inet static
      address 131.111.8.{{ ifnum }}
      netmask 23

iface em1 inet6 static
      address 2001:630:212:8::d:{{ ifnum }}
      netmask 64

# VLAN tagged interface on subnet 12
auto em1.812

iface em1.812 inet static
      address 131.111.12.{{ ifnum }}
      netmask 24

      # send packets with subnet 12 source address
      # through routing table 12 to subnet 12 router

      up   ip -4 rule  add from table 12
      down ip -4 rule  del from table 12
      up   ip -4 route add default table 12 via
      down ip -4 route del default table 12 via

iface em1.812 inet6 static
      address 2001:630:212:12::d:{{ ifnum }}
      netmask 64

      # auto-configured routing works OK for IPv6

# eof

Thursday 27th November 2014

Uplift from SCCS to git

My current project is to replace Cambridge University's DNS servers. The first stage of this project is to transfer the code from SCCS to Git so that it is easier to work with.

Ironically, to do this I have ended up spending lots of time working with SCCS and RCS, rather than Git. This was mainly developing analysis and conversion tools to get things into a fit state for Git.

If you find yourself in a similar situation, you might find these tools helpful.


Cambridge was allocated three Class B networks in the 1980s: the Computer Lab got one in 1987; the Department of Engineering got one in 1988; and eventually the Computing Service got one in 1989 for the University (and related institutions) as a whole.

The oldest records I have found date from September 1990, which list about 300 registrations. The next two departments to get connected were the Statistical Laboratory and Molecular Biology (I can't say in which order). The Statslab was allocated its address range, which it has kept for 24 years! Things pick up in 1991, when the JANET IP Service was started and rapidly took over from X.25. (Last month I blogged about connectivity for Astronomy in Cambridge in 1991.)

I have found these historical nuggets in our ip-register directory tree. This contains the infrastructure and history of IP address and DNS registration in Cambridge going back a quarter century. But it isn't just an archive: it is a working system which has been in production that long. Because of this, converting the directory tree to Git presents certain challenges.


The ip-register directory tree contains a mixture of:

  • Source code, mostly with SCCS history
  • Production scripts, mostly with SCCS history
  • Configuration files, mostly with SCCS history
  • The occasional executable
  • A few upstream perl libraries
  • Output files and other working files used by the production scripts
  • Secrets, such as private keys and passwords
  • Mail archives
  • Historical artifacts, such as old preserved copies of parts of the directory tree
  • Miscellaneous files without SCCS history
  • Editor backup files with ~ suffixes

My aim was to preserve this all as faithfully as I could, while converting it to Git in a way that represents the history in a useful manner.


The rough strategy was:

  1. Take a copy of the ip-register directory tree, preserving modification times. (There is no need to preserve owners because any useful ownership information was lost when the directory tree moved off the Central Unix Service before that shut down in 2008.)
  2. Convert from SCCS to RCS file-by-file. Converting between these formats is a simple one-to-one mapping.
  3. Files without SCCS history will have very short artificial RCS histories created from their modification times and editor backup files.
  4. Convert the RCS tree to CVS. This is basically just moving files around, because a CVS repository is little more than a directory tree of RCS files.
  5. Convert the CVS repository to Git using git cvsimport. This is the only phase that needs to do cross-file history analysis, and other people have already produced a satisfactory solution.

Simples! ... Not.

sccs2rcs proves inadequate

I first tried ESR's sccs2rcs Python script. Unfortunately I rapidly ran into a number of showstoppers.

  • It didn't work with Solaris SCCS, which is what was available on the ip-register server.
  • It destructively updates the SCCS tree, losing information about the relationship between the working files and the SCCS files.
  • It works on a whole directory tree, so it doesn't give you file-by-file control.

I fixed a bug or two but very soon concluded the program was entirely the wrong shape.

(In the end, the Solaris incompatibility became moot when I installed GNU CSSC on my FreeBSD workstation to do the conversion. But the other problems with sccs2rcs remained.)


So I wrote a small script called sccs2rcs1 which just converts one SCCS file to one RCS file, and gives you control over where the RCS and temporary files are placed. This meant that I would not have to shuffle RCS files around: I could just create them directly in the target CVS repository. Also, sccs2rcs1 uses RCS options to avoid the need to fiddle with checkout locks, which is a significant simplification.

The main regression compared to sccs2rcs is that sccs2rcs1 does not support branches, because I didn't have any files with branches.


At this point I needed to work out how I was going to co-ordinate the invocations of sccs2rcs1 to convert the whole tree. What was in there?!

I wrote a fairly quick-and-dirty script called sccscheck which analyses a directory tree and prints out notes on various features and anomalies. A significant proportion of the code exists to work out the relationship between working files, backup files, and SCCS files.

I could then start work on determining what fix-ups were necessary before the SCCS-to-CVS conversion.


One notable part of the ip-register directory tree was the archive subdirectory, which contained lots of gzipped SCCS files with date stamps. What relationship did they have to each other? My first guess was that they might be successive snapshots of a growing history, and that the corresponding SCCS files in the working part of the tree would contain the whole history.

I wrote sccsprefix to verify if one SCCS file is a prefix of another, i.e. that it records the same history up to a certain point.

This proved that the files were NOT snapshots! In fact, the working SCCS files had been periodically moved to the archive, and new working SCCS files started from scratch. I guess this was to cope with the files getting uncomfortably large and slow for 1990s hardware.


So to represent the history properly in Git, I needed to combine a series of SCCS files into a linear history. It turns out to be easier to construct commits with artificial metadata (usernames, dates) with RCS than with SCCS, so I wrote rcsappend to add the commits from a newer RCS file as successors of commits in an older file.

Converting the archived SCCS files was then a combination of sccs2rcs1 and rcsappend. Unfortunately this was VERY slow, because RCS takes a long time to check out old revisions. This is because an RCS file contains a verbatim copy of the latest revision and a series of diffs going back one revision at a time. The SCCS format is more clever and so takes about the same time to check out any revision.

So I changed sccs2rcs1 to incorporate an append mode, and used that to convert and combine the archived SCCS files, as you can see in the ipreg-archive-uplift script. This still takes ages to convert and linearize nearly 20,000 revisions in the history of the hosts.131.111 file - an RCS checkin rewrites the entire RCS file so they get slower as the number of revisions grows. Fortunately I don't need to run it many times.


There are a lot of files in the ip-register tree without SCCS histories, which I wanted to preserve. Many of them have old editor backup ~ files, which could be used to construct a wee bit of history (in the absence of anything better). So I wrote files2rcs to build an RCS file from this kind of miscellanea.

An aside on file name restrictions

At this point I need to moan a bit.

Why does RCS object to file names that start with a comma. Why.

I tried running these scripts on my Mac at home. It mostly worked, except for the directories which contained files like DB.cam (source file) and db.cam (generated file). I added a bit of support in the scripts to cope with case-insensitive filesystems, so I can use my Macs for testing. But the bulk conversion runs very slowly, I think because it generates too much churn in the Spotlight indexes.


One significant problem is dealing with SCCS files whose working files have been deleted. In some SCCS workflows this is a normal state of affairs - see for instance the SCCS support in the POSIX Make XSI extensions. However, in the ip-register directory tree this corresponds to files that are no longer needed. Unfortunately the SCCS history generally does not record when the file was deleted. It might be possible to make a plausible guess from manual analysis, but perhaps it is more truthful to record an artificial revision saying the file was not present at the time of conversion.

Like SCCS, RCS does not have a way to represent a deleted file. CVS uses a convention on top of RCS: when a file is deleted it puts the RCS file in an "Attic" subdirectory and adds a revision with a "dead" status. The rcsdeadify script applies this convention to an RCS file.
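
The mechanical half of that convention is easy to sketch (the real rcsdeadify also has to append the "dead" revision with RCS commands, which is omitted here):

```shell
#!/bin/sh
# attic_move FILE,v: move an RCS file into CVS's Attic subdirectory,
# which is how CVS records that the working file was deleted
attic_move() {
    f=$1
    d=$(dirname "$f")
    mkdir -p "$d/Attic"
    mv "$f" "$d/Attic/$(basename "$f")"
}
```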


There are situations where it is possible to identify a meaningful committer and deletion time. Where a .tar.gz archive exists, it records the original file owners. The tar2usermap script records the file owners from the tar files. The contents can then be unpacked and converted as if they were part of the main directory, using the usermap file to provide the correct committer IDs. After that the files can be marked as deleted at the time the tarfile was created.


The main conversion script is sccs2cvs, which evacuates an SCCS working tree into a CVS repository, leaving behind a tree of (mostly) empty directories. It is based on a simplified version of the analysis done by sccscheck, with more careful error checking of the commands it invokes. It uses sccs2rcs1, files2rcs, and rcsappend to handle each file.

The rcsappend case occurs when there is an editor backup ~ file which is older than the oldest SCCS revision, in which case sccs2cvs uses rcsappend to combine the output of sccs2rcs1 and files2rcs. This could be done more efficiently with sccs2rcs1's append mode, but for the ip-register tree it doesn't cause a big slowdown.

To cope with the varying semantics of missing working files, sccs2cvs leaves behind a tombstone where it expected to find a working file. This takes the form of a symlink pointing to 'Attic'. Another script can then deal with these tombstones as appropriate.
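A later pass can find those tombstones with something like this sketch (the function name is made up; what to do with each tombstone is left to the caller):

```python
import os

def find_tombstones(tree):
    # Yield the paths of symlinks pointing at "Attic": the tombstones
    # left where a working file was expected but not found.
    for dirpath, dirnames, filenames in os.walk(tree):
        for name in filenames + dirnames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) and os.readlink(path) == "Attic":
                yield path
```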

pre-uplift, mid-uplift, post-uplift

Before sccs2cvs can run, the SCCS working tree should be reasonably clean. So the overall uplift process goes through several phases:

  1. Fetch and unpack copy of SCCS working tree;
  2. pre-uplift fixups;
    (These should be the minimum changes that are required before conversion to CVS, such as moving secrets out of the working tree.)
  3. sccs2cvs;
  4. mid-uplift fixups;
    (This should include any adjustments to the earlier history such as marking when files were deleted in the past.)
  5. git cvsimport or cvs-fast-export | git fast-import;
  6. post-uplift fixups;
    (This is when to delete cruft which is now preserved in the git history.)

For the ip-register directory tree, the pre-uplift phase also includes ipreg-archive-uplift which I described earlier. Then in the mid-uplift phase the combined histories are moved into the proper place in the CVS repository so that their history is recorded in the right place.

Similarly, for the tarballs, the pre-uplift phase unpacks them in place, and moves the tar files aside. Then the mid-uplift phase rcsdeadifies the tree that was inside the tarball.

I have not stuck to my guidelines very strictly: my scripts delete quite a lot of cruft in the pre-uplift phase. In particular, they delete duplicated SCCS history files from the archives, and working files which are generated by scripts.


SCCS/RCS/CVS all record committers by simple user IDs, whereas git uses names and email addresses. So git-cvsimport and cvs-fast-export can be given an authors file containing the translation. The sccscommitters script produces a list of user IDs as a starting point for an authors file.
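The format both tools expect is roughly one "userid = Full Name <email>" line per committer (check each tool's documentation for the exact syntax), so a skeleton authors file can be knocked up from the user ID list and then filled in by hand. A sketch, with a placeholder domain:

```python
def skeleton_authors_file(userids, domain="example.org"):
    # One "uid = uid <uid@domain>" line per committer, sorted and
    # de-duplicated; the names and the domain are to be edited by hand.
    return "\n".join(f"{uid} = {uid} <{uid}@{domain}>"
                     for uid in sorted(set(userids)))
```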

Uplifting cvs to git

At first I tried git cvsimport, since I have successfully used it before. In this case it turned out not to be the path to swift enlightenment - it was taking about 3s per commit. This is mainly because it checks out files from oldest to newest, so it falls foul of the same performance problem that my rcsappend program did, as I described above.

So I compiled cvs-fast-export and fairly soon I had a populated repository: nearly 30,000 commits at 35 commits per second, so about 100 times faster. The fast-import/export format allows you to provide file contents in any order, independent of the order they appear in commits. The fastest way to get the contents of each revision out of an RCS file is from newest to oldest, so that is what cvs-fast-export does.

There are a couple of niggles with cvs-fast-export, so I have a patch which fixes them in a fairly dumb manner (without adding command-line switches to control the behaviour):

  • In RCS and CVS style, cvs-fast-export replaces empty commit messages with "*** empty log message ***", whereas I want it to leave them empty.
  • cvs-fast-export makes a special effort to translate CVS's ignored file behaviour into git by synthesizing a .gitignore file into every commit. This is wrong for the ip-register tree.
  • Exporting the hosts.131.111 file takes a long time, during which cvs-fast-export appears to stall. I added a really bad progress meter to indicate that work was being performed.

Wrapping up

Overall this has taken more programming than I expected, and more time, very much following the pattern that the last 10% takes the same time as the first 90%. And I think the initial investigations - before I got stuck in to the conversion work - probably took the same time again.

There is one area where the conversion could perhaps be improved: the archived dumps of various subdirectories have been converted in the location that the tar files were stored. I have not tried to incorporate them as part of the history of the directories from which the tar files were made. On the whole I think combining them, coping with renames and so on, would take too much time for too little benefit. The multiple copies of various ancient scripts are a bit weird, but it is fairly clear from the git history what was going on.

So, let us declare the job DONE, and move on to building new DNS servers!

(13 comments | Leave a comment)

Saturday 22nd November 2014

Nerdy trivia about Unix time_t

When I was running git-cvsimport yesterday (about which more another time), I wondered what were the nine-digit numbers starting with 7 that it was printing in its progress output. After a moment I realised they were the time_t values corresponding to the commit dates from the early 1990s.

Bert commented that he started using Unix when time_t values started with an 8, which made me wonder if there was perhaps a 23-year ambiguity - early 1970s or mid 1990s? (For the sake of pedantry - I don't really think Bert is that old!)

So I checked and time_t = 80,000,000 corresponds to 1972-07-14 22:13:20 and 90,000,000 corresponds to 1972-11-07 16:00:00. But I thought this was before modern time_t started.

Page 183 of this very large PDF of the 3rd Edition Unix manual says:

TIME (II)                  3/15/72                  TIME (II)

NAME         time -- get time of year

SYNOPSIS     sys time / time = 13
             (time r0-r1)

DESCRIPTION  time returns the time since 00:00:00, Jan. 1,
             1972, measured in sixtieths of a second.  The
             high order word is in the r0 register and the
             low order is in the r1.

SEE ALSO     date(I), mdate(II)


BUGS         The time is stored in 32 bits.  This guarantees
             a crisis every 2.26 years.

So back then the 800,000,000 - 900,000,000 period was about three weeks in June 1972.
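For the record, the arithmetic is easy to check (an epoch of 1972-01-01 and ticks of 1/60 second, per the manual page above):

```python
from datetime import datetime, timedelta

def third_edition_time(ticks):
    # Convert a 3rd Edition time value, in sixtieths of a second
    # since 00:00:00 on 1st January 1972, to a datetime.
    return datetime(1972, 1, 1) + timedelta(seconds=ticks / 60)

# 2**32 sixtieths is the wrap-around interval: about 2.26 years.
wrap_years = 2**32 / 60 / 86400 / 365.25
```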

The 4th Edition Unix manual (link to tar file of nroff source) says:

TIME (II)                   8/5/73                   TIME (II)

NAME         time -- get date and time

SYNOPSIS     (time = 13)
             sys  time
             int tvec[2];

DESCRIPTION  Time returns the time since 00:00:00 GMT, Jan. 1,
             1970, measured in seconds. From asm, the high
             order word is in the r0 register and the low
             order is in r1. From C, the user-supplied vector
             is filled in.

SEE ALSO     date(I), stime(II), ctime(III)


I think the date on that page is a reasonably accurate indicator of when the time_t format changed. In the Unix manual, each page has its own date, separate from the date on the published editions of the manual. So, for example, the 3rd Edition is dated February 1973, but its TIME(II) page is dated March 1972. However all the 4th Edition system call man pages have the same date, which suggests that part of the documentation was all revised together, and the actual changes to the code happened some time earlier.

Now, time_t = 100,000,000 corresponds to 1973-03-03 09:46:40, so it is pretty safe to say that the count of seconds since the epoch has always had nine or ten digits.
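Those conversions are easy to reproduce (modern time_t: seconds since 1970-01-01 00:00:00 UTC, ignoring leap seconds):

```python
from datetime import datetime, timezone

def unix_time(t):
    # Seconds since the Unix epoch, 1970-01-01 00:00:00 UTC.
    return datetime.fromtimestamp(t, tz=timezone.utc)
```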

(1 comment | Leave a comment)


I recently saw FixedFixer on Hacker News. This is a bookmarklet which turns off CSS position:fixed, which makes a lot of websites less annoying. Particular offenders include Wired, Huffington Post, Medium, et cetera ad nauseam. A lot of them are unreadable on my old small phone because of all the crap they clutter up the screen with, but even on my new bigger phone the clutter is annoying. Medium's bottom bar is particularly vexing because it looks just like mobile Safari's bottom bar. Bah, thrice bah, and humbug! But, just run FixedFixer and the crap usually disappears.

The code I am using is very slightly adapted from the HN post. In readable form:

  (function(elements, elem, style, i) {
    elements = document.getElementsByTagName('*');
    for (i = 0; elem = elements[i]; i++) {
      style = getComputedStyle(elem);
      if (style && style.position == 'fixed')
        elem.style.position = 'static';
    }
  })();

Or in bookmarklet style:

    javascript:(function(l,e,s,i){l=document.getElementsByTagName('*');for(i=0;e=l[i];i++){s=getComputedStyle(e);if(s&&s.position=='fixed')e.style.position='static'}})()

Adding bookmarklets in iOS is a bit annoying because you can't edit the URL when adding a bookmark. You have to add a rubbish bookmark then as a separate step, edit it to replace the URL with the bookmarklet. Sigh.

I have since added a second bookmarklet, because when I was trying to read about AWS a large part of the text was off the right edge of the screen and they had disabled scrolling and zooming so it could not be read. How on earth can they publish a responsive website which does not actually work on a large number of phones?!

Anyway, OverflowFixer is the same as FixedFixer, but instead of changing position='fixed' to position='static', it changes overflow='hidden' to overflow='visible'.

When I mentioned these on Twitter, Tom said "Please share!" so this is a slightly belated reply. Do you have any bookmarklets that you particularly like? Write a comment!

(2 comments | Leave a comment)

Thursday 30th October 2014

The early days of the Internet in Cambridge

I'm currently in the process of uplifting our DNS development / operations repository from SCCS (really!) to git. This is not entirely trivial because I want to ensure that all the archival material is retained in a sensible way.

I found an interesting document from one of the oldest parts of the archive, which provides a good snapshot of academic computer networking in the UK in 1991. It was written by Tony Stonely, aka <ajms@cam.ac.uk>. AJMS is mentioned in RFC 1117 as the contact for Cambridge's IP address allocation. He was my manager when I started work at Cambridge in 2002, though he retired later that year.

The document is an email discussing IP connectivity for Cambridge's Institute of Astronomy. There are a number of abbreviations which might not be familiar...

  • Coloured Book: the JANET protocol suite
  • CS: the University Computing Service
  • CUDN: the Cambridge University Data Network
  • GBN: the Granta Backbone Network, Cambridge's duct and fibre infrastructure
  • grey: short for Grey Book, the JANET email protocol
  • IoA: the Institute of Astronomy
  • JANET: the UK national academic network
  • JIPS: the JANET IP service, which started as a pilot service early in 1991; IP traffic rapidly overtook JANET's native X.25 traffic, and JIPS became an official service in November 1991, about when this message was written
  • PSH: a member of IoA staff
  • RA: the Mullard Radio Astronomy Observatory, an outpost at Lords Bridge near Barton, where some of the dishes sit on the old Cambridge-Oxford railway line. (I originally misunderstood the reference as the Rutherford Appleton Laboratory, a national research institute in Oxfordshire.)
  • RGO: The Royal Greenwich Observatory, which moved from Herstmonceux to the IoA site in Cambridge in 1990
  • Starlink: a UK national DECnet network linking astronomical research institutions

Edited to correct the expansion of RA and to add Starlink

    Connection of IoA/RGO to IP world

This note is a statement of where I believe we have got to and an initial
review of the options now open.

What we have achieved so far

All the Suns are properly connected at the lower levels to the
Cambridge IP network, to the national IP network (JIPS) and to the
international IP network (the Internet). This includes all the basic
infrastructure such as routing and name service, and allows the Suns
to use all the usual native Unix communications facilities (telnet,
ftp, rlogin etc) except mail, which is discussed below. Possibly the
most valuable end-user function thus delivered is the ability to fetch
files directly from the USA.

This also provides the basic infrastructure for other machines such as
the VMS hosts when they need it.

VMS nodes

Nothing has yet been done about the VMS nodes. CAMV0 needs its address
changing, and both IOA0 and CAMV0 need routing set for extra-site
communication. The immediate intention is to route through cast0. This
will be transparent to all parties and impose negligible load on
cast0, but requires the "doit" bit to be set in cast0's kernel. We
understand that PSH is going to do all this [check], but we remain
available to assist as required.

Further action on the VMS front is stalled pending the arrival of the
new release (6.6) of the CMU TCP/IP package. This is so imminent that
it seems foolish not to await it, and we believe IoA/RGO agree [check].

Access from Suns to Coloured Book world

There are basically two options for connecting the Suns to the JANET
Coloured Book world. We can either set up one or more of the Suns as
full-blown independent JANET hosts or we can set them up to use CS
gateway facilities. The former provides the full range of facilities
expected of any JANET host, but is cumbersome, takes significant local
resources, is complicated and long-winded to arrange, incurs a small
licence fee, is platform-specific, and adds significant complexity to
the system managers' maintenance and planning load. The latter in
contrast is light-weight, free, easy to install, and can be provided
for any reasonable Unix host, but limits functionality to outbound pad
and file transfer either way initiated from the local (IoA/RGO) end.
The two options are not exclusive.

We suspect that the latter option ("spad/cpf") will provide adequate
functionality and is preferable, but would welcome IoA/RGO opinion.

Direct login to the Suns from a (possibly) remote JANET/CUDN terminal
would currently require the full Coloured Book package, but the CS
will shortly be providing X.29-telnet gateway facilities as part of
the general infrastructure, and can in any case provide this
functionality indirectly through login accounts on Central Unix
facilities. For that matter, AST-STAR or WEST.AST could be used in
this fashion.


Mail

Mail is a complicated and difficult subject, and I believe that a
small group of experts from IoA/RGO and the CS should meet to discuss
the requirements and options. The rest of this section is merely a
fleeting summary of some of the issues.
Firstly, a political point must be clarified. At the time of writing
it is absolutely forbidden to emit smtp (ie Unix/Internet style) mail
into JIPS. This prohibition is national, and none of Cambridge's
doing. We expect that the embargo will shortly be lifted somewhat, but
there are certain to remain very strict rules about how smtp is to be
used. Within Cambridge we are making best guesses as to the likely
future rules and adopting those as current working practice. It must
be understood however that the situation is highly volatile and that
today's decisions may turn out to be wrong.

The current rulings are (inter alia)

        Mail to/from outside Cambridge may only be grey (Ie. JANET

        Mail within Cambridge may be grey or smtp BUT the reply
        address MUST be valid in BOTH the Internet AND Janet (modulo
        reversal). Thus a workstation emitting smtp mail must ensure
        that the reply address contained is that of a current JANET
        mail host. Except that -

        Consenting machines in a closed workgroup in Cambridge are
        permitted to use smtp between themselves, though there is no
        support from the CS and the practice is discouraged. They
        must remember not to contravene the previous two rulings, on
        pain of disconnection.

The good news is that a central mail hub/distributer will become
available as a network service for the whole University within a few
months, and will provide sufficient gateway function that ordinary
smtp Unix workstations, with some careful configuration, can have full
mail connectivity. In essence the workstation and the distributer will
form one of those "closed workgroups", the workstation will send all
its outbound mail to the distributer and receive all its inbound mail
from the distributer, and the distributer will handle the forwarding
to and from the rest of Cambridge, UK and the world.

There is no prospect of DECnet mail being supported generally either
nationally or within Cambridge, but I imagine Starlink/IoA/RGO will
continue to use it for the time being, and whatever gateway function
there is now will need preserving. This will have to be largely
IoA/RGO's own responsibility, but the planning exercise may have to
take account of any further constraints thus imposed. Input from
IoA/RGO as to the requirements is needed.

In the longer term there will probably be a general UK and worldwide
shift to X.400 mail, but that horizon is probably too hazy to rate more
than a nod at present. The central mail switch should in any case hide
the initial impact from most users.

The times are therefore a'changing rather rapidly, and some pragmatism
is needed in deciding what to do. If mail to/from the IP machines is
not an urgent requirement, and since they will be able to log in to
the VMS nodes it may not be, then the best thing may well be to await
the mail distributer service. If more direct mail is needed more
urgently then we probably need to set up a private mail distributer
service within IoA/RGO. This would entail setting up (probably) a Sun
as a full JANET host and using it as the one and only (mail) route in
or out of IoA/RGO. Something rather similar has been done in Molecular
Biology and is thus known to work, but setting it up is no mean task.
A further fall-back option might be to arrange to use Central Unix
facilities as a mail gateway in similar vein. The less effort spent on
interim facilities the better, however.

Broken mail

We discovered late in the day that smtp mail was in fact being used
between IoA and RA, and the name changing broke this. We regret having
thus trodden on existing facilities, and are willing to help try to
recover any required functionality, but we believe that IoA/RGO/RA in
fact have this in hand. We consider the activity to fall under the
third rule above. If help is needed, please let us know.

We should also report a sideline problem we encountered and which will
probably be a continuing cause of grief. CAVAD, and indeed any similar
VMS system, emits mail with reply addresses of the form
"CAVAD::user"@....  This is quite legal, but the quotes are
syntactically significant, and must be returned in any reply.
Unfortunately the great majority of Unix systems strip such quotes
during emission of mail, so the reply address fails. Such stripping
can occur at several levels, notably the sendmail (ie system)
processing and that of one of the most popular user-level mailers. The CS
is fixing its own systems, but the problem is replicated in something
like half a million independent Internet hosts, and little can be done
about it.

Other requirements

There may well be other requirements that have not been noticed or,
perish the thought, we have inadvertently broken. Please let us know
of these.

Bandwidth improvements

At present all IP communications between IoA/RGO and the rest of the
world go down a rather slow (64Kb/sec) link. This should improve
substantially when it is replaced with a GBN link, and to most of
Cambridge the bandwidth will probably become 1-2Mb/sec. For comparison,
the basic ethernet bandwidth is 10Mb/sec. The timescale is unclear, but
sometime in 1992 is expected. The bandwidth of the national backbone
facilities is of the order of 1Mb/sec, but of course this is shared with
many institutions in a manner hard to predict or assess.

For Computing Service,
Tony Stoneley, ajms@cam.cus
(6 comments | Leave a comment)

Wednesday 15th October 2014

POP, IMAP, SMTP, and the POODLE SSLv3.0 vulnerability.

A lot of my day has been spent on the POODLE vulnerability. For details see the original paper, commentary by Daniel Franke, Adam Langley, Robert Graham, and the POODLE.io web page of stats and recommendations.

One thing I have been investigating is to what extent mail software uses SSLv3. The best stats we have come from our message submission server, smtp.hermes, which logs TLS versions and cipher suites and (when possible) User-Agent and X-Mailer headers. (The logging from our POP and IMAP servers is not so good, partly because we don't request or log user agent declarations, and even if we did most clients wouldn't provide them.)

Nearly 100 of our users are using SSLv3, which is about 0.5% of them. The main culprits seem to be Airmail, Evolution, and most of all Android. Airmail is a modern Mac MUA, so in that case I guess it is a bug or misuse of the TLS API. For Evolution my guess is that it has a terrible setup user interface (all MUAs have terrible setup user interfaces) and users are choosing "SSL 3.0" rather than "TLS 1.0" because the number is bigger. In the case of Android I don't have details of version numbers because Android mail software doesn't include user-agent headers (unlike practically everything else), but I suspect old unsupported smart-ish phones running bad Java are to blame.

I haven't decided exactly what we will do to these users yet. However we have the advantage that POODLE seems to be a lot less bad for non-web TLS clients.

The POODLE padding oracle attack requires a certain amount of control over the plaintext which the attacker is trying to decrypt. Specifically:

  1. The plaintext plus MAC has to be an exact multiple of the cipher block size;
  2. It must be possible to move the secret (cookie or password) embedded in the plaintext by a byte at a time to scan it past a block boundary.
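The second requirement is just modular arithmetic. As a toy illustration (the block size is typical of a CBC-mode cipher; this models only the alignment, not the attack itself):

```python
BLOCK = 16  # cipher block size in bytes, e.g. AES in CBC mode

def padding_for_byte(prefix_len, secret_index, block=BLOCK):
    # How many filler bytes to insert before the plaintext so that
    # byte `secret_index` of the secret (which follows a prefix of
    # `prefix_len` bytes) lands in the last position of a block.
    target = prefix_len + secret_index  # 0-based position of the byte
    return (block - 1 - target) % block
```

The attacker repeats the request for each padding value, so each secret byte in turn sits at a block boundary where the padding oracle applies.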

In the web situation, the attacker can use JavaScript served from anywhere to make repeated POST requests to an arbitrary target host. The JS can manipulate the body of the POST to control the overall length of the request, and can manipulate the request path to control the position of the cookie in the headers.

In the mail situation (POP, IMAP, SMTP), the attacker can make the client retry requests repeatedly by breaking the connection, but they cannot control the size or framing of the client's authentication command.

So I think we have the option of not worrying too much if forced upgrades turn out to be too painful, though I would prefer not to go that route - it makes me feel uncomfortably complacent.

(5 comments | Leave a comment)

Monday 14th July 2014

Data structures and algorithms

"Bad programmers worry about the code. Good programmers worry about data structures and their relationships." - Linus Torvalds

"If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming." - Rob Pike

"Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious." - Fred Brooks
(7 comments | Leave a comment)

Wednesday 14th May 2014

Dilbert feeds

I just noticed that Dilbert had its 25th anniversary last month. I have created an atom feed of old strips to mark this event. To avoid leap year pain the feed contains strips from the same date 24 years ago, rather than 25 years ago. See dilbert_zoom for current strips and dilbert_24 for the old ones.
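The leap year pain is avoided because 24 is a multiple of four, so 29th February always maps onto another leap year, whereas 25 years back the date usually does not exist. The date arithmetic is just:

```python
from datetime import date

def old_strip_date(today, years_back=24):
    # Same day and month, years_back years earlier. Safe on 29th
    # February whenever years_back is a multiple of four (within
    # the 1901-2099 window, where every fourth year is a leap year).
    return today.replace(year=today.year - years_back)
```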
(2 comments | Leave a comment)

Tuesday 25th March 2014

Update to SSHFP tutorial

I have updated yesterday's article on how to get SSHFP records, DNSSEC, and VerifyHostKeyDNS=yes to work.

I have re-ordered the sections to avoid interrupting the flow of the instructions with chunks of background discussion.

I have also added a section discussing the improved usability vs weakened security of the RRSET_FORCE_EDNS0 patch in Debian and Ubuntu.

(Leave a comment)

Monday 24th March 2014

SSHFP tutorial: how to get SSHFP records, DNSSEC, and VerifyHostKeyDNS=yes to work.

One of the great promises of DNSSEC is to provide a new public key infrastructure for authenticating Internet services. If you are a relatively technical person you can try out this brave new future now with ssh.

There are a couple of advantages of getting SSHFP host authentication to work. Firstly you get easier-to-use security, since you no longer need to rely on manual host authentication, or better security for the brave or foolhardy who trust leap-of-faith authentication. Secondly, it becomes feasible to do host key rollovers, since you only need to update the DNS - the host's key is no longer wired into thousands of known_hosts files. (You can probably also get the latter benefit using ssh certificate authentication, but why set up another PKI if you already have one?)

In principle it should be easy to get this working but there are a surprising number of traps and pitfalls. So in this article I am going to try to explain all the whys and wherefores, which unfortunately means it is going to be long, but I will try to make it easy to navigate. In the initial version of this article I am just going to describe what the software does by default, but I am happy to add specific details and changes made by particular operating systems.

The outline of what you need to do on the server is:

  • Sign your DNS zone. I will not cover that in this article.
  • Publish SSHFP records in the DNS

The client side is more involved. There are two versions, depending on whether ssh has been compiled to use ldns or not. Run ldd $(which ssh) to see if it is linked with libldns.

  • Without ldns:
    • Install a validating resolver (BIND or Unbound)
    • Configure the stub resolver /etc/resolv.conf
    • Configure ssh
  • With ldns:
    • Install unbound-anchor
    • Configure the stub resolver /etc/resolv.conf
    • Configure ssh

Publish SSHFP records in the DNS

Generating SSHFP records is quite straightforward:

    demo:~# cd /etc/ssh
    demo:/etc/ssh# ssh-keygen -r $(hostname)
    demo IN SSHFP 1 1 21da0404294d07b940a1df0e2d7c07116f1494f9
    demo IN SSHFP 1 2 3293d4c839bfbea1f2d79ab1b22f0c9e0adbdaeec80fa1c0879dcf084b72e206
    demo IN SSHFP 2 1 af673b7beddd724d68ce6b2bb8be733a4d073cc0
    demo IN SSHFP 2 2 953f24d775f64ff21f52f9cbcbad9e981303c7987a1474df59cbbc4a9af83f6b
    demo IN SSHFP 3 1 f8539cfa09247eb6821c645970b2aee2c5506a61
    demo IN SSHFP 3 2 9cf9ace240c8f8052f0a6a5df1dea4ed003c0f5ecb441fa2c863034fddd37dc9

Put these records in your zone file, or you can convert them into an nsupdate script with a bit of seddery:

    ssh-keygen -r $(hostname -f) |
        sed 's/^/update add /;s/ IN / 3600 IN /;/ SSHFP . 1 /d;'

The output of ssh-keygen -r includes hashes in both SHA1 and SHA256 format (the shorter and longer hashes). You can discard the SHA1 hashes.
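The same transformation can be sketched in Python, assuming the ssh-keygen -r output format shown above (the function name is mine):

```python
def sshfp_to_nsupdate(lines, ttl=3600):
    # Turn `ssh-keygen -r` output into nsupdate commands, keeping
    # only the SHA256 records (fingerprint type 2) and adding a TTL.
    out = []
    for line in lines:
        host, _in, _sshfp, alg, fptype, fingerprint = line.split()
        if fptype == "2":
            out.append(f"update add {host} {ttl} IN SSHFP {alg} {fptype} {fingerprint}")
    return out
```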

It includes hashes for the different host key authentication algorithms:

  • 1: ssh-rsa
  • 2: ssh-dss
  • 3: ecdsa

I believe ecdsa covers all three key sizes which OpenSSH gives separate algorithm names: ecdsa-sha2-nistp256, ecdsa-sha2-nistp384, ecdsa-sha2-nistp521

OpenSSH supports other host key authentication algorithms, but unfortunately they cannot be authenticated using SSHFP records because they do not have algorithm numbers allocated.

The problem is actually worse than that, because most of the extra algorithms are the same as the three listed above, but with added support for certificate authentication. The ssh client is able to convert certificate to plain key authentication, but a bug in this fallback logic breaks SSHFP authentication.

So I recommend that if you want to use SSHFP authentication your server should only have host keys of the three basic algorithms listed above.

NOTE (added 26-Nov-2014): if you are running OpenSSH-5, it does not support ECDSA SSHFP records. So if your servers need to support older clients, you might want to stick to just RSA and DSA host keys.

You are likely to have an SSHFP algorithm compatibility problem if you get the message: "Error calculating host key fingerprint."

NOTE (added 12-Mar-2015): if you are running OpenSSH-6.7 or later, it has support for Ed25519 SSHFP records which use algorithm number 4.

Install a validating resolver

To be safe against active network interception attacks you need to do DNSSEC validation on the same machine as your ssh client. If you don't do this, you can still use SSHFP records to provide a marginal safety improvement for leap-of-faith users. In this case I recommend using VerifyHostKeyDNS=ask to reinforce to the user that they ought to be doing proper manual host authentication.

If ssh is not compiled to use ldns then you need to run a local validating resolver, either BIND or unbound.

If ssh is compiled to use ldns, it can do its own validation, and you do not need to install BIND or Unbound.

Run ldd $(which ssh) to see if it is linked with libldns.

Install a validating resolver - BIND

The following configuration will make named run as a local validating recursive server. It just takes the defaults for everything, apart from turning on validation. It automatically uses BIND's built-in copy of the root trust anchor.


    options {
        dnssec-validation auto;
        dnssec-lookaside auto;
    };

Install a validating resolver - Unbound

Unbound comes with a utility unbound-anchor which sets up the root trust anchor for use by the unbound daemon. You can then configure unbound as follows, which takes the defaults for everything apart from turning on validation using the trust anchor managed by unbound-anchor.


    server:
        auto-trust-anchor-file: "/var/lib/unbound/root.key"

Install a validating resolver - dnssec-trigger

If your machine moves around a lot to dodgy WiFi hot spots and hotel Internet connections, you may find that the nasty middleboxes break your ability to validate DNSSEC. In that case you can use dnssec-trigger, which is a wrapper around Unbound which knows how to update its configuration when you connect to different networks, and which can work around braindamaged DNS proxies.

Configure the stub resolver - without ldns

If ssh is compiled without ldns, you need to add the following line to /etc/resolv.conf; beware your system's automatic resolver configuration software, which might be difficult to persuade to leave resolv.conf alone.

    options edns0

For testing purposes you can add RES_OPTIONS=edns0 to ssh's environment.

On some systems (including Debian and Ubuntu), ssh is patched to force EDNS0 on, so that you do not need to set this option. See the section on RRSET_FORCE_EDNS0 below for further discussion.

Configure the stub resolver - with ldns

If ssh is compiled with ldns, you need to run unbound-anchor to maintain a root trust anchor, and add something like the following line to /etc/resolv.conf

    anchor /var/lib/unbound/root.key

Run ldd $(which ssh) to see if it is linked with libldns.

Configure ssh

After you have done all of the above, you can add the following to your ssh configuration, either /etc/ssh/ssh_config or ~/.ssh/config

    VerifyHostKeyDNS yes

Then when you connect to a host for the first time, it should go straight to the Password: prompt, without asking for manual host authentication.

If you are not using certificate authentication, you might also want to restrict ssh to the plain host key algorithms. This is because ssh prefers the certificate algorithms, and if you connect to a host that offers a more preferred algorithm, ssh will try that and ignore the DNS. This is not very satisfactory; hopefully it will improve when the bug is fixed.

    HostKeyAlgorithms ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,ssh-dss


Check that your resolver is validating and getting a secure result for your host. Run the following command and check for "ad" in the flags. If it is not there then either your resolver is not validating, or /etc/resolv.conf is not pointing at the validating resolver.

    $ dig +dnssec <hostname> sshfp | grep flags
    ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
    ; EDNS: version: 0, flags: do; udp: 4096

See if ssh is seeing the AD bit. Use ssh -v and look for messages about secure or insecure fingerprints in the DNS. If you are getting secure answers via dig but ssh is not, perhaps you are missing "options edns0" from /etc/resolv.conf.

    debug1: found 6 secure fingerprints in DNS
    debug1: matching host key fingerprint found in DNS

Try using specific host key algorithms, to see if ssh is trying to authenticate a key which does not have an SSHFP record of the corresponding algorithm.

    $ ssh -o HostKeyAlgorithms=ssh-rsa <hostname>


What I use is essentially:


    options {
        dnssec-validation auto;
        dnssec-lookaside auto;


    options edns0


    VerifyHostKeyDNS yes

Background on DNSSEC and non-validating stub resolvers

When ssh is not compiled to use ldns, it has to trust the recursive DNS server to validate SSHFP records, and it trusts that the connection to the recursive server is secure. To find out if an SSHFP record is securely validated, ssh looks at the AD bit in the DNS response header - AD stands for "authenticated data".

A resolver will not set the AD bit based on the security status of the answer unless the client asks for it. There are two ways to do that. The simple way (from the perspective of the DNS protocol) is to set the AD bit in the query, which gets you the AD bit in the reply without other side-effects. Unfortunately the standard resolver API makes it very hard to do this, so it is only simple in theory.

The other way is to add an EDNS0 OPT record to the query with the DO bit set - DO stands for "DNSSEC OK". This has a number of side-effects: EDNS0 allows large UDP packets which provide the extra space needed by DNSSEC, and DO makes the server send back the extra records required by DNSSEC such as RRSIGs.

Adding "options edns0" to /etc/resolv.conf only tells it to add the EDNS0 OPT record - it does not enable DNSSEC. However ssh itself observes whether EDNS0 is turned on, and if so also turns on the DO bit.

Regarding "options edns0" vs RRSET_FORCE_EDNS0

At first it might seem annoying that ssh makes you add "options edns0" to /etc/resolv.conf before it will ask for DNSSEC results. In fact on some systems, ssh is patched to add a DNS API flag called RRSET_FORCE_EDNS0 which forces EDNS0 and DO on, so that you do not need to explicitly configure the stub resolver. However although this seems more convenient, it is less safe.

If you are using the standard portable OpenSSH then you can safely set VerifyHostKeyDNS=yes, provided your stub resolver is configured correctly. The rule you must follow is to only add "options edns0" if /etc/resolv.conf is pointing at a local validating resolver. SSH is effectively treating "options edns0" as a signal that it can trust the resolver. If you keep this rule you can change your resolver configuration without having to reconfigure ssh too; it will automatically fall back to VerifyHostKeyDNS=ask when appropriate.

If you are using a version of ssh with the RRSET_FORCE_EDNS0 patch (such as Debian and Ubuntu) then it is sometimes NOT SAFE to set VerifyHostKeyDNS=yes. With this patch ssh has no way to tell if the resolver is trustworthy or if it should fall back to VerifyHostKeyDNS=ask; it will blindly trust a remote validating resolver, which leaves you vulnerable to MitM attacks. On these systems, if you reconfigure your resolver, you may also have to reconfigure ssh in order to remain safe.

Towards the end of February there was a discussion on an IETF list about stub resolvers and DNSSEC which revolved around exactly this question of how an app can tell if it is safe to trust the AD bit from the recursive DNS server.

One proposal was for the stub resolver to strip the AD bit in replies from untrusted servers, which (if it were implemented) would allow ssh to use the RRSET_FORCE_EDNS0 patch safely. However this proposal means you have to tell the resolver if the server is trusted, which might undo the patch's improved convenience. There are ways to avoid that, such as automatically trusting resolvers running on the local host, and perhaps having a separate configuration file listing trusted resolvers, e.g. those reachable over IPSEC.

(1 comment | Leave a comment)

Wednesday 19th February 2014

Relative frequency of initial letters of TLDs

Compare Wikipedia's table of the relative frequencies of initial letters in English.

$ dig axfr . @f.root-servers.net |
  perl -ne '
	next unless /^(([a-z])[a-z0-9-]+)[.][  ].*/;
	$label{$1} = 1; $letter{$2}++; $total++;
	END {
		for my $x (sort keys %letter) {
			my $p = 100.0*$letter{$x}/$total;
			printf "<tr><td>$x</td>
				<td align=right>%5.2f</td>
				<td><span style=\"
					display: inline-block;
					background-color: gray;
					width: %d;\">
			    $p, 32*$p;
a 5.24 
b 5.96 
d 2.45 
e 3.28 
f 2.43 
g 5.03 
h 1.78 
i 3.38 
j 1.14 
k 2.71 
l 3.74 
m 6.69 
n 4.26 
o 0.72 
p 5.68 
q 0.65 
r 2.81 
s 6.15 
t 6.02 
u 1.91 
v 2.66 
w 1.94 
y 0.41 
z 0.75 
(6 comments | Leave a comment)

Wednesday 29th January 2014

Diffing dynamic raw zone files in git with BIND 9.10

On my toy nameserver my master zones are configured with a directory for each zone. In this directory is a "conf" file which is included by the nameserver's main configuration file; a "master" file containing the zone data in the raw binary format; a "journal" file recording changes to the zone from dynamic UPDATEs and re-signing; and DNSSEC and TSIG keys. The "conf" file looks something like this:

    zone dotat.at {
        type master;
        file "/zd/dotat.at/master";
        journal "/zd/dotat.at/journal";
        key-directory "/zd/dotat.at";
        masterfile-format raw;
        auto-dnssec maintain;
        update-policy local;

I must have been having a fit of excessive tidyness when I was setting this up, because although it looks quite neat, the unusual file names cause some irritation - particularly for the journal. Some of the other BIND tools assume that journal filenames are the same as the master file with a .jnl extension.

I keep the name server configuration in git. This is a bit awkward because the configuration contains precious secrets (DNSSEC private keys), and the zone files are constantly-changing binary data. But it is useful for recording manual changes, since the zone files don't have comments explaining their contents. I don't make any effort to record the re-signing churn, though I commit it when making other changes.

To reduce the awkwardness I configured git to convert zone files to plain text when diffing them, so I had a more useful view of the repository. There are three parts to setting this up.

  • Tell git that the zone files require a special diff driver, which I gave the name "bind-raw".

    All the zone files in the repository are called "master", so in the .gitattributes file at the top of the repository I have the line

        master diff=bind-raw
  • The diff driver is part of the repository configuration. (Because the implementation of the driver is a command it isn't safe to set it up automatically in a repository clone, as is the case for .gitattributes settings, so it has to be configured separately.) So add the following lines to .git/config
        [diff "bind-raw"]
    	textconv = etc/raw-to-text
  • The final part is the raw-to-text script, which lives in the repository.

This is where journal file names get irritating. You can convert a raw master file into the standard text format with named-compilezone, which has a -j option to read the zone's journal, but this assumes that the journal has the default file name with the .jnl extension. So it doesn't quite work in my setup.

(It also doesn't quite work on the University's name servers which have a directory for master files and a directory for journal files.)

So in September 2012 I patched BIND to add a -J option to named-compilezone for specifying the journal file name. I have a number of other small patches to my installation of BIND, and this one was very simple, so in it went. (It would have been much more sensible to change my nameserver configuration to go with the flow...)

The patch allowed me to write the raw-to-text script as follows. The script runs named-compilezone twice: the first time with a bogus zone name, which causes named-compilezone to choke with an error message. This helpfully contains the real zone name, which the script extracts then uses to invoke named-compilezone correctly.

    command="named-compilezone -f raw -F text -J journal -o /dev/stdout"
    zone="$($command . "$file" 2>&1)"
    zone="${zone#*: }"
    $command "$zone" "$file" 2>/dev/null

I submitted the -J patch to the ISC and got a favourable response from Evan Hunt. At that time (16 months ago) BIND was at version 9.9.2; since this option was a new feature (and unimportant) it was added to the 9.10 branch. Wind forward 14 months to November 2013 and the first alpha release of 9.10 came out, with the -J option, so I was able to retire my patch. Woo! There will be a beta release of 9.10 in a few weeks.

In truth, you don't need BIND 9.10 to use this git trick, if you use the default journal file names. The key thing to make it simple is to give all your master files the same name, so that you don't have to list them all in your .gitattributes.

(Leave a comment)

Tuesday 3rd December 2013

A weird BIND DNSSEC resolution bug, with a fix.

The central recursive DNS servers in Cambridge act as stealth slaves for most of our local zones, and we recommend this configuration for other local DNS resolvers. This has the slightly odd effect that the status bits in answers have AD (authenticated data) set for most DNSSEC signed zones, except for our local ones which have AA (authoritative answer) set. This is not a very big deal since client hosts should do their own DNSSEC validation and ignore any AD bits they get over the wire.

It is a bit more of a problem for the toy nameserver I run on my workstation. As well as being my validating resolver, it is also the master for my personal zones, and it slaves some of the Cambridge zones. This mixed recursive / authoritative setup is not really following modern best practices, but it's OK when I am the only user, and it makes experimental playing around easier. Still, I wanted it to validate answers from its authoritative zones, especially because there's no security on the slave zone transfers.

I had been procrastinating this change because I thought the result would be complicated and ugly. But last week one of the BIND developers, Mark Andrews, posted a description of how to validate slaved zones to the dns-operations list, and it turned out to be reasonably OK - no need to mess around with special TSIG keys to get queries from one view to another.

The basic idea is to have one view that handles recursive queries and which validates all its answers, and another view that holds the authoritative zones and which only answers non-recursive queries. The recursive view has "static-stub" zone configurations mirroring all of the zones in the authoritative view, to redirect queries to the local copies.

Here's a simplified version of the configuration I tried out. To make it less annoying to maintain, I wrote a script to automatically generate the static-stub configurations from the authoritative zones.

  view rec {
    match-recursive-only yes;
    zone cam.ac.uk         { type static-stub; server-addresses { ::1; }; };
    zone private.cam.ac.uk { type static-stub; server-addresses { ::1; }; };

  view auth {
    recursion no;
    allow-recursion { none; };
    zone cam.ac.uk         { type slave; file "cam";  masters { ucam; }; };
    zone private.cam.ac.uk { type slave; file "priv"; masters { ucam; }; };

This seemed to work fine, until I tried to resolve names in private.cam.ac.uk - then I got a server failure. In my logs was the following (which I have slightly abbreviated):

  client ::1#55687 view rec: query: private.cam.ac.uk IN A +E (::1)
  client ::1#60344 view auth: query: private.cam.ac.uk IN A -ED (::1)
  client ::1#54319 view auth: query: private.cam.ac.uk IN DS -ED (::1)
  resolver: DNS format error from ::1#53 resolving private.cam.ac.uk/DS:
    Name cam.ac.uk (SOA) not subdomain of zone private.cam.ac.uk -- invalid response
  lame-servers: error (FORMERR) resolving 'private.cam.ac.uk/DS/IN': ::1#53
  lame-servers: error (no valid DS) resolving 'private.cam.ac.uk/A/IN': ::1#53
  query-errors: client ::1#55687 view rec:
    query failed (SERVFAIL) for private.cam.ac.uk/IN/A at query.c:7435

You can see the original recursive query that I made, then the resolver querying the authoritative view to get the answer and validate it. The situation here is that private.cam.ac.uk is an unsigned zone, so a DNSSEC validator has to check its delegation in the parent zone cam.ac.uk and get a proof that there is no DS record, to confirm that it is OK for private.cam.ac.uk to be unsigned. Something is going wrong with BIND's attempt to get this proof of nonexistence.

When BIND gets a non-answer it has to classify it as a referral to another zone or an authoritative negative answer, as described in RFC 2308 section 2.2. It is quite strict in its sanity checks, in particular it checks that the SOA record refers to the expected zone. This check often discovers problems with misconfigured DNS load balancers which are given a delegation for www.example.com but which think their zone is example.com, leading them to hand out malformed negative responses to AAAA queries.

This negative answer SOA sanity check is what failed in the above log extract. Very strange - the resolver seems to be looking for the private.cam.ac.uk DS record in the private.cam.ac.uk zone, not the cam.ac.uk zone, so when it gets an answer from the cam.ac.uk zone it all goes wrong. Why is it looking in the wrong place?

In fact the same problem occurs for the cam.ac.uk zone itself, but in this case the bug turns out to be benign:

  client ::1#16276 view rec: query: cam.ac.uk IN A +E (::1)
  client ::1#65502 view auth: query: cam.ac.uk IN A -ED (::1)
  client ::1#61409 view auth: query: cam.ac.uk IN DNSKEY -ED (::1)
  client ::1#51380 view auth: query: cam.ac.uk IN DS -ED (::1)
  security: client ::1#51380 view auth: query (cache) 'cam.ac.uk/DS/IN' denied
  lame-servers: error (chase DS servers) resolving 'cam.ac.uk/DS/IN': ::1#53

You can see my original recursive query, and the resolver querying the authoritative view to get the answer and validate it. But it sends the DS query to itself, not to the name servers for the ac.uk zone. When this query fails, BIND re-tries by working down the delegation chain from the root, and this succeeds so the overall query and validation works despite tripping up.

This bug is not specific to the weird two-view setup. If I revert to my old configuration, without views, and just slaving cam.ac.uk and private.cam.ac.uk, I can trigger the benign version of the bug by directly querying for the cam.ac.uk DS record:

  client ::1#30447 (cam.ac.uk): query: cam.ac.uk IN DS +E (::1)
  lame-servers: error (chase DS servers) resolving 'cam.ac.uk/DS/IN':

In this case the resolver sent the upstream DS query to one of the authoritative servers for cam.ac.uk, and got a negative response from the cam.ac.uk zone apex per RFC 4035 section This did not fail the SOA sanity check but it did trigger the fall-back walk down the delegation chain.

In the simple slave setup, queries for private.cam.ac.uk do not fail because they are answered from authoritative data without going through the resolver. If you change the zone configurations from slave to stub or static-stub then the resolver is used to answer queries for names in those zones, and so queries for private.cam.ac.uk explode messily as BIND tries really hard (128 times!) to get a DS record from all the available name servers but keeps checking the wrong zone.

I spent some time debugging this on Friday evening, which mainly involved adding lots of logging statements to BIND's resolver to work out what it thought it was doing. Much confusion and headscratching and eventually understanding.

BIND has some functions called findzonecut() which take an option to determine whether it wants the child zone or the parent zone. This works OK for dns_db_findzonecut() which looks in the cache, but dns_view_findzonecut() gets it wrong. This function works out whether to look for the name in a locally-configured zone, and if so which one, or otherwise in the cache, or otherwise work down from the root hints. In the case of a locally-configured zone it ignores the option and always returns the child side of the zone cut. This causes the resolver to look for DS records in the wrong place, hence all the breakage described above.

I worked out a patch to fix this DS record resolution problem, and I have sent details of the bug and my fix to bind9-bugs@isc.org. And I now have a name server that correctly validates its authoritative zones :-)

(1 comment | Leave a comment)

Wednesday 13th November 2013

Temporum: secure time: a paranoid fantasy

Imagine that...

Secure NTP is an easy-to-use and universally deployed protocol extension...

The NTP pool is dedicated to providing accurate time from anyone to everyone, securely...

NIST, creators of some of the world's best clocks and keepers of official time for the USA, decide that the NTP pool is an excellent project which they would like to help. They donatate machines and install them around the world and dedicate them to providing time as part of the NTP pool. Their generous funding allows them to become a large and particularly well-connected proportion of the pool.

In fact NIST is a sock-puppet of the NSA. Their time servers are modified so that they are as truthful and as accurate as possible to everyone, except those who the US government decides they do not like.

The NSA has set up a system dedicated to replay attacks. They cause occasional minor outages and screwups in various cryptographic systems - certificate authorities, DNS registries - which seem to be brief and benign when they happen, but no-one notices that the bogus invalid certificates and DNS records all have validity periods covering a particular point in time.

Now the NSA can perform a targeted attack, in which they persuade the victim to reboot, perhaps out of desperation because nothing works and they don't understand denial-of-service attacks. The victim's machine reboots, and it tries to get the time from the NTP pool. The NIST sock-puppet servers all lie to it. The victim's machine believes the time is in the NSA replay attack window. It trustingly fetches some crucial "software update" from a specially-provisioned malware server, which both its DNSSEC and X.509 PKIs say is absolutely kosher. It becomes comprehensively pwned by the NSA.

How can we provide the time in a way that is secure against this attack?

(Previously, previously)

(15 comments | Leave a comment)

Monday 11th November 2013

Security considerations for temporum: quorate secure time

The security of temporum is based on the idea that you can convince yourself that several different sources agree on what the time is, with the emphasis on different. Where are the weaknesses in the way it determines if sources are different?

The starting point for temporum is a list of host names to try. It is OK if lots of them fail (e.g. because your device has been switched off on a shelf for years) provided you have a good chance of eventually getting a quorum.

The list of host names is very large, and temporum selects candidates from the list at random. This makes it hard for an attacker to target the particular infrastructure that temporum might use. I hope your device is able to produce decent random numbers immediately after booting!

The list of host names is statically configured. This is important to thwart Sybil attacks: you don't want an attacker to convince you to try a list of apparently-different host names which are all under the attacker's control. Question: can the host list be made dynamic without making it vulnerable?

Hostnames are turned into IP addresses using the DNS. Temporum uses the TLS X.509 PKI to give some assurance that the DNS returned the correct result, about which more below. The DNS isn't security-critical, but if it worries you perhaps temporum could be configured with a list of IP addresses instead - but maybe that will make the device-on-shelf less likely to boot successfully.

Temporum does not compare the IP addresses of "different" host names. This might become a problem once TLS SNI makes large-scale virtual hosting easier. More subtly, there is a risk that temporum happens to query lots of servers that are hosted on the same infrastructure. This can be mitigated by being careful about selecting which host names to include in the list - no more than a few each of Blogspot, Tumblr, Livejournal, GoDaddy vhosts, etc. More than one of each is OK since it helps with on-shelf robustness.

The TLS security model hopes that X.509 certification authorities will only hand out certificates for host names to the organizations that run the hosts. This is a forlorn hope: CAs have had their infrastructure completely compromised; they have handed out intermediate signing certificates to uncontrolled third parties; they are controlled by nation states that treat our information security with contempt.

In the context of temporum, we can reduce this problem by checking that the quorum hosts are authenticated by diverse CAs. Then an attacker would have to compromise multiple CAs to convince us of an incorrect time. Question: are there enough different CAs used by popular sites that temporum can quickly find a usable set?

(Leave a comment)

Saturday 9th November 2013

nsdiff 1.47

I have done a little bit of work on nsdiff recently.

You can now explicitly manage your DNSKEY RRset, instead of leaving it to named. This is helpful when you are transferring a zone from one operator to another: you need to include the other operator's zone signing key in your DNSKEY RRset to ensure that validation works across the transfer.

There is now support for bump-in-the-wire signing, where nsdiff transfers the new version of the zone from a back-end hidden master server and pushes the updates to a signing server which feeds the public authoritative servers.

Get nsdiff from http://www-uxsup.csx.cam.ac.uk/~fanf2/hermes/conf/bind/bin/nsdiff

(Edit: I decided to simplify the -u option so updated from version 1.46 to 1.47.)

(Previously, previously, previously, previously, previously.)

(Leave a comment)

Tuesday 29th October 2013

Temporum: Quorate secure time

There are essentially two ways to find out what the time is: ask an authoritative source and trust the answer, or ask several more or less unreliable sources and see what they agree on. NTP is based on the latter principle, but since the protocol isn't secured, a client also has to trust the network not to turn NTP responses into lies.

NTP's lack of security causes a bootstrapping problem. Many security protocols rely on accurate time to avoid replay attacks. So nearly the first thing a networked device needs to do on startup is get the time, so that it can then properly verify what it gets from the network - DNSSEC signatures, TLS certificates, software updates, etc. This is particularly challenging for cost-constrained devices that do not have a battery-backed real time clock and so start up in 1970.

When I say NTP isn't secured, I mean that the protocol has security features but they have not been deployed. I have tried to understand NTP security, but I have not found a description of how to configure it for the bootstrap case. What I want is for a minimally configured client to be able to communicate with some time servers and get responses with reasonable authenticity and integrity. Extra bonus points for a clear description of which of NTP's half dozen identity verification schemes is useful for what, and which ones are incompatible with NATs and rely on the client knowing its external IP address.

In the absence of usable security from NTP, Jacob Appelbaum of the Tor project has written a program called tlsdate. In TLS, the ClientHello and ServerHello messages include a random nonce which includes a Unix time_t value as a prefix. So you can use any TLS server as a secure replacement for the old port 37 time service.

Unlike NTP, tlsdate gets time from a single trusted source. It would be much better if it were able to consult multiple servers for their opinions of the time: it would be more robust if a server is down or has the wrong time, and it would be more secure in case a server is compromised. There is also the possibility of using multiple samples spread over a second or two to obtain a more accurate time than the one second resolution of TLS's gmt_unix_time field.

The essential idea is to find a quorum of servers that agree on the time. An adversary or a technical failure would have to break at least that many servers for you to get the wrong time.

In statistical terms, you take a number of samples and find the mode, the most common time value, and keep taking samples until the frequency at the mode is greater than the quorum.

But even though time values are discrete, the high school approach to finding the mode isn't going to work because in many casees we won't be able to take all the necessary samples close enough together in time. So it is better to measure the time offset between a server and the client at each sample, and treat these as a continuous distribution.

The key technique is kernel density estimation. The mode is the point of peak density in the distribution estimated from the samples. The kernel is a function that is used to spread out each sample; the estimated distribution comes from summing the spread-out samples.

NTP's clock select algorithm is basically kernel density estimation with a uniform kernel.

NTP's other algorithms are based on lengthy observations of the network conditions between the client and its servers, whereas we are more concerned with getting a quick result from many servers. So perhaps we can use a simpler, more well-known algorithm to find the mode. It looks like the mean shift algorithm is a good candidate.

For the mean shift algorithm to work well, I think it makes sense to use a smooth kernel such as the Gaussian. (I like exponentials.) The bandwidth of the kernel should probably be one second (the precision of the timestamp) plus the round trip time.

Now it's time to break out the editor and write some code... I think I'll call it "temporum" because that rhymes with "quorum" and it means "of times" (plural). Get temporum from my git server.

(2 comments | Leave a comment)

Wednesday 23rd October 2013

My circular commute

We moved to new offices a month ago and I have settled in to the new route to work. Nico's nursery was a bit out of the way when I was working in the city centre, but now it is about a mile in the wrong direction.

But this is not so bad, since I have decided on a -Ofun route [1] which is almost entirely on cycle paths and very quiet roads, and the main on-road section has good cycle lanes (at least by UK standards). There is lots of park land, a bit of open country, and it goes through the world heritage site in the city centre :-) And it's fairly easy to stop off on the way if I need to get supplies.

[1] optimize for fun!

My route to work on gmap-pedometer

I don't have to wrangle children on the way home, so I take the direct route past the Institute of Astronomy and Greenwich House (where Rachel previously worked and where the Royal Greenwich Observatory was wound down).

My route home on gmap-pedometer

So far it has been pleasantly free of the adrenaline spikes I get from seeing murderous and/or suicidally stupid behaviour. Much better than going along Chesterton Lane and Madingley Road!

(7 comments | Leave a comment)

Tuesday 8th October 2013

Maintaining a local patch set with git

We often need to patch the software that we run in order to fix bugs quickly rather than wait for an official release, or to add functionality that we need. In many cases we have to maintain a locally-developed patch for a significant length of time, across multiple upstream releases, either because it is not yet ready for incorporation into a stable upstream version, or because it is too specific to our setup so will not be suitable for passing upstream without significant extra work.

I have been experimenting with a git workflow in which I have a feature branch per patch. (Usually there is only one patch for each change we make.) To move them on to a new feature release, I tag the feature branch heads (to preserve history), rebase them onto the new release version, and octopus merge them to create a new deployment version. This is rather unsatisfactory, because there is a lot of tedious per-branch work, and I would prefer to have branches recording the development of our patches rather than a series of tags.

Here is a git workflow suggested by Ian Jackson which I am trying out instead. I don't yet have much experience with it; I am writing it down now as a form of documentation.

There are three branches:

  • upstream, which is where public releases live
  • working, which is where development happens
  • deployment, which is what we run

Which branch corresponds to upstream may change over time, for instance when we move from one stable version to the next one.

The working branch exists on the developer's workstation and is not normally published. There might be multiple working branches for work-in-progress. They get rebased a lot.

Starting from an upstream version, a working branch will have a number of mature patches. The developer works on top of these in commit-early-commit-often mode, without worrying about order of changes or cleanliness. Every so often we use git rebase --interactive to tidy up the patch set. Often we'll use the "squash" command to combine new commits with the mature patches that they amend. Sometimes it will be rebased onto a new upstream version.

When the working branch is ready, we use the commands below to update the deployment branch. The aim is to make it look like updates from the working branch are repeatedly merged into the deployment branch. This is so that we can push updated versions of the patch set to a server without having to use --force, and pulling updates into a checked out version is just a fast-forward. However this isn't a normal merge since the tree at the head of deployment always matches the most recent good version of working. (This is similar to what stg publish does.) Diagramatically,

     | \
     |  `A---B-- 1.1-patched
     |    \       |
     |     \      |
     |      `C-- 1.1-revised
     |            |
    2.0           |
     | \          |
     |  `-C--D-- 2.0-patched
     |            |
    3.1           |
     | \          |
     |  `-C--E-- 3.1-patched
     |            |
  upstream        |

The horizontal-ish lines are different rebased versions of the patch set. Letters represent patches and numbers represent version tags. The tags on the deployment branch are for the install scripts so I probably won't need one on every update.

Ideally we would be able to do this with the following commands:

    $ git checkout deployment
    $ git merge -s theirs working

However there is an "ours" merge strategy but not a "theirs" merge strategy. Johannes Sixt described how to simulate git merge -s theirs in a post to the git mailing list in 2010. So the commands are:

    $ git checkout deployment
    $ git merge --no-commit -s ours working
    $ git read-tree -m -u working
    $ git commit -m "Update to $(git describe working)"

Mark Wooding suggested the following more plumbing-based version, which unlike the above does not involve switching to the deployment branch.

    $ d="$(git rev-parse deployment)"
    $ w="$(git rev-parse working)"
    $ m="Update deployment to $(git describe working)"
    $ c="$(echo "$m" | git commit-tree -p $d -p $w working^{tree})
    $ git update-ref -m "$m" deployment $c $d
    $ unset c d w

Now to go and turn this into a script...

(2 comments | Leave a comment)

Sunday 6th October 2013

Bacon and cabbage

Bacon and brassicas are best friends. Here's a very simple recipe which is popular in the Finch household.


  • A Savoy cabbage
  • Two large or three small onions
  • A few cloves of garlic
  • A 200g pack of bacon
  • Oil or butter for frying
  • Soured cream


Chop the onion

Slice the cabbage to make strips about 1cm wide

Cut up the bacon to make lardons


Get a large pan that is big enough to hold all the cabbage. Heat the fat, press in the garlic, then bung in the onion and bacon. Fry over a moderate heat until the bacon is cooked.

Add the cabbage. Stir to mix everything together and keep stirring so it cooks evenly. As the cabbage cooks down and becomes more manageable you can put the heat right up to brown it slightly. Keep stir-frying until the thick ribby parts of the cabbage are soft as you like, usually several minutes. (I haven't timed it since I taste to decide when it is done...)

I serve this as a main dish with just a few dollops of sour cream on top and plenty of black pepper.

(Leave a comment)
Previous 50
Powered by LiveJournal.com