fanf

Path names in a rootless DNS

28th Feb 2012 | 19:17

Names in the DNS always appear as "fully qualified domain names" in queries, answers, and name server configurations, specifying the complete path from the root to the leaf. A surprisingly small change would be enough to make query names relative rather than absolute, and this change would have interesting and far-reaching consequences.

The first (and key) change is to the resolution algorithm. When given a referral, instead of repeating the same question at the replacement name servers, trim off the trailing labels of the query name - the part the parent zones have already accounted for - leaving everything up to and including the leftmost label of the delegation NS records.
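
For concreteness, here is a minimal sketch of the trimming rule (in Python, treating query names as tuples of labels, leftmost first). It illustrates only the referral step, not a whole iterative resolver.

    def trim_for_referral(qname, delegation_label):
        """Keep everything up to and including the delegation's leftmost
        label; drop the trailing labels, which the parent zones have
        already consumed."""
        # Searching from the right copes with cyclic names such as
        # ('a', 'b', 'a', 'b', 'a').
        cut = len(qname) - 1 - qname[::-1].index(delegation_label)
        return qname[:cut + 1]

    # Walking www.example.berlin.de down from the 'de' starting zone:
    assert trim_for_referral(('www', 'example', 'berlin', 'de'),
                             'berlin') == ('www', 'example', 'berlin')
    assert trim_for_referral(('www', 'example', 'berlin'),
                             'example') == ('www', 'example')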

Authoritative servers will have to distinguish zones by just their apex label, because that is all that appears in incoming queries. This means that, unlike at present, a name server will not be able to serve different zones for example.com and example.net, since both would arrive as plain example.
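
On the authoritative side, a hypothetical lookup keyed on the bare apex label might look like this (placeholder data, not a real server):

    # Zones are indexed by their apex label alone, so the same records
    # are served no matter which parent the query arrived through.
    zones = {
        'example': {('www', 'example'): '192.0.2.1'},   # placeholder records
        'berlin':  {('berlin',): '192.0.2.53'},
    }

    def find_zone(qname):
        # The rightmost label of the incoming query name is all the
        # server has to go on when choosing a zone.
        return zones.get(qname[-1])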

This modification means that names now trace paths in a graph rather than being hierarchical addresses. The graph can be cyclic: for example, if zone A has a delegation to zone B, which in turn has a delegation back to A, then a name can contain an arbitrarily long sequence of labels cycling round the loop, A.B.A.B.A and so on.

How does resolution start in this setting, when there is no root? You (or your ISP) would configure your recursive name server with one or more well-known starting zones, which would function rather like top-level domains.
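
The configuration might be a hints list with several entries instead of one root; a sketch with invented labels and addresses:

    # A resolver's starting-zone configuration, replacing the single
    # root hints file.
    starting_zones = {
        'com':    ['198.51.100.1', '198.51.100.2'],
        'de':     ['198.51.100.3'],
        'berlin': ['198.51.100.4'],   # a local choice; elsewhere reachable as berlin.de
    }

    def initial_servers(qname):
        # Resolution starts at whichever configured zone matches the
        # rightmost label of the query name.
        return starting_zones.get(qname[-1])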

The key difference between this arrangement and the root zone is that it allows diversity and openness. The decision about which zones are starting points for resolution is dispersed to name server vendors and operators (not concentrated in ICANN and the US DOC) and they need not all choose the same set. They can include extra starting zones that are popular with their users, or omit ones that they disapprove of.

Unlike the hierarchical DNS, you can still resolve names in a zone even if it isn't in your starting set. It will be normal for zones to have delegations from multiple parents, ensuring that everyone can reach a name by relying on redundant links instead of global consistency. So the berlin zone might be generally available as a starting point / TLD in Germany, but if you are in Britain you might have to refer to it as berlin.de.

Instead of a political beauty contest, to establish a new TLD you would probably start by obtaining the same label as a subdomain of many existing TLDs, to establish a brand and presence in your target markets. Then, as your sales and marketing efforts make your zone more popular, you can negotiate with ISPs and resolver vendors to promote your zone to a TLD instead. I expect this will force DNS registry business models to be more realistic.

Users may be able to augment their ISP's choice of TLDs by configuring extra search paths in their stub resolvers. However this is likely to lead to exciting RFC 1535 ambiguity problems.
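
A toy illustration of why: expanding a short name against a search list yields several equally plausible queries, and the stub resolver cannot tell which one was meant.

    # RFC 1535-style ambiguity with user-configured search paths.
    search_paths = [('berlin',), ('berlin', 'de'), ('example', 'com')]

    def expansions(short_name):
        return [short_name + path for path in search_paths]

    print(expansions(('www', 'shop')))
    # [('www', 'shop', 'berlin'), ('www', 'shop', 'berlin', 'de'),
    #  ('www', 'shop', 'example', 'com')]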

In some respects the relationship between vendors and rootless TLDs is a bit like the situation for X.509 certification authorities. ISPs will have to judge whether DNS registries are operating competently and ethically, instead of relying on ICANN to enforce their regulations.

Trust anchor management cannot rely on policies decided by a central authority, and it will need to cope with a greater failure rate due to the much larger and more diverse population of resolution starting points. Perhaps RFC 5011 automated DNSSEC trust anchor management would be sufficient. Alternatively it might be possible to make use of a zone's redundant delegations as witnesses to changes of key along the lines of a proposal I wrote up last year.

These thoughts are partly inspired by the Unmanaged Internet Architecture's user-relative personal names. And bang paths (in the opposite order) were used to refer to machines in the UUCP network. Some other background is Zooko's Triangle and Clay Shirky's essay on domain names. The PetName system described by Mark Miller is also interesting, and similar in some ways to UIA names.

The rootless DNS doesn't quite reach all the corners of Zooko's triangle. The names are as human-meaningful as a crowded namespace can allow. Names are only global to the extent that network effects promote zones as popular TLDs worldwide - but you can work around this by providing alternate names. Names are secure to the extent that you trust the intermediaries described by the path - and if that doesn't satisfy you, you can promote important names to be trust anchors in your setup.

Comments {13}

(Deleted comment)

Tony Finch

from: fanf
date: 28th Feb 2012 19:44 (UTC)

Good question :-) Two options: either links don't work if the reader's search list does not overlap with the author's in the right place; or you extend the notation for domain names to allow alternate paths as was sometimes informally done with UUCP bang paths.

(Deleted comment)

Tony Finch

from: fanf
date: 28th Feb 2012 20:11 (UTC)

Indeed. This is what Zooko's triangle says is the consequence of taking a libertarian cypherpunk approach to namespaces :-)

(Deleted comment)

Tony Finch

from: fanf
date: 28th Feb 2012 20:52 (UTC)

I guess that would be the combination of friendly+global, as in the DNS. But this sacrifices security, in that you have to trust a third party who can revoke or transfer the name without your permission. Petnames are secure+friendly, but are parochial and as you point out, can't be used for hyperlinks. (Crypto keys are global+secure but not friendly.)

The interesting thing about the rootless DNS is that it allows users to trade off these two extremes within the system, or get a lot of both by using multiple names with redundant paths. The cost is greater complexity for just about everyone.

ewx

from: ewx
date: 28th Feb 2012 22:25 (UTC)

Links online are at least in principle solvable: you got to the web page, so it can tell you where to go next somehow - the details may not be pretty but they can probably be hidden from people who for some incomprehensible reason don't care about the topology of the global naming system.

It’s URLs in print media, or a phone conversation, or a TV program (etc etc) that are really tricky. I don’t believe you can support them without a globally agreed namespace.

Relative addressing (i.e. replacing IP addresses with some kind of relative path) may be more realistic, and might prove necessary if the world can’t construct and maintain policies that result in manageably-sized routing tables.

Tony Finch

from: fanf
date: 28th Feb 2012 23:29 (UTC)

Both the petname markup language and the UIA personal names rely on computer mediation to solve the link problem - plus, in the UIA case, a load of other clever mechanisms.

I probably should have said more explicitly that I expect network effects to create a few dominant zones like .com from which you will be able to reach the whole namespace. (UUCP had its hubs, though that was more to do with the economics of subsidised phone calls than with network effects; it had the same consequences for the namespace either way.) So I guess the side-of-the-bus problem might not be so bad.

I agree with you about relative addressing at the network layer :-)

cozminsky

from: cozminsky
date: 29th Feb 2012 00:09 (UTC)

Why stop at the ISP? Why not develop the software so that each individual/home/business has their own root? For instance I could be cozminsky, and if you'd somehow contacted me and I'd added your delegation to my zone I'd refer to your hierarchy as fanf.cozminsky. I might also have indirect knowledge such as fanf.livejournal.cozminsky. You just need a convenient way of packaging up and configuring delegations (I think most of the backend stuff can be done with dynamic updates and TSIG). So someone would send you their domain's SOA, DS and glue records, and if you trusted them you'd hit "make it so" and have a relative path to their URLs.
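
(A rough sketch of such a bundle as a plain data structure, with invented field names:)

    # A shareable "delegation bundle": roughly what a personal root
    # would need to splice in someone else's zone.
    bundle = {
        'label': 'fanf',                             # the label I give the zone locally
        'ns':    ['ns0.fanf', 'ns1.fanf'],
        'glue':  {'ns0.fanf': '192.0.2.1', 'ns1.fanf': '192.0.2.2'},
        'ds':    'placeholder DS / zone key fingerprint',
    }
    # Accepting it means adding these records to my own zone, perhaps via
    # a TSIG-signed dynamic update, after which fanf.cozminsky resolves.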

It also seems that QR codes are becoming very popular as a way of encoding URLs on printed media, etc. You could create a vcard or something which pointed to the delegation information as above. The only problem I see is in renumbering or key revocation situations where the old QR codes will be trash, even though the organization/individual might still be around. If you were to maybe also have the fingerprint of the DS as a DNAME or something then you might be able to discover it in the case of a renumbering event through your network, but key revocation is probably game over.

Tony Finch

from: fanf
date: 29th Feb 2012 00:22 (UTC)

Yes! All good suggestions.

I think a combination of trust anchor history and witnesses can deal with the problem of stale links. The idea is that you don't necessarily trust each individual witness, and they don't necessarily trust each other, but you are all co-operating to make the system work and keep each other honest. Aesop's bundle of twigs is stronger than ICANN's root, or something.

from: mdw [distorted.org.uk]
date: 1st Mar 2012 11:02 (UTC)

Since we're breaking the protocol, I'd like to suggest an extension to change the number of labels preserved across a zone boundary. For an example of why this is useful: I keep my main zone static and delegate to a dynamic `dhcp' subzone which the DHCP server is allowed to submit updates to. (I do something similar but more awful for reverse zones.) If I were managing several zones which had `dhcp' subzones it'd be hard to disambiguate without more context. As another example, you could specify a suffix length of zero to delegate to a server which thinks it's authoritative for a root.
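
(A sketch of what that extension could look like, generalising the trimming rule from the post; the keep parameter is invented:)

    def trim_for_referral(qname, delegation_label, keep=1):
        cut = len(qname) - 1 - qname[::-1].index(delegation_label)
        # keep=1 keeps just the delegation label (the behaviour described
        # in the post); keep=2 preserves one parent label as well, so two
        # different zones' 'dhcp' subzones stay distinguishable; keep=0
        # strips everything, for a server that thinks it is a root.
        return qname[:cut + keep]

    assert trim_for_referral(('host', 'dhcp', 'example'), 'dhcp',
                             keep=2) == ('host', 'dhcp', 'example')
    assert trim_for_referral(('host', 'dhcp', 'example'), 'dhcp',
                             keep=0) == ('host',)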

I don't understand why you think ISPs should be involved with recursive resolution. I mean, they are involved, but they've demonstrated repeatedly that they can't be trusted to do this properly. Given how crappy the stub<->recursive interface is, and the computational capabilities of devices -- even small things like phones -- I'd say that it ought to be expected that hosts take responsibility for their own recursive resolution and DNSsec validation, maybe using local nameservers as upstreams for local names or as caches. (This involves overhauling DHCP to report on the local DNS landscape in more detail -- but this is overdue already. I have some ideas here...)

I suspect that fears about bus adverts are unfounded. I can't see .com or ccTLDs going away for a long time: users who turn these off will be rare and probably aware of and willing to accept the consequences. Use of weirder gTLDs is riskier; but I'd expect gatewaying conventions (e.g., a `.icann' suffix to get to the current DNS root) to emerge fairly rapidly.

I worry that this sort of approach will give us the same problems as the opening up of the gTLDs, only writ larger, but maybe I'm wrong on that.

I think this is a fruitful avenue of exploration.

Tony Finch

from: fanf
date: 1st Mar 2012 11:40 (UTC)

A better way to identify zones is by key (as in UIA) which would allow you to strip all the labels off. Your query name would be a sequence of sub-labels and a zone key. The zone could be referred to by multiple different labels, perhaps different labels in different parents if a desirable label isn't available, and perhaps multiple labels in the same parent which is useful for internationalization and variant spellings. That's a much bigger syntactic change, but it doesn't have such pervasive consequences. (Also I need a better term than parent to refer to a referring zone.)
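
(A sketch of that sort of name as a data structure, with invented fields; this is not a proposed wire format:)

    # The zone is identified by (a hash of) its key rather than by its
    # position under any particular parent.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class KeyedName:
        labels: tuple      # sub-labels within the zone, leftmost first
        zone_key: bytes    # digest of the zone's public key (placeholder below)

    # The zone might be reached as berlin, berlin.de, or berlin.com, but
    # all of those paths lead to the same KeyedName once the key is known.
    name = KeyedName(labels=('www', 'example'), zone_key=b'\x00' * 32)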

I agree with you about taking responsibility for your own name resolution. The reason I talked about ISPs is that they might see it as part of their job to provide naming services, such as a curated set of starting zones - but on the other hand, perhaps (as is the case for X.509) software vendors are in a better position to do this.

I also agree about the side-of-the-bus problem. One of the big differences between the early 1980s when the DNS was invented and now is that we are much more aware of the power of network effects to drive consensus, and more comfortable with the idea of crowdsourcing rather than curation.

Of course there are downsides. I think their nature is rather different from the downsides of gTLD expansion - see my remarks about building a userbase and brand vs. political beauty contests. This eliminates the land-grab that occurs when a new TLD is introduced, and might instead lead a start-up registry to court prominent users rather than forcing them to protect their trademarks - a much more comfortable power relationship. On the other hand, a rootless DNS might put more pressure on the flat single-label namespace, since it will be more normal to have cross-links between zones rather than delegations - e.g. uk.com might be the same as uk rather than a separate registry.

Also it spreads the work of dealing with the namespace to almost everyone: for example, if I have set up a petname for a zone I use frequently I might forget the path I originally used to reach it, making it difficult to refer other people to the zone. Perhaps we will need a distinction between a label used to refer to a zone vs. some kind of longer natural language description of it, such as the name of the company that runs it plus a relevant brand or service name. We might want to have a directory of zones distinct from the namespace, as opposed to the two being tangled up together. (The DNS was never a very good directory despite people trying to use it as one - .museum is a prominent example.)

from: mdw [distorted.org.uk]
date: 1st Mar 2012 18:49 (UTC)

Using zone keys would be fine but we're getting increasingly radical now.

Software vendors, particularly operating system distributors, are an existing root of trust. If you don't trust your OS distributor, then you just lose. In an ideal world you wouldn't need to trust your ISP to do much other than actually provide service -- and consumer ISPs have failed to live up to the levels of trust they appear to want, what with censorship (http://en.wikipedia.org/wiki/Internet_Watch_Foundation), data retention (http://en.wikipedia.org/wiki/Telecommunications_data_retention#United_Kingdom), snooping for targeted advertising (http://en.wikipedia.org/wiki/Phorm), interference with DNS results (http://en.wikipedia.org/wiki/DNS_hijacking#Manipulation_by_ISPs), and probably other things I've forgotten.

Besides, we live in a world full of mobile devices. A smartphone or laptop connects via lots of different networks at different times. The notion that a user deals with a single ISP simply doesn't work any more. There's a home network, with wifi and a crappy NAT box; a work network, hotels with awful captive portals, 3G, and so on. If we leave the job of setting up starting zones to the local network then users will be faced with the naming rules changing when they move around: I'd expect confusion and security problems.
