Name Server IP and a Dead Server

About 24 hours ago I rebooted the system that runs the secondary DNS for my zone and a few other zones. I’d upgraded a few things and the system had been running for almost 200 days without a reboot so it was time for it. Unfortunately it didn’t come back up.

Even more unfortunately, the other DNS server for my zone is ns.sws.net.au, which is also the only other server for the sws.net.au zone. Normally this works because the servers for the net.au zone have a glue record containing the server’s IP address, so when asked for the NS records for the sws.net.au domain the reply includes the IP address of ns.sws.net.au. The unfortunate part was that the glue record still had the old IP address from before the sws.net.au servers moved to a new IP address range. I wonder whether this was due to the recovery process after the Distribute IT hack [1], as forgetting to update a glue record isn’t the sort of mistake that I or the other guy who runs that network would normally make. But it is possible that we both stuffed up.
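
To check whether the parent zone’s glue is stale you can query one of the parent servers directly and compare the address in the additional section with what the zone itself serves. A minimal sketch (the parent server is left as a placeholder rather than naming a specific net.au server):

# find the parent (net.au) name servers
dig +short NS net.au

# ask one of them for the delegation without recursion; the ADDITIONAL
# section shows the glue address registered for ns.sws.net.au
dig +norecurse @<one of the net.au servers> NS sws.net.au

# compare with the address the zone's own server gives for itself
dig +short @ns.sws.net.au A ns.sws.net.au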

The DNS secondary was an IBM P3-1GHz desktop system with two IDE disks in a RAID-1 array. It’s been quite reliable; it’s been running in the same hardware configuration for about four years now with only one disk replacement. It turned out that the cooling fan in the front of the case had seized up due to a lot of dirt and the BIOS wouldn’t let the system boot in that state. Also one of the disks was reporting serious SMART problems and needed to be replaced – poor cooling tends to cause disk errors.

It seems that Compaq systems are good at informing the user of SMART problems; two different Compaq desktop systems (one from before the HP buyout and one from after) made very forceful recommendations that I replace the disk. It’s a pity that the BIOS doesn’t allow a normal boot process after the warning, as following the recommendation to back up the data is difficult when the system won’t boot.
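
If the BIOS lets you boot at all (or if you move the disk to another machine), the smartmontools package gives a quick way to see what the drive’s firmware is complaining about. A minimal sketch, assuming the disk shows up as /dev/sda:

# overall health verdict from the drive's firmware
smartctl -H /dev/sda

# full attribute table and error log; look at Reallocated_Sector_Ct
# and Current_Pending_Sector for signs of a dying disk
smartctl -a /dev/sda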

I have a temporary server running now, but my plan is to install a P3-866 system and use a 5400rpm disk to replace the 7200rpm disk that’s currently in the second position in the RAID array. I’ve done some tests on power use and an old P3 system uses a lot less power than most new systems [2]. Power use directly maps to heat dissipation, and a full size desktop system with big fans that dissipates less than 50W is more likely to survive a poorly cooled room in summer. Laptops dissipate less heat, but as their vents are smaller (thus less effective at the best of times and more likely to get blocked) this doesn’t provide a great benefit. Also my past experience of laptops as servers is that they don’t want to boot up when the lid is closed, and getting RAID-1 and multiple Ethernet ports on a laptop is difficult.

Finally I am going to create a third DNS server for the sws.net.au domain. While it is more pain to run extra servers, for some zones it’s just worth it.

Dynamic DNS

The Problem

My SE Linux Play Machine has been down for a couple of weeks. I’ve changed to a cheaper Internet access plan which also allows me to download a lot more data, but I don’t have a static IP address any more – and my ISP seems to change the IP a lot more often than I’ve experienced in the past (I’m used to a non-static IP address staying the same for months, not hours). So I needed to get Dynamic DNS working. Naturally I wasn’t going to use one of the free or commercial Dynamic DNS solutions; I prefer to do things myself. So my Play Machine had to remain offline until I fixed this.

The Solution

dyn     IN      NS      ns.sws.net.au.
        IN      NS      othello.dycom.com.au.
play    IN      CNAME   play.dyn.coker.com.au.

The first thing I did was to create a separate zone file. I put the above records in my main zone file to delegate dyn.coker.com.au as a dynamic zone and to make play.coker.com.au a CNAME for play.dyn.coker.com.au. I have SE Linux denying BIND the ability to write to the primary zone file for my domain to make it slightly more difficult for an attacker to insert fake DNS records (they could of course change the memory state of BIND to make it serve bogus data). The dynamic zone file is stored where BIND can write it – and therefore a BIND exploit could easily replace it (but such an attack is out of the scope of the Play Machine project so don’t get any ideas).

Another reason for separating the dynamic data is that BIND journals changes to a dynamic zone, and therefore if you want to manually edit it you have to delete the journal, stop BIND, edit the file, and then restart BIND. One of the things that interests me is setting up dynamic DNS for some of my clients; as one constraint is that my clients must be able to edit the zone file themselves, I have to keep the editing process for the main zone file relatively simple.
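
Reasonably recent versions of BIND 9 also provide rndc freeze and rndc thaw, which suspend dynamic updates and sync the journal into the zone file so it can be edited in place. A rough sketch of that alternative (I haven’t switched to it myself, so treat it as untested):

# suspend dynamic updates and flush the journal into the zone file
rndc freeze dyn.coker.com.au

# edit the zone file, remembering to bump the serial number
vi /var/cache/bind/dyn.coker.com.au

# re-enable dynamic updates (this also reloads the edited zone)
rndc thaw dyn.coker.com.au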

dnssec-keygen -a hmac-md5 -b 128 -n host foo-dyn.key

I used the above command to create the key files. It created Kfoo-dyn.key.+X+Y.key and Kfoo-dyn.key.+X+Y.private where X and Y are replacements for numbers that might be secret.
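
For reference, the .key file for an HMAC-MD5 key is a single KEY record whose last field is the base64 secret that gets pasted into the BIND configuration below. From memory it looks something like the comment below; the key id 12345 and the base64 string are made up:

# contents of the .key file (157 is the algorithm number for HMAC-MD5):
#   foo-dyn.key. IN KEY 512 3 157 bWFkZS11cC1zZWNyZXQ=
# the secret is the last field, so it can be pulled out with:
awk '{print $NF}' Kfoo-dyn.key.+157+12345.key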

key "foo" { algorithm hmac-md5; secret "XXXXXXXX"; };
zone "dyn.coker.com.au" {
  type master;
  file "/var/cache/bind/dyn.coker.com.au";
  allow-update { key "foo"; };
  allow-transfer { key ns; };
};

I added the above to the BIND configuration to create the dynamic zone and allow it to be updated with this key. The value which I replaced with XXXXXXXX in this example came from Kfoo-dyn.key.+X+Y.key. I haven’t found any use for the .private file in this mode of operation. Please let me know if I missed something.

Finally I used the following shell script to take the IP address from the interface that is specified on the command-line and update the DNS with it. I chose a 120 second TTL because I will sometimes change IP address often and because the system doesn’t get enough hits for anyone to care about DNS caching.

#!/bin/bash
set -e
# extract the IPv4 address of the interface named on the command-line
IP=$(ip addr list $1|sed -n -e "s/\/.*$//" -e "s/^.*inet //p")
# replace the A record for play.dyn.coker.com.au with the current address,
# using a 120 second TTL
nsupdate -y foo:XXXXXXXX << END
update delete play.dyn.coker.com.au A
update add play.dyn.coker.com.au 120 A $IP
send
END

Update

It is supposed to be possible to use the -k option to nsupdate to specify a file containing the key. Joey’s comment gives some information on how to get it working (it sounds like it’s buggy).
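
For reference, the intended usage (as I understand it – I haven’t got it working myself) is to point -k at the .private file that dnssec-keygen produced, along these lines (the key id 12345 and the address are made up):

# use the key file instead of embedding the secret in the script
nsupdate -k Kfoo-dyn.key.+157+12345.private << END
update delete play.dyn.coker.com.au A
update add play.dyn.coker.com.au 120 A 192.0.2.1
send
END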

rhesa pointed out another way of doing it, so I’ve now got a script like the following in production which solves the security issue (as long as the script is mode 0700) and avoids using other files.

#!/bin/bash
set -e
# extract the IPv4 address of the interface named on the command-line
IP=$(ip addr list $1|sed -n -e "s/\/.*$//" -e "s/^.*inet //p")
# the key name and secret are passed on stdin rather than via -y,
# so the secret doesn't show up in the process list
nsupdate << END
key foo XXXXXXXX
update delete play.dyn.coker.com.au A
update add play.dyn.coker.com.au 120 A $IP
send
END

DNS Secondaries and Web Security

At the moment there are ongoing security issues related to web based services and DNS hijacking. The Daily Ack has a good summary of the session hijacking issue [1].

For a long time it has been generally accepted that you should configure a DNS server to not allow random machines on the Internet to copy the entire zone. Not that you should have any secret data there anyway, but it’s regarded as just a precautionary layer of security by obscurity.

Dan Kaminsky (who brought the current DNS security issue to everyone’s attention) has described some potential ways to alleviate the problem [2]. One idea is to use random case in DNS requests (which are case insensitive but case preserving), so if you were to look up wWw.cOkEr.CoM.aU and the result was returned with different case then you would know that it was forged.
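
As a quick illustration of the case preservation (not a test of any particular resolver), dig shows that the name in the reply’s question section comes back with whatever odd mix of case was sent, which a forger who doesn’t know the mix would be unlikely to guess:

# send a mixed-case query and show only the question and answer sections;
# the name in the reply's question section echoes the strange case
dig +noall +question +answer wWw.cOkEr.CoM.aU A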

Two options which have been widely rejected are using TCP for DNS (which is fully supported for the case where an answer cannot fit in a single UDP packet) and sending requests twice (to square the number of combinations that would need to be guessed). They have been rejected due to the excessive load on the servers (which are apparently already near capacity).

One option that does not seem to get mentioned is the possibility of using multiple source IP addresses, so instead of merely having 2^16 ports to choose from you could multiply that by as many IP addresses as you have available. In the past I’ve worked for ISPs that could have dedicated a /22 (1024 IP addresses) to their DNS proxy if it would have increased the security of their customers – an ISP of the scale that has 1024 spare IP addresses available is going to be a major target of such attacks! Also with some fancy firewall/router devices it would be possible to direct all port 53 traffic through the DNS proxies. That would mean that an ISP with 200,000 broadband customers online could use a random IP address from that pool of 200,000 IP addresses for every DNS request. While attacking a random port choice out of 65,500 ports is possible, if it was 65,500 ports spread over a pool of 200,000 IP addresses the attacker would face roughly 65,500 × 200,000 ≈ 1.3 × 10^10 combinations instead of about 65,500, which would make the attack extremely difficult (I won’t claim it to be impossible).

One problem with the consideration that has been given to TCP is that it doesn’t account for the other uses of TCP, such as for running DNS secondaries.

In Australia we have two major ISPs (Telstra and Optus) and four major banks (ANZ, Commonwealth, NAB, and Westpac). It shouldn’t be difficult for arrangements to be made for the major ISPs to have their recursive DNS servers (the caching servers that their customers talk to) act as slaves for the DNS zones related to those four banks (which might be 12 zones or more given the use of different zones for stock-broking etc). If that was combined with a firewall preventing the regular ISP customers (the ones who are denied access to port 25 to reduce the amount of spam) from receiving any data from the Internet with a source port of 53 then the potential for attacks on Australian banks would be dramatically decreased. I note that the Westpac bank has DNS secondaries run by both Optus and Telstra (which makes sense for availability reasons if nothing else), so it seems that the Telstra and Optus ISP services could protect their customers who use Westpac without any great involvement from the bank.
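
On the ISP side the configuration involved is tiny; a sketch of what a slave zone for one bank might look like in named.conf on the ISP’s recursive servers (the zone name and the master address are made up for illustration):

zone "examplebank.com.au" {
  type slave;
  masters { 192.0.2.53; };       // the bank's advertised master server
  file "/var/cache/bind/examplebank.com.au";
};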

Banks have lots of phone lines and CTI systems. It would be easy for each bank to have a dedicated phone number (which is advertised in the printed phone books, in the telephone “directory assistance” service, and in brochures available in bank branches – all sources which are more difficult to fake than Internet services) which gave a recorded message of a list of DNS zone names and the IP addresses for the master data. Then every sysadmin of every ISP could mirror the zones that would be of most use to their customers.

Another thing that banks could do would be to create a mailing list for changes to their DNS servers for the benefit of the sysadmins who want to protect their customers. Signing mail to such a list with a GPG key and having the fingerprint available from branches should not be difficult to arrange.

Another possibility would be to use the ATM network to provide security relevant data. Modern ATMs have reasonably powerful computers which are used to display bank adverts when no-one is using them. Having an option to press a button on the ATM to get a screen full of Internet banking security details of use to a sysadmin should be easy to implement.

For full coverage (including all the small building societies and credit unions) it would be impractical for every sysadmin to have a special case for every bank. But again there is a relatively easy solution. A federal agency that deals with fraud could maintain a list of zone names and master IP addresses for every financial institution in the country and make it available on CD. If the CD was available for collection from a police station, court-house, the registry of births, deaths, and marriages, or some other official government office then it should not have any additional security risks. Of course you wouldn’t want to post such CDs, even with public key signing (which many people don’t check properly) there would be too much risk of things going wrong.

In a country such as the US (which has an unreasonably large number of banks) it would not be practical to make direct deals between ISPs and banks. But it should be practical to implement a system based on a federal agency distributing CDs with configuration files for BIND and any other DNS servers that are widely used (is any other DNS server widely used?).

Of course none of this would do anything about the issue of phishing email and typo domain name registration. But it would be good to solve as much as we can.

The New DNS Mess

The Age has an interesting article about proposed DNS changes [1].

Apparently ICANN is going to sell top level DNS names and a prediction has been made that they will cost more than $100,000 each. A suggestion for a potential use of this would be to have cities as top level names (a .paris TLD was given as an example). The problem with this is that they are not unique. Countries that were colonised in recent times (such as the US and Australia) have many names copied from Europe. It will be interesting to see how they plan to determine which of the cities gets to register the name; for the .paris example I’m sure that the council of Paris, Illinois [2] would love to register it. Does the oldest city win an international trademark dispute over a TLD?

The current situation is that French law unambiguously determines who gets to register paris.fr and someone who sees the URL will have no confusion as to what it means (providing that they know that fr is the ISO country code for France).

As well as city names there are region names which are used for products. Australian vineyards produce a lot of sparkling wine that they like to call Champagne and a lot of fortified wine that they like to call Port. There are ongoing battles about how these names can be used and it seems likely to me that the Australian wine industry will change to other terms. But in the meantime it would be interesting if .champagne and .port were registered by Australian companies. The fuss that would surely cause would probably give enough free publicity to the Australian wine industry to justify an investment of $200,000 on TLDs.

The concern that is cited by business people (including the client who forwarded me the URL and requested my comments) is the expense of protecting a brand. Currently if you have a company named “Example” you can register example.com, example.net, and example.org if you are feeling enthusiastic. Then if you have a significant presence in any country you could register your name in the DNS hierarchy for that country (large companies try to register their name in every country – for a multinational registering ~200 domains is not really difficult or expensive). But if anyone can create a new TLD (and therefore if new ones are liable to be created at any time) it becomes much more difficult. For example if a new TLD was created every day then a multi-national corporation would need to assign an employee to work full-time on investigating the new TLDs and deciding which ones to use. A small company that has an international presence (i.e. an Internet company) would just lose a significant amount of control over its name.

I don’t believe that this is as much of a concern as some people (such as my client) do. Currently I could register a phone line with a listed name indicating that it belongs to the Melbourne branch of a multi-national corporation. I don’t expect that Telstra would stop me, but the benefit from doing this would be minimal (probably someone who attempted fraud using such means would not gain much and would get shut down quickly). I don’t think that a DNS name registered under a .melbourne TLD would cause much more harm than a phone number listed in the Melbourne phone book. Incidentally for readers from the US, I’m thinking of Melbourne in Australia not a city of the same name in the US – yet another example of a name conflict.

Now I believe that it would be better if small companies didn’t use .com domains. The use of country specific names relevant to where they work is more appropriate and technically easier to implement. I don’t regret registering coker.com.au instead of some name in another country or in the .com hierarchy. Things would probably be working better right now if a .com domain name had always cost $100,000 and there were only a few dozen companies that had registered them. But we have to go with the flow sometimes, so I have registered RussellCoker.com.

Now when considering the merit of an idea we should consider who benefits and who (if anyone) loses. Ideally we would choose options that provide benefits for many people and losses for few (or none). In this case it seems that the suggested changes would be a loss for corporations that want to protect their brand, a loss for end-users who just want to find something without confusion, and provide more benefits for domain-squatters than anyone else.

Maybe I should register icann.port and icann.champagne if those TLDs are registered in Australia and impersonate ICANN. ;)

BIND Stats

In Debian the BIND server will by default append statistics to the file /var/cache/bind/named.stats when the command rndc stats (which seems to be undocumented) is run. The default for RHEL4 seems to be /var/named/chroot/var/named/data/named_stats.txt.
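
So on a Debian system dumping and reading the statistics is just the following (using the Debian default path mentioned above):

# append a new statistics snapshot to the stats file and show the end of it
rndc stats
tail -20 /var/cache/bind/named.stats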

The output will include the time-stamp of the log in the number of seconds since 1970-01-01 00:00:00 UTC (see my previous post explaining how to convert this to a regular date format [1]).
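
With GNU date the conversion is a one-liner; for example, for a timestamp of 1218000000:

# convert a seconds-since-epoch value from the stats dump to a readable date
date -d @1218000000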

By default this only logs a summary for all zones, which is not particularly useful if you have multiple zones. If you edit the BIND configuration and put zone-statistics 1; in the options section then it will log separate statistics for each zone. Unfortunately if you add this and apply the change via rndc reload there is no convenient way that I know of to determine when the change was made, and therefore the period of time for which the per-zone statistics were kept. So after applying this to my servers I restarted the named processes so that it will be obvious from the process start time when the statistics collection started.
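
For reference, the relevant fragment of named.conf is just this (only the zone-statistics line matters; the rest of your options section stays as it is):

options {
        // log per-zone counters rather than a single global summary
        zone-statistics 1;
        // ... existing options ...
};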

The reason I became interested in this is that a member of a mailing list that I subscribe to was considering the DNSMadeEasy.com service. That company runs primary DNS servers for $US15 per annum which allows 1,000,000 queries per month, 3 zones, and 120 records (for either a primary or a secondary server). Based on three hours of statistics it seems that my zone coker.com.au is going to get about 360,000 queries a month (between the primary and the secondary server, which each got about half the traffic). So the $15 a year package could accommodate 3 such zones as either primary or secondary – 3 × 180,000 = 540,000 queries a month, comfortably under the query limit, with the 3 zone limit being hit first. I’m not considering outsourcing my DNS, but it is interesting to consider how the various offers add up.

Another possibility for people who are considering DNS outsourcing is Xname.org which provides free DNS (primary and secondary) but requests contributions from business customers (or anyone else).

Updated because I first published it without getting stats from my secondary server.

Multiple DNS Names

There are many situations where multiple DNS names for a single IP address that runs a single service are useful. One common example is with business web servers that have both www.example.com and example.com being active, so whichever a customer hits they will get the right content (the last thing you want is for a potential customer to make some trivial mistake and then give up).

Having both DNS names be equal and separate is common. One example of this is the way http://planet.ubuntulinux.org/ and http://planet.ubuntu.com/ both have the same content; it seems to me that planet.ubuntu.com is the more official name as the wiki for adding yourself to the Planet is wiki.ubuntu.com. Another example of this is the way http://planet.debian.org/ and http://planet.debian.net/ both have the same content. So far this month I have had 337 referrals to my blog from planet.debian.org and 147 from planet.debian.net. So even though I can’t find any official reason for preferring one over the other, the fact that more than 2/3 of the referrals from that Planet come from the planet.debian.org address indicates that most people regard it as the canonical one.

In times past there was no problem with such things; it was quite routine to have web servers with multiple names and no-one cared about this (until of course one name went away and a portion of the user-base had broken links). Now there are three main problems with having two names visible:

  1. Confusion for users. When a post on thedebianuser.org referred to my post about Planet Ubuntu it used a different URL to the one I had used. I was briefly worried that I had missed half (or more) of the content by getting my links from the wrong blog – but it turned out that the same content was on both addresses.
  2. More confusing web stats for the people who run sites that are referenced (primarily the bloggers in the case of a Planet installation). This also means a lower ranking as the counts are split. In my Webalizer logs planet.debian.org is in position #5 and planet.debian.net is in position #14. If they were combined they would get position #3. One thing to keep in mind is that the number of hits that you get has some impact on the content. If someone sees repeated large amounts of traffic coming from planet.debian.org then they are likely to write more content that appeals to those users.
  3. Problems with sites that have strange security policies. Some bloggers configure their servers to only serve images if the referrer field in the HTTP protocol has an acceptable value (to prevent bandwidth theft by unethical people who link to their pictures). My approach to this problem is reactive (I rename the picture to break the links when it happens) because I have not had it happen often enough to do anything else. But I can understand why some people want to do more. If we assume that an increasing number of bloggers do this, it would be good to not make things difficult for them by having the smallest possible number of referrer URLs. It would suck for the readers to find that planet.debian.org has the pictures but planet.debian.net doesn’t.

The solution to this is simple: one name should redirect to the other. Having something like the following in the Apache virtual host configuration (or the .htaccess file) for the less preferred name should redirect all access to the other name.

# redirect every request permanently to the preferred name
RewriteEngine on
RewriteCond %{REQUEST_URI} ^(.*)$ [NC]
RewriteRule ^ http://planet.example.com%1 [R=301,L]

In my posts last night I omitted the URLs for the Planet Searches from the email version (by not making them human readable). Here they are:

MX vs A record

One issue that has been the topic of some pointless discussion is whether a mail server should have an A record or an MX record. Mail can be delivered to a domain that has no MX record but simply an A record pointing to an IP address. But the most common practice is to have an MX record pointing to the name of the machine that serves the mail. A common use for this is to have a bulk mail hosting machine with multiple MX records pointing at it, which then allows you to have matching forward and reverse DNS entries for the machine name.
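
To make the two setups concrete, here are the two styles of zone records being compared (example.com and the addresses are of course placeholders):

; mail delivered via the A record only, no MX record
example.com.        IN  A    192.0.2.10

; the more common setup: an MX record pointing at the mail server's name
example.com.        IN  MX   10 mail.example.com.
mail.example.com.   IN  A    192.0.2.25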

If you have no MX record for a domain then Postfix will do the following DNS requests:

IP postfix.34245 > DNS.domain:  3448+ MX? example.com. (32)
IP postfix.34261 > DNS.domain:  50123+ A? example.com. (32)

If you have an MX record then it does the following:

IP postfix.34675 > DNS.domain:  29942+ MX? example.com. (32)
IP postfix.34675 > DNS.domain:  33294+ A? mail.example.com. (37)

Now if there are multiple domains on a bulk mail hosting system then the A record might already be in a local cache on the sending machine, so having bulk mail hosting with MX records may reduce the number of DNS lookups: for N domains pointing at the same mail server it’s N MX lookups plus one A lookup instead of 2N lookups, so the minimum number of lookups is half plus one.

If there is no bulk mail hosting then an MX record would still offer some slight benefit if positive responses are cached for longer than negative responses. This would mean fewer lookups, which gives faster and more reliable delivery of mail as well as being more friendly to the net. I don’t know what the cache behaviour is in this regard so I’m not sure whether this would actually give a benefit (I’m sure someone will comment with the answer).

Now regardless of these issues I think that using an MX record is the better option. It’s what most software expects and saves you from the excitement of discovering corner case bugs in various software that’s out there on the net.