DNS Secondaries and Web Security

At the moment there are ongoing security issues related to web-based services and DNS hijacking. The Daily Ack has a good summary of the session hijacking issue [1].

For a long time it has been generally accepted that you should configure a DNS server not to allow random machines on the Internet to copy the entire zone. You shouldn’t have any secret data there anyway, but it’s regarded as a precautionary layer of security by obscurity.

Dan Kaminsky (who brought the current DNS security issue to everyone’s attention) has described some potential ways to alleviate the problem [2]. One idea is to use random case in DNS requests (DNS is case-insensitive but case-preserving), so if you were to look up wWw.cOkEr.CoM.aU and the result was returned with different case, you would know that it was forged.
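As a rough sketch of the idea (this is illustrative pseudocode for a resolver, not anyone’s actual implementation), the resolver would randomize the case of the query name and reject any answer that doesn’t echo it exactly:

```python
import random

def randomize_case(name: str) -> str:
    # Randomly flip the case of each letter; dots and digits pass through.
    return "".join(c.upper() if c.isalpha() and random.random() < 0.5 else c.lower()
                   for c in name)

def answer_matches_query(query_name: str, answer_name: str) -> bool:
    # DNS is case-insensitive but case-preserving, so an honest server
    # echoes back the exact case it was asked; a forger has to guess it.
    return query_name == answer_name

query = randomize_case("www.coker.com.au")
assert answer_matches_query(query, query)    # an honest echo passes
assert query.lower() == "www.coker.com.au"   # still resolves the same name
```

Each letter in the name adds one more bit an off-path attacker has to guess, on top of the source port and transaction ID.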

Two options which have been widely rejected are using TCP for DNS (which is fully supported, for the case where an answer cannot fit in a single UDP packet) and sending requests twice (to square the number of combinations that would need to be guessed). Both have been rejected due to the excessive load they would put on the servers (which are apparently already near capacity).

One option that does not seem to get mentioned is using multiple source IP addresses: instead of merely having 2^16 source ports to choose from, you could multiply that by as many IP addresses as you have available. In the past I’ve worked for ISPs that could have dedicated a /22 (1024 IP addresses) to their DNS proxy if it would have increased the security of their customers – an ISP of the scale that has 1024 spare IP addresses available is going to be a major target of such attacks! Also, with some fancy firewall/router devices it would not be impossible to direct all port 53 traffic through the DNS proxies. That would mean that an ISP with 200,000 broadband customers online could use a random IP address from that pool of 200,000 for every DNS request. While attacking a random choice out of ~65,500 ports is possible, guessing a port out of 65,500 combined with one of 200,000 IP addresses would be extremely difficult (I won’t claim it to be impossible).
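The back-of-the-envelope arithmetic for the resulting search space (using the figures above, and including the 16-bit DNS transaction ID an attacker must also guess):

```python
# Size of the space an off-path attacker would have to search under this
# scheme. Figures are the ones used in the post; real deployments differ.
source_ports = 2 ** 16          # ~65,500 usable source ports
source_ips = 200_000            # pool of online broadband customers
txids = 2 ** 16                 # 16-bit DNS transaction IDs
search_space = source_ports * source_ips * txids
print(f"{search_space:.2e} combinations")   # roughly 8.6e14
```

That is about five orders of magnitude more than port randomization alone gives.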

One problem with the consideration that has been given to TCP is that it doesn’t account for the other uses of TCP, such as for running DNS secondaries.

In Australia we have two major ISPs (Telstra and Optus) and four major banks (ANZ, Commonwealth, NAB, and Westpac). It shouldn’t be difficult for arrangements to be made for the major ISPs to have their recursive DNS servers (the caching servers that their customers talk to) act as slaves for the DNS zones related to those four banks (which might be 12 zones or more given the use of different zones for stock-broking etc). If that was combined with a firewall preventing the regular ISP customers (the ones who are denied access to port 25 to reduce the amount of spam) from receiving any data from the Internet with a source port of 53 then the potential for attacks on Australian banks would be dramatically decreased. I note that the Westpac bank has DNS secondaries run by both Optus and Telstra (which makes sense for availability reasons if nothing else), so it seems that the Telstra and Optus ISP services could protect their customers who use Westpac without any great involvement from the bank.
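For the BIND servers that most ISPs run, acting as a slave for a bank’s zone is a single stanza per zone. The following is only an illustrative sketch – the zone name is used as an example and the master address (from the documentation range) is a placeholder, not the bank’s real data:

```
// named.conf fragment: slave (secondary) the bank's zone locally so that
// customers' lookups never depend on forgeable UDP answers from outside.
zone "westpac.com.au" {
    type slave;
    masters { 192.0.2.53; };        // placeholder master address
    file "slaves/westpac.com.au";   // local copy of the zone data
};
```

The slave refreshes over TCP (zone transfer), which is exactly the TCP usage that the anti-TCP arguments above don’t account for.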

Banks have lots of phone lines and CTI systems. It would be easy for each bank to have a dedicated phone number (which is advertised in the printed phone books, in the telephone “directory assistance” service, and in brochures available in bank branches – all sources which are more difficult to fake than Internet services) which gave a recorded message of a list of DNS zone names and the IP addresses for the master data. Then every sysadmin of every ISP could mirror the zones that would be of most use to their customers.

Another thing that banks could do would be to create a mailing list for changes to their DNS servers for the benefit of the sysadmins who want to protect their customers. Signing mail to such a list with a GPG key and having the fingerprint available from branches should not be difficult to arrange.

Another possibility would be to use the ATM network to provide security relevant data. Modern ATMs have reasonably powerful computers which are used to display bank adverts when no-one is using them. Having an option to press a button on the ATM to get a screen full of Internet banking security details of use to a sysadmin should be easy to implement.

For full coverage (including all the small building societies and credit unions) it would be impractical for every sysadmin to have a special case for every bank. But again there is a relatively easy solution. A federal agency that deals with fraud could maintain a list of zone names and master IP addresses for every financial institution in the country and make it available on CD. If the CD was available for collection from a police station, court-house, the registry of births, deaths, and marriages, or some other official government office then it should not have any additional security risks. Of course you wouldn’t want to post such CDs, even with public key signing (which many people don’t check properly) there would be too much risk of things going wrong.

In a country such as the US (which has an unreasonably large number of banks) it would not be practical to make direct deals between ISPs and banks. But it should be practical to implement a system based on a federal agency distributing CDs with configuration files for BIND and any other DNS servers that are widely used (is any other DNS server widely used?).

Of course none of this would do anything about the issue of Phishing email and typo domain name registration. But it would be good to solve as much as we can.

4 comments to DNS Secondaries and Web Security

  • bd_

Another reason for IPv6 – with IPv6 one could easily give their DNS server a /96 (the size of the IPv4 Internet today). Plus source port randomization and DNS’s built-in request IDs give 64 bits of entropy, which ought to be enough (and the root and GTLD servers support this already!)
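Spelling out bd_’s arithmetic:

```python
# A /96 leaves 128 - 96 = 32 bits of source-address choice, on top of the
# 16-bit source port and the 16-bit DNS request (transaction) ID.
address_bits = 128 - 96
port_bits = 16
request_id_bits = 16
total_entropy = address_bits + port_bits + request_id_bits
print(total_entropy)  # 64
```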

  • wg

My concern with these (and other) interim fixes to the “problems” with DNS is that they only delay the final solution: DNSSEC. The sooner this issue is forced, the better, I think.

  • Anonymous

    Re: “But it should be practical to implement a system based on a federal agency distributing CDs with configuration files for BIND and any other DNS servers that are widely used (is any other DNS server widely used?).”

djbdns is another DNS server, with configuration files very unlike BIND’s. It also randomizes ports (and always has), and ignores answers to questions it didn’t ask, which means it was immune to this bug.

@wg

    The same people who shipped a DNS server with all the bugs and implementation flaws that caused this problem, are the same people who say DNSSEC is the solution. Why do you trust them?

  • etbe

    bd_: Good idea! I think that assigning all addresses from a /96 to a single host would require either raw network access or some changes to the OS, while assigning a mere 1024 addresses could be achieved by a loop that runs “ip addr add” (or similar).
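A minimal sketch of that loop (the 198.18.0.0/22 benchmarking prefix and eth0 are stand-ins for the ISP’s real prefix and interface; uncomment the subprocess call to actually run it as root):

```python
import ipaddress

def addr_add_commands(prefix: str = "198.18.0.0/22", dev: str = "eth0"):
    # Build an "ip addr add" command for every usable host address in the
    # prefix; hosts() skips the network and broadcast addresses.
    net = ipaddress.ip_network(prefix)
    return [f"ip addr add {host}/{net.prefixlen} dev {dev}" for host in net.hosts()]

commands = addr_add_commands()
print(len(commands))   # 1022 usable addresses in a /22
print(commands[0])     # ip addr add 198.18.0.1/22 dev eth0
# import subprocess
# for cmd in commands:
#     subprocess.run(cmd.split(), check=True)  # requires root
```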

    wg: I believe that DNSSEC has at least as much overhead as the proposal to send two requests – which was rejected because of the overhead.

    Incidentally when using your initials as a link (which is what happens when you enter a URL when submitting a comment) you might want to use upper-case to make it more readable.

    Anon: The ignoring of unsolicited answers is a solution to a previous DNS problem. I believe that the current problems don’t involve such bugs.