The New OLPC

TED has a post about the design of the new OLPC [1].

I never liked the previous OLPCs [2]: for my use a machine needs a better keyboard than the tiny rubber thing that they had. I understand why they designed it that way; for use in places where it would be an expensive asset it is necessary to make it resistant to water and dust to the greatest possible degree. But in a first-world country where a computer is a cheap item, having a better interface makes sense. I’m sure that I could have plugged a USB keyboard into an OLPC, but having an integrated keyboard as well as an external keyboard is just kludgy.

The new design of the OLPC with two panels (at least one of which is a touch-screen – I hope that both are) is very innovative. With the keyboard displayed on a touch-screen there are much greater possibilities for changing alphabets (teaching children to write in multiple languages is a good thing). Of course a touch-screen keyboard won’t allow proper touch typing, but it should still be possible to plug in a USB keyboard – with the additional benefit that both panels could display data when an external keyboard is used.

One of the new uses of an OLPC machine is to play two-player games with the players sitting opposite each other. The next logical extension to this idea is to have a multi-user OLPC machine so that two people can use it at the same time (a machine that can run a copy of a program that compares to the GIMP can surely run two programs to display electronic books or read email). A large part of the design of the OLPC is based around the needs of children in the developing world; in such cases one computer per home is likely to be common, so it would be good if two children could do homework at the same time (or if a parent could check email while a child studies).

Finally the new design is much better suited to reading documents, while they show a picture of a book being read in two panes (similar to the way that paper books are read) I think that a more common use will be to display book text in one pane while using the other pane for writing notes. That would mean that instead of having a full-sized (by OLPC standards) keyboard on the touch-screen they would have to use a small keyboard (maybe with a stylus) or an external keyboard. It is awkward and distracting to hold open a paper book with one hand while writing notes with the other, using one half of an OLPC to write notes while the other half displays an electronic book would be a significant advantage.

The potential utility of the new OLPC design for reading documents is significant enough to make me want to own one, and I expect that other people who have similar interests to me will have similar desires when they see the pictures. While the OLPC isn’t really designed for the use of people like me, it’s unnatural for me to use a computer without programming it – so I expect that the new hardware developments will encourage new developers to join the OLPC project.

Debian SSH Problems

It has recently been announced that Debian had a serious bug in its OpenSSL code [1]. The most visible effect of this is compromised SSH keys – but it can also affect VPN and HTTPS keys. Erich Schubert was one of the first people to point out the true horror of the problem: only 2^15 different keys can be created [2]. It should not be difficult for an attacker to generate all 2^15 host keys and try every combination to decrypt a captured login session. It should also be possible to make up to 2^15 attempts to log in to a server remotely if an attacker believes that a vulnerable authorized key was being used – that would take less than an hour at a rate of 10 attempts per second (which is possible with modern net connections) and could be done in a day if the server was connected to the net by a modem.
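
To put the size of that keyspace in perspective, here is a quick sketch of the arithmetic (the rate of 10 attempts per second is the figure assumed above):

```shell
# Only 2^15 = 32768 keys were possible per key type, so brute force
# is trivial even over the network.
KEYS=$((2 ** 15))
SECS=$((KEYS / 10))                 # at 10 login attempts per second
echo "$KEYS keys, about $((SECS / 60)) minutes at 10 attempts/sec"
# prints: 32768 keys, about 54 minutes at 10 attempts/sec
```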

John Goerzen has some insightful thoughts about the issue [3], and I recommend reading his post. One point he makes is that the person who made the mistake in question should not be lynched. One thing I think we should keep in mind is the fact that people tend to be more careful after they have made mistakes; I expect that anyone who makes a mistake in such a public way which impacts so many people will be very careful for a long time…

Steinar H. Gunderson analyses the maths in relation to DSA keys; it seems that if a DSA key is ever used with a bad RNG then it can be cracked by someone who sniffs the network [4]. It seems that it is safest to just not use DSA to avoid this risk. Another issue is that if a client supports multiple host key types (ssh can use three different key types: one for the ssh1 protocol, one for ssh2 with RSA, and one for ssh2 with DSA) then a man-in-the-middle attack can be implemented by forcing a client to use a different key type – see Stealth’s article in Phrack for the details [5]. So it seems that we should remove support for anything other than SSHv2 with RSA keys.

To remove such support from the ssh server edit /etc/ssh/sshd_config and make sure it has a line with “Protocol 2”, and that the only HostKey line references an RSA key. To remove it from the ssh client (the important thing) edit /etc/ssh/ssh_config and make sure that it has something like the following:

Host *
Protocol 2
HostKeyAlgorithms ssh-rsa
ForwardX11 no
ForwardX11Trusted no

You can override this for different machines. So if you have a machine that uses DSA only then it would be easy to add a section:

Host strange-machine
Protocol 2
HostKeyAlgorithms ssh-dss

So making this the default configuration of the ssh client on all machines you manage has the potential to dramatically reduce the incidence of MITM attacks on the less knowledgeable users.

When skilled users who do not have root access need to change things they can always edit the file ~/.ssh/config (which has the same syntax as /etc/ssh/ssh_config) or use command-line options to override it. The command ssh -o “HostKeyAlgorithms ssh-dss” user@server will force the use of a DSA host key even if the configuration file requests RSA.

Enrico Zini describes how to use ssh-keygen to get the fingerprint of the host key [6]. One thing I have learned from comments on this post is how to get a fingerprint from a known_hosts file. A common situation is that machine A has a known_hosts file with an entry for machine B, I want to get the right key onto machine C, and there is no way of directly communicating between machine A and machine C (EG they are in different locations with no network access). In that situation the command “ssh-keygen -l -f ~/.ssh/known_hosts” can be used to display the fingerprints of all hosts that you have connected to in the past; then it’s a simple matter of grepping the output.
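
As a sketch of that process (the host name machine-b, the throwaway key, and the temporary paths are just placeholders for this example):

```shell
# Build a throwaway known_hosts entry so the example is self-contained.
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$tmp/hostkey"
# A known_hosts line is "hostname keytype base64key"; fields 1-2 of the
# .pub file are the keytype and key.
printf 'machine-b %s\n' "$(cut -d' ' -f1-2 "$tmp/hostkey.pub")" > "$tmp/known_hosts"
# List the fingerprint of every recorded host, then grep for the one we want.
ssh-keygen -l -f "$tmp/known_hosts" | grep machine-b
rm -rf "$tmp"
```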

Docunext has an interesting post about ways of mitigating such problems [7]. One thing that they suggest is using fail2ban to block IP addresses that appear to be trying brute-force attacks. It’s unfortunate that the version of fail2ban in Debian uses /tmp/fail2ban.sock for its Unix domain socket for talking to the server (the version in Unstable uses /var/run/fail2ban/fail2ban.sock). They also mention patching network drivers to add entropy to the kernel random number generator. One thing that seems interesting is the package randomsound (currently in Debian/Unstable) which takes ALSA sound input as a source of entropy; note that you don’t need to have any sound input device connected.
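
To see whether entropy starvation is actually a problem on a given machine, the Linux kernel exports the current pool level; checking it before and after installing something like randomsound shows whether the extra source helps:

```shell
# Show how many bits of entropy the kernel pool currently holds
# (Linux-specific path; a persistently low value means /dev/random
# will block).
cat /proc/sys/kernel/random/entropy_avail
```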

When considering fail2ban and similar things, it’s probably best to start by restricting the number of machines which can connect to your SSH server. Firstly if you put it on a non-default port then it’ll take some brute-force to find it. This will waste some of the attacker’s time and also make the less persistent attackers go elsewhere. One thing that I am considering is having a few unused ports configured such that any IP address which connects to them gets added to my NetFilter configuration – if you connect to such ports then you can’t connect to any other ports for a week (or until the list becomes too full). So if for example I had port N configured in such a manner and port N+100 used for ssh listening then it’s likely that someone who port-scans my server would be blocked before they even discovered the SSH server. Does anyone know of free software to do this?
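
I have not yet found packaged software that does exactly this, but as a sketch, something close can be done with the iptables “recent” match (the trap port number and the week-long timeout are just the examples from above, these commands need root, and in a real ruleset they would have to come before any ACCEPT rules):

```shell
# Anyone who connects to trap port 1234 is recorded in the "trap"
# list and the packet is dropped...
iptables -A INPUT -p tcp --dport 1234 -m recent --name trap --set -j DROP
# ...and any address seen on the trap port within the last week
# (604800 seconds) is blocked from everything else, including the
# real SSH port.
iptables -A INPUT -m recent --name trap --rcheck --seconds 604800 -j DROP
```

The recent match keeps a fixed-size list of addresses, which also gives the “until the list becomes too full” behaviour for free.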

The next thing to consider is which IP addresses may connect. If you were to allow all the IP addresses from all the major ISPs in your country to connect to your server then it would still be a small fraction of the IP address space. Sure, attackers could use machines that they have already cracked in your country to launch their attacks, but they would have to guess that you had such a defense in place, and even so it would be an inconvenience for them. You don’t necessarily need to have a perfect defense, you only need to make the effort-to-reward ratio worse for attacking you than for attacking someone else. Note that I am not advocating a minimalist approach to security, merely noting that even a small increment in the strength of your defenses can make a significant difference to the risk you face.

Update: based on comments I’m now considering knockd to open ports on demand. The upstream site for knockd is here [8], and some documentation on setting it up in Debian is here [9]. The concept of knockd is that you make connections to a series of ports which act as a password for changing the firewall rules. An attacker who doesn’t know those port numbers won’t be able to connect. Of course anyone who can sniff your network will discover the ports soon enough, but I guess you can always login and change the port numbers once knockd has let you in.
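
For reference, a minimal knockd configuration looks something like the following (the port sequence and the SSH port are placeholders – pick your own; this follows the format documented on the knockd site):

```
[options]
        logfile = /var/log/knockd.log

[openSSH]
        sequence    = 7000,8000,9000
        seq_timeout = 5
        tcpflags    = syn
        command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```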

Also thanks to Helmut for advice on ssh-keygen.

Ideas to Copy from Red Hat

I believe that the Red Hat process which has Fedora for home users (with a rapid release cycle and new versions of software but support for only about one year) and Enterprise Linux (with a ~18 month release cycle, seven years of support, and not always having the latest versions) gives significant benefits for the users.

The longer freeze times of Enterprise Linux (AKA RHEL) mean that it often has older versions of software than a Fedora release occurring at about the same time. In practice the only time I ever notice users complaining about this is in terms of OpenOffice (which is always being updated for compatibility with the latest MS changes). As an aside, a version of RHEL or CentOS with a back-port of the latest OpenOffice would probably get a lot of interest.

RHEL also has a significantly smaller package set than Fedora: there is a lot of software out there that you wouldn’t want to support for seven years, a lot of software that you might want to support if you had more resources, and plenty of software that is not really of interest to enterprise customers (EG games).

Now there are some down-sides to the Red Hat plan. The way that they run Fedora is to ship new upstream releases of software instead of back-porting fixes. This means that bugs can be fixed with less effort (simply compiling a new version is a lot less effort than back-porting a fix) and that newer versions of the upstream code get tested, but it also means that regressions can slip in. With some things this isn’t a problem, but in the past I have had problems with the Fedora kernel. One example was when I upgraded the kernel on a bunch of remote Fedora machines only to find that the new kernel didn’t support the network card, so I had to talk the users through selecting the older kernel at the GRUB menu (this caused pain and down-time). A problem with RHEL (which I see regularly on the CentOS machines I run) is that it doesn’t have the community support that Fedora does, and therefore finding binary packages for RHEL can be difficult – and often the packages are outdated.

I believe that in Debian we could provide benefits for some of our users by copying some ideas from Red Hat. There is currently some work in progress on releasing packages that are half-way between Etch and Lenny (Etch is the current release, Lenny will be the next one). The term Etch and a half refers to the work to make Etch run on newer hardware [1]. It’s a good project, but I don’t think that it goes far enough. It certainly won’t fulfill the requirements of people who want something like Fedora.

I think that if we had half-way releases of Debian (essentially taking a snap-shot of Testing and then fixing the worst of the bugs) then we could accommodate user demand for newer versions (making available a release which is on average half as old). Users who want really solid systems would run the full releases (which have more testing pre-release and more attention paid to bug fixes), but users who need the new features could run a half-way release. Currently there are people working on providing security support for Testing so that people who need the more recent versions of software can use Testing; I believe that making a half-way release would provide better benefits to most users while also possibly taking fewer resources from the developers. This would not preclude the current “Etch and a half” work of back-porting drivers; in the Red Hat model such driver back-ports are done in the first few years of RHEL support. If we were to really follow Red Hat in this regard the “Etch and a half” work would operate in tandem with similar work for Sarge (version 3.1 of Debian, which was released in 2005)!

In summary, the Red Hat approach is to have Fedora releases aimed at every 6 months, but in practice coming out every 9 months or so and to have Enterprise Linux releases aimed at every year, but in practice coming out every 18 months. This means among other things that there can be some uncertainty as to the release order of future Fedora and RHEL releases.

I believe that a good option for Debian would be to have alternate “Enterprise” (for want of a better word) and half-way releases (comparable to RHEL and Fedora). The Enterprise releases could be frozen in coordination with Red Hat, Ubuntu, and other distributions (Mark Shuttleworth now refers to this as being a “pulse” in the free software community []), while the half-way releases would come out either when it’s about half-way between releases, or when there is a significant set of updates that would encourage users to switch.

One of the many benefits of having synchronised releases is that if the work in back-porting support for new hardware lagged in Debian then users would have a reasonable chance of taking the code from CentOS. If nothing else I think that making kernels from other distributions available for easy install is a good thing. There is a wide range of kernel patches that may be selected by distribution maintainers, and sometimes choices have to be made between mutually exclusive options. If the Debian kernel doesn’t work best for a user then it would be good to provide them with a kernel compiled from the RHEL kernel source package and possibly other kernels.

Mark also makes the interesting suggestion of having different waves of code freeze, the first for the kernel, GCC, and glibc, and possibly server programs such as Apache. The second for major applications and desktop environments. The third for distributions. One implication of this is that not all distributions will follow the second wave. If a distribution follows the kernel, GCC, and glibc wave but not the applications wave it will still save some significant amounts of effort for the users. It will mean that the distributions in question will all have the same hardware support and kernel features, and that they will be able to run each others’ applications (except when the applications in question use system libraries from later waves). Also let’s not forget the possibility of running a kernel from distribution A on distribution B, it’s something I’ve done on many occasions, but it does rely on the kernels in question being reasonably similar in terms of features.

Smoke from the PSU

Yesterday I received two new machines from DOLA on-line auctions [1]. I decided to use the first to replace the hardware for my SE Linux Play Machine [2]. The previous machine I had used for that purpose was a white-box 1.1GHz Celeron and I replaced it with an 800MHz Pentium3 system (which uses only 35W when slightly active and only 28W when the hard disk spins down [3]).

The next step was to get the machine in question ready for its next purpose: I was planning to give it to a friend of a friend. A machine of those specs which was made by Compaq would be very useful to me, but when it’s a white-box I’ll just give it away. So I installed new RAM and a new hard drive in it (both of which had been used in another machine a few hours earlier and seemed to be OK) and turned it on. Nothing happened; I was just checking that it was plugged in correctly when I noticed smoke coming from the PSU… It seems strange that the machine in question had run 24*7 for about 6 months and then suddenly started smoking after being moved to a different room and turned off overnight.

It is possible that the hard drive was broken and shorted out the PSU (the power cables going to the hard drive are thick enough that a short-circuit could damage the PSU). What I might do in the future is keep an old and otherwise useless machine on hand for testing hard drives, so that if something like that happens then it won’t destroy a machine that is useful. Another possibility is that the dust in the PSU contained some metal fragments and that moving the machine to another room caused them to short something out, but there’s not much I can do about that when I get old machines. I might put an air filter in each room that I use for running computers 24*7 to stop such problems getting worse in future though.

I recently watched the TED lecture “5 dangerous things you should let your kids do” [4], so I’m going to offer the broken machine to some of my neighbors if they want to let their children take it apart.

Release Dates for Debian

Mark Shuttleworth has written an interesting post about Ubuntu release dates [1]. He claims that free software distributions are better able to meet release dates than proprietary OSs because they are not doing upstream development. The evidence that free software distributions generally do a reasonable job of meeting release dates (and Ubuntu does an excellent job) is clear.

But the really interesting part of his post is where he offers to have Ubuntu collaborate with other distributions on release dates. He states that if two out of Red Hat (presumably Enterprise Linux), Novell (presumably SLES), and Debian will commit to the same release date (within one month) and (possibly more importantly) to having the same versions of major components then he will make Ubuntu do the same.

This is a very significant statement. From my experience working in the Debian project and when employed by Red Hat I know that decisions about which versions of major components to include are not taken lightly, and therefore if the plan is to include a new release of a major software project and that project misses a release date then it forces a difficult decision about whether to use an older version or delay the release. For Ubuntu to not merely collaborate with other distributions but to instead follow the consensus of two different distributions would be a massive compromise. But I agree with Mark that the benefits to the users are clear.

I believe that the Debian project should align its release cycles with Red Hat Enterprise Linux. I believe that RHEL is being released in a very sensible manner and that the differences of opinion between Debian and Red Hat people about how to manage such things are small. Note that it would not be impossible to have some variations of version numbers of components but still stick mostly to the same versions.

If Debian, Ubuntu, and RHEL released at about the same time with the same versions of the kernel, GCC, and major applications and libraries then it would make it much easier for users who want to port software between distributions and run multiple distributions on the same network or the same hardware.

The Debian Social Contract [2] states that “Our priorities are our users and free software”. I believe that by using common versions across distributions we would help end-users in configuring software and maintaining networks of Linux systems running different distributions, and also help free software developers by reducing the difficulty in debugging problems.

It seems to me that the best way of achieving the goal that Mark advocates (in the short term at least) is for Debian to follow Red Hat’s release cycle. I think that after getting one release with common versions out there we could then discuss how to organise cooperation between distributions.

I also believe that a longer support cycle would be a good thing for Debian. I’m prepared to do the necessary work for the packages that I maintain and would also be prepared to do some of the work in other areas that is needed (EG back-porting security fixes).

Miro AKA DemocracyPlayer

www.ted.com is a premier partner for the Miro player [1]. This is a free player for free online content. The site www.getmiro.com has the player for download, with binaries for Mac OS/X, Windows, and Ubuntu as well as the source (GPL licensed), and it is in Debian/Unstable. It supports downloading in a number of ways (including bittorrent) and can keep the files online indefinitely. A Debian machine connected to the net could be a cheap implementation of my watching while waiting idea for showing interesting and educational TV in waiting areas for hospitals etc [2]. When I first checked out the getmiro.com site it only seemed to have binaries for Mac OS/X and Windows, but now I realise that it’s been in Debian since 11 Sep 2007 under the name Miro and since 12 Jun 2006 under the name Democracyplayer. I have only briefly played with Miro (just checked the channel list) and it seems quite neat so far. I wish I had tried this years ago. Good work Uwe Hermann!

I hope that the Miro player will allow me to more easily search the TED archives. Currently I find the TED site painful to use; a large part of this is slow Javascript which makes each page take an unreasonably long time before it allows me to do anything. I am not planning to upgrade my laptop to a dual-core 64bit machine just to allow Firefox to render badly written web pages.

Biella recently wrote about the Miro player and gave a link to a documentary about Monsanto [3].

One thing I really like about this trend towards publishing documentaries on the net is that they can be cited as references in blog posts. I’ve seen many blog posts that reference documentaries that I can’t reasonably watch (they were shown on TV stations in other countries and even starting to try tracking them down was more trouble than it was worth). Also when writing my own posts I try to restrict myself to using primary sources that are easy to verify, which currently means only the most popular documentaries.

The Future of Xen

I’m currently in Xen hell. My Thinkpad (which I won’t replace any time soon) has a Pentium-M CPU without PAE support. I think that Debian might re-introduce Xen support for CPUs without PAE in Lenny, but at the moment I have the choice of running without Xen or running an ancient kernel on my laptop. Due to this I’ve removed Xen from my laptop (I’m doing most of my development which needs Xen on servers anyway).

Now I’ve just replaced my main home server. It was a Pentium-D 2.8GHz machine with 1.5G of RAM and a couple of 300G SATA disks in a RAID-1. Now it’s a Pentium E2160 1.8GHz machine with 3G of RAM and the same disks. Incidentally Intel suck badly, they are producing CPUs with names that have no meaning, and most of their chipsets don’t support more than 4G of physical address space [1]. I wanted 4G of RAM but the machine I was offered only supported addressing 4G, and 700M of that was used for PCI devices. For computation tasks it’s about the same speed as the old Pentium-D, but it has faster RAM access, more RAM, uses less power, and makes less noise. If I was going to a shop to buy something I probably would have chosen something different to get support for more than 4G of RAM, but as I got the replacement machine for free as a favor I’m not complaining!

I expected that I could just install the new server and have things just work. There were some minor issues such as configuring X for the different video hardware (and installing the 915resolution package, which is only needed in Etch, to get the desired 1650×1400 resolution). But for the core server tasks I expected that I could just move the hard drives across and have it work.

After the initial install the system crashed whenever I did any serious hard drive access from Dom0; the Dom0 kernel Oopsed and network access was cut off from the DomU’s (I’m not sure whether the DomU’s died, but without any way of accessing them it doesn’t really matter much). As a test I installed the version of the Xen hypervisor from Unstable and it worked. But the Xen hypervisor from Unstable required the Xen tools from Unstable, which also required the latest libc6, and therefore the entire Dom0 had to be upgraded. Then in an unfortunate accident unrelated to Xen I lost the root filesystem before I finished the upgrade (cryptsetup in Debian/Unstable warns you if you try to use a non-LUKS option on a device which has been used for LUKS, and would have saved me).

So I did a fresh install of Debian/Unstable, this time it didn’t crash on heavy disk IO, instead it would lock up randomly when under no load.

I’ve now booted a non-Xen kernel and it’s working well. But this situation is not acceptable long-term, a large part of the purpose of the machine is to run virtualisation so that I can test various programs under multiple distributions. I think that I will have to try some other virtualisation technologies. The idea of running KVM on real servers (ones that serve data to the Internet) doesn’t thrill me, Tavis Ormandy’s paper about potential ways of exploiting virtual machine technologies [2] is a compelling argument for para-virtualisation. Fortunately however my old Pentium-3 machines running Xen seem quite reliable (replacing both software and hardware is a lot of pain that I don’t want).

In the near future I will rename the Xen category on my blog to Virtualisation. For older machines Xen is still working reasonably well, but for all new machines I expect that I will have to use something else – and I’ll be blogging about the new machines not the old. I expect that an increasing number of people will be moving away from Xen in the near future. It doesn’t seem to have the potential to give systems that are reliable when running on common hardware.

Ulrich Drepper doesn’t have a high opinion of Xen [3], the more I learn about it the more I agree with Ulrich.