I’ve been trying to get IPSEC to work correctly as a basic VPN between two CentOS 5 systems. I set up the ipsec devices according to the IPSEC section of the RHEL4 security guide [1] (which is the latest documentation available, and it seems that nothing has changed since). The documentation is quite good, but getting IPSEC working is really difficult. One thing I really don’t like about IPSEC is that it doesn’t have its own network device; I prefer a VPN to have its own device so that I can easily run tcpdump on the encrypted or the unencrypted stream (or two separate tcpdump sessions) if that’s necessary to discover the cause of a problem.
I’ve got IPSEC basically working, and I probably could get it working fully, but it doesn’t seem worth the effort.
While fighting with IPSEC at the more complex site (multiple offices which each have multiple networks, and switching paths to route around unreliable networks) I set up an IPSEC installation at a very simple site (two offices within 200 meters, both with static public IP addresses, no dynamic routing or anything else complex). The simple site STILL doesn’t work as well as desired; one issue that still concerns me is the arbitrary MTU limits in some routing gear which (for some reason I haven’t diagnosed yet) loses packets if I have an MTU over 1470 bytes.
So today I set up a test network with OpenVPN [2]. It was remarkably simple, the server config file (/etc/openvpn/server.conf) is as follows:
dev tun
ifconfig 10.8.0.1 10.8.0.2
secret static.key
comp-lzo
This means that the IP address 10.8.0.1 will be used for the “server” end of the tunnel, and 10.8.0.2 is the “client” end. The secret is stored in /etc/openvpn/static.key (which is the same on both machines and is generated by “openvpn --genkey --secret static.key“).
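For reference, generating the key on the server and copying it to the client can be done something like the following (a sketch; the client hostname is just a placeholder):

cd /etc/openvpn
openvpn --genkey --secret static.key
chmod 600 static.key
scp static.key root@client-office:/etc/openvpn/static.key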
The client config file (/etc/openvpn/client.conf) is as follows:
remote 10.5.0.2
dev tun
ifconfig 10.8.0.2 10.8.0.1
secret static.key
comp-lzo
Then I enable IP forwarding on both VPN machines, open UDP port 1194 (the command “lokkit -q -p 1194:udp” does this) and start the daemon on each end. The script /etc/init.d/openvpn (in Dag Wieers’ package for CentOS 5 – which I believe is the standard script) will just take every file matching /etc/openvpn/*.conf as a service to start.
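The per-machine steps look roughly like this (a sketch, assuming the lokkit and init script setup described above; add the forwarding setting to /etc/sysctl.conf to make it persistent across reboots):

# enable IP forwarding for the current boot
sysctl -w net.ipv4.ip_forward=1
# open UDP port 1194 and start the VPN
lokkit -q -p 1194:udp
/etc/init.d/openvpn start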
The end result is a point-to-point link that I can easily route other packets over, and I can easily get dynamic routing daemons to add routes pointing to it. This is nothing like the IPSEC configuration, where the config file needs to have the IP address ranges hard-coded; I can just add routes whenever I want.
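For example, if the office behind the other end of the tunnel used the (hypothetical) network 10.7.1.0/24, routing to it would be a single command:

route add -net 10.7.1.0 netmask 255.255.255.0 gw 10.8.0.2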
This isn’t necessarily going to be the way I deploy it. The documentation notes that a disadvantage is “lack of perfect forward secrecy — key compromise results in total disclosure of previous sessions“. But what gives me confidence is the fact that it was so easy to get going; if I have problems when adding further features to the configuration it should be easy to debug, as opposed to IPSEC, where everything is complex and if it doesn’t work then it’s all pain.
Also I tested this out with four Xen DomU’s, two running as VPN routers and two as clients on separate segments of the VPN. They were connected with three bridge devices. I’ll blog about how to set this up if there is interest.
Matt Bottrell has written about some issues related to the acceptable weight of laptops for school use [1].
Matt cites a reference from the Victorian government stating that a school bag should not be heavier than 10% of the body weight of the child who carries it [2]. So the next thing we need to do is to calculate what a student can carry without being at risk of spinal injuries.
Firstly we need to determine what children weigh. If we restrict this to the 14+ age range (the older children have more need of computers) then children are almost as heavy as they will be when they are 18, and can carry significantly heavier bags than those in the 10+ age range. Also it seems reasonable to consider the 25th percentile weight (the weight which is exceeded by 75% of children). Of course this means that 25% would be carrying overly heavy bags, but it does give us a bit more weight allowance.
The 25th percentile weight at age 14 for white girls in the US is 48Kg [3]. The 25th percentile weight at age 14 for white boys in the US is also 48Kg [4]. When considering the Australian situation it seems that white children in the US will be most similar (in terms of genetics and diet).
The 25th percentile weights at age 18 are 53Kg for girls and 64Kg for boys.
So the acceptable bag weights would be 4.8Kg for 14yos, 5.3Kg for 18yo girls, and 6.4Kg for 18yo boys.
The next step is to determine the weight carried to school. The weight of one of my laptop bags is almost 1Kg, and I think that a school-bag would have a similar weight.
When I was at school I recall that the worst days for carrying baggage were when I had PE (Physical Education – sports); the weight of PE gear was a significant portion of the overall weight I carried. I tried to estimate this by weighing a track-suit top, a t-shirt, and a pair of board-shorts (the only shorts I could find at short notice), and it came to almost 1Kg. While the board-shorts might weigh more than PE uniform shorts, I didn’t include the weight of track-suit pants. Assuming that a female PE uniform has the same weight as a male uniform is the least of my assumptions.
The weight of a good pair of sneakers (suitable for playing basketball and other school sports) that is in my size is just over 1Kg.
To get a rough estimate of the weight of lunch I put six slices of bread and a small apple in a lunch box and found that it weighed 500g. A real lunch would probably include four slices of bread but the other things would weigh at least as much as the extra two slices. If a drink was included then it could be more than 500g.
So the total is 3.5Kg for bare essentials (including PE gear) without including any books!
It seems that it would be impossible to stick to the government recommendations for school bag weight if a full set of PE gear is included. Probably the best thing that could be done would be to change the school uniform to allow wearing sneakers, which removes 1Kg from the overall bag weight.
So a bag with lunch and PE gear (minus sneakers) is about 2.5Kg, leaving 2.3Kg for books etc at age 14. As text books for 14yo children are considerably thinner than those for 18yo children, it seems that this might be achievable. Fitting a couple of text books and an EeePC into the 2.3Kg weight budget should be achievable. But fitting a full size laptop (which seems to start at about 1.8Kg) and some text books into a 2.3Kg budget will be a challenge – it might be possible to do that but wouldn’t leave any budget for the random junk that children tend to carry around.
For an 18yo girl, the weight budget (after the basics have been deducted) is 2.8Kg, and it seems likely that on occasion the weight of year-12 text books will exceed that. Therefore it seems that the only way of avoiding spinal injuries in year-12 girls would be to have text books stored in digital form on a light laptop such as an EeePC. Rather than avoiding the use of laptops because of weight (as some people suggest), laptops with electronic text books should be used to replace traditional text books! An EeePC weighing less than 1Kg will give a significant amount of extra capacity for any other things that need to be carried. If there is little other stuff to be carried then 75% of 18yo girls should be able to carry a full size laptop plus PE gear without risk of back injuries. If digital text books are used, and in any journey two text books (which according to my previous measurements can weigh as much as 1.6Kg [5]) can be replaced with an EeePC, then overall something like 600g is saved (depending on the configuration of the EeePC; if one battery was stored at home and another at school then it could save more than that).
It seems that a year 12 girl who has PE and three science subjects scheduled on the same day would be most likely to exceed the recommended weight in the current situation (even without having to carry a spare pair of shoes), and that carrying a laptop with digital text books would be the only way of avoiding back injury.
For an 18yo boy the weight budget is 3.9Kg after the basics have been deducted. So if they don’t carry other random stuff in their bag (I probably had 1Kg of random junk in my bag on occasion) then they could carry PE gear, a full sized laptop, AND a full set of text books.
It seems to me that there is no situation where children would be unable to have a laptop within the reasonable weight allowance if digital text books were used. The only way that the weight of a laptop could be a problem would be if it was carried in addition to all the text books.
One final point: it would be good if books from Project Gutenberg were used for studying English literature; that’s one easy way of reducing weight (and cost). Also it would be good if there was an option for a non-literature based English subject. Knowledge of English literature is of no value to most students. It would be better to teach students how to write while using topics that interest them. Maybe have blogging as an English project. ;)
http://youtube.com/watch?v=TEBpC0GLr6Y
At the above Youtube page there is a video from MSNBC where Keith Olbermann discusses Bush’s record. Before I watched that I thought that it was impossible for me to have a lower opinion of Bush, however Keith’s presentation achieved the seemingly impossible task of making me despise the cretin even more.
The first step is to copy /images/xen/vmlinuz and /images/xen/initrd.img from the Fedora (or RHEL or CentOS) DVD to somewhere convenient; I use /boot/OS/ (where OS is the name of the image) but other locations will do.
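Something like the following works, assuming the install DVD is mounted at /mnt/dvd and the image is to be named cent5 (both names are just examples):

mkdir -p /boot/cent5
cp /mnt/dvd/images/xen/vmlinuz /mnt/dvd/images/xen/initrd.img /boot/cent5/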
Now choose a suitable Ethernet MAC address for the interface (see my previous post on how I choose them [1]).
Create a temporary block device for the install. I use /dev/VG0/OS-install (where OS is replaced by the name of the distribution, “f8” or “cent5“), which is a logical volume in an LVM volume group named VG0. The device should be at least 2G in size for a basic Fedora install (512M for swap, 1G for files, and 512M free after the install). It is of course possible to use DOS partitions for the Xen block devices, but this would be unreasonably difficult to manage. An option for people who don’t like LVM would be to use files on an XFS filesystem (Ext3 performs poorly when creating and removing large files).
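Creating the install volume is a one-liner (a sketch assuming a volume group named VG0 and the cent5 example name; the OS and OS-swap volumes used after the install can be created the same way):

lvcreate -n cent5-install -L 2G VG0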
When configuring Xen on Debian systems I generally use /dev/hda type device names. The device name seems quite arbitrary and /dev/hda is a familiar name for hard drives that many people have been used to for 15+ years. But the Fedora install process doesn’t like it and I’m forced to use /dev/xvda etc.
I often install Fedora on a machine that only has 256M of RAM spare for the DomU. For recent versions of Fedora 256M of RAM is about the minimum for an install at the best of times, and a HTTP install takes even more because the root filesystem used for the install is copied via HTTP and stored in a RAM disk. It might be possible to use less RAM with a CD or DVD install or even a NFS install, but I couldn’t get CD/DVD installation working and I generally don’t give Xen DomU’s NFS access if I can avoid it. So I had to create a swap space (an attempt to do an install with 256M of RAM and no swap aborted when installing the kernel package). I expect that most serious use of Xen will have 256M of RAM or less for the DomU, part of the problem here is that Xen allocates RAM not virtual memory. VMWare allocates virtual memory so the total memory for virtual machines can be greater than physical RAM and thus this problem will be less common with VMWare.
I believe that the best way of configuring virtual machine images is to have the virtual machine manager (Xen in this case) provide block devices to the virtual machine and have the virtual machine implement no partitioning (no LVM or anything equivalent). The main reason is that DOS partition tables and LVM configuration on a block device used by Xen cannot easily be used in the host environment (the Dom0 for Xen). I am not aware of how to access DOS partition tables (although I’m sure it’s possible somehow), and while LVM can be used it’s a bad idea because there is no way to deactivate an LVM volume group that is active and because there is no support for having multiple volume groups of the same name. The lack of support for multiple volume groups of the same name is a reasonable limitation, but an insurmountable problem when using a virtual machine environment. It’s quite reasonable to create several cloned instances of a virtual machine, and renaming an LVM volume group would require more changes inside the virtual machine than you would want. Also using snap-shots of old versions of the virtual machine data is difficult if the same volume group name is used.
So for ease of management I want to have filesystems on block devices (such as /dev/xvda) instead of partitions (such as /dev/xvda1). Unfortunately Anaconda (the Fedora installer) doesn’t support this, so I had to do the initial install with DOS partitions and then fix it afterwards. Use the manual partitioning option and create a primary partition for the root filesystem, then create a non-primary partition for swap (when using small amounts of RAM such as 256M) so that swap can be used during the install. The root filesystem needs to be at the start of the disk to make it easier to sort this out later.
After installing Fedora and shutting the virtual machine down the next step is to copy the block device to the desired configuration (filesystem on an unpartitioned device). If the root filesystem is the first partition then the first 63 sectors will be the partition table and reserved space so dd can be used to copy the data with the following commands:
dd if=/dev/VG0/OS-install of=/dev/VG0/OS bs=512 skip=63
e2fsck -f /dev/VG0/OS
resize2fs /dev/VG0/OS
The next step is to mount the device /dev/VG0/OS in the Dom0 and change /etc/fstab; I use /dev/xvda for the root device and /dev/xvdb for swap.
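For example (the mount point is arbitrary, the names match the cent5 example used below, and the fstab lines are a sketch of what the edited file should end up containing):

mount /dev/VG0/cent5 /mnt/cent5
# edit /mnt/cent5/etc/fstab to contain something like:
#   /dev/xvda  /     ext3  defaults  1 1
#   /dev/xvdb  swap  swap  defaults  0 0
# the separate swap volume also needs to be initialised:
mkswap /dev/VG0/cent5-swap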
Now to remove the cruft:
Avahi is a network service discovery system, mainly used on laptops, and isn’t needed on a server; it is installed by default on all recent Fedora, RHEL and CentOS releases but it is not useful in a DomU (any unused network service is a security risk if you don’t disable or remove it). Smartmontools is for detecting impending failure of a hard drive and does not do any good when using a virtual block device (you run it on the Dom0). It might be considered a bug that smartd doesn’t exit on startup when it sees a device such as /dev/xvda. The pcsc-lite package for managing smart cards is of no use to me and all the other people who don’t own smart-card readers, and it can therefore be removed. Bluetooth networking support (in the package bluez-utils) is also only usable in a Dom0 (AFAIK), and the only bluetooth device I own is my mobile phone so I can’t use it on my computer. The command “yum remove avahi smartmontools pcsc-lite bluez-utils” removes them all.
For almost all of my DomU’s I don’t use NFS or do any printing, so I remove the packages related to them. Also autofs is in most cases only useful for servers when mounting NFS filesystems. I remove them with the command “yum remove nfs-utils portmap cups autofs“.
The GPM daemon (which supports cut/paste operations with a mouse on virtual consoles) is of no use on a Xen DomU, unfortunately the vim-enhanced package depends on it. I could just disable the daemon, but as I like to run small images I remove it with “yum remove gpm“. I may have to reinstall it on some images as some of my clients like the extra VIM functionality.
It’s unfortunate that debootstrap doesn’t work on CentOS (and presumably Fedora) so installing a Debian DomU on a CentOS/Fedora Dom0 requires creating an image on a Debian machine or downloading an image from www.jailtime.org .
Sample Xen Config for the install:
kernel = "/boot/OS/vmlinuz"
ramdisk = "/boot/OS/initrd.img"
memory = 256
name = "OS"
vif = [ 'mac=00:16:3e:66:66:68, bridge=xenbr0' ]
disk = [ 'phy:/dev/VG0/OS-install,xvda,w' ]
extra = "askmethod text"
Sample Xen Config for operation:
kernel = "/boot/cent5/vmlinuz-2.6.18-53.el5xen"
ramdisk = "/boot/cent5/initrd-2.6.18-53.el5xen.img"
memory = 256
name = "cent5"
vif = [ 'mac=00:16:3e:66:66:68, bridge=xenbr0' ]
disk = [ 'phy:/dev/VG0/cent5,xvda,w', 'phy:/dev/VG0/cent5-swap,xvdb,w' ]
root = "/dev/xvda ro"
The way Xen works is that the RAM used by a virtual machine is not swappable, so the only swapping that happens is to the swap device used by the virtual machine. I wondered whether I could improve swap performance by using a tmpfs for that swap space. The idea is that as only one out of several virtual machines might be using swap space, a tmpfs storage could cache the most recently used data and result in the virtual machine which is swapping heavily taking less time to complete the memory-hungry job.
I decided to test this on Debian/Etch (both DomU and Dom0).
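A tmpfs-backed swap device for a DomU can be set up something like the following (a sketch, not necessarily the exact configuration used for the tests below; the names and sizes are examples):

mount -t tmpfs -o size=200m tmpfs /mnt/swaptmp
dd if=/dev/zero of=/mnt/swaptmp/etch-swap bs=1M count=128
# in the DomU config:
#   disk = [ 'phy:/dev/VG0/etch,xvda,w', 'file:/mnt/swaptmp/etch-swap,xvdb,w' ]
# then run mkswap and swapon on /dev/xvdb inside the DomU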

Above is a graph of the time taken in seconds (on the Y axis) to complete the command “apt-get -y install psmisc openssh-client openssh-server linux-modules-2.6.18-5-xen-686 file less binutils strace ltrace bzip2 make m4 gcc g++“, while the X axis has the amount of RAM assigned to the DomU in megabytes.
The four graphs are for using a real disk (in this case an LVM logical volume) and for using tmpfs with different amounts of RAM backing it. The numbers 128000, 196000, and 256000 are the numbers of kilobytes of RAM assigned to the Dom0 (which manages the tmpfs). As you can see it’s only below about 20M of RAM that tmpfs provides a benefit. I don’t know why it didn’t provide a benefit with larger amounts of RAM; below 48M the amount of time taken started increasing exponentially and I expected that there was the potential for a significant performance boost.
After finding that the benefits for a single active DomU were not that great I did some tests with three DomU’s running the same APT command. With 16M of RAM and swap on the hard drive it took an average of 408 seconds, but with swap on the tmpfs it took an average of 373 seconds – an improvement of 8.5%. With 32M of RAM the times were 225 and 221 seconds – a 1.8% improvement.
Incidentally to make the DomU boot successfully with less than 30M of RAM I had to use “MODULES=dep” in /etc/initramfs-tools/initramfs.conf. To get it to boot with less than 14M I had to manually hack the initramfs to remove LVM support (I create my initramfs in the Dom0 so it gets drivers that aren’t needed in the DomU). I was unable to get a DomU with 12M of RAM to boot with any reasonable amount of effort (I expect that compiling the kernel without an initramfs would have worked but couldn’t be bothered).
Future tests will have to be on another machine as the machine used for these tests caught fire – this is really annoying; if someone suggests an extra test I can’t run it.
To plot the data I put the following in a file named “command” and then ran “gnuplot command“:
unset autoscale x
set autoscale xmax
unset autoscale y
set autoscale ymax
set xlabel “MB”
set ylabel “seconds”
plot “128000”
replot “196000”
replot “256000”
set terminal png
set output “xen-cache.png”
replot “disk”
In future when doing such tests I will use “time -p ” (for POSIX format) which means that it displays a count of seconds rather than minutes and seconds (and saves me from running sed and bc to fix things up).
I am idly considering writing a program to exercise virtual memory for the purpose of benchmarking swap on virtual machines.
I just wrote about the system administration issues related to the recent Debian SSL/SSH security flaw [1]. The next thing we need to consider is how we can change things to reduce the incidence of such problems.
The problem we just had was caused by the most important part of the entropy supply for the random number generator not being used, due to a mistake in commenting out some code. The only entropy that was used was the PID of the process which uses the SSL library code, which gives us 15 bits of entropy. It seems to me that if we had zero bits of entropy the problem would have been discovered a lot sooner (almost certainly before the code was released in a stable version). Therefore it seems that using a second-rate source of entropy (which was never required) masked the problem that the primary source of entropy was not working. Would it make sense to have a practice of not using such second-rate sources of entropy, to reduce the risk of such problems going undetected for any length of time? Is this a general issue or just a corner case?
Joss makes some suggestions for process improvements [2]. He suggests that having a single .diff.gz file (the traditional method for maintaining Debian packages) that directly contains all patches can obscure some patches. The other extreme is when you have a patch management system with several dozen small patches and the reader has to try to discover what each of them does. For an example of this see the 43 patches which are included in the Debian PAM package for Etch. Note that the PAM system is comprised of many separate shared objects (modules), so the patching system lends itself to having one patch per module; thus 43 patches for PAM aren’t as difficult to manage as 43 patches would be for a complex package which is not comprised of multiple separate components. That said I think that there is some potential for separating out patches. Having a distinction between different types of patches might help. For example we could have a patch for Makefiles etc (including autoconf etc), a patch for adding features, and a patch for fixing bugs. Then people reviewing the source for potential bugs could pay a lot of attention to bug fixes, a moderate amount of attention to new features, and casually skim the Makefile changes.
The problem began with this mailing list discussion [3]. Kurt’s first message starts with “When debbuging applications” and ends with “What do you people think about removing those 2 lines of code?“. The reply he received from Ulf (a member of the OpenSSL development team) is “If it helps with debugging, I’m in favor of removing them“. It seems to me that there might have been a miscommunication there; Ulf may have believed that the discussion only concerned a debugging build and not a build that would eventually end up on millions of machines.
It seems possible that the reaction would have been different if Kurt had mentioned that he wanted to have a single source tree for both debugging and for regular use. It also seems likely that his proposed change would have received more inspection if he had clearly stated that he was going to include it in Debian where it would be used by millions of people. When I am doing Debian development I generally don’t mention all the time “this code will be used by millions of people so it’s important that we get it right“, although I do sometimes make such statements if I feel that my questions are not getting the amount of consideration from upstream that a binary package destined for use by millions of people deserves. Maybe it would be a good practice to clarify such things in the case of important packages. For a package that is relevant to the security of the entire distribution (and possibly to other machines around the net – as demonstrated in this case) it doesn’t seem unreasonable to include a post-script mentioning the scope of the code use (it could be done with an alternate SIG if using a MUA that allows selecting from multiple SIGs in a convenient manner).
In the response from the OpenSSL upstream [4] it is claimed that the mailing list used was not the correct one. Branden points out that the openssl-team mailing list address seems not to be documented anywhere [5]. One thing to be learned from this is that distribution developers need to be proactive in making contact with upstream developers. You might think that building packages for a major distribution and asking questions about it on the mailing list would result in someone from the team noticing and mentioning any other things that you might need to do. But maybe it would make sense to send private mail to one of the core developers, introduce yourself, and ask for advice on the best way to manage communication to avoid this type of confusion.
I think that it is ideal for distribution developers to have the phone numbers of some of the upstream developers. If the upstream work is sponsored by the employer of one of the upstream developers then it seems reasonable to ask for their office phone number. Sometimes it’s easier to sort things out by phone than by email.
Gunnar Wolf describes how the way this bug was discovered and handled shows that the Debian processes work [6]. A similar bug in proprietary software would probably not be discovered nearly as quickly and would almost certainly not be fixed in such a responsible manner.
Update: According to the OpenSSL project about page [7], Ulf is actually not a “core” member, just a team member. I had used the term “core” in a slang manner based on the fact that Ulf has an official @openssl.org email address.
TED has a post about the design of the new OLPC [1].
I never liked the previous OLPCs [2]; for my use a machine needs a better keyboard than the tiny rubber thing that they had. I understand why they designed it that way: for use in places where it would be an expensive asset it is necessary to make it resistant to water and dust to the greatest possible degree. But in a first-world country where a computer is a cheap item, having a better interface makes sense. I’m sure that I could have plugged a USB keyboard into an OLPC, but having an integrated keyboard as well as an external keyboard is just kludgy.
The new design of the OLPC with two panels (at least one of which is a touch-screen – I hope that both are) is very innovative. With the keyboard displayed on a touch screen there are much greater possibilities for changing alphabet (teaching children to write in multiple languages is a good thing). Of course a touch-screen keyboard won’t allow proper touch typing, but it should still be possible to plug in a USB keyboard – with the additional benefit that it could use both panels to display data when an external keyboard is used.
One of the new uses of an OLPC machine is to play two player games with the players sitting opposite each other. The next logical extension to this idea is to have a multi-user OLPC machine so that two people can use it at the same time (a machine that can run a copy of a program comparable to the GIMP can surely run two programs to display electronic books or read email). A large part of the design of the OLPC is based around the needs of children in the developing world; in such cases one computer per home is likely to be common, so it would be good if two children could do homework at the same time (or if a parent could check email while a child studies).
Finally the new design is much better suited to reading documents, while they show a picture of a book being read in two panes (similar to the way that paper books are read) I think that a more common use will be to display book text in one pane while using the other pane for writing notes. That would mean that instead of having a full-sized (by OLPC standards) keyboard on the touch-screen they would have to use a small keyboard (maybe with a stylus) or an external keyboard. It is awkward and distracting to hold open a paper book with one hand while writing notes with the other, using one half of an OLPC to write notes while the other half displays an electronic book would be a significant advantage.
The potential utility of the new OLPC design for reading documents is significant enough to make me want to own one, and I expect that other people who have similar interests to me will have similar desires when they see the pictures. While the OLPC isn’t really designed for the use of people like me, it’s unnatural for me to use a computer without programming it – so I expect that the new hardware developments will encourage new developers to join the OLPC project.
It has recently been announced that Debian had a serious bug in the OpenSSL code [1]; the most visible effect of this is compromised SSH keys, but it can also affect VPN and HTTPS keys. Erich Schubert was one of the first people to point out the true horror of the problem: only 2^15 different keys can be created [2]. It should not be difficult for an attacker to generate 2^15 host keys to try all combinations for decrypting a login session. It should also be possible to make up to 2^15 attempts to log in to a server remotely if an attacker believes that an affected authorized key was being used – that would take less than an hour at a rate of 10 attempts per second (which is possible with modern net connections) and could be done in a day if the server was connected to the net by a modem.
John Goerzen has some insightful thoughts about the issue [3]. I recommend reading his post. One point he makes is that the person who made the mistake in question should not be lynched. One thing I think we should keep in mind is the fact that people tend to be more careful after they have made mistakes, I expect that anyone who makes a mistake in such a public way which impacts so many people will be very careful for a long time…
Steinar H. Gunderson analyses the maths in relation to DSA keys; it seems that if a DSA key is ever used with a bad RNG then it can be cracked by someone who sniffs the network [4]. It seems that it is safest to just not use DSA to avoid this risk. Another issue is that if a client supports multiple host key types (an OpenSSH server can have three different host keys: one for the ssh1 protocol, one for ssh2 with RSA, and one for ssh2 with DSA) then a man in the middle attack can be implemented by forcing a client to use a different key type – see Stealth’s article in Phrack for the details [5]. So it seems that we should remove support for anything other than SSHv2 with RSA keys.
To remove such support from the ssh server edit /etc/ssh/sshd_config and make sure it has a line with “Protocol 2“, and that the only HostKey line references an RSA key. To remove it from the ssh client (the important thing) edit /etc/ssh/ssh_config and make sure that it has something like the following:
Host *
Protocol 2
HostKeyAlgorithms ssh-rsa
ForwardX11 no
ForwardX11Trusted no
You can override this for different machines. So if you have a machine that uses DSA only then it would be easy to add a section:
Host strange-machine
Protocol 2
HostKeyAlgorithms ssh-dss
So making this the default configuration of the ssh client on all machines you manage has the potential to dramatically reduce the incidence of MITM attacks against the less knowledgeable users.
When skilled users who do not have root access need to change things they can always edit the file ~/.ssh/config (which has the same syntax as /etc/ssh/ssh_config) or they can use command-line options to override it. The command ssh -o “HostKeyAlgorithms ssh-dss” user@server will force the use of a DSA host key even if the configuration file requests RSA.
Enrico Zini describes how to use ssh-keygen to get the fingerprint of the host key [6]. One thing I have learned from comments on this post is how to get a fingerprint from a known hosts file. A common situation is that machine A has a known hosts file with an entry for machine B, and I want to get the right key onto machine C when there is no way of directly communicating between machine A and machine C (EG they are in different locations with no network access). In that situation the command “ssh-keygen -l -f ~/.ssh/known_hosts” can be used to display the fingerprints of all the hosts that you have connected to in the past; then it’s a simple matter of grepping the output.
Docunext has an interesting post about ways of mitigating such problems [7]. One thing that they suggest is using fail2ban to block IP addresses that appear to be trying to do brute-force attacks. It’s unfortunate that the version of fail2ban in Debian uses /tmp/fail2ban.sock for its Unix domain socket for talking to the server (the version in Unstable uses /var/run/fail2ban/fail2ban.sock). They also mention patching network drivers to add entropy to the kernel random number generator. One thing that seems interesting is the package randomsound (currently in Debian/Unstable) which takes ALSA sound input as a source of entropy; note that you don’t need to have any sound input device connected.
When considering fail2ban and similar things, it’s probably best to start by restricting the number of machines which can connect to your SSH server. Firstly if you put it on a non-default port then it’ll take some brute-force to find it. This will waste some of the attacker’s time and also make the less persistent attackers go elsewhere. One thing that I am considering is having a few unused ports configured such that any IP address which connects to them gets added to my NetFilter configuration – if you connect to such ports then you can’t connect to any other ports for a week (or until the list becomes too full). So if for example I had port N configured in such a manner and port N+100 used for ssh listening then it’s likely that someone who port-scans my server would be blocked before they even discovered the SSH server. Does anyone know of free software to do this?
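The trap-port idea can be approximated with the iptables “recent” module, something like this (a sketch; the trap port and the one week timeout are arbitrary examples):

# drop all traffic from addresses that have hit the trap port in the last week
iptables -A INPUT -m recent --name trap --rcheck --seconds 604800 -j DROP
# any connection attempt to the trap port (2222 here) adds the source address to the list
iptables -A INPUT -p tcp --dport 2222 -m recent --name trap --set -j DROP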
The next thing to consider is which IP addresses may connect. If you were to allow all the IP addresses from all the major ISPs in your country to connect to your server then it would still be a small fraction of the IP address space. Sure attackers could use machines that they already cracked in your country to launch their attacks, but they would have to guess that you had such a defense in place, and even so it would be an inconvenience for them. You don’t necessarily need to have a perfect defense, you only need to make the effort to reward ratio be worse for attacking you than for attacking someone else. Note that I am not advocating taking a minimalist approach to security, merely noting that even a small increment in the strength of your defenses can make a significant difference to the risk you face.
Update: based on comments I’m now considering knockd to open ports on demand. The upstream site for knockd is here [8], and some documentation on setting it up in Debian is here [9]. The concept of knockd is that you make connections to a series of ports which act as a password for changing the firewall rules. An attacker who doesn’t know those port numbers won’t be able to connect. Of course anyone who can sniff your network will discover the ports soon enough, but I guess you can always login and change the port numbers once knockd has let you in.
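A minimal knockd configuration looks something like the following (a sketch based on the knockd documentation; the port sequence and the iptables command are examples only):

[options]
	logfile = /var/log/knockd.log

[openSSH]
	sequence = 7000,8000,9000
	seq_timeout = 5
	tcpflags = syn
	command = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT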
Also thanks to Helmut for advice on ssh-keygen.
I believe that the Red Hat process which has Fedora for home users (with a rapid release cycle and new versions of software but support for only about one year) and Enterprise Linux (with a ~18 month release cycle, seven years of support, and not always having the latest versions) gives significant benefits for the users.
The longer freeze times of Enterprise Linux (AKA RHEL) mean that it often has older versions of software than a Fedora release occurring at about the same time. In practice the only time I ever notice users complaining about this is in terms of OpenOffice (which is always being updated for compatibility with the latest MS changes). As an aside, a version of RHEL or CentOS with a back-port of the latest OpenOffice would probably get a lot of interest.
RHEL also has a significantly smaller package set than Fedora; there is a lot of software out there that you wouldn’t want to support for seven years, a lot of software that you might want to support if you had more resources, and plenty of software that is not really of interest to enterprise customers (EG games).
Now there are some down-sides to the Red Hat plan. The way that they run Fedora is to have new releases of software instead of back-porting fixes. This means that bugs can be fixed with less effort (simply compiling a new version is a lot less effort than back-porting a fix), and that newer versions of the upstream code get tested. With some things this isn’t a problem, but in the past I have had problems with the Fedora kernel. One example was when I upgraded the kernel on a bunch of remote Fedora machines only to find that the new kernel didn’t support the network card, so I had to talk the users through selecting the older kernel at the GRUB menu (this caused pain and down-time). A problem with RHEL (which I see regularly on the CentOS machines I run) is that it doesn’t have the community support that Fedora does, and therefore finding binary packages for RHEL can be difficult – and often the packages are outdated.
I believe that in Debian we could provide benefits for some of our users by copying some ideas from Red Hat. There is currently some work in progress on releasing packages that are half-way between Etch and Lenny (Etch is the current release, Lenny will be the next one). The term Etch and a half refers to the work to make Etch run on newer hardware [1]. It’s a good project, but I don’t think that it goes far enough. It certainly won’t fulfill the requirements of people who want something like Fedora.
I think that if we had half-way releases of Debian (essentially taking a snap-shot of Testing and then fixing the worst of the bugs) then we could accommodate user demand for newer versions (making available a release which is on average half as old). Users who want really solid systems would run the full releases (which have more testing pre-release and more attention paid to bug fixes), but users who need the new features could run a half-way release. Currently there are people working on providing security support for Testing so that people who need the more recent versions of software can use Testing, I believe that making a half-way release would provide better benefits to most users while also possibly taking less resources from the developers. This would not preclude the current “Etch and a half” work of back-porting drivers, in the Red Hat model such driver back-ports are done in the first few years of RHEL support. If we were to really follow Red Hat in this regard the “Etch and a half” work would operate in tandem with similar work for Sarge (version 3.1 of Debian which was released in 2005)!
In summary, the Red Hat approach is to have Fedora releases aimed at every 6 months, but in practice coming out every 9 months or so and to have Enterprise Linux releases aimed at every year, but in practice coming out every 18 months. This means among other things that there can be some uncertainty as to the release order of future Fedora and RHEL releases.
I believe that a good option for Debian would be to have alternate “Enterprise” (for want of a better word) and half-way releases (comparable to RHEL and Fedora). The Enterprise releases could be frozen in coordination with Red Hat, Ubuntu, and other distributions (Mark Shuttleworth now refers to this as being a “pulse” in the free software community []), while the half-way releases would come out either when it’s about half-way between releases, or when there is a significant set of updates that would encourage users to switch.
One of the many benefits to having synchronised releases is that if the work in back-porting support for new hardware lagged in Debian then users would have a reasonable chance of taking the code from CentOS. If nothing else I think that making kernels from other distributions available for easy install is a good thing. There is a wide combination of kernel patches that may be selected by distribution maintainers, and sometimes choices have to be made between mutually exclusive options. If the Debian kernel doesn’t work best for a user then it would be good to provide them with a kernel compiled from the RHEL kernel source package and possibly other kernels.
Mark also makes the interesting suggestion of having different waves of code freeze, the first for the kernel, GCC, and glibc, and possibly server programs such as Apache. The second for major applications and desktop environments. The third for distributions. One implication of this is that not all distributions will follow the second wave. If a distribution follows the kernel, GCC, and glibc wave but not the applications wave it will still save some significant amounts of effort for the users. It will mean that the distributions in question will all have the same hardware support and kernel features, and that they will be able to run each others’ applications (except when the applications in question use system libraries from later waves). Also let’s not forget the possibility of running a kernel from distribution A on distribution B, it’s something I’ve done on many occasions, but it does rely on the kernels in question being reasonably similar in terms of features.
Yesterday I received two new machines from DOLA on-line auctions [1]. I decided to use the first to replace the hardware for my SE Linux Play Machine [2]. The previous machine I had used for that purpose was a white-box 1.1GHz Celeron and I replaced it with an 800MHz Pentium3 system (which uses only 35W when slightly active and only 28W when the hard disk spins down [3]).
The next step was to get the machine in question ready for its next purpose: I was planning to give it to a friend of a friend. A machine of those specs which was made by Compaq would be very useful to me, but when it’s a white-box I’ll just give it away. So I installed new RAM and a new hard drive in it (both of which had been used in another machine a few hours earlier and seemed to be OK) and turned it on. Nothing happened, and I was just checking that it was plugged in correctly when I noticed smoke coming from the PSU… It seems strange that the machine in question had run 24*7 for about 6 months and then suddenly started smoking after being moved to a different room and being turned off overnight.
It is possible that the hard drive was broken and shorted out the PSU (the power cables going to the hard drive are thick enough that a short-circuit could damage the PSU). What I might do in the future is keep an old and otherwise useless machine on hand for testing hard drives, so that if something like that happens then it won’t destroy a machine that is useful. Another possibility is that the dust in the PSU contained some metal fragments and that moving the machine to another room caused them to short something out, but there’s not much I can do about that when I get old machines. I might put an air filter in each room that I use for running computers 24*7 to stop such problems getting worse in future though.
I recently watched the TED lecture “5 dangerous things you should let your kids do” [4], so I’m going to offer the broken machine to some of my neighbors if they want to let their children take it apart.