Installing a Red Hat based DomU on a Debian Dom0

The first step is to copy /images/xen/vmlinuz and /images/xen/initrd.img from the Fedora (or RHEL or CentOS) DVD to somewhere convenient. I use /boot/OS/ (where OS is the name of the image), but other locations will do.
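Something like the following will do it, assuming the DVD (or a loopback-mounted ISO) is available at /mnt/cdrom; the mount point and the OS name are just placeholders:

mount /dev/cdrom /mnt/cdrom
mkdir -p /boot/OS
cp /mnt/cdrom/images/xen/vmlinuz /mnt/cdrom/images/xen/initrd.img /boot/OS/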

Now choose a suitable Ethernet MAC address for the interface (see my previous post on how I choose them [1]).

Create a temporary block device for the install. I use /dev/VG0/OS-install (where OS is replaced by the name of the distribution, "f8" or "cent5"), a logical volume in an LVM volume group named VG0. The device should be at least 2G in size for a basic Fedora install (512M for swap, 1G for files, and 512M free after the install). It is of course possible to use DOS partitions for the Xen block devices, but this would be unreasonably difficult to manage. An option for people who don't like LVM would be to use files on an XFS filesystem (Ext3 performs poorly when creating and removing large files).
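Creating the install LV, and the final LV that the installed system will later be copied to, might look like this (the names match the examples below and the sizes are just a guess based on the figures above):

lvcreate -L 2G -n OS-install VG0
lvcreate -L 2G -n OS VG0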

When configuring Xen on Debian systems I generally use /dev/hda type device names. The device name seems quite arbitrary and /dev/hda is a familiar name for hard drives that many people have used for 15+ years. But the Fedora install process doesn't like it, so I'm forced to use /dev/xvda etc.

I often install Fedora on a machine that only has 256M of RAM spare for the DomU. For recent versions of Fedora 256M of RAM is about the minimum for an install at the best of times, and an HTTP install takes even more because the root filesystem used for the install is copied via HTTP and stored in a RAM disk. It might be possible to use less RAM with a CD or DVD install or even an NFS install, but I couldn't get CD/DVD installation working and I generally don't give Xen DomUs NFS access if I can avoid it. So I had to create a swap space (an attempt to do an install with 256M of RAM and no swap aborted when installing the kernel package). I expect that most serious use of Xen will have 256M of RAM or less per DomU. Part of the problem here is that Xen allocates RAM, not virtual memory. VMware allocates virtual memory, so the total memory for virtual machines can be greater than physical RAM, and thus this problem will be less common with VMware.

I believe that the best way of configuring virtual machine images is to have the virtual machine manager (Xen in this case) provide block devices to the virtual machine and have the virtual machine implement no partitioning (no LVM or anything equivalent). The main reason is that DOS partition tables and LVM configuration on a block device used by Xen cannot easily be used in the host environment (the Dom0 for Xen). I am not aware of how to access DOS partition tables from the Dom0 (although I'm sure it's possible somehow), and while LVM can be used it's a bad idea because there is no way to deactivate an LVM volume group that is active, and because there is no support for having multiple volume groups of the same name. The lack of support for multiple volume groups of the same name is a reasonable limitation, but an insurmountable problem when using a virtual machine environment. It's quite reasonable to create several cloned instances of a virtual machine, and renaming an LVM volume group would require more changes inside the virtual machine than you would want. Also, using snapshots of old versions of the virtual machine data is difficult if the same volume group name is used.

So for ease of management I want to have filesystems on block devices (such as /dev/xvda) instead of partitions (such as /dev/xvda1). Unfortunately Anaconda (the Fedora installer) doesn't support this, so I had to do the initial install with DOS partitions and then fix it afterwards. Use the manual partitioning option and create a primary partition for the root filesystem, then a non-primary partition for swap (when using small amounts of RAM such as 256M) so that swap can be used during the install. The root filesystem needs to be at the start of the disk to make it easier to sort this out later.

After installing Fedora and shutting the virtual machine down, the next step is to copy the block device to the desired configuration (a filesystem on an unpartitioned device). If the root filesystem is the first partition then the first 63 sectors will be the partition table and reserved space, so dd can be used to copy the data with the following commands:

dd if=/dev/VG0/OS-install of=/dev/VG0/OS bs=512 skip=63
e2fsck -f /dev/VG0/OS
resize2fs /dev/VG0/OS

The next step is to mount the device /dev/VG0/OS in the Dom0 to change /etc/fstab. I use /dev/xvda for the root device and /dev/xvdb for swap.
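For example (the mount point is arbitrary, and ext3 is an assumption, it being the default filesystem for a Fedora/CentOS install):

mount /dev/VG0/OS /mnt
vi /mnt/etc/fstab
umount /mnt

The relevant fstab lines would end up something like:

/dev/xvda  /     ext3  defaults  1 1
/dev/xvdb  swap  swap  defaults  0 0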

Now to remove the cruft:
Avahi is a network service discovery system which is mainly used on laptops and isn't needed on a server. It is installed by default on all recent Fedora, RHEL, and CentOS releases but is not useful in a DomU (any unused network service is a security risk if you don't disable or remove it). Smartmontools is for detecting impending failure of a hard drive and does no good when using a virtual block device (you run it on the Dom0). It might be considered a bug that smartd doesn't exit on startup when it sees a device such as /dev/xvda. The pcsc-lite package for managing smart cards is of no use to me and all the other people who don't own smart-card readers, so it can be removed. Bluetooth networking support (in the package bluez-utils) is also only usable in a Dom0 (AFAIK), and the only Bluetooth device I own is my mobile phone so I can't use it on my computer. The command "yum remove avahi smartmontools pcsc-lite bluez-utils" removes them.

For almost all of my DomUs I don't use NFS or do any printing, so I remove the packages related to them. Also, autofs is in most cases only useful on servers for mounting NFS filesystems. I remove them with the command "yum remove nfs-utils portmap cups autofs".

The GPM daemon (which supports cut/paste operations with a mouse on virtual consoles) is of no use on a Xen DomU; unfortunately the vim-enhanced package depends on it. I could just disable the daemon, but as I like to run small images I remove it with "yum remove gpm". I may have to reinstall it on some images as some of my clients like the extra VIM functionality.

It's unfortunate that debootstrap doesn't work on CentOS (and presumably Fedora), so installing a Debian DomU on a CentOS/Fedora Dom0 requires creating an image on a Debian machine or downloading an image from www.jailtime.org.
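Building such an image on a Debian machine amounts to creating a filesystem and running debootstrap into it, something like the following sketch (the LV name, mount point, release, and mirror are all just placeholders):

lvcreate -L 2G -n debian VG0
mkfs.ext3 /dev/VG0/debian
mkdir -p /mnt/debian
mount /dev/VG0/debian /mnt/debian
debootstrap etch /mnt/debian http://ftp.debian.org/debian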

Sample Xen Config for the install:

kernel = "/boot/OS/vmlinuz"
ramdisk = "/boot/OS/initrd.img"
memory = 256
name = "OS"
vif = [ 'mac=00:16:3e:66:66:68, bridge=xenbr0' ]
disk = [ 'phy:/dev/VG0/OS-install,xvda,w' ]
extra = "askmethod text"
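With that saved as a config file the install can be started with something like the following (the file path is just an example; "-c" attaches the console so the text mode installer can be driven from the Dom0):

xm create -c /etc/xen/OS-install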

Sample Xen Config for operation:

kernel = "/boot/cent5/vmlinuz-2.6.18-53.el5xen"
ramdisk = "/boot/cent5/initrd-2.6.18-53.el5xen.img"
memory = 256
name = "cent5"
vif = [ 'mac=00:16:3e:66:66:68, bridge=xenbr0' ]
disk = [ 'phy:/dev/VG0/cent5,xvda,w', 'phy:/dev/VG0/cent5-swap,xvdb,w' ]
root = "/dev/xvda ro"
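The swap device referenced above has to be created and formatted in the Dom0 before the DomU is started; a minimal sketch (the 512M size is just the figure used for the install above):

lvcreate -L 512M -n cent5-swap VG0
mkswap /dev/VG0/cent5-swap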

Ideas to Copy from Red Hat

I believe that the Red Hat process, which has Fedora for home users (with a rapid release cycle and new versions of software but support for only about one year) and Enterprise Linux (with a ~18 month release cycle, seven years of support, and not always having the latest versions), gives significant benefits to its users.

The longer freeze times of Enterprise Linux (AKA RHEL) mean that it often has older versions of software than a Fedora release occurring at about the same time. In practice the only time I ever notice users complaining about this is in terms of OpenOffice (which is always being updated for compatibility with the latest MS changes). As an aside, a version of RHEL or CentOS with a back-port of the latest OpenOffice would probably get a lot of interest.

RHEL also has a significantly smaller package set than Fedora: there is a lot of software out there that you wouldn't want to support for seven years, a lot of software that you might want to support if you had more resources, and plenty of software that is not really of interest to enterprise customers (e.g. games).

Now there are some down-sides to the Red Hat plan. The way that they run Fedora is to have new releases of software instead of back-porting fixes. This means that bugs can be fixed with less effort (simply compiling a new version is a lot less effort than back-porting a fix) and that newer versions of the upstream code get tested, but it also means that regressions can slip in. With some things this isn't a problem, but in the past I have had problems with the Fedora kernel. One example was when I upgraded the kernel on a bunch of remote Fedora machines only to find that the new kernel didn't support the network card, so I had to talk the users through selecting the older kernel at the GRUB menu (this caused pain and down-time). A problem with RHEL (which I see regularly on the CentOS machines I run) is that it doesn't have the community support that Fedora does, and therefore finding binary packages for RHEL can be difficult, and often the packages are outdated.

I believe that in Debian we could provide benefits for some of our users by copying some ideas from Red Hat. There is currently some work in progress on releasing packages that are half-way between Etch and Lenny (Etch is the current release, Lenny will be the next one). The term Etch and a half refers to the work to make Etch run on newer hardware [1]. It’s a good project, but I don’t think that it goes far enough. It certainly won’t fulfill the requirements of people who want something like Fedora.

I think that if we had half-way releases of Debian (essentially taking a snapshot of Testing and then fixing the worst of the bugs) then we could accommodate user demand for newer versions (making available a release which is on average half as old). Users who want really solid systems would run the full releases (which have more testing pre-release and more attention paid to bug fixes), but users who need the new features could run a half-way release. Currently there are people working on providing security support for Testing so that people who need the more recent versions of software can use it; I believe that making a half-way release would provide better benefits to most users while possibly taking fewer resources from the developers. This would not preclude the current "Etch and a half" work of back-porting drivers; in the Red Hat model such driver back-ports are done in the first few years of RHEL support. If we were to really follow Red Hat in this regard the "Etch and a half" work would operate in tandem with similar work for Sarge (version 3.1 of Debian, which was released in 2005)!

In summary, the Red Hat approach is to have Fedora releases aimed at every 6 months but in practice coming out every 9 months or so, and Enterprise Linux releases aimed at every year but in practice coming out every 18 months. This means among other things that there can be some uncertainty as to the release order of future Fedora and RHEL releases.

I believe that a good option for Debian would be to have alternate "Enterprise" (for want of a better word) and half-way releases (comparable to RHEL and Fedora). The Enterprise releases could be frozen in coordination with Red Hat, Ubuntu, and other distributions (Mark Shuttleworth now refers to this as a "pulse" in the free software community []), while the half-way releases would come out either when it's about half-way between releases, or when there is a significant set of updates that would encourage users to switch.

One of the many benefits of having synchronised releases is that if the work in back-porting support for new hardware lagged in Debian then users would have a reasonable chance of taking the code from CentOS. If nothing else, I think that making kernels from other distributions available for easy install is a good thing. There is a wide variety of kernel patches that may be selected by distribution maintainers, and sometimes choices have to be made between mutually exclusive options. If the Debian kernel doesn't work best for a user then it would be good to provide them with a kernel compiled from the RHEL kernel source package, and possibly other kernels.

Mark also makes the interesting suggestion of having different waves of code freeze: the first for the kernel, GCC, and glibc, and possibly server programs such as Apache; the second for major applications and desktop environments; the third for distributions. One implication of this is that not all distributions will follow the second wave. If a distribution follows the kernel, GCC, and glibc wave but not the applications wave it will still save a significant amount of effort for the users. It will mean that the distributions in question will all have the same hardware support and kernel features, and that they will be able to run each other's applications (except when the applications in question use system libraries from later waves). Also, let's not forget the possibility of running a kernel from distribution A on distribution B; it's something I've done on many occasions, but it does rely on the kernels in question being reasonably similar in terms of features.