The Transition to Ext4

I’ve been investigating the Ext4 filesystem [1].

The main factor driving me to Ext4 at the moment is fsck times. I have some systems running Ext3 on large filesystems which I need to extend. In most cases Ext3 filesystems have large numbers of Inodes free, because the ratio of Inodes to filesystem size is fixed when the filesystem is created and enlarging the filesystem increases the number of Inodes proportionally; apart from a backup/format/restore there is no way of changing this. As fsck has to scan every Inode whether it’s in use or not, all those spare Inodes make fsck take longer than it should. Some of the filesystems I manage can’t be converted because the backup/restore time would involve an unreasonable amount of downtime.
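
For example (using a hypothetical LVM device name) it’s easy to see how many Inodes a filesystem has and how many are in use, but the ratio can only be chosen when the filesystem is created:

    # show used and free Inode counts for all mounted filesystems
    df -i
    # show the Inode parameters of an existing Ext3 filesystem
    tune2fs -l /dev/vg0/data | grep -i inode
    # the only way to get a different Inode count is at mkfs time,
    # e.g. one Inode per 64KB of space instead of the default:
    mkfs.ext3 -i 65536 /dev/vg0/data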

Page 11 of the OLS paper by Avantika Mathur et al [2] has a graph of the relationship between the number of Inodes and fsck time.

Ext4 also has a number of other features to improve performance, including changes to journaling and block allocation.

Now my most important systems are all virtualised. I am using Debian/Lenny and RHEL5 for the Dom0s. Red Hat might back-port Ext4 to the RHEL5 kernel, but there will probably never be a supported kernel for Debian/Lenny with Ext4 and Xen Dom0 support (there may never be a kernel for any Debian release with such support).

So this means that in a few months’ time I will be running some DomUs with filesystems that can’t be mounted in the Dom0. This isn’t a problem when everything works well. But when things go wrong it’s really convenient to be able to mount a filesystem in the Dom0 to fix things, and that option will disappear for some of my systems: if virtual machine A has a problem then I will have to mount its filesystems in virtual machine B to fix it. Of course this is a strong incentive to use multiple block devices for each virtual machine so that a small root filesystem can run Ext3 while the rest runs Ext4.
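
To illustrate what’s being lost, here is the sort of rescue operation that works with Ext3 today but won’t work for an Ext4 DomU when the Dom0 kernel lacks Ext4 support (the volume name is a made-up example):

    # in the Dom0, with the DomU shut down:
    mount /dev/vg0/domu-a-root /mnt
    # fix whatever is broken, e.g. edit /mnt/etc/fstab, then:
    umount /mnt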

At the moment only Debian/Unstable and Fedora have support for Ext4, so this isn’t a real issue. But Debian/Squeeze will be released with Ext4 support and I expect that RHEL6 will have it too. When those releases happen I will upgrade my virtual machines and will face these support issues.

It’s a pity that Red Hat never supported XFS; I could have solved some of these problems years ago if it had been available.

Now for non-virtual machines one factor to consider is that the legacy version of GRUB doesn’t support Ext4. I discovered this after I used tune2fs to convert all the filesystems on my EeePC to Ext4. I think I could have undone that tune2fs change, but instead I decided to upgrade to the new version of GRUB and copy the kernel and initramfs to a USB device in case it didn’t boot. It turns out that the new version of GRUB seems to work well for booting from Ext4.
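
For anyone doing the same, the conversion is along these lines, following the commonly documented procedure (the device name is an example):

    # enable the main Ext4 features on an existing Ext3 filesystem; existing
    # files keep the old block mapping format, only new files get extents
    tune2fs -O extents,uninit_bg,dir_index /dev/sda1
    # tune2fs now flags the filesystem as requiring a full fsck before
    # it can be mounted again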

One thing that is not explicitly mentioned in the howto is that the fsck pass needed to convert to Ext4 will not be done automatically by most distributions. So when I converted my EeePC I had to use sulogin to manually fsck the filesystems. This isn’t a problem with a laptop, but could be a problem with a co-located server system.
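
From the sulogin shell, finishing the conversion looks something like this (again the device name is an example):

    # the full check forced by the feature change; -f forces checking even
    # if the filesystem is marked clean, -y answers yes to all questions
    e2fsck -fy /dev/sda1
    # then make sure /etc/fstab says ext4 rather than ext3 for the
    # converted filesystems before rebooting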

For the long term BTRFS may be a better option; I plan to test it on /home on my EeePC. But I will give Ext4 some more testing first. In any case the Ext3 filesystems on big servers are not going to go away in a hurry.

7 comments to The Transition to Ext4

  • Anonymous

    Depending on what options you use when converting to ext4, you can still mount an ext4 filesystem as ext2, and use older GRUB with it as well. You’d still get many of the benefits of ext4 without breaking backward compatibility. Unfortunately, one of the big features you’d have to live without, uninit_bg, provides most of the benefit in fsck times.

  • I’ve migrated a bunch of vhosts on my server from Xen to KVM with libvirt. The main problem was that KVM requires you to have an image of a whole hard drive and not just of a partition, so I had to learn how to use kpartx, but otherwise it was much easier – all native kernels, fresh software, easier to see memory usage and manage the hosts remotely (virt-manager is awesome), no need to keep guest kernel images in the host, … And it is pretty easy to turn on virtualisation-aware hard drive and network access to get enough performance to not worry about it much.

  • James

    The RHEL 5.4 kernel has XFS support. Userspace tools aren’t in the standard RHEL yet though (I don’t know why).

    Several of the XFS developers now work for Red Hat as well. And xfs_repair is much faster than the ext4 equivalent in my experience.

  • Anonymous

    (Followup to my first comment: uninit_bg and flex_bg provide the fsck benefit, not just uninit_bg; both of those represent backward-incompatible features if you turn them on.)

  • etbe

    Anon: Yes, there is simply no way around it for me; the fsck benefit is the reason I need Ext4.

    Aigars: When you say that an image of a drive is needed, I presume you mean that the Xen trick of making several LVM volumes look like partitions on one larger disk isn’t supported. Even if I was using that Xen feature it wouldn’t be a big deal; I could turn it off.

    The RHEL5 Xen server I run is not going to be changed any time soon.

    I run a bunch of Debian i686 Xen servers that can’t run KVM, and one Debian Xen server that could potentially run it. But it seems best to run Xen on all my virtual servers rather than run Xen on all but one and KVM on the remaining one.

    James: That’s interesting; XFS support seems to have appeared in kernel version 2.6.18-138.el5. If they had done that earlier it would have saved me some pain.

    Anon: Do you have a reference to exactly what uninit_bg and flex_bg do?

  • Hmm… can one send magic keys to a virtualised machine? You’re gonna need that to sync your filesystem when something goes wrong. If Squeeze (mid next year?) comes with ext4 then I hope that they either have a very recent kernel or backport fixes.

    -c

  • Yes, where Xen pretends that /dev/data/production_root and /dev/data/production_swap are actually partitions 1 and 2 on a virtual hard drive, for KVM you need to have an actual drive (like a /dev/data/production LVM LV) with a partition table, partitions and a boot loader (see the sketch below).

    If your systems can boot on real hardware, they should be able to boot on KVM. I mean, even Windows boots on KVM. You don’t even really need those hardware virtualisation bits in the processor nowadays on the host. You will lose some speed if you don’t have the VT extension in the CPU or if you use an older guest kernel that does not understand virtio disk and network devices, but for most situations that is less of a problem and the benefits of simpler control are overwhelming (especially if migrating from some old bare metal hardware).
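
To make the difference concrete, here is a sketch (all volume names and guest device names are hypothetical): Xen can present individual LVM volumes to the guest as partitions of one virtual disk, while KVM wants a whole-disk image whose partitions can be reached from the host with kpartx.

    # Xen guest config: two LVM volumes presented to the guest as
    # partitions 1 and 2 of a single virtual disk
    disk = [ 'phy:/dev/data/production_root,hda1,w',
             'phy:/dev/data/production_swap,hda2,w' ]

    # KVM: one LVM volume holds a complete disk image (partition table,
    # boot loader, partitions); to reach those partitions from the host:
    kpartx -av /dev/data/production
    # this creates /dev/mapper/data-production1, data-production2, ...
    mount /dev/mapper/data-production1 /mnt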