Designing a Secure Linux System

The Threat

Bruce Schneier’s blog post about the Mariposa Botnet has an interesting discussion in the comments about how to make a secure system [1]. Note that the threat is considered to be remote attackers: viruses and trojan horses, which includes infected files run from USB devices (i.e. you aren’t safe just because you aren’t on the Internet). The threat we are considering does not include people who can replace hardware in the computer – people with physical access to it, which includes people who have access to where it is located or who are employed to repair it. This is the most common case: the risk involved in stealing a typical PC is far greater than whatever benefit might be obtained from the data on it, so a typical computer user is at risk of theft only for the resale value of a second-hand computer.

So the question is, how can we most effectively use free software to protect against such threats?

The first restriction is that the hardware in common use is cheap and has little special functionality for security. Systems that have a TPM seem unlikely to provide a useful benefit due to the TPM being designed more for Digital Restrictions Management than for protecting the user – and due to TPM not being widely enough used.

The BIOS and the Bootloader

It seems that the first thing that is needed is a BIOS that is reliable. If an attacker manages to replace the BIOS then it could do exciting things like modifying the code of the kernel at boot time. It seems quite plausible for the real-mode boot loader code to be run in a VM86 session and to then have its memory modified before it switches to protected mode. Every BIOS update is a potential attack. Coreboot replaces the default PC BIOS; it initialises the basic hardware and then executes an OS kernel or boot loader [2] (the Coreboot Wikipedia page has a good summary). The hardest part of the system startup process is initialising the hardware, and Coreboot has that solved for 213 different motherboards.

If engineers were allowed to freely design hardware without interference then probably a significant portion of the computers in the market would have a little switch to disable the write line for the flash BIOS. I heard a rumor that in the days of 286 systems a vendor of a secure OS shipped a scalpel to disable the hardware ability to leave protected mode; cutting a track on the motherboard is probably still an option. Usually once a system is working you don’t want to upgrade the BIOS.

One of the payloads for Coreboot is GRUB. The GRUB Feature Requests page has as its first entry “Option to check signatures of the bootchain up to the cryptsetup/luksOpen: MBR, grub partition, kernel, initramfs” [3]. Presumably this would allow a GPG signature to be checked so that a kernel and initrd would only be used if they came from a known good source. With this feature we could boot only a known good kernel.
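As an aside, recent versions of GRUB 2 did gain this kind of ability: files can carry detached GPG signatures and GRUB can refuse to load anything unverified. A minimal sketch of the configuration (the key file name and kernel paths here are illustrative, not from the feature request):

```
# grub.cfg fragment; each file GRUB loads must have a detached signature
# next to it, created with: gpg --detach-sign /boot/vmlinuz-3.2
set check_signatures=enforce
trust (memdisk)/boot.key   # public key embedded via grub-mkstandalone --pubkey
linux /boot/vmlinuz-3.2 root=/dev/sda2 ro
initrd /boot/initrd.img-3.2
```

With enforcement on, a tampered kernel or initrd simply fails to load rather than booting silently.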

How to run User Space

The next issue is how to run the user-space. There has been no shortage of Linux kernel exploits and I think it’s reasonable to assume that there will continue to be a large number of exploits. Some of the kernel flaws will be known by the bad guys for some time before there are patches, some of them will have patches which don’t get applied as quickly as desired. I think we have to assume that the Linux kernel will be compromised. Therefore the regular user applications can’t be run against a kernel that has direct hardware access.

It seems to me that the best way to go is to have the Linux kernel run in a virtual environment such as Xen or KVM. That means you have a hypervisor (Xen+Linux or Linux+KVM+QEMU) that controls the hardware and creates the environment for the OS image that the user interacts with. The hypervisor could create multiple virtual machines for different levels of data in a similar manner to the NSA NetTop project. That isn’t really a required part of solving the general secure Internet terminal problem, but as it would be a tiny bit of extra work you might as well do it.

One problem with using a hypervisor is that the video hardware tends to want to use features such as bus-mastering to give best performance. Apparently KVM has IOMMU support so it should be possible to grant a virtual machine enough hardware access to run 3D graphics at full speed without allowing it to break free.
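As a sketch of what such a grant looks like in practice with KVM and VFIO device assignment (the PCI address 01:00.0, memory size, and image name are illustrative, and the exact flags vary between QEMU versions):

```
# Hypervisor kernel command line: enable the IOMMU
#   intel_iommu=on          (amd_iommu=on for AMD systems)
# Then hand the GPU, and nothing else, to the guest:
qemu-system-x86_64 -enable-kvm -m 4096 \
    -device vfio-pci,host=01:00.0 \
    -drive file=desktop.img,format=qcow2
```

The IOMMU remaps the card’s DMA so that its bus-mastering can only touch the guest’s own memory.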

Maintaining the Virtual Machine Image

Google has a good design for the ChromiumOS in terms of security [4]. They are using CGroups [5] to control access to device nodes in jails, RAM, CPU time, and other resources. They also have some intrusion detection which can prompt a user to perform a hardware reset. Some of the features would need to be implemented in a different manner for a full desktop system but most of the Google design features would work well.
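To make the resource-control part concrete, here is a minimal Python sketch of the cgroup (v1) control files such limits map onto. The jail name “browser”, the limit values, and the device list are all illustrative, and actually writing these files requires root and a mounted cgroup filesystem:

```python
# Sketch: per-jail resource limits expressed as cgroup-v1 control files.
def cgroup_settings(jail, mem_bytes, cpu_shares, allowed_devices):
    base = "/sys/fs/cgroup"
    settings = {
        # Hard cap on RAM for the jail.
        f"{base}/memory/{jail}/memory.limit_in_bytes": str(mem_bytes),
        # Relative CPU weight against other jails.
        f"{base}/cpu/{jail}/cpu.shares": str(cpu_shares),
        # Deny all device nodes by default, then whitelist specific ones.
        f"{base}/devices/{jail}/devices.deny": "a",
        f"{base}/devices/{jail}/devices.allow": "\n".join(allowed_devices),
    }
    return settings

limits = cgroup_settings("browser", 512 * 1024 * 1024, 256,
                         ["c 1:3 rw",    # /dev/null
                          "c 1:9 r"])    # /dev/urandom
```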

For an OS running in a virtual machine when an intrusion is detected it would be best to have the hypervisor receive a message by some defined interface (maybe a line of text printed on the “console”) and then terminate and restart the virtual machine. Dumping the entire address space of the virtual machine would be a good idea too, with typical RAM sizes at around 4G for laptops and desktops and typical storage sizes at around 200G for laptops and 2T for new desktops it should be easy to store a few dumps in case they are needed.
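A hypervisor-side sketch of that interface in Python. The “INTRUSION:” console prefix and the action names are assumptions for illustration, not an existing protocol:

```python
# Sketch: the hypervisor watches the guest's console for an agreed marker
# line; on seeing it, it queues a memory dump (for forensics) and a restart.
INTRUSION_PREFIX = "INTRUSION:"

def handle_console_line(line, actions):
    """Append hypervisor actions triggered by one console line."""
    if line.startswith(INTRUSION_PREFIX):
        reason = line[len(INTRUSION_PREFIX):].strip()
        actions.append(("dump-memory", reason))   # keep evidence before reset
        actions.append(("restart-vm", reason))
    return actions

actions = []
for line in ["boot ok", "INTRUSION: rootkit signature in /sbin/init"]:
    handle_console_line(line, actions)
```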

The amount of data received by a typical ADSL link is not that great. Apart from the occasional big thing (like downloading a movie or listening to Internet radio for a long time) most data transfers are from casual web browsing which doesn’t involve that much data. A hypervisor could potentially store the last few gigabytes of data that were received which would then permit forensic analysis if the virtual machine was believed to be compromised. With cheap SATA disks in excess of 1TB it would be conceivable to store the last few years of data transfer (with downloaded movies excluded) – but such long-term storage would probably involve risks that would outweigh the rewards, probably storing no more than 24 hours of data would be best.
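The capture store could be as simple as a bounded ring of received chunks that evicts the oldest data once the budget is exceeded; a Python sketch (the tiny 8-byte budget in the example stands in for a real 24-hour allowance):

```python
# Sketch: a bounded capture log for received network data.
from collections import deque

class CaptureBuffer:
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.chunks = deque()
        self.size = 0

    def record(self, chunk):
        self.chunks.append(chunk)
        self.size += len(chunk)
        while self.size > self.max_bytes:        # evict oldest chunks first
            self.size -= len(self.chunks.popleft())

    def dump(self):
        """Everything still retained, oldest first, for forensic analysis."""
        return b"".join(self.chunks)

buf = CaptureBuffer(max_bytes=8)
buf.record(b"aaaa")
buf.record(b"bbbb")
buf.record(b"cc")    # total would be 10 bytes, so b"aaaa" is evicted
```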

Finally, in terms of applying updates and installing new software, the only way to do this would be via the hypervisor as you don’t want any part of the virtual machine to be able to write to its data files or programs. So if the user selects to install a new application then the request “please install application X” would have to be passed to the hypervisor. After the application is installed a reboot of the virtual machine would be needed to apply the change. This is a common experience for mobile phones (where you even have to reboot if the telco changes some of their network settings) and it’s something that MS-Windows users have become used to, but it would get a negative reaction from the more skilled Linux users.
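A Python sketch of that request path. The allowlist contents and the message formats are invented for illustration:

```python
# Sketch: the hypervisor validates a guest's install request against an
# allowlist and queues it; the (otherwise read-only) image is modified
# only at the next VM restart.
ALLOWED = {"gimp", "inkscape", "vlc"}   # hypothetical approved packages

def request_install(package, pending):
    """Handle 'please install application X' arriving from the guest."""
    if package not in ALLOWED:
        return "denied: %s is not an approved application" % package
    pending.append(package)             # applied by the hypervisor at reboot
    return "queued: %s will be installed at next reboot" % package

pending = []
print(request_install("vlc", pending))
print(request_install("netcat", pending))
```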

Would this be Accepted?

The question is, if we built this would people want to use it? The NetTop functionality of having two OSs interchangeable on the one desktop would attract some people. But most users don’t desire greater security and would find some reason to avoid this. They would claim that it lowered the performance (even for aspects of performance where benchmarks revealed no difference) and claim that they don’t need it.

At this time it seems that computer security isn’t regarded as a big enough problem for users. It seems that the same people who will avoid catching a train because one mugging made it to the TV news will happily keep using insecure computers in spite of the huge number of cases of fraud that are reported all the time.

10 comments to Designing a Secure Linux System

  • Andrew D.

    “Presumably this would allow a GPG signature to be checked so that a kernel and initrd would only be used if they came from a known good source. With this feature we could only boot a known good kernel.”

    Wouldn’t shipping a system with this signature verification turned on disable the user from building their own kernel from source and installing it? I would think that would run up against some serious GPL issues.

  • Robert

    I think you’re placing too much confidence in virtualization and/or hypervisors.

    What makes you think that once hypervisors become mainstream they will succeed where operating systems have ‘failed’ (i.e. in securely and reliably separating applications / classes of applications) ? Surely it would be better to make the operating system do the job it was originally designed for properly rather than just pushing the problem up the stack.

  • etbe

    Andrew: It wouldn’t necessarily stop the user, it would depend on the user’s settings. Even if it did that wouldn’t break the GPL, any more than just setting up a system with a password that the user doesn’t know.

Robert: Firstly, hypervisors have a simpler interface, which means fewer avenues of attack, and they have less code. So if they have the same code quality as the OS kernel then there will be fewer bugs and fewer ways of trying to exploit them. Next, you can’t attack the hypervisor without cracking the virtual OS kernel first. So gaining root on the real hardware of a system that runs on a virtual machine requires getting root on the VM, cracking the hypervisor application, and then (if the part of the hypervisor that was cracked was running without root privs) the attacker may need to crack root on the hypervisor kernel.

    Next gaining root can be done by exploiting a daemon which performs actions on behalf of the user. The virtual machine in such a design would have lots of applications installed to create a friendly environment. The hypervisor OS would have a minimum set of installed applications to reduce the risk of bugs.

    Finally it’s not inconceivable to run a different kernel for the hypervisor. Port KVM to OpenBSD and run your user-friendly Linux desktop environment in a virtual machine with the OpenBSD kernel being the final line of defense. No-one criticises the OpenBSD kernel code with regard to security issues!

    In summary I don’t think that hypervisors magically solve anything, but they do add another layer which makes the attack more difficult. The extra layer can be significantly stronger than the other layers.

  • From my experience, the organizations wanting high levels of Confidentiality, etc etc are the ones who would want this mousetrap. I have heard of various groups trying to do this, but they are usually hamstrung by being too small and not able to share out something they themselves invented (they can usually help with something in neutral territory versus another agency and such :)).

  • Oh I wonder how many security considerations about the OpenBSD kernel would go away with a virtual technology embedded in it. It will be interesting if they ever crack that nut.

  • etbe

    Stephen: Well NetTop was an attempt to do such things, it wasn’t that good. My design above could be considered to be the next generation of NetTop. As for Theo’s thoughts on this matter, I’ve already responded to them at the above URL.

  • Robert

    “Firstly hypervisors have a simpler interface which means less avenues of attack and they have less code.”

    True. But surely this is only because inter-partition communication is currently typically very ‘narrowband’. As the need for richer communication between applications/partitions grows (to the point that would be needed in a desktop environment), I bet the same problems will appear.

    “The extra layer can be significantly stronger than the other layers.”

    Ah, but the abstraction interface is also 100x more inappropriate. Having hosts pretending to their applications to be a rather backwards overextended PC design from the 1980s and pretending to have PCI devices and so on seems absurd to me. And I’m also not totally convinced with the idea of adding more layers of indirection to make a cracker’s life more difficult. It reminds me slightly of the reasoning behind security through obscurity.

    Anyway – I’m sure you have much more real world experience of these things than me – I just don’t see virtualization as a security feature, and the current trend to view it as such worries me.

  • Ary Kokos

2 years ago the French National Research Agency launched a similar project, the SEC&SI project.
The idea was to provide a more secure system for end users which runs on a standard computer, does not rely on specific hardware (like TPM), and is based on open source technologies.

SAFE-OS is similar to the solution you presented here.

There are 3 projects:
– OS^4 by EADS/Supelec: based on Debian + vserver and grsecurity

– SAFE-OS: based on Debian and Xen, with 1 VM per environment (1 for email, 1 for web, 1 for the online tax paying system) (sorry, it is in French but some figures are in English)

– SPACLik: Gentoo + SELinux

The project website is in French.

  • etbe

This article has appeared on Reddit. A Reddit comment makes a comparison with “Paranoid Linux”. It seems that the Paranoid Linux project is dead.

    The above URL discusses Paranoid Linux and it doesn’t seem to have much correlation with what I am proposing. My idea could be implemented by a small distribution for the hypervisor (maybe Debian) and another installation of Linux (it could be Debian or it could be something else like CentOS or Ubuntu LTS) for the virtual machine.

    As Kristin Shoemaker points out in the above URL it’s a lot of work to create a new OS and it’s better to add security features to an existing one. To the outside world my proposal might look like two different Linux systems behind a NAT device or it might look like a single Linux system with Squid caching the package updates.

  • Ary Kokos

It seems that Johanna R made a very interesting proposal for a secure OS:
Particular effort is put into protecting against some physical attacks, easier management, and a convenient user interface, among many other things.