Linux, politics, and other interesting things
Bruce Schneier’s blog post about the Mariposa Botnet has an interesting discussion in the comments about how to make a secure system. Note that the threat is considered to be remote attackers – that means viruses and trojan horses, which includes infected files run from USB devices (i.e. you aren’t safe just because you aren’t on the Internet). The threat we are considering is not people who can replace hardware in the computer (people who have physical access to it, which includes people who have access to where it is located or who are employed to repair it). This is the most common case: the risk involved in stealing a typical PC is far greater than whatever benefit might be obtained from the data on it – a typical computer user is at risk of theft only for the resale value of a second-hand computer.
So the question is, how can we most effectively use free software to protect against such threats?
The first restriction is that the hardware in common use is cheap and has little special functionality for security. Systems that have a TPM seem unlikely to provide a useful benefit, due to the TPM being designed more for Digital Restrictions Management than for protecting the user – and due to the TPM not being widely enough used.
It seems that the first thing that is needed is a BIOS that is reliable. If an attacker manages to replace the BIOS then it could do exciting things like modifying the code of the kernel at boot time. It seems quite plausible for the real-mode boot loader code to be run in a VM86 session and to then have its memory modified before it switches to protected mode. Every BIOS update is a potential attack. Coreboot replaces the default PC BIOS; it initialises the basic hardware and then executes an OS kernel or boot loader (the Coreboot Wikipedia page has a good summary). The hardest part of the system startup process is initialising the hardware, and Coreboot has that solved for 213 different motherboards.
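For the curious, the Coreboot build process is roughly as follows – a sketch only, as the details (and whether your board is supported at all) vary; the board selection happens in menuconfig and the flashing step assumes flashrom can write your flash chip from a running system:

```shell
# Sketch of building Coreboot with a payload and flashing it.
# All paths are illustrative; check the Coreboot wiki for your board.
git clone https://review.coreboot.org/coreboot.git
cd coreboot
make crossgcc-i386    # build the cross-compiler toolchain (takes a while)
make menuconfig       # select your mainboard and a payload (e.g. GRUB)
make                  # produces build/coreboot.rom
flashrom -p internal -w build/coreboot.rom   # write it to the flash chip
```

Note that the final flashrom step is exactly the kind of in-place flash write that a hardware write-protect switch would prevent, which is the trade-off discussed below.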
If engineers were allowed to freely design hardware without interference then probably a significant portion of the computers in the market would have a little switch to disable the write line for the flash BIOS. I heard a rumor that in the days of 286 systems a vendor of a secure OS shipped a scalpel to disable the hardware ability to leave protected mode; cutting a track on the motherboard is probably still an option. Usually once a system is working you don’t want to upgrade the BIOS.
One of the payloads for Coreboot is GRUB. The GRUB Feature Requests page has as its first entry “Option to check signatures of the bootchain up to the cryptsetup/luksOpen: MBR, grub partition, kernel, initramfs”. Presumably this would allow a GPG signature to be checked so that a kernel and initrd would only be used if they came from a known good source. With this feature we could boot only a known good kernel.
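A sketch of what such a signed boot chain might look like, assuming the feature is implemented with detached GPG signatures; the key ID and file names are illustrative, and the configuration syntax is a guess based on the feature request rather than anything GRUB currently ships:

```shell
# Sign the kernel and initrd with detached GPG signatures
# (boot@example.com is an illustrative signing key)
gpg --default-key boot@example.com --detach-sign /boot/vmlinuz-2.6.32
gpg --default-key boot@example.com --detach-sign /boot/initrd.img-2.6.32

# In grub.cfg: refuse to load any file lacking a valid signature
# (hypothetical setting, per the feature request)
# set check_signatures=enforce
```

The public key would need to live somewhere the attacker can’t modify – which is why this only helps once the BIOS/Coreboot stage below it is also trusted.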
The next issue is how to run the user-space. There has been no shortage of Linux kernel exploits and I think it’s reasonable to assume that there will continue to be a large number of exploits. Some of the kernel flaws will be known by the bad guys for some time before there are patches, and some of them will have patches which don’t get applied as quickly as desired. I think we have to assume that the Linux kernel will be compromised. Therefore the regular user applications can’t be run against a kernel that has direct hardware access.
It seems to me that the best way to go is to have the Linux kernel run in a virtual environment such as Xen or KVM. That means you have a hypervisor (Xen+Linux or Linux+KVM+QEMU) that controls the hardware and creates the environment for the OS image that the user interacts with. The hypervisor could create multiple virtual machines for different levels of data in a similar manner to the NSA NetTop project. This is not really a required part of solving the general secure Internet terminal problem, but as it would be only a tiny bit of extra work you might as well do it.
One problem with using a hypervisor is that the video hardware tends to want to use features such as bus-mastering to give best performance. Apparently KVM has IOMMU support so it should be possible to grant a virtual machine enough hardware access to run 3D graphics at full speed without allowing it to break free.
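For reference, here is a sketch of how a PCI video card might be handed to a KVM guest via the IOMMU; the PCI address and vendor/device IDs are illustrative, and this assumes VT-d (or the AMD equivalent) hardware plus the vfio-pci driver:

```shell
# Kernel command line needs the IOMMU enabled, e.g.:
#   intel_iommu=on        (amd_iommu=on for AMD)

# Unbind the card (illustrative PCI address) from its host driver
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind

# Tell vfio-pci to claim devices with this vendor:device ID (illustrative)
echo 10de 0640 > /sys/bus/pci/drivers/vfio-pci/new_id

# Give the device to the guest at full speed
qemu-system-x86_64 -enable-kvm -m 2048 \
    -device vfio-pci,host=01:00.0 \
    -drive file=guest.img,if=virtio
```

The IOMMU is what keeps this safe: the card can still bus-master, but only into the guest’s own memory, so a compromised guest can’t use the video hardware to scribble over the hypervisor.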
Google has a good design for ChromiumOS in terms of security. They are using cgroups to control access to device nodes in jails, RAM, CPU time, and other resources. They also have some intrusion detection which can prompt a user to perform a hardware reset. Some of the features would need to be implemented in a different manner for a full desktop system, but most of the Google design features would work well.
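As an illustration of the device-node restriction, here is a sketch using the cgroup device controller (the v1 interface; the jail name and the particular devices allowed are illustrative):

```shell
# Sketch: confine a process tree to a whitelist of device nodes
mount -t cgroup -o devices devices /sys/fs/cgroup/devices
mkdir /sys/fs/cgroup/devices/jail

# Deny everything first...
echo a > /sys/fs/cgroup/devices/jail/devices.deny
# ...then allow /dev/null (char 1:3) read-write
echo 'c 1:3 rw' > /sys/fs/cgroup/devices/jail/devices.allow
# ...and /dev/urandom (char 1:9) read-only
echo 'c 1:9 r' > /sys/fs/cgroup/devices/jail/devices.allow

# Put the current shell (and its future children) in the jail
echo $$ > /sys/fs/cgroup/devices/jail/tasks
```

Anything in the jail that tries to open other device nodes then gets EPERM, which is a useful second line of defence even if an application inside the jail is compromised.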
For an OS running in a virtual machine, when an intrusion is detected it would be best to have the hypervisor receive a message by some defined interface (maybe a line of text printed on the “console”) and then terminate and restart the virtual machine. Dumping the entire address space of the virtual machine would be a good idea too; with typical RAM sizes at around 4G for laptops and desktops, and typical storage sizes at around 200G for laptops and 2T for new desktops, it should be easy to store a few dumps in case they are needed.
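A minimal sketch of such a watcher on the host side, assuming the guest prints an agreed marker line on its serial console; the marker string, console log path, and the use of libvirt’s virsh for the dump/restart are all assumptions:

```shell
#!/bin/sh
# Marker the guest is assumed to print when its intrusion detection trips
# (the exact string is a made-up convention between guest and host).
MARKER="INTRUSION-DETECTED:"

# Return success if a console line signals an intrusion.
is_alert() {
    case "$1" in
        "$MARKER"*) return 0 ;;
        *)          return 1 ;;
    esac
}

# Dump the guest's RAM for later forensics, then cycle the VM.
handle_alert() {
    dom="$1"
    virsh dump "$dom" "/var/lib/dumps/$dom-$(date +%s).core"
    virsh destroy "$dom"
    virsh start "$dom"
}

# Follow the guest's console log (illustrative path) when given a domain name.
if [ $# -ge 1 ]; then
    tail -F "/var/log/libvirt/qemu/$1-console.log" | while read -r line; do
        is_alert "$line" && handle_alert "$1"
    done
fi
```

The key point is that the policy decision and the dump/restart both happen outside the compromised virtual machine, so the intruder can’t suppress them.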
The amount of data received by a typical ADSL link is not that great. Apart from the occasional big thing (like downloading a movie or listening to Internet radio for a long time) most data transfers are from casual web browsing which doesn’t involve that much data. A hypervisor could potentially store the last few gigabytes of data that were received, which would then permit forensic analysis if the virtual machine was believed to be compromised. With cheap SATA disks in excess of 1TB it would be conceivable to store the last few years of data transfer (with downloaded movies excluded) – but such long-term storage would probably involve risks that outweigh the rewards; storing no more than 24 hours of data would probably be best.
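One way to approximate this with standard tools is a rotating packet capture on the hypervisor; a sketch with tcpdump, where the interface and path are illustrative:

```shell
# Capture inbound traffic into hourly files; the %H in the file name cycles
# through 00-23, so files overwrite themselves after 24 hours and we always
# have roughly the last day of received data available for forensics.
tcpdump -i eth0 -G 3600 -w '/var/capture/inbound-%H.pcap' inbound
```

Run from the hypervisor side, this capture survives a compromise of the guest, which is what makes it useful as forensic evidence.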
Finally, in terms of applying updates and installing new software, the only way to do this would be via the hypervisor, as you don’t want any part of the virtual machine to be able to write to its data files or programs. So if the user selects to install a new application then the request “please install application X” would have to be passed to the hypervisor. After the application is installed a reboot of the virtual machine would be needed to apply the change. This is a common experience for mobile phones (where you even have to reboot if the telco changes some of their network settings) and it’s something that MS-Windows users have become used to – but it would get a negative reaction from the more skilled Linux users.
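Such a request channel could be built on a virtio-serial port; a rough sketch, where the channel name, request format, guest image path, and socket location are all made-up conventions:

```shell
# Guest side: ask the hypervisor to install a package over a named
# virtio-serial channel (channel name is an illustrative convention).
echo "install-package gimp" > /dev/virtio-ports/org.example.pkgreq

# Host side: read requests from the channel's unix socket via socat,
# install into the guest's (otherwise read-only) root image, then reboot it.
socat UNIX-CONNECT:/var/lib/libvirt/qemu/channel/pkgreq.sock - \
| while read -r cmd pkg; do
    case "$cmd" in
    install-package)
        chroot /srv/guest-root apt-get install -y "$pkg"
        virsh reboot guest
        ;;
    esac
done
```

In a real design the host would of course validate the package name and source rather than trusting the guest’s request – the point is only that the write happens outside the virtual machine.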
The question is, if we built this, would people want to use it? The NetTop functionality of having two OSs interchangeable on the one desktop would attract some people. But most users don’t desire greater security and would find some reason to avoid this. They would claim that it lowered the performance (even for aspects of performance where benchmarks revealed no difference) and claim that they don’t need it.
At this time it seems that computer security isn’t regarded as a big enough problem for users. It seems that the same people who will avoid catching a train because one mugging made it to the TV news will happily keep using insecure computers in spite of the huge number of cases of fraud that are reported all the time.