
Continuously Usable Testing of SE Linux

Joey has proposed a new concept of “Continuously Usable Testing” for Debian [1]: basically, Testing should be usable at all times and packages that aren’t usable should be dropped. But to properly achieve this goal we need continual testing of usability.

The Plan For SE Linux

To do this for SE Linux I’m setting up a Xen server which will have a number of different DomUs for testing a variety of server applications. The system has 1.5G of RAM and 160G of mirrored storage. An image of a typical server will take about 4G of disk space, so we could have something like 40 images online and ready for testing. I have already set up Squid on another system on the same LAN to cache Debian packages, so running “apt-get dist-upgrade” on a number of DomUs won’t take that long. With 256M for a typical server image I could have 5 images running at the same time. If the hardware isn’t enough then I can expand it: I hope to get some donations of DDR-266 or DDR-333 RAM (or maybe DDR-400) to upgrade the system to 4G of RAM, I can add more hard drives, and I could even install more servers.

I want Testing to be very usable for SE Linux throughout the development cycle so that I don’t have to rush things before release.

At this stage I’m not sure whether to track Unstable or Testing for this. I guess it might be best to track Testing most of the time and only track Unstable for daemons that are changing rapidly. It might get boring testing every version that comes through Unstable, but if people want to do this then I won’t stop them.

Setting up the Tests

What I need are interested people who want to install server configurations for testing. If you have some favorite combination of daemons that you want tested for SE Linux support (even if it’s daemons that have no current policy) then I can give you root access to a DomU to develop test cases. Ideally there would be automated tests for most things, for example testing a mail server by using swaks to deliver mail and a POP or IMAP client script to retrieve it. But some things can’t be tested properly without human intervention.

For the automated tests I want to script the creation of DomUs, upgrading the packages in the DomU, testing it, and then shutting down the DomU if it all works. If at any time the tests fail (or the upgrade fails) then it would wait for human intervention. That would be me fixing SE Linux problems and other people fixing the application problems. I think that discovering SE Linux issues will only be a part of this project.
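The automated cycle could be sketched as a shell script along the following lines. The image names, config paths, the test script location, and the use of the “xm” toolstack are all assumptions for illustration:

```shell
#!/bin/bash
# Sketch of the DomU upgrade/test cycle; image names, paths and the
# "xm" toolstack are hypothetical.  DRYRUN defaults to on so the flow
# can be checked without a Xen host; unset it on the real server.
DRYRUN=${DRYRUN:-1}
run() { if [ -n "$DRYRUN" ]; then echo "$*"; else "$@"; fi; }

test_cycle() {
  local image
  for image in mail-test web-test; do
    run xm create "/etc/xen/$image.cfg" || return 1
    run ssh "root@$image" "apt-get update && apt-get -y dist-upgrade" || return 1
    run ssh "root@$image" "/usr/local/bin/run-tests" || return 1
    # only shut the DomU down if everything passed; on failure it is
    # left running for a human to log in and investigate
    run xm shutdown -w "$image"
  done
}

test_cycle
```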

For the manual tests I will grant access to create and destroy the DomUs in question to people who can run the tests.

I’m thinking of having a couple of DomUs running permanently for things which are test candidates but also useful for the project, such as a MediaWiki instance. It really depends on the interest of people who might use such things.

I’m also thinking of setting up some Ubuntu DomUs; I probably should join Ubuntu and get involved with SE Linux there.

Sharing the Images

I have a web server in Germany with almost unlimited bandwidth and storage. For every image that is created I want to upload a version to the server in Germany to allow anyone in the world to test it. There are lots of possible ways of using this for software development. For example if you had a patched version of Apache that you wanted to test then you could download every image that had Apache installed and test that they all work. That would be easier than configuring Apache in different ways and also possibly provide better coverage.

Also if someone can’t figure out how to configure a daemon correctly then downloading a Xen image of a working configuration could be helpful (if a little bandwidth intensive). Note that deploying such an image in production would be a really bad idea, among other things there are lots of places where passwords are stored and you wouldn’t want to risk missing one.

I also plan to share the scripts used in running the Dom0 and anything else that seems useful along the way.

What We Need

The main thing we need is volunteers to configure virtual machines with their favorite daemons. Note that I don’t plan to have only one daemon per DomU; if we can get multiple daemons running that don’t conflict (e.g. a file server and a mail server) or multiple daemons that can interact (e.g. a database server and a mail server, or anything else that can be a database client) running on the same system then that’s a good thing. So there will be some degree of interaction with other people.

I’m happy to accept contributions from people who aren’t interested in SE Linux. But SE Linux will run on all DomUs.

Finally I also need more RAM for an HP D530S, DU875PA (that’s a Celeron 2.4GHz). I’ll accept donations of complete systems too once my HP system gets full, preferably relatively low power systems as they will be housed in a location that’s not as well ventilated as I would like (cost and availability of IP addresses were the main criteria). A laptop with a broken screen would be great!

The system won’t go live until Monday, but I think that probably people won’t be ready to do much work with less than two days notice anyway.


Dynamic DNS

The Problem

My SE Linux Play Machine has been down for a couple of weeks. I’ve changed to a cheaper Internet access plan which also allows me to download a lot more data, but I don’t have a static IP address any more – and my ISP seems to change the IP a lot more often than I’ve experienced in the past (I’m used to having a non-static IP address not change for months rather than hours). So I needed to get Dynamic DNS working. Naturally I wasn’t going to use one of the free or commercial Dynamic DNS solutions; I prefer to do things myself. So my Play Machine had to remain offline until I fixed this.

The Solution

dyn    IN      NS      ns.sws.net.au.
        IN      NS      othello.dycom.com.au.
play    IN      CNAME  play.dyn.coker.com.au.

The first thing I did was to create a separate zone file. I put the above records in my main zone file to make play.coker.com.au a CNAME for play.dyn.coker.com.au, and to delegate dyn.coker.com.au as a dynamic zone. I have SE Linux denying BIND the ability to write to the primary zone file for my domain to make it slightly more difficult for an attacker to insert fake DNS records (they could of course change the memory state of BIND to make it serve bogus data). The dynamic zone file is stored where BIND can write it – and therefore a BIND exploit could easily replace it (but such an attack is out of the scope of the Play Machine project so don’t get any ideas).

Another reason for separating the dynamic data is that BIND journals changes to a dynamic zone, and therefore if you want to manually edit it you have to delete the journal, stop BIND, edit the file, and then restart BIND. One of the things that interests me is setting up dynamic DNS for some of my clients; as a constraint is that my clients must be able to edit the zone file themselves, I have to keep the editing process for the main zone file relatively simple.

dnssec-keygen -a hmac-md5 -b 128 -n host foo-dyn.key

For newer versions of BIND use the following command instead:

tsig-keygen -a hmac-sha512 foo-dyn

I used the above command to create the key files. It created Kfoo-dyn.key.+X+Y.key and Kfoo-dyn.key.+X+Y.private where X and Y are replacements for numbers that might be secret.

key "foo" { algorithm hmac-md5; secret "XXXXXXXX"; };
zone "dyn.coker.com.au" {
  type master;
  file "/var/cache/bind/dyn.coker.com.au";
  allow-update { key "foo"; };
  allow-transfer { key ns; };
};

I added the above to the BIND configuration to create the dynamic zone and allow it to be updated by this key. The value which I replaced with XXXXXXXX in this example came from Kfoo-dyn.key.+X+Y.key. I haven’t found any use for the .private file in this mode of operation. Please let me know if I missed something.

Finally I used the following shell script to take the IP address from the interface that is specified on the command-line and update the DNS with it. I chose a 120 second timeout because I will sometimes change IP address often and because the system doesn’t get enough hits for anyone to care about DNS caching.

#!/bin/bash
set -e
IP=$(ip addr list "$1"|sed -n -e "s/\/.*$//" -e "s/^.*inet //p")
nsupdate -y foo:XXXXXXXX << END
update delete play.dyn.coker.com.au A
update add play.dyn.coker.com.au 120 A $IP
send
END
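As a sanity check, the sed expression in the script can be run against canned output from “ip addr list”; the trailing space in “inet ” is what keeps inet6 lines from matching. The sample below is made up (the address is from the documentation range):

```shell
# Feed sample "ip addr list eth0" output through the same sed pipeline
# used in the script above; 203.0.113.5 is a documentation address.
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 203.0.113.5/24 brd 203.0.113.255 scope global eth0
    inet6 fe80::2/64 scope link'
IP=$(echo "$sample" | sed -n -e "s/\/.*$//" -e "s/^.*inet //p")
echo "$IP"   # prints 203.0.113.5
```

Note that an interface with more than one IPv4 address would produce more than one line of output.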

Update

It is supposed to be possible to use the -k option to nsupdate to specify a file containing the key. Joey’s comment gives some information on how to get it working (it sounds like it’s buggy).

rhesa pointed out another way of doing it, so I’ve now got a script like the following in production which solves the security issue (as long as the script is mode 0700) and avoids using other files.

#!/bin/bash
set -e
IP=$(ip addr list $1|sed -n -e "s/\/.*$//" -e "s/^.*inet //p")
nsupdate << END
key foo XXXXXXXX
update delete play.dyn.coker.com.au A
update add play.dyn.coker.com.au 120 A $IP
send
END

Update

Added a reference to the tsig-keygen command for newer versions of BIND.


Play Machine Online Again

My SE Linux Play Machine is online again. It had been offline for the last month and much of the month before due to Xen issues. Nothing really tricky to solve, but I was busy with other things. Sorry for any inconvenience.


My Squeeze SE Linux Repository

deb http://www.coker.com.au squeeze selinux

I have an Apt repository for Squeeze SE Linux packages at the above URL. Currently it contains a modified version of ffmpeg that doesn’t need execmod access on i386 and fixes the labeling of /dev/xen on systems that use devtmpfs as reported in bug #597403. I will keep updating this repository for any SE Linux related bugs that won’t get fixed in Squeeze.

Is there any interest in architectures other than i386 and AMD64?


Creating a SE Linux Chroot environment

Why use a Chroot environment?

A large part of the use of chroot environments is for the purpose of security; it used to be the only way of isolating a user from a section of the files on a server. In many of the cases where a chroot used to be used for security it is now common practice to use a virtual server. Another thing to note is that SE Linux provides greater access restrictions to most daemons than a chroot environment would, so in many cases using SE Linux with a sensible policy is a better option than using a chroot environment to restrict a daemon. So it seems to me that the security benefits that can be obtained by using a chroot environment have been dramatically decreased over the last 5+ years.

One significant benefit of a chroot environment is that of running multiple different versions of software on one system. If for example you have several daemons that won’t run correctly on the same distribution and if you don’t want to have separate virtual machines (either because you don’t run a virtualisation technology or because the resources/expense of having multiple virtual servers is unacceptable) then running multiple chroot environments is a reasonable option.

The Simplest Solution

The simplest case is when all the chroot environments are equally trusted, which means among many other things that they all have the latest security patches applied. Then you can run them all with the same labels, so every file in the chroot environment will have the same label as its counterpart in the real root – this will mean that for example a user from the real root could run /chroot/bin/passwd and possibly get results you don’t desire. But it’s generally regarded that the correct thing to do is to have a chroot environment on a filesystem that’s mounted nosuid, which will deal with most instances of such problems. One thing to note however is that the nosuid mount option also prevents SE Linux domain transitions, so it’s not such a good option when you use SE Linux as domain transitions are often used to reduce the privileges assigned to the process.

There are two programs for labeling files in SE Linux, restorecon is the most commonly used one but there is also setfiles which although being the same executable (restorecon is a symlink to setfiles) has some different command-line options. The following command on a default configuration of a Debian/Lenny system will label a chroot environment under /chroot with the same labels as the main environment:

setfiles -r /chroot /etc/selinux/default/contexts/files/file_contexts /chroot

I am considering adding an option to support chroot environments to restorecon, if I do that then I will probably back-port it to Lenny, but that won’t happen for a while.

For a simple chroot, once the filesystem is labelled it’s ready to go and you can start daemons in the chroot environment in the usual way.

Less trusted Chroot environments

A reasonably common case is where the chroot environment is not as trusted. One example is when you run an image of an old server in a chroot environment. A good way of dealing with this is to selectively label parts of the filesystem as required. The following shell code instructs semanage to add file context entries for a chroot environment that is used for the purpose of running Apache. Note that I have given specific labels to device nodes null and urandom and the socket file log in the /dev directory of the chroot environment (these are the only things that are really required under /dev), and I have also put in a rule to specify that no other files or devices under /dev should be labelled. If /dev is bind mounted to /chroot/dev then it’s important to not relabel all the devices to avoid messing up the real root environment – and it’s impractical to put in a specific rule for every possible device node. Note that the following is for a RHEL4 chroot environment; other distributions will vary a little in some of the file names.

semanage -i - << END
fcontext -a -t root_t -f -d /chroot
fcontext -a -t bin_t "/chroot/bin.*"
fcontext -a -t usr_t "/chroot/usr.*"
fcontext -a -t usr_t "/chroot/opt.*"
fcontext -a -f -d /chroot/dev
fcontext -a -f -s -t devlog_t /chroot/dev/log
fcontext -a -f -c -t null_device_t /chroot/dev/null
fcontext -a -f -c -t urandom_device_t /chroot/dev/urandom
fcontext -a -t "<<none>>" "/chroot/dev/.*"
fcontext -a -t "<<none>>" "/chroot/proc.*"
fcontext -a -t lib_t "/chroot/lib.*"
fcontext -a -t lib_t "/chroot/usr/lib.*"
fcontext -a -t bin_t "/chroot/usr/bin.*"
fcontext -a -t httpd_exec_t -f -- /chroot/usr/bin/httpd
fcontext -a -t var_t "/chroot/var.*"
fcontext -a -t var_lib_t "/chroot/var/lib.*"
fcontext -a -t httpd_var_lib_t "/chroot/var/lib/php.*"
fcontext -a -t var_log_t "/chroot/var/log.*"
fcontext -a -t var_log_t -f -- "/chroot/var/log/horde.log.*"
fcontext -a -t httpd_log_t "/chroot/var/log/httpd.*"
fcontext -a -t var_run_t "/chroot/var/run.*"
fcontext -a -t httpd_var_run_t -f -- /chroot/var/run/httpd.pid
fcontext -a -t httpd_sys_content_t "/chroot/var/www.*"
END

You could create a shell script to run the above commands multiple times for multiple separate Apache chroot environments.
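Such a script might look something like the following sketch; the helper repeats only a few of the rules from the listing above for brevity, and the chroot paths are hypothetical:

```shell
# Hypothetical helper to generate the fcontext rules for any chroot
# root directory; only a few of the rules from the listing above are
# repeated here for brevity.
gen_rules() {
  local root=$1
  cat << END
fcontext -a -t root_t -f -d $root
fcontext -a -t bin_t "$root/bin.*"
fcontext -a -t httpd_exec_t -f -- $root/usr/bin/httpd
fcontext -a -t httpd_sys_content_t "$root/var/www.*"
END
}

for c in /chroot-0 /chroot-1; do
  gen_rules "$c"    # on the real system: gen_rules "$c" | semanage -i -
done
```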

If there is a need to isolate the various Apache instances from each other (as opposed to just protecting the rest of the system from a rogue Apache process) then you could start each copy of Apache with a different MCS sensitivity label, which will provide adequate isolation for most purposes as long as no sensitivity label dominates the low level of any of the others. If you do that then the semanage commands require the -r option to specify the range. You could have one chroot environment under /chroot-0 with the sensitivity label of s0:c0 for its files and another under /chroot-1 with the sensitivity label of s0:c1 for its files. To start one environment you would use a command such as the following:

runcon -l s0:c0 setsid chroot /chroot-0 /usr/sbin/httpd


SE Linux status in Debian/Squeeze

ffmpeg

I’ve updated my SE Linux repository for Squeeze to include a modified version of the ffmpeg packages without MMX support for the i386 architecture. When MMX support is enabled it uses assembler code which requires text relocations (see Ulrich Drepper’s documentation for the explanation of this [1]). This makes it possible to run programs such as mplayer under SE Linux without granting excessive access – something which we really desire because mplayer will usually be dealing with untrusted data. My past tests with such changes to ffmpeg on my EeePC 701 resulted in no difference to my ability to watch movies from my collection; the ones that could be played without quality loss on a system with such a slow CPU could still be viewed correctly with the patched ffmpeg.

$ mplayer
mplayer: error while loading shared libraries: /usr/lib/i686/cmov/libswscale.so.0: cannot restore segment prot after reloc: Permission denied

The AMD64 architecture has no need for such patches, presumably due to having plenty of registers. I don’t know whether other architectures need such patches, they might – the symptom is having mplayer abort with an error such as the above when running in Enforcing Mode.

The below apt sources.list line can be used to add my SE Linux repository:

deb http://www.coker.com.au squeeze selinux

dpkg

In my repository for i386 and AMD64 architectures I have included a build of dpkg that fixes bug #587949. This bug causes some sym-links and directories to be given the wrong label by dpkg when a package is installed. Usually this doesn’t impact the operation of the system and I was unable to think of a situation where it could be a security hole, but it can deny access in situations where it should be granted. I would appreciate some help in getting the patch in a form that can be accepted by the main dpkg developers, the patch I sent in the bug report probably isn’t ideal even though it works quite well – someone who knows absolutely nothing about SE Linux but is a good C coder with some knowledge of dpkg could beat it into shape.

In my repository I don’t currently provide any support for architectures other than i386 and AMD64. I could be persuaded to do so if there is a demand. How many people are using Debian SE Linux on other architectures? Of course there’s nothing stopping someone from downloading the source from my AMD64 repository and building it for another architecture, I would be happy to refer people to an APT repository that someone established for the purpose of porting my SE Linux packages to another architecture.

Policy

selinux-policy-default version 20100524-2 is now in Testing. It’s got a lot of little fixes and among other things allows sepolgen-ifgen to work without error, which allows using the -R option of audit2allow (see my post about audit2allow and creating the policy for milters for details [2]).

I have uploaded selinux-policy-default version 20100524-3 to Unstable. It has a bunch of little fixes that are mostly related to desktop use. You can now run KDE4 on Unstable in enforcing mode, login via kdm and expect that everything will work – probably some things won’t work, but some of my desktop systems work well with it. I have to admit that not all of my desktop systems run my latest SE Linux code, I simply can’t have all my systems run unstable and risk outages.

Let me know if you find any problems with desktop use of the latest SE Linux code, it’s the focus of my current work. But if you find problems with chrome (from Google) or the Debian package chromium-browser then don’t report them to me. They each use their own version of ffmpeg in the shared object /usr/lib/chromium-browser/libffmpegsumo.so which has text relocations, and I don’t have time to rebuild chromium-browser without text relocations – I’ll make sure it does the right thing when they get it working with the standard ffmpeg libraries. That said, the text relocation problem doesn’t seem to be what impacts the use of Chromium: YouTube doesn’t work even when the browser is run in permissive mode.

GNOME is a lower priority than KDE for me at this time. But the only area where problems are likely to occur is with gdm and everything associated with logging in. Once your X session starts up GNOME and KDE look pretty similar in terms of access control. I would appreciate it if someone could test gdm and let me know how it goes. I’ll do it eventually if no-one else does, but I’ve got some other things to fix first.

SE Linux audit2allow -R and Milter policy

Since the earliest days there has been a command named audit2allow that takes audit messages of operations that SE Linux denied and produces policy that will permit those operations. A lesser known option for this program is the “-R” option to use the interfaces from the Reference Policy (the newer version of the policy that was introduced a few years ago). I have updated my SE Linux repository for Lenny [1] with new packages of policy and python-sepolgen that fix some bugs that stopped this from being usable.

To use the -R option you have to install the selinux-policy-dev package and then run the command sepolgen-ifgen to generate the list of interfaces (for Squeeze I will probably make the postinst script of selinux-policy-dev do this). Doing this on Lenny requires selinux-policy-default version 0.0.20080702-20 or better and doing this on Debian/Unstable now requires selinux-policy-default version 0.2.20100524-2 (which is now in Testing) or better.

Would it be useful if I maintained my own repository of SE Linux packages from Debian/Unstable that can be used with Debian/Testing? You can use preferences to get a few packages from Unstable with the majority from Testing, but that’s inconvenient and anyone who wants to test the latest SE Linux stuff would need to include all SE Linux related packages to avoid missing an important update. If I was to use my own repository I would only include packages that provide a significant difference and let the trivial changes migrate through Testing in the normal way.

The new Lenny policy includes a back-port of the new Milter policy from Unstable, this makes it a lot easier to write policy for milters. Here is an example of the basic policy for two milters, it allows the milters (with domains foo_milter_t and bar_milter_t) to start, to receive connections from mail servers, and to create PID files and Unix domain sockets.

policy_module(localmilter,1.0.0)

milter_template(foo)
files_pid_filetrans(foo_milter_t, foo_milter_data_t, { sock_file file })

milter_template(bar)
files_pid_filetrans(bar_milter_t, bar_milter_data_t, { sock_file file })
allow bar_milter_t self:process signull;
type bar_milter_tmp_t;
files_tmp_file(bar_milter_tmp_t)
files_tmp_filetrans(bar_milter_t, bar_milter_tmp_t, file)
manage_files_pattern(bar_milter_t, tmp_t, bar_milter_tmp_t)

After generating that policy I ran a test system in permissive mode and sent a test message. I ran audit2allow on the resulting AVC messages from /var/log/audit/audit.log and got the following output:

#============= bar_milter_t ==============
allow bar_milter_t bin_t:dir search;
allow bar_milter_t bin_t:file getattr;
allow bar_milter_t home_root_t:dir search;
allow bar_milter_t ld_so_cache_t:file { read getattr };
allow bar_milter_t lib_t:file execute;
allow bar_milter_t mysqld_port_t:tcp_socket name_connect;
allow bar_milter_t net_conf_t:file { read getattr ioctl };
allow bar_milter_t self:process signal;
allow bar_milter_t self:tcp_socket { read write create connect setopt };
allow bar_milter_t unlabeled_t:association { recvfrom sendto };
allow bar_milter_t unlabeled_t:packet { recv send };
allow bar_milter_t urandom_device_t:chr_file read;
allow bar_milter_t usr_t:file { read getattr ioctl };
allow bar_milter_t usr_t:lnk_file read;
#============= foo_milter_t ==============
allow foo_milter_t ld_so_cache_t:file { read getattr };
allow foo_milter_t lib_t:file execute;
allow foo_milter_t mysqld_port_t:tcp_socket name_connect;
allow foo_milter_t net_conf_t:file { read getattr };
allow foo_milter_t self:capability { setuid setgid };
allow foo_milter_t self:tcp_socket { write setopt shutdown read create connect };
allow foo_milter_t unlabeled_t:association { recvfrom sendto };
allow foo_milter_t unlabeled_t:packet { recv send };

Running the audit2allow command with the “-R” option gives the following output; it includes the require section that is needed for generating policy modules:

require {
type sshd_t;
type ld_so_cache_t;
type bar_milter_t;
type foo_milter_t;
class process signal;
class tcp_socket { setopt read create write connect shutdown };
class capability { setuid setgid };
class fd use;
class file { read getattr };
}
#============= bar_milter_t ==============
allow bar_milter_t ld_so_cache_t:file { read getattr };
allow bar_milter_t self:process signal;
allow bar_milter_t self:tcp_socket { read write create connect setopt };
corecmd_getattr_sbin_files(bar_milter_t)
corecmd_search_sbin(bar_milter_t)
corenet_sendrecv_unlabeled_packets(bar_milter_t)
corenet_tcp_connect_mysqld_port(bar_milter_t)
dev_read_urand(bar_milter_t)
files_read_usr_files(bar_milter_t)
files_read_usr_symlinks(bar_milter_t)
files_search_home(bar_milter_t)
kernel_sendrecv_unlabeled_association(bar_milter_t)
libs_exec_lib_files(bar_milter_t)
sysnet_read_config(bar_milter_t)
#============= foo_milter_t ==============
allow foo_milter_t ld_so_cache_t:file { read getattr };
allow foo_milter_t self:capability { setuid setgid };
allow foo_milter_t self:tcp_socket { write setopt shutdown read create connect };
corenet_sendrecv_unlabeled_packets(foo_milter_t)
corenet_tcp_connect_mysqld_port(foo_milter_t)
kernel_sendrecv_unlabeled_association(foo_milter_t)
libs_exec_lib_files(foo_milter_t)
sysnet_read_config(foo_milter_t)

To get this working I removed the require lines for foo_milter_t and bar_milter_t as it’s not permitted to both define a type and require it in the same module. Then I replaced the set of tcp_socket operations { write setopt shutdown read create connect } with create_socket_perms, as it’s easiest to allow all the operations in that set and doing so doesn’t create any security risk.

Finally I replaced the mysql lines such as corenet_tcp_connect_mysqld_port(foo_milter_t) with sections such as the following:

mysql_tcp_connect(foo_milter_t)
optional_policy(`
mysql_stream_connect(foo_milter_t)
')

This gives it all the access it needs and additionally the optional policy will allow Unix domain socket connections for the case where the mysqld is running on localhost.


Tracking down Write/Execute mmap() calls with LD_PRELOAD

One of the access controls in SE Linux is for execmem – which is used to stop processes from creating memory regions that are writable and executable (as they make it easier to compromise programs and get them to execute supplied code). When the SE Linux audit log tells you that a program is attempting such access it’s sometimes difficult to discover where in the code such an access occurs, for example if you have a large code base and mmap() is called in many places it can be difficult to determine which one is the culprit. Especially if you have a source package that contains multiple binaries that use a common shared library and you don’t know which bits of library code are called by each executable.

To solve this problem in the case of freshclam to provide extra information for Debian bug report #588599 [1] I wrote the following little shared object which can be compiled with “gcc -shared -g -fPIC mmap.c -o mmap.so” and used with “LD_PRELOAD=./mmap.so whatever“. Then when the program in question (or any non-SUID program it executes) calls mmap() with both PROT_EXEC and PROT_WRITE set the program will abort. If you run this through gdb then the program will break and you will get a back-trace of the function calls that led to the undesired mmap().

One thing to note is that this method only catches direct calls to a library function outside libc. When the libc code calls the library function (e.g. all the fwrite() etc code that calls mmap()) the LD_PRELOAD hack won’t catch it. Thanks to Keith Owens for pointing this out.

#include <dlfcn.h>
#include <stdio.h>
#include <sys/mman.h>
#include <stdlib.h>
#undef NDEBUG
#include <assert.h>

void *libc6 = NULL;

void *(*real_mmap)(void *, size_t, int, int, int, off_t);

void do_init()
{
  libc6 = dlopen("libc.so.6", RTLD_LAZY | RTLD_GLOBAL);
  if(!libc6)
  {
    printf("Aieee\n");
    exit(1);
  }
  real_mmap = (void * (*)(void *, size_t, int, int, int, off_t))dlsym(libc6, "mmap");
}

void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset)
{
  if(!real_mmap)
    do_init();
  assert(!(prot & PROT_EXEC) || !(prot & PROT_WRITE));
  return real_mmap(addr, length, prot, flags, fd, offset);
}
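As a self-contained demonstration of the technique (not the exact freshclam case), the following builds a condensed variant of the preload using RTLD_NEXT instead of dlopen(), together with a tiny program that requests a writable and executable mapping; the file names are made up:

```shell
# Build a condensed variant of the preload (using RTLD_NEXT) and a test
# program that asks for a PROT_WRITE|PROT_EXEC mapping, then confirm
# that the assert aborts it.  File names here are hypothetical.
cat > wxpreload.c << 'EOF'
#define _GNU_SOURCE
#include <dlfcn.h>
#include <sys/mman.h>
#undef NDEBUG
#include <assert.h>

void *mmap(void *addr, size_t len, int prot, int flags, int fd, off_t off)
{
  static void *(*real)(void *, size_t, int, int, int, off_t);
  if(!real)
    real = (void *(*)(void *, size_t, int, int, int, off_t))dlsym(RTLD_NEXT, "mmap");
  /* abort on any mapping that is both writable and executable */
  assert(!(prot & PROT_EXEC) || !(prot & PROT_WRITE));
  return real(addr, len, prot, flags, fd, off);
}
EOF
cat > wxmap.c << 'EOF'
#include <sys/mman.h>
int main(void)
{
  mmap(0, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  return 0;
}
EOF
gcc -shared -g -fPIC wxpreload.c -o wxpreload.so -ldl
gcc wxmap.c -o wxmap
LD_PRELOAD=./wxpreload.so ./wxmap || echo "W+X mmap() caught"
```

Running the same binary under gdb instead of the plain shell gives the back-trace at the point of the assertion failure.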


Play Machine Online Again with Xen 4.0

My SE Linux Play Machine [1] has been offline for almost a month (it went offline late May 30 and has just gone online again). It’s the sort of downtime that can happen when you use Debian/Unstable.

For a while I’ve been using a HP E-PC (a SFF desktop system with 256M of RAM and a P3-800 CPU) to run my SE Linux Play Machine. I run it under Xen to make it easier for me to watch what happens. I’ve had some problems with increased memory use in the Xen Dom0 in Squeeze [2]. The latest installment of the memory problems is when I discovered that I can’t run two copies of tcpdump (for tracing separate interfaces) at once on a Xen Dom0 that has ~110M of RAM – this seems unreasonable; I’m sure that back when a big server had 128M of RAM I could have done such things! So now I’m using a Thinkpad T20 with 512M of RAM for my new SE Linux Play Machine, it uses less power than most systems (probably even less than the HP E-PC) and is very quiet.

I was forced to install on a new system when I broke my GRUB configuration. GRUB-2 in Debian currently has no support for generating a configuration that will boot a Xen Dom0. You can manually edit the GRUB configuration to get this working, but if you get it wrong then you can make GRUB not even display a prompt and force a reinstall (as I did). As an aside it would be really handy if someone would create a CD or USB bootable image that does nothing but install GRUB. Such an image would ideally allow replacing the configuration of an existing GRUB, overwriting an existing GRUB installation (all files in /boot/grub get replaced), or formatting a spare partition (default swap space) and installing GRUB there.

My current solution to the GRUB problems is to use the old version of GRUB in the grub-legacy package. The old version of GRUB has always done everything I want so I don’t seem to be missing anything by not using the new version. I’m happy to refrain from using Ext4 for /boot and have no desire to have /boot on an LVM volume.

Most of the month of down-time for my Play Machine was caused by bugs in the SE Linux policy I’m developing for Squeeze; while they weren’t difficult bugs, I haven’t had much time to work on them consistently. I’m still running the Play Machine on Lenny, but the Dom0 is running Unstable.

New SE Linux Policy for Squeeze

I have just uploaded refpolicy version 0.2.20100524-1 to Unstable. This policy is not well tested (a SE Linux policy package ending in “-1” is not something that tends to work well for all people) and in particular lacks testing for Desktop environments. But for servers it should work reasonably well.

I expect to have a better version uploaded before this one gets out of Unstable.

Note that the selinux-policy-default package in this release lacks support for roles, it’s a targeted policy only. I plan to fix this soon.