The Lord of the Fries

Today I bought a box of fries from The Lord of the Fries [1]. I bought it from their new stand at Flinders St station because I was going past and saw no queue. In the past I had considered buying from their store on Elizabeth St but the queues were too long.

The fries were nice – probably among the best fries that I’ve had from local fish and chip shops. Way better than any other fries that you can find in the center of Melbourne. The range of sauces is quite good if you like that sort of thing (I just like vinegar on mine). However the quantity of chips that you get for the same price at a local fish and chip shop is usually a lot greater.

Overall I was a bit disappointed. Sure, it’s nice to have someone hand-cut fresh potatoes and actually care about making a quality product. But when compared to the other options for relatively fast food in the CBD it didn’t seem that great to me. I’m never going to join a queue that has more than 20 people to buy them! But I probably will buy from them on occasion if they don’t have big queues.

It seems to me that the best thing that they have done is to create a strong commitment to food quality and document it on their web site. I hope that this will inspire other fast-food companies to do the same thing and result in an overall increase in the food quality.

On a related note Jamie Oliver has an IDEO project running with the aim of getting kids into fresh food [2].

Why Clusters Usually Don’t Work

It’s widely believed that you can solve reliability problems by just installing a cluster. It’s quite obvious that if instead of having one system of a particular type you have multiple systems of that type, and a cluster configured such that broken systems aren’t used, then reliability will increase. In the case of routine maintenance a cluster configuration can also allow one system to be maintained in a serious way (e.g. being rebooted for a kernel or BIOS upgrade) without interrupting service (apart from a very brief interruption that may be needed for resource failover). But there are some significant obstacles in the path of getting a good cluster going.

Buying Suitable Hardware

If you only have a single server that is doing something important and you have some budget for doing things properly then you really must do everything possible to keep it going. You need RAID storage with hot-swap disks, hot-swap redundant PSUs, and redundant ethernet cables bonded together. But if you have redundant servers then the requirement for making one server reliable is slightly reduced.

Hardware is getting cheaper all the time. A Dell R300 1RU server configured with redundant hot-plug PSUs, two 250G hot-plug SATA disks in a RAID-1 array, 2G of RAM, and a dual-core Xeon Pro E3113 3.0GHz CPU apparently costs just under $2,800AU (when using Google Chrome I couldn’t add some necessary jumper cables to the list so I couldn’t determine the exact price). So a cluster of two of them would cost about $5,600 just for the servers. But a Dell R200 1RU server with no redundant PSUs, a single 250G SATA disk, 2G of RAM, and a Core 2 Duo E7400 2.8GHz CPU costs only $1,048.99AU. So if a low-end server is required then you could buy two R200 servers that have no redundancy built in for less than the price of a single server that has hardware RAID and redundant PSUs. The two models have different sets of CPU options and probably other differences in their technical specs, but for many applications either will provide more than adequate performance.

Using a server that doesn’t even have RAID is a bad idea; a minimal RAID configuration is a software RAID-1 array, which only requires an extra disk per server. That takes the price of a Dell R200 to $1,203. So it seems that two low-end 1RU servers from Dell that have minimal redundancy features will be cheaper than a single 1RU server that has the full set of features. If you want to serve static content then that’s all you need, and a cluster can save you money on hardware! Of course we can debate whether any cluster node should be missing redundant hot-plug PSUs and disks, but that’s not an issue I want to address in this post.
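For reference, setting up such a software RAID-1 array on a Debian system is straightforward. The following is a minimal sketch; the device names and single-partition layout are assumptions for illustration, not a recommendation for any particular server:

# install the RAID tools
apt-get install mdadm
# create a two-disk mirror from the assumed partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# record the array so it is assembled at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# watch the initial resync
cat /proc/mdstat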

Serving static content is also the simplest form of cluster. If you have a cluster for running a database server then you will need a dual-attached RAID array, which will make things start to get expensive (or software for replicating the data over the network, which is difficult to configure and may be expensive). So while a trivial cluster may not cost any extra money, a real-world cluster deployment is likely to add significant expense.

My observation is that most people who implement clusters tend to have problems getting budget for decent hardware. When you have redundancy via the cluster you can tolerate slightly less expected uptime from the individual servers. While we can debate about whether a cluster member should have redundant PSUs and other expensive features, it does seem that using a cheap desktop system as a cluster node is a bad idea. Unfortunately some managers think that a cluster solves the reliability problem and that you can therefore just use recycled desktop systems as cluster nodes, which doesn’t give a good result.

Even if it is agreed that server-class hardware with features such as ECC RAM will be used for all nodes, you will still have problems if someone decides to use different hardware specs for each of the cluster nodes.

Testing a Cluster

Testing a non-clustered server, or a set of servers behind a load-balancing device, isn’t that difficult in concept. Sure you have lots of use cases and exception conditions to test, but they are all mostly straight-through tests. With a cluster you also need to test node failover at unexpected times. When a node is regarded as having an inconsistent state (which can mean that one service it runs could not be cleanly shut down when it was due to be migrated) it will need to be rebooted, an action known as STONITH (Shoot The Other Node In The Head). A STONITH event usually involves something like IPMI cutting the power or a command such as “reboot -nf“, which loses cached data and can cause serious problems for any application that doesn’t call fsync() as often as it should. It seems likely that the vast majority of sysadmins run programs which don’t call fsync() often enough, but the probability of losing data is low and the probability of losing data in a way that you will notice (i.e. it doesn’t get automatically regenerated) is even lower. The low probability of data loss due to race conditions, combined with the fact that a server with a UPS and redundant PSUs doesn’t unexpectedly halt very often, means that these problems don’t get found easily. But when clusters have problems and start triggering STONITH events the probability starts increasing.
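As an illustration of what a STONITH action typically looks like in practice, here is a hedged sketch using IPMI to reset a node’s power; the BMC hostname and credentials are invented for the example:

# hard power reset of a failed node via its baseboard management controller
# (hostname, user and password are made up for illustration)
ipmitool -I lanplus -H bmc-node2.example.com -U stonith -P secret chassis power reset
# the local software equivalent, which discards any cached data that hasn't been synced
reboot -nf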

Getting cluster software to work in a correct manner isn’t easy. I filed Debian bug #430958 about dpkg (the Debian package manager) not calling fsync() and thus having the potential to leave systems in an inconsistent or unusable state if a STONITH happened at the wrong time. I was inspired to find this problem after finding the same problem with RPM on a SUSE system. The result of applying a patch to call fsync() on every file was bug report #578635 about the performance of doing so; the eventual solution was to call sync() after each package is installed. Next time I do any cluster work on Debian I will have to test whether the sync() code seems to work as desired.
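A rough way to do that test (just a sketch of one possible approach, with a made-up package name) is to trace the sync-related system calls that dpkg makes while installing a package:

# log every sync(), fsync() and fdatasync() call made by dpkg and its children
strace -f -e trace=sync,fsync,fdatasync -o /tmp/dpkg-sync.log dpkg -i /tmp/example.deb
# count how many flushes actually happened
grep -c sync /tmp/dpkg-sync.log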

Getting software to work in a cluster requires that not only bugs in system software such as dpkg be fixed, but also bugs in 3rd party applications and in-house code. Please someone write a comment claiming that their favorite OS has no such bugs and the commercial and in-house software they use is also bug-free – I could do with a cheap laugh.

For the most expensive cluster I have ever installed (worth about 4,000,000 UK pounds – back when the pound was worth something) I was not allowed to power-cycle the servers. Apparently the servers were too valuable to be rebooted in that way, so if they did happen to have any defective hardware or buggy software that would do something undesirable after a power problem it would become apparent in production rather than being a basic warranty or patching issue before the system went live.

I have heard many people argue that if you install a reasonably common OS on a server from a reputable company and run reasonably common server software then the combination would have been tested before and therefore almost no testing is required. I think that some testing is always required (and I always seem to find some bugs when I do such tests), but I seem to be in a minority on this issue as less testing saves money – unless of course something breaks. It seems that the need for testing systems before going live is much greater for clusters, but most managers don’t allocate budget and other resources for this.

Finally there is the issue of testing issues related to custom code and the user experience. What is the correct thing to do with an interactive application when one of the cluster nodes goes down and how would you implement it at the back-end?

Running a Cluster

Systems don’t just sit there without changing; you have new versions of the OS and applications, and requirements for configuration changes. This means that the people who run the cluster should ideally have some specialised cluster skills. If you hire sysadmins without regard to cluster skills then you will probably end up not hiring anyone who has prior experience with the cluster configuration that you use. Learning to run a cluster is not like learning to run yet another typical Unix daemon; it requires some differences in the way things are done. All changes have to be strictly made to all nodes in the cluster; having a cluster fail over to a node that wasn’t upgraded and can’t understand the new data is not fun at all!

My observation is that when a team of sysadmins with no prior cluster experience is hired to run a cluster, the result usually involves “learning experiences” for everyone. It’s probably best to assume that every member of the team will break the cluster and cause down-time on at least one occasion! This can be alleviated by only having one or two people ever work on the cluster and having everyone else delegate cluster work to them. Of course if something goes wrong when the cluster experts aren’t available then the result is even more downtime than might otherwise be expected.

Hiring sysadmins who have prior experience running a cluster with the software that you use is going to be very difficult. It seems that any organisation that is planning a cluster deployment should plan a training program for sysadmins. Have a set of test machines suitable for running a cluster and have every new hire install the cluster software and get it all working correctly. It’s expensive to buy extra systems for such testing, but it’s much more expensive to have people who lack necessary skills try and run your most important servers!

The trend in recent years has been towards sysadmins not being system programmers. This may be a good thing in other areas but it seems that in the case of clustering it is very useful to have a degree of low level knowledge of the system that you can only gain by having some experience doing system coding in C.

It’s also a good idea to have a test network which has machines in an almost identical configuration to the production servers. Being able to deploy patches to test machines before applying them in production is a really good thing.

Conclusion

Running a cluster is something that you should either do properly or not at all. If you do it badly then the result can easily be less uptime than a single well-run system.

I am not suggesting that people avoid running clusters. You can take this post as a list of suggestions for what to avoid doing if you want a successful cluster deployment.

WordPress Maintainability

For a while I’ve been maintaining my own WordPress packages. I use quite a few plugins that weren’t included in Debian; some of them have unclear licenses so they can’t go in Debian, while the rest would have to go in Volatile at best because they update regularly and often have little or no information in the changelog to describe the reason for the update – so we have to assume there is a potential security issue and update reasonably quickly. As I’m maintaining plugin packages it seems most reasonable to keep maintaining my own packages of WordPress itself, which I started doing some time ago when the version in Debian became outdated.

Now WordPress isn’t a convenient package to maintain. It is designed for a user to upload to their web space via FTP or whatever; it’s not designed to be managed by a packaging system with the option of rolling back upgrades that don’t work, tracking dependencies, etc. One example of this is the fact that it comes with a couple of plugins included in the package, of which Akismet is widely used. The Akismet package is periodically updated asynchronously from updates to the WordPress package, with the apparent expectation that you can just FTP the files. Of course I have to build a new WordPress package whenever Akismet is changed.

Now there is a new default theme for WordPress called TwentyTen [1]. This theme ships with WordPress and again is updated asynchronously. Just over a week ago my blog started prompting me for an update to the theme even though I hadn’t consciously installed it – I have to update because I don’t know whether one of the other users on the same system has chosen it, and because having a message about an update being required is annoying.

The Themes update page has no option for visiting the web site for the theme and only offers to send it to my server via FTP or SFTP; of course I’m not going to give WordPress access to change its own PHP files (and thus allow a trojan to be installed). So I had to do some Google searching to find the download page for TwentyTen – which happens to not be in the first few results from a Google search (even though those pages look like they should have a link to it and thus waste the time of anyone who just wants to download it).

After downloading the theme I had to build a new WordPress package containing it – I could have split it out into a separate package and have the WordPress package depend on it, but I’ve got enough little WordPress packages already. It doesn’t seem worth-while to put too much effort into my private repository of WordPress packages that possibly aren’t used by anyone other than me.

Plugins aren’t as bad: the list of plugins gives you a link to the main web page for each plugin, which allows you to download it.

I wonder what portion of the WordPress user-base installs via FTP to a server that they don’t understand and what portion of them use servers that are maintained properly with a packaging system; my guess is that with the possible exception of WordPress.com most bloggers are running on packaged code. It seems to me that optimising for Debian and CentOS is the smart thing to do for anyone who is developing a web service nowadays. That includes files managed by the packaging system, an option to downgrade (as well as upgrade) the database format (which changes with almost every release), and an option for upgrading the database from the command-line (so it can be done once for dozens or hundreds of users).

deb http://www.coker.com.au lenny wordpress

I have a repository of WordPress packages that anyone can use with the above APT sources.list line. There is no reason why they shouldn’t work with Testing or Unstable (the packaging process mostly involves copying PHP files to the correct locations) but I only test them on Lenny.
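For anyone who wants to try it, the following is a minimal sketch of how the repository might be used on a Lenny system; the package name wordpress is an assumption based on standard Debian naming:

# add the repository and install the packaged WordPress
echo "deb http://www.coker.com.au lenny wordpress" >> /etc/apt/sources.list
apt-get update
apt-get install wordpress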

Pre-Meeting Lightning Talks

This evening I arrived at the LUV [1] meeting half an hour before it started. I was one of about a dozen people sitting in the room waiting, some of us had laptops and were reading email but others just sat quietly – the venue is sometimes open as much as an hour before the event starts and in bad weather some people arrive early because it’s more comfortable than anywhere else that they might hang out.

So I went to the front and suggested that instead of just doing nothing we get some short talks about random Linux things to fill the time. This seems to be a good opportunity for people to practice their public speaking skills, share things that interest them with a small and friendly audience, and keep everyone else entertained.

With some prompting a few members of the audience got up and spoke about Linux things that they were doing or had recently read about. They were all interesting and I learned a few things. I considered giving a talk myself (my plan B was to just speak for 15 minutes about random Linux stuff I’m doing) but decided that it would be best if I just encouraged other people to give talks.

I have suggested to the committee that we plan to do this in future and maybe have a mention of it on the web site to encourage people who are interested in such things (either speaking or listening) to attend early enough.

I think that this concept has been demonstrated to work and should also work well in most other user group meetings of a suitable size. At LUV we typically have about 60 people attend the main meeting and maybe a dozen arrive really early, so people who would be nervous about speaking to an audience of 60 may feel more comfortable. For a significantly larger group (where you have maybe 300 people attend the main meeting and 60 arrive early) the dynamic would be quite different: instead of having more nervous people give talks you might find that a membership of 300 gives a significant number of people who have enough confidence to give an impromptu short lecture to an audience of 60.

As an aside the Connected Community Hackerspace [2] is having a meeting tonight to decide what to do about an office in a central Melbourne area. One of the many things that a Hackerspace can be used for is a meeting venue for lightning talks etc.

Yubikeys Have Arrived

[Image: lots of Yubikeys]

In my previous post about the Yubikey I suggested that computer users’ groups should arrange bulk purchases to get the best prices [1]. I ran such a buying club for Linux users in Australia as well as members of SAGE-AU [2].

The keys have arrived and I now have to start posting them out. Above is a picture of two boxes that each contain 100 keys. Presumably if you buy a smaller number of keys then you get fancier packaging.

Thanks to Yubico for giving us a greater discount than the usual discount rate for boxes of 100 keys!

Creating a SE Linux Chroot environment

Why use a Chroot environment?

A large part of the use of chroot environments is for security; a chroot used to be the only way of isolating a user from a section of the files on a server. In many of the cases where a chroot used to be used for security it is now common practice to use a virtual server. Another thing to note is that SE Linux provides greater access restrictions to most daemons than a chroot environment would, so in many cases using SE Linux with a sensible policy is a better option than using a chroot environment to restrict a daemon. So it seems to me that the security benefits that can be obtained by using a chroot environment have dramatically decreased over the last 5+ years.

One significant benefit of a chroot environment is that of running multiple different versions of software on one system. If for example you have several daemons that won’t run correctly on the same distribution and if you don’t want to have separate virtual machines (either because you don’t run a virtualisation technology or because the resources/expense of having multiple virtual servers is unacceptable) then running multiple chroot environments is a reasonable option.

The Simplest Solution

The simplest case is when all the chroot environments are equally trusted, which among many other things means that they all have the latest security patches applied. Then you can run them all with the same labels, so every file in the chroot environment will have the same label as its counterpart in the real root – this means that, for example, a user from the real root could run /chroot/bin/passwd and possibly get results you don’t desire. It’s generally regarded that the correct thing to do is to have a chroot environment on a filesystem that’s mounted nosuid, which will deal with most instances of such problems. One thing to note however is that the nosuid mount option also prevents SE Linux domain transitions, so it’s not such a good option when you use SE Linux, as domain transitions are often used to reduce the privileges assigned to a process.
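To illustrate the nosuid option, here is a hedged sketch of the two common ways of getting it for a chroot; the device name and mount points are assumptions for the example:

# example fstab entry for a dedicated chroot filesystem (device name assumed)
# /dev/vg0/chroot  /chroot  ext3  defaults,nosuid  0  2

# or, for a bind mount, apply nosuid with a remount (a plain bind mount
# ignores most options until it is remounted)
mount --bind /srv/chroot /chroot
mount -o remount,bind,nosuid /chroot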

There are two programs for labeling files in SE Linux, restorecon is the most commonly used one but there is also setfiles which although being the same executable (restorecon is a symlink to setfiles) has some different command-line options. The following command on a default configuration of a Debian/Lenny system will label a chroot environment under /chroot with the same labels as the main environment:

setfiles -r /chroot /etc/selinux/default/contexts/files/file_contexts /chroot

I am considering adding an option to support chroot environments to restorecon, if I do that then I will probably back-port it to Lenny, but that won’t happen for a while.

For a simple chroot, once the filesystem is labelled it’s ready to go; then you can start daemons in the chroot environment in the usual way.

Less trusted Chroot environments

A reasonably common case is where the chroot environment is not as trusted. One example is when you run an image of an old server in a chroot environment. A good way of dealing with this is to selectively label parts of the filesystem as required. The following shell code instructs semanage to add file contexts entries for a chroot environment that is used for the purpose of running Apache. Note that I have given specific labels to device nodes null and urandom and the socket file log in the /dev directory of the chroot environment (these are the only things that are really required under /dev), and I have also put in a rule to specify that no other files or devices under /dev should be labelled. If /dev is bind mounted to /chroot/dev then it’s important to not relabel all the devices to avoid messing up the real root environment – and it’s impractical to put in a specific rule for every possible device node. Note that the following is for a RHEL4 chroot environment; other distributions will vary a little in some of the file names.

semanage -i - << END
fcontext -a -t root_t -f -d /chroot
fcontext -a -t bin_t "/chroot/bin.*"
fcontext -a -t usr_t "/chroot/usr.*"
fcontext -a -t usr_t "/chroot/opt.*"
fcontext -a -f -d /chroot/dev
fcontext -a -f -s -t devlog_t /chroot/dev/log
fcontext -a -f -c -t null_device_t /chroot/dev/null
fcontext -a -f -c -t urandom_device_t /chroot/dev/urandom
fcontext -a -t "<<none>>" "/chroot/dev/.*"
fcontext -a -t "<<none>>" "/chroot/proc.*"
fcontext -a -t lib_t "/chroot/lib.*"
fcontext -a -t lib_t "/chroot/usr/lib.*"
fcontext -a -t bin_t "/chroot/usr/bin.*"
fcontext -a -t httpd_exec_t -f -- /chroot/usr/bin/httpd
fcontext -a -t var_t "/chroot/var.*"
fcontext -a -t var_lib_t "/chroot/var/lib.*"
fcontext -a -t httpd_var_lib_t "/chroot/var/lib/php.*"
fcontext -a -t var_log_t "/chroot/var/log.*"
fcontext -a -t var_log_t -f -- "/chroot/var/log/horde.log.*"
fcontext -a -t httpd_log_t "/chroot/var/log/httpd.*"
fcontext -a -t var_run_t "/chroot/var/run.*"
fcontext -a -t httpd_var_run_t -f -- /chroot/var/run/httpd.pid
fcontext -a -t httpd_sys_content_t "/chroot/var/www.*"
END

You could create a shell script to run the above commands multiple times for multiple separate Apache chroot environments.

If there is a need to isolate the various Apache instances from each other (as opposed to just protecting the rest of the system from a rogue Apache process) then you could start each copy of Apache with a different MCS sensitivity label, which will provide adequate isolation for most purposes as long as no sensitivity label dominates the low level of any of the others. If you do that then the semanage commands require the -r option to specify the range. You could have one chroot environment under /chroot-0 with the sensitivity label of s0:c0 for its files and another under /chroot-1 with the sensitivity label of s0:c1 for its files. To start one environment you would use a command such as the following:

runcon -l s0:c0 setsid chroot /chroot-0 /usr/sbin/httpd
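A matching sketch for the second environment could look like the following; the path and the web content type are assumptions to keep the example short:

# give the second environment's web content the s0:c1 category
semanage fcontext -a -r s0:c1 -t httpd_sys_content_t "/chroot-1/var/www(/.*)?"
restorecon -R /chroot-1/var/www
# start the second Apache instance with the matching MCS label
runcon -l s0:c1 setsid chroot /chroot-1 /usr/sbin/httpd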

Links July 2010

David Byrne gave an interesting TED talk about how changes to architecture drove changes to musical styles [1]. I think he does stretch the point a little. To a certain extent people develop the most complex instruments and the largest music halls that can be supported by the level of technology in their society – people with a hunter-gatherer civilisation play drums because they can build them and can carry them.

The NY Times has an interesting article about paternity leave in Sweden [2]. The Swedish government pays for a total of 13 months leave that can be split between parents for every child. Of those 13 months 2 months can only be taken by the father – and that is likely to increase to a minimum of 4 months of paternity leave after the next election.

Dan Meyer gave an interesting TEDX talk about how the current math curriculum in the US (as well as Australia and lots of other countries that do the same thing) is totally wrong [3]. His main point is that maths problems should be based on real-world use cases where not all needed data is immediately available and there is also useless data that must be discarded. He believes that the most important thing is developing mathematical problem solving skills – basically the things that I did for fun when I was in primary school are skills that need to be taught to high-school students…

The Atlantic magazine has an amusing article by Daniel Byman and Christine Fair about the incompetent Islamic terrorists [4]. In Afghanistan half the suicide bombers kill only themselves and the US government has a lot of evidence of Taliban soldiers practicing bestiality and collecting porn. Islamic extremist groups are staffed by people who are bad soldiers and bad Muslims.

Jon Masters wrote an interesting post titled “What Would Jesus Buy” about ethical purchasing decisions [5]. Jon references The Church of Stop Shopping which isn’t a real religious organisation but a street theatre activist group.

ZeroHedge has an insightful article comparing corporations and the US government to street gangs [6]. The conclusion is that when gangs take over a neighbourhood everyone has to join a gang for their own protection.

Hillel Cooperman gave an interesting TED talk about being obsessed with Lego [7]. He compares Lego fans to Furries and makes a good case for this comparison.

Marian Bantjes gave an interesting TED talk about her graphic art / graphic design work [8]. I’ve never seen anything quite like this.

Business Insider has an interesting article about oil cleanup; it seems that most people who worked on the Exxon Valdez disaster are now dead [9], as opposed to most people who worked in almost every other occupation at that time, who are either still working or enjoying their retirement. The current gulf disaster is bigger, will require more workers for the cleanup, and can be expected to have a higher death toll. Some people claim that measures to reduce oil use will impact the economy: how will millions of people who are chronically ill for the rest of their lives impact the economy?

The NY Times has an interesting article on “circle lenses” [10], contact lenses designed to make the eyes look larger. It’s illegal to sell contact lenses in the US without a prescription, but the latest trend is for women to buy them online in a variety of colors. The FDA should probably approve them; it would be better to have the quality controls you expect from a medical supply company instead of having people rely on Malaysian mail-order companies for the safety of their eyes.

Don Marti has written an interesting article about the economic decline in the US; he suggests making pension funds invest in local jobs [11]. Companies are supposed to act on behalf of their stock-holders, but US companies often have the majority of their stock owned by the pension funds of workers yet act on behalf of a small number of rich people who own a minority of the stock. Don’s article was inspired by Andy Grove’s article in Bloomberg about the stagnation in technological development that has been caused by off-shoring manufacturing [12].

Neil Brown has completed a test release of a new Linux software RAID feature for arrays with multiple redundancy that have bad sectors [13]. When a disk gets a bad sector the current behavior is to kick it out of the array; if you have two such errors on a 3-disk RAID-1 or a RAID-6 array then you lose all redundancy and are at risk of catastrophic failure, even though in most cases both disks will still mostly work. With this patch some regions of the disk may be excluded while redundancy is still provided for the other stripes. Thanks Neil for your great work here, and all your previous work over the last 10+ years!

The RSPCA has a new campaign titled “Close the Puppy Factories” [14]. Dogs are kept in very poor conditions and forced to churn out puppies for their entire lives to supply pet stores. The RSPCA recommends that people buy puppies from registered dog breeders (not “registered dog breeding companies”) and ask to see the dog’s parents. They also recommend not buying from classified adverts or pet stores. Animal shelters have to euthanise huge numbers of unwanted animals; you can buy a pet dog or cat from an animal shelter for a small fee that covers the expenses related to housing and spaying it – and save that animal from being euthanised!

Maureen Dowd criticises the Catholic Church properly in an article for the New York Times [15]. The Catholic Church officially regards ordaining a woman and raping a child to be equally bad offenses.

Frank Rich wrote an interesting column for the New York Times about Mel Gibson [16]. He describes the destruction of Mel Gibson’s reputation as a symptom of changes in the culture in the US and also links it to the fall of Ted Haggard (who supported Gibson’s most notorious movie The Passion of the Christ).

SE Linux status in Debian/Squeeze

ffmpeg

I’ve updated my SE Linux repository for Squeeze to include a modified version of the ffmpeg packages without MMX support for the i386 architecture. When MMX support is enabled it uses assembler code which requires text relocations (see Ulrich Drepper’s documentation for an explanation of this [1]). This makes it possible to run programs such as mplayer under SE Linux without granting excessive access – something which we really desire because mplayer will usually be dealing with untrusted data. In my past tests such changes to ffmpeg on my EeePC 701 made no difference to my ability to watch movies from my collection; the ones that could be played without quality loss on a system with such a slow CPU could still be viewed correctly with the patched ffmpeg.

$ mplayer
mplayer: error while loading shared libraries: /usr/lib/i686/cmov/libswscale.so.0: cannot restore segment prot after reloc: Permission denied

The AMD64 architecture has no need for such patches, presumably due to having plenty of registers. I don’t know whether other architectures need such patches; they might – the symptom is mplayer aborting with an error such as the one above when running in enforcing mode.
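One way to check whether a particular library needs this treatment is to look for a TEXTREL entry in its dynamic section. This is just a quick check, using the library named in the error message above:

# a TEXTREL entry indicates that the library requires text relocations
readelf -d /usr/lib/i686/cmov/libswscale.so.0 | grep TEXTREL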

The below apt sources.list line can be used to add my SE Linux repository:

deb http://www.coker.com.au squeeze selinux

dpkg

In my repository for the i386 and AMD64 architectures I have included a build of dpkg that fixes bug #587949. This bug causes some symlinks and directories to be given the wrong label by dpkg when a package is installed. Usually this doesn’t impact the operation of the system and I was unable to think of a situation where it could be a security hole, but it can deny access in situations where it should be granted. I would appreciate some help in getting the patch into a form that can be accepted by the main dpkg developers; the patch I sent in the bug report probably isn’t ideal even though it works quite well – someone who knows absolutely nothing about SE Linux but is a good C coder with some knowledge of dpkg could beat it into shape.
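To see whether a system has been affected by mislabelled files, a quick check (not a definitive test, since other things can also change labels) is to ask restorecon which files currently differ from the policy’s file contexts without changing anything:

# report (but don't change) files whose labels differ from the file contexts
restorecon -R -n -v /usr
# once the differences have been reviewed, fix them
restorecon -R -v /usr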

In my repository I don’t currently provide any support for architectures other than i386 and AMD64. I could be persuaded to do so if there is a demand. How many people are using Debian SE Linux on other architectures? Of course there’s nothing stopping someone from downloading the source from my AMD64 repository and building it for another architecture, I would be happy to refer people to an APT repository that someone established for the purpose of porting my SE Linux packages to another architecture.

Policy

selinux-policy-default version 20100524-2 is now in Testing. It’s got a lot of little fixes, and among other things allows sepolgen-ifgen to work without error, which allows using the -R option of audit2allow (see my post about audit2allow and creating the policy for milters for details [2]).

I have uploaded selinux-policy-default version 20100524-3 to Unstable. It has a bunch of little fixes that are mostly related to desktop use. You can now run KDE4 on Unstable in enforcing mode, login via kdm, and expect that most things will work – probably some things won’t, but some of my desktop systems work well with it. I have to admit that not all of my desktop systems run my latest SE Linux code; I simply can’t have all my systems run Unstable and risk outages.

Let me know if you find any problems with desktop use of the latest SE Linux code; it’s the focus of my current work. But if you find problems with chrome (from Google) or the Debian package chromium-browser then don’t report them to me. They each use their own version of ffmpeg in the shared object /usr/lib/chromium-browser/libffmpegsumo.so which has text relocations, and I don’t have time to rebuild chromium-browser without text relocations – I’ll make sure it does the right thing when they get it working with the standard ffmpeg libraries. That said, the text relocation problem doesn’t seem to be what breaks Chromium; YouTube doesn’t work even when the browser is run in permissive mode.

GNOME is a lower priority than KDE for me at this time. But the only area where problems are likely to occur is with gdm and everything associated with logging in. Once your X session starts up GNOME and KDE look pretty similar in terms of access control. I would appreciate it if someone could test gdm and let me know how it goes. I’ll do it eventually if no-one else does, but I’ve got some other things to fix first.

SE Linux audit2allow -R and Milter policy

Since the earliest days there has been a command named audit2allow that takes audit messages of operations that SE Linux denied and produces policy that will permit those operations. A lesser known option for this program is the “-R” option to use the interfaces from the Reference Policy (the newer version of the policy that was introduced a few years ago). I have updated my SE Linux repository for Lenny [1] with new packages of policy and python-sepolgen that fix some bugs that stopped this from being usable.

To use the -R option you have to install the selinux-policy-dev package and then run the command sepolgen-ifgen to generate the list of interfaces (for Squeeze I will probably make the postinst script of selinux-policy-dev do this). Doing this on Lenny requires selinux-policy-default version 0.0.20080702-20 or better and doing this on Debian/Unstable now requires selinux-policy-default version 0.2.20100524-2 (which is now in Testing) or better.
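Putting that together, a typical session might look like the following; the module name localmilter is just an example:

# install the development policy and generate the interface list
apt-get install selinux-policy-dev python-sepolgen
sepolgen-ifgen
# turn the logged denials into Reference Policy style rules
audit2allow -R -m localmilter < /var/log/audit/audit.log > localmilter.te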

Would it be useful if I maintained my own repository of SE Linux packages from Debian/Unstable that can be used with Debian/Testing? You can use preferences to get a few packages from Unstable with the majority from Testing, but that’s inconvenient and anyone who wants to test the latest SE Linux stuff would need to include all SE Linux related packages to avoid missing an important update. If I was to use my own repository I would only include packages that provide a significant difference and let the trivial changes migrate through Testing in the normal way.

The new Lenny policy includes a back-port of the new Milter policy from Unstable, which makes it a lot easier to write policy for milters. Here is an example of the basic policy for two milters, it allows the milters (with domains foo_milter_t and bar_milter_t) to start, to receive connections from mail servers, and to create PID files and Unix domain sockets.

policy_module(localmilter,1.0.0)

milter_template(foo)
files_pid_filetrans(foo_milter_t, foo_milter_data_t, { sock_file file })

milter_template(bar)
files_pid_filetrans(bar_milter_t, bar_milter_data_t, { sock_file file })
allow bar_milter_t self:process signull;
type bar_milter_tmp_t;
files_tmp_file(bar_milter_tmp_t)
files_tmp_filetrans(bar_milter_t, bar_milter_tmp_t, file)
manage_files_pattern(bar_milter_t, tmp_t, bar_milter_tmp_t)

After generating that policy I ran a test system in permissive mode and sent a test message. I ran audit2allow on the resulting AVC messages from /var/log/audit/audit.log and got the following output:

#============= bar_milter_t ==============
allow bar_milter_t bin_t:dir search;
allow bar_milter_t bin_t:file getattr;
allow bar_milter_t home_root_t:dir search;
allow bar_milter_t ld_so_cache_t:file { read getattr };
allow bar_milter_t lib_t:file execute;
allow bar_milter_t mysqld_port_t:tcp_socket name_connect;
allow bar_milter_t net_conf_t:file { read getattr ioctl };
allow bar_milter_t self:process signal;
allow bar_milter_t self:tcp_socket { read write create connect setopt };
allow bar_milter_t unlabeled_t:association { recvfrom sendto };
allow bar_milter_t unlabeled_t:packet { recv send };
allow bar_milter_t urandom_device_t:chr_file read;
allow bar_milter_t usr_t:file { read getattr ioctl };
allow bar_milter_t usr_t:lnk_file read;
#============= foo_milter_t ==============
allow foo_milter_t ld_so_cache_t:file { read getattr };
allow foo_milter_t lib_t:file execute;
allow foo_milter_t mysqld_port_t:tcp_socket name_connect;
allow foo_milter_t net_conf_t:file { read getattr };
allow foo_milter_t self:capability { setuid setgid };
allow foo_milter_t self:tcp_socket { write setopt shutdown read create connect };
allow foo_milter_t unlabeled_t:association { recvfrom sendto };
allow foo_milter_t unlabeled_t:packet { recv send };

Running the audit2allow command with the “-R” option gives the following output; it includes the require section that is needed for generating policy modules:
require {
type sshd_t;
type ld_so_cache_t;
type bar_milter_t;
type foo_milter_t;
class process signal;
class tcp_socket { setopt read create write connect shutdown };
class capability { setuid setgid };
class fd use;
class file { read getattr };
}
#============= bar_milter_t ==============
allow bar_milter_t ld_so_cache_t:file { read getattr };
allow bar_milter_t self:process signal;
allow bar_milter_t self:tcp_socket { read write create connect setopt };
corecmd_getattr_sbin_files(bar_milter_t)
corecmd_search_sbin(bar_milter_t)
corenet_sendrecv_unlabeled_packets(bar_milter_t)
corenet_tcp_connect_mysqld_port(bar_milter_t)
dev_read_urand(bar_milter_t)
files_read_usr_files(bar_milter_t)
files_read_usr_symlinks(bar_milter_t)
files_search_home(bar_milter_t)
kernel_sendrecv_unlabeled_association(bar_milter_t)
libs_exec_lib_files(bar_milter_t)
sysnet_read_config(bar_milter_t)
#============= foo_milter_t ==============
allow foo_milter_t ld_so_cache_t:file { read getattr };
allow foo_milter_t self:capability { setuid setgid };
allow foo_milter_t self:tcp_socket { write setopt shutdown read create connect };
corenet_sendrecv_unlabeled_packets(foo_milter_t)
corenet_tcp_connect_mysqld_port(foo_milter_t)
kernel_sendrecv_unlabeled_association(foo_milter_t)
libs_exec_lib_files(foo_milter_t)
sysnet_read_config(foo_milter_t)

To get this working I removed the require lines for foo_milter_t and bar_milter_t as it’s not permitted to both define a type and require it in the same module. Then I replaced the set of tcp_socket operations { write setopt shutdown read create connect } with create_socket_perms, as it’s easiest to allow all the operations in that set and it doesn’t introduce any security risks.

Finally I replaced the mysql lines such as corenet_tcp_connect_mysqld_port(foo_milter_t) with sections such as the following:
mysql_tcp_connect(foo_milter_t)
optional_policy(`
mysql_stream_connect(foo_milter_t)
')

This gives it all the access it needs and additionally the optional policy will allow Unix domain socket connections for the case where the mysqld is running on localhost.
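To build and load the resulting module I would use something like the following; the Makefile path is the usual Reference Policy development location, but it may differ between distributions and releases:

# compile the module against the installed policy headers
make -f /usr/share/selinux/devel/Makefile localmilter.pp
# install it into the running policy
semodule -i localmilter.pp
# confirm that it is loaded
semodule -l | grep localmilter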

Digital Video Cameras

I’ve just done some quick research on Digital Video Cameras for some relatives. It seems to me that the main feature that is necessary is Full HD (1920*1080) resolution, as everyone seems to be getting 1920*1080 resolution monitors (getting a lower resolution camera doesn’t save enough money to be worth-while). Resolutions higher than 1920*1080 will probably be available in affordable monitors in the next few years, so the ability of programs like mplayer to zoom videos will probably be required even for Full HD video soon. Saving maybe $300 on a video camera while getting a lower resolution doesn’t seem like a good idea.

The next feature is optical zoom. Most cameras are advertised with features such as “advanced zoom” to try and trick customers; cameras which advertise 60* or better zoom often turn out to only have 20* optical zoom. I think that about 20* optical zoom should be considered the minimum – not that there is anything special about 20* zoom, it’s just that there is a good range of cameras with that much zoom capacity or better.

Image stabilisation is a required feature; no-one can keep their hand perfectly steady, and typically a DVC only gets hand-held use – most people who own them don’t even own a tripod! Digital image stabilisation is apparently not nearly as good as optical image stabilisation, and image stabilisation that involves moving the CCD is apparently somewhere in between.

Finally it’s good to have the ability to take quality photos as few people will want to carry a Digital Camera and a Digital Video Camera.

I did a search for DVCs on the web site of Ted’s Camera store (a chain of camera stores in Australia that generally provide good service at a competitive price – but not the cheapest price). The best of the Ted’s options seems to be the Panasonic SD60 HD Video [1] which does 25* optical zoom, 1920*1080i video, 5 megapixel still photography, and optical image stabilisation – it costs $750 from Ted’s.

The next best option seems to be the Sony Handycam HDR-CX110 HD [2] which does 25* optical zoom, 1920*1080i video, 3.1 megapixel 2048*1536 still photography, and digital image stabilisation. The Panasonic seems to be a better option due to having optical image stabilisation and a higher resolution for still photographs. It is also $750 from Ted’s.

Now there’s the issue of how well the cameras work on Linux. A quick Google search indicated that the Sony cameras present themselves as USB card readers and can be mounted on a Linux system, but I couldn’t discover anything about the Panasonic. If I was going to buy one I would take my Netbook to the store and do a quick test.

I don’t have enough information to recommend either of those cameras; they may have some awful defects that are only apparent when you use them. But in terms of features they seem pretty good. The Panasonic SD60 HD Video should be a good benchmark when comparing cameras in the store. The camera store staff seem to not be very helpful if asked generic questions such as “which camera is best”, but if asked questions such as “how is this other camera better than the one I’m looking at” they can usually give good answers.

If anyone has any other advice for purchasing a DVC then please let me know. Either generic advice or specific examples of Linux-friendly DVCs that have been purchased recently.