
Never Trust a DRM Vendor

I was reading an interesting post about predicting the results of the invasion of Iraq [1]. One of the points made was that the author rejected every statement by a known liar (which includes all the world leaders who wanted the invasion). Regarding every statement by a known liar as potentially a lie led to a better than average prediction.

It seems to me that a similar principle can be applied to other areas. A vendor or advocate of DRM (Digital Restrictions Management) [2] has already proven that they are not concerned about the best interests of the people who own the computers in question (who paid for the hardware and software) – they are in fact interested in preventing the owner of the computer from using it in ways that the owner desires. This focus on security in opposition to the best interests of the user cannot result in a piece of software that protects the user’s best interests.

I believe that any software with DRM features (other than those which are requested by people who purchase the software) should be considered to be malware. The best example of this is the Sony BMG copy protection scandal [3].

Note that I refer to the interests of the people who own the computers – not those who use them. It is quite acceptable to write software that serves the buyer rather than the end-user. Two examples are companies that want to prevent their employees from doing “inappropriate” things and parents who want to restrict what their children can do. We can debate about what restrictions on the actions of employees and children are appropriate, but it seems clear that when software is purchased for their use the purchaser has the right to restrict their access.

Also note that when I refer to someone owning a computer I refer to an honest relationship (such as parent/child or employer/worker), not the abusive relationships that mobile phone companies typically have with their customers (giving the customer a “free” phone but locking it down and claiming that they don’t own it). Mobile phones ARE computers, just rather limited ones at this time.


OpenID Delegation

I’ve just installed Eran Sandler’s OpenID Delegation Plugin [1]. This means that I can now use my blog URL for OpenID authentication. I’ve also included the plugin in my WordPress repository (which among other things has the latest version of WordPress). One thing that I consider to be a bug in Eran’s plugin is the fact that it only adds the OpenID links to the main URL. This means that if I write a blog comment and want to refer to one of my own blog posts on the same topic (which is reasonably common – after more than two years of blogging and almost 700 posts I’ve probably written a post that is related to every topic I might want to comment on) then I can’t put that post’s URL in the URL field. The problem is that URLs in the body of a blog comment generally increase the spam-score (I use this term loosely to refer to a variety of anti-spam measures – I am not aware of anything like SpamAssassin being used on blog comments), and not having OpenID registration does the same. So with the current functionality of Eran’s plugin I will potentially suffer in some way any time I enter a blog comment that refers to a particular post I wrote.
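
For background, OpenID delegation works by adding link tags with rel="openid.server" and rel="openid.delegate" to the head of the page, which also gives an easy way of checking which pages have the links. A quick sketch (blog.example.com is a placeholder URL):

# the delegation tags should appear in the page head; with the current
# version of the plugin they will only be found on the main URL
curl -s http://blog.example.com/ | grep -i openid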

deb http://www.coker.com.au etch wordpress

My WordPress Debian repository is currently available with the above APT repository. While it specifies etch, it works with Lenny too (my blog currently runs on Lenny). I will eventually change the name to lenny.
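
To use the repository, something like the following should work (wordpress being the name of the Debian package):

# add the repository, update the package lists, and install
echo "deb http://www.coker.com.au etch wordpress" >> /etc/apt/sources.list
apt-get update
apt-get install wordpress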

For the OpenID server I am currently using the OpenID service provided by Yubico as part of the support for their Yubikey authentication token [2] (of which I will write more at a later date). I think that running their own OpenID server was a great idea; it doesn’t cost much to run such a service and it gives customers an immediate way of using their key. I expect that there are more than a few people who would be prepared to buy a Yubikey for the sole purpose of OpenID authentication and signing in to a blog server (which can also be done via OpenID if you want). I plan to use my Yubikey for logging in to my blog, but I still have to figure out the best way of doing it.

One thing that has been discussed periodically over the years is the topic of using smart-cards (or similar devices) for accessing Debian servers and for securing access to GPG keys used for Debian work by developers who are traveling. Based on recent events I would hazard a guess that such discussions are happening within the Fedora project and within Red Hat right now (if I worked for Red Hat I would be advocating such things). It seems that when such an idea is adopted a logical extension is to support services that users want, such as OpenID, at the same time; if nothing else it will make people more likely to use such devices.

Disclaimer: Yubico gave me a free Yubikey for the purpose of review.

Update: The OpenIDEnabled.com tool to test OpenID is useful when implementing such things [3].

An Update on DKIM Signing and SE Linux Policy

In my previous post about DKIM [1] I forgot to mention one critical item: how to get Postfix to actually talk to the DKIM milter. This wasn’t a bad thing, because it turned out that I hadn’t got it right.

I had configured the DKIM milter on the same line as the milters for ClamAV and SpamAssassin – in the smtpd_milters section. This was fine for relaying outbound mail via my server but didn’t work for locally generated mail. For locally generated mail Postfix has a directive named non_smtpd_milters which you need to use. So it seems that a fully functional Postfix DKIM milter configuration requires adding the following two lines to /etc/postfix/main.cf:

smtpd_milters = unix:/var/run/dkim-filter/dkim-filter.sock
non_smtpd_milters = unix:/var/run/dkim-filter/dkim-filter.sock
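
After editing main.cf, Postfix needs to re-read its configuration:

postfix reload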

This also required an update to the SE Linux policy. When I was setting up DKIM I wrote SE Linux policy to allow it, and also wrote policy for the ClamAV milter. That policy is now in Debian/Unstable and has been approved for Lenny. So I now need to build a new policy package that allows the non_smtpd_milters access to the DKIM milter and apply for it to be included in Lenny.
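
Until the updated policy package is available, anyone hitting similar denials can generate a local module from the audit log. A rough sketch (the module name localdkim is arbitrary, and the generated rules should be reviewed before loading):

# build a local policy module from the logged denials and load it
grep dkim /var/log/audit/audit.log | audit2allow -M localdkim
semodule -i localdkim.pp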

SE Linux in Lenny is going to be really good. I think that I’ve already made the SE Linux support in the pre-release (*) of Lenny significantly better than that of Etch with all my extra updates. More testers would be appreciated, and more people joining the coding would be appreciated even more.

(*) I use the term pre-release to refer to the fact that the Lenny repository is available for anyone to download packages.

Play Machine Update

My Play Machine [1] was offline for most of the past 48 hours (it’s up again now). I have upgraded the hardware for the Dom0 used to run it so that it now has the ability to run more DomUs. I can now run at least 5 DomUs while previously I could only run 3. I have several plans that involve running multiple Play Machines with different configurations and running SE Linux training.

The upgrade didn’t need to take two days, but I had some other things that diverted me during the middle of the job (running the Play Machine isn’t my highest priority). I’ve been doing some significant updates to the SE Linux policy for Lenny including some important changes to the policy related to mail servers. Among other things I created a new domain for DKIM (which I previously wrote about) [2]. The chain of dependencies was that a client wanted me to do some urgent DKIM work and I needed my own mail server to be a test-bed. I installed DKIM and then of course I had to write the SE Linux policy. Now that my client’s network is running the way it should be I’ve got a little more time for other SE Linux work.


Installing DKIM and Postfix in Debian

I have just installed DomainKeys Identified Mail (DKIM) [1] on my mail server. In summary, the purpose is to allow public-key signing of all mail that goes out from your domain so that the recipient can verify its authenticity (and optionally reject forgeries). It also means that you can verify inbound mail. A weakness of DKIM is that it is based on the DNS system (which has many issues and will continue to have them until DNSSEC becomes widely adopted). But it’s better than nothing and it’s not overly difficult to install.

The first thing to do before installing DKIM is to get a Gmail account. Gmail provides free accounts and does DKIM checks. If you use Iceweasel or another well-supported browser then you can click on “Show Details” from the message view, which then gives fields “mailed-by” and “signed-by” that indicate the DKIM status. If you use a less supported browser such as Konqueror then you have to click on the “Show Original” link to see the headers and inspect the DKIM status there (you want to see dkim=pass in the Authentication-Results header). Also Gmail signs outbound mail, so it can be used to test verification of received mail.

The next thing to do is to install the DKIM Milter. To make things exciting this is packaged for Debian under the name dkim-filter so that reasonable searches for such functionality (such as a search for milter or dkim-milter – the upstream project name) will fail.

After installing the package you must generate a key; I used the command “dkim-genkey -d coker.com.au -s 2008” to generate a key for my domain. It seems that the domain is currently only used as a comment but I prefer to supply all reasonable parameters for such things. The -s option specifies a selector, which is a way of having multiple valid signing keys. It’s apparently fairly common to use a different key every year, but other options include having multiple mail servers for a domain and giving each one a selector. The dkim-genkey command produces two files: one is named 2008.txt and can be copied into a BIND zone file, the other is named 2008.private and is used by the DKIM signing server.
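
The contents of 2008.txt will look something like the following (the public key is truncated here for display):

2008._domainkey IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCB..."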

Here is a sample of the most relevant parts of the config file /etc/dkim-filter.conf for signing mail for a single domain:

Domain coker.com.au
KeyFile /etc/dkim/2008.private
Selector 2008

The file /etc/default/dkim-filter needs to be modified to specify how the milter will listen for connections from the MTA; I uncommented the line SOCKET="local:/var/run/dkim-filter/dkim-filter.sock".

One issue is that the Unix domain socket file will by default not be accessible to Postfix. I devised a work-around for this and documented it in Debian bug report #499364 [2] (I’ve hacked a chgrp command into the init script; ideally the group would be an option in a config file).
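
The work-around amounts to something like this in the init script, after the daemon has created its socket (a sketch – it assumes the postfix group of a standard Debian Postfix install):

# let Postfix connect to the milter socket
chgrp postfix /var/run/dkim-filter/dkim-filter.sock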

A basic configuration of dkim-milter will sign mail for one domain. If you want to sign mail for more than one domain you have to comment out the single-domain configuration in /etc/dkim-filter.conf and instead use the KeyList option to specify a file with a list of domains (the dkim-filter.conf(5) man page documents this). The one confusing issue is that the selector is taken to be the basename of the file which contains the secret key (they really should have added an extra field). This means that if you have an obvious naming scheme for selectors (such as the current year) then you need a different directory for each domain to contain its key.

As an example here is the line from the KeyList file for my domain:
*@coker.com.au:coker.com.au:/etc/dkim/coker.com.au/2008
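
A second domain would get its own line and its own key directory; for example (example.org being a hypothetical second domain):

*@example.org:example.org:/etc/dkim/example.org/2008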

Now one problem that we have is that list servers will usually append text to the body of a message and thus break the signature. The correct way of solving this is to have the list server sign the mail it sends out and have a header indicating the signature status of the original message. But there are a lot of list servers that won’t be updated for a long time.

The work-around is to put the following line in /etc/dkim-filter.conf:
BodyLengths yes

This means that the signature will only cover a specified number of bytes of body data, and anything appended after that will be ignored when the message is verified. This of course means that a hostile third party could append some bogus data without breaking the signature. In the case of plain text this isn’t so bad, but when the recipient’s mail client defaults to rendering HTML it could have some interesting possibilities. I wonder whether it would be prudent to configure my MUA to always send both HTML and plain-text versions of my mail so that an attacker can’t append hostile HTML.

It’s a pity that Gmail (which appears to have the most popular implementation of DKIM) doesn’t allow setting that option. So far the only message I have received that failed DKIM checks was sent from a Gmail account to a Debian mailing list.

Ideally it would be possible to have the messages sent to mailing lists either not be signed or have the length field used. That would require signing practices based on the recipient, which is functionality that is not available in dkim-milter (but which could possibly be implemented in Postfix configuration, although I don’t know how). Implementing this would not necessarily require knowing all the lists that mail might be sent to; it seems that a large portion of the world’s list traffic is sent to addresses that match the string “@lists.” which can be easily recognised. For a service such as Gmail it would be easy to recognise list traffic from the headers of received messages and then treat messages sent to those domains differently.

As the signature status is based on the sending address it would be possible for me to use different addresses for sending to mailing lists to avoid the signatures (the number of email addresses in use in my domain is small enough that having a line for each address to sign will not be a great inconvenience). Most MUAs have some sort of functionality to automatically choose a sender address based on the recipient address. I’ll probably do this eventually, but for the moment I’ll just use the BodyLengths option; while it does reduce the security a bit, it’s still a lot better than having no DKIM checks.

Fixing execmod (textrel) Problems in Lenny

I’ve just updated my repository of SE Linux related packages for Lenny [1] to include a set of ffmpeg packages modified to not need text relocations (execmod access under SE Linux). I haven’t checked to make sure that I fixed all issues in those packages, but I have fixed all the issues that prevented Mplayer from working in a default configuration of SE Linux.

I had to patch the file libswscale/rgb2rgb.c to disable the MMX assembly code as the --disable-mmx option doesn’t work for that file. I changed the build script so that when it generates the code for the shared and cmov targets in i386 mode it adds -DPIC and -DBROKEN_RELOCATIONS to the CFLAGS and also added LIBOBJFLAGS=-fPIC to the ./configure run. There might have been a better way of doing this, but the current implementation basically works.
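
To check whether a given shared object still needs text relocations (and will therefore need execmod access), you can inspect it directly. A quick sketch (the library path is an example):

# a DT_TEXTREL entry means the library needs text relocations
readelf -d /usr/lib/libswscale.so.0 | grep TEXTREL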

Long term I think that the ideal solution to this would be to have separate versions of the library packages for people who prefer extra security to a possible 15% performance benefit.

While using these libraries on an EeePC 701 (the least powerful of all the machines I own which could be used to play video) I was able to play full-screen video downloaded from ted.com without any glitches so it seems that a 15% performance loss is not a problem.


Execmod and SE Linux – i386 Must Die

I have previously written about the execmod permission check in SE Linux [1] and in a post about SE Linux on the desktop I linked to some bug reports about it [2] (which probably won’t be fixed in Debian).

One thing I didn’t mention is a demonstration of what this implies. When running a program in the unconfined_t domain on a SE Linux system (the domain for login sessions in a default configuration), if you set the boolean allow_execmod then the following four tests from paxtest will be listed as vulnerable (a sketch for reproducing this follows the list):

Executable bss (mprotect)
Executable data (mprotect)
Executable shared library bss (mprotect)
Executable shared library data (mprotect)
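
One way to reproduce this (a sketch – it assumes paxtest is installed and that your policy uses the allow_execmod boolean name):

# enable execmod, then see which paxtest checks report vulnerable
setsebool allow_execmod 1
paxtest blackhat | grep -i vulnerable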

This means that if you have a single shared object which uses text relocations and therefore needs the execmod permission then the range of possible attack vectors against bugs in the application has just increased by four. This doesn’t even require that the library in question actually be used! If a program is linked against a shared object that needs text relocations then it needs execmod access just to start, even if it never calls any function in that library, thus giving extra possibilities to the attacker.

For reference, when comparing a Debian system that doesn’t run SE Linux (or has SE Linux in permissive mode) to a SE Linux system with execmod enabled, the following tests fail (are reported as vulnerable) on the system without SE Linux enforcement:

Executable anonymous mapping (mprotect)
Executable heap (mprotect)
Executable stack (mprotect)
Writable text segments

If you set the booleans allow_execstack and allow_execheap then you lose those protections. But with the default settings of all three booleans a program running in the unconfined_t domain will be protected against 8 different memory-based attacks.
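
To see where a given system stands, all three booleans can be queried at once:

getsebool allow_execmod allow_execheap allow_execstack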

Based on discussions with other programmers I get the impression that fixing all the execmod issues on i386 is not going to be possible. The desire for a 15% performance boost (the expected result of using an extra register) is greater than the desire for secure systems among the people who matter most (the developers).

Of course we could solve some of these issues by using statically linked programs, with statically linked versions of the libraries in question which can use the extra register without any issues. This does of course mean that updates to the libraries (including security updates) will require rebuilding the statically linked applications in question – if a rebuild was missed then this could reduce the security of the system.

To totally resolve that issue we need to have i386 machines (the cause of the problem due to their lack of registers) go away. Fortunately in the mainstream server, desktop, and laptop markets that is already pretty much done. I’m still running a bunch of P3 servers (and I know many other people who have similar servers), but they are not used for tasks that involve running programs that are partially written in assembly code (mplayer etc).

One problem is that there are still new machines being released with the i386 ISA as their only instruction set. For example the AMD Geode CPU [2] is used by the One Laptop Per Child (OLPC) project [3], and the new Intel Atom line of CPUs [4] apparently supports the AMD64 ISA only on the “desktop” models while the versions used in ultra-mobile PCs are i386 only.

I think that these issues are particularly difficult in regard to the OLPC. It’s usually not difficult to run “yum upgrade” or “apt-get dist-upgrade” on an EeePC or similar ultra-mobile PC. But getting an OLPC machine upgraded in some of the remote places where they may be deployed might be more difficult. Past experience has shown that viruses and trojans can get to machines that are supposed to be on isolated networks, so it seems that malware can get access to machines that cannot communicate with servers that contain security patches… One mitigating factor however is that the OLPC OS is based on Fedora, and Fedora seems to be making the strongest efforts to improve security of any mainstream distribution; a choice between 15% performance and security seems to be a no-brainer for the Fedora developers.

SE Linux in Lenny status – Achieved Level 1

I previously described the goals for SE Linux development in Lenny and assigned numbers to the levels of support [1]. I have just uploaded a new policy to Unstable, which I hope to get into Lenny, that should solve all the major issues for level 1 of support (default configuration with the unconfined_t domain for all user sessions – much like the old “targeted” policy). The policy in question is in my Lenny SE Linux repository [2] (for those who don’t want to wait for it to get into Unstable or Lenny).


Random Opinions, Expert Opinions, and Facts about AppArmor

My previous post titled AppArmor is Dead [1] has inspired a number of reactions. Some of them have been unsubstantiated opinions – everyone has an opinion, so this doesn’t mean much. I believe that the opinions of experts matter more. Crispin responded to my post and made some reasonable points [2] (although I believe that he is overstating the ease-of-use case). I take Crispin’s response a lot more seriously than most of the responses because of his significant experience in commercial computer security work. The opinion of someone who has relevant experience in the field in question matters a lot more than the opinion of random computer users!

Finally there is the issue of facts. Of the people who don’t agree with me, Crispin seems to be the first to acknowledge that Novell laying off AppArmor developers and adding SE Linux support are both bad signs for AppArmor. The fact that Red Hat and Tresys have been assigning more people to SE Linux development in the same time period that SUSE has been laying people off AppArmor development seems to be a clear indication of the way that things are going.

One thing that Crispin and I understand is the amount of work involved in maintaining a security system. You can’t just develop something and throw it to the distributions. There is ongoing work required in tracking kernel code changes, and when there is application support there is also a need to track changes to application code (and replacements of system programs). Also there is a need to add new features. Currently the most significant new feature development in SE Linux is related to X access controls – this is something that every security system for Linux needs to do (currently none of them do it). It’s a huge amount of work, but the end result will be that compromising one X client that is running on your desktop will not automatically grant access to all the other windows.

The CNET article about Novell laying off the AppArmor developers [3] says ‘“An open-source AppArmor community has developed. We’ll continue to partner with this community,” though the company will continue to develop aspects of AppArmor’ and attributes that to Novell spokesman Bruce Lowry.

Currently there doesn’t seem to be an AppArmor community. The Freshmeat page for AppArmor still lists Crispin as the owner and has not been updated since 2006 [4]; it also links to hosting on Novell’s site. The Wikipedia page for AppArmor also lists no upstream site other than Novell [5].

The AppArmor development list hosted by SUSE has been getting fewer than 10 posts per month recently [6]. The AppArmor general list had a good month in January with a total of 23 messages (including SPAM) [7], but generally gets only a few messages a month.

The fact that Crispin is still listed as the project leader [8] says a lot about how the project is managed at Novell!

So the question is, how can AppArmor’s prospects be improved? A post on linsec.ca notes that Mandriva is using AppArmor [9]; getting more distribution support would be good, but the most important thing in that regard will be contributing patches back and dedicating people to do upstream work (Red Hat does a huge amount of upstream development for SE Linux and a significant portion of my Debian work goes upstream).

It seems to me that the most important thing is to have an active community. Have a primary web site (maybe hosted by Novell, maybe SourceForge, or something else) that is accurate and current. Have people giving talks about AppArmor at conferences to promote it to developers. Then try to do something to get some buzz about the technology; my SE Linux Play Machines inspired a lot of interest in the SE Linux technology [10]. If something similar was done with AppArmor then it would get some interest.

I’m not interested in killing AppArmor (I suspect that Crispin’s insinuations were aimed at others). If my posts on this topic inspire more work on AppArmor and Linux security in general then I’m happy. As Crispin notes the real enemy is his employer (he doesn’t quite say that – but it’s my interpretation of his post).


Google Chrome – the Security Implications

Google have announced a new web browser – Chrome [1]. It is not available for download yet; currently there is only a comic book explaining how it will work [2]. The comic is of very high quality and will help in teaching novices about how computers work. I think it would be good if we had a set of comics that explained all aspects of how computers work.

One noteworthy feature is the process model of Chrome. Most browsers seem to aim to have all tabs and windows in the same process, which means that they can all crash together. Chrome has a separate process for each tab, so when a web site is a resource hog it will be apparent which tab is causing the performance problem. Also when you navigate from site A to site B it will apparently execute a new process (this will make the back-arrow a little more complex to implement).

A stated aim of the process model is to execute a new process for each site to clear out the memory address space. This is similar to the design feature of SE Linux where a process execution is needed to change security context so that a clean address space is provided (preventing leaks of confidential data and attacks on process integrity). The use of multiple processes in Chrome is just begging to have SE Linux support added. Having tabs opened with different security contexts based on the contents of the site in question and also having multiple stores of cookie data and password caches labeled with different contexts is an obvious development.

Without having seen the code I can’t guess at how difficult it will be to implement such features. But I hope that with a clean code base written by a group of good programmers (Google has hired some really good people) the result will be a program that is extensible.

They describe Chrome as having a sandbox based security model (as opposed to the Vista model, which is based on the Biba Integrity Model [3]).

It’s yet to be determined whether Chrome will live up to the hype (although I think that Google has a good record of delivering what they promise). But even if Chrome isn’t as good as I hope, they have set new expectations of browser features and facilities that will drive the market.

Update: Chrome is now released [4]!

Thanks to Martin for pointing out that I had misread the security section. It’s Vista not Chrome that has the three-level Biba implementation.