Bugs in Google Chrome

I’m currently running google-chrome-beta version 5.0.375.55-r47796 on Debian/Unstable. It’s the fastest web browser I’ve used in recent times – it’s the first browser that feels faster than my recollection of running IBM WebExplorer for OS/2 on a 486-66 system! It has a good feature set, and it’s the only browser I’ve used that in a typical configuration makes proper use of the screen space by not having a permanent status bar at the bottom of the screen and by having tabs in the title-bar. But it’s not perfect, so here is a list of some bugs:

[Images: Chrome titlebar when maximised; Chrome titlebar when not maximised; right side of the Chrome titlebar when not maximised]

Above are three partial screen captures of Chrome: the first is when the window is maximised and the second is when it isn’t. Notice the extra vertical space above the tab in the title bar in the second picture. The third picture shows the right side of the titlebar, where you can see a space below the three buttons that can be used to drag the window around – no matter how many tabs you open, that space below the three buttons is reserved. If the Chrome developers removed the extra vertical space in the titlebar and reserved slightly more horizontal space then you would still be able to drag the window around. While an anonymous commentator made a good point that the extra vertical space can be used to drag the window around when the maximum number of tabs are open, there are other ways of achieving that goal without wasting ~18 vertical pixels. Doing so would be a lot less ugly than what they did with finding text in the page.

When I visit a web site that uses cookies from an Incognito Window (in which cookies etc aren’t stored permanently) there is no option to say “allow all cookies”. This is really annoying on a site such as IBM’s, which stores 5 cookies when you first load the page and writes at least one new cookie for every page you visit. Given that the cookie data will be discarded as soon as the window is closed, it seems like a good idea to have an option to allow all cookies for Incognito Windows even if they aren’t allowed for regular windows. Blocking all cookies would be OK too – anything but having to click on Block or Allow multiple times for each page load.

The J and K keys don’t work when viewing a page generated by Venus version 0~bzr95-2+lenny1 (the latest version in Debian/Lenny).

I once entered a ‘.’ at the end of a domain name (which is quite legal – there is always an implied trailing dot) and Chrome then wouldn’t take note of my request to accept all cookies from that domain. I haven’t been able to reproduce that bug, but I have noticed that it stores the cookie settings separately for domains that end in ‘.’, so “www.cnn.com.” is different from “www.cnn.com”. Iceweasel seems to just quietly strip the trailing dot. Either approach is better than Konqueror, which won’t even load a URL with a dot at the end.

Chrome can be relied on to restore all windows rapidly after a crash, unlike Iceweasel which restores them at its normal load speed (slow) and Konqueror which doesn’t tend to restore windows at all. This is good, as Chrome does seem to crash regularly. In a response to my post about Chrome and SE Linux [1], Ben Hutchings pointed out that the --no-sandbox option to chrome disables the creation of a PID namespace and therefore makes debugging a lot easier; if I get some spare time I’ll try to track down some of the Chrome SEGVs.

Either the JavaScript implementation is buggy, or it isn’t bug-compatible in situations where web sites expect IE bugs. When using the Dell Australia web site I can’t always order all options. When trying to order a Dell R300 1RU server with hot-plug disks in a hardware RAID array it seems impossible to get all the necessary jumpers – which is a precondition to completing the order. Fortunately I only wanted to blog about how cheap Dell servers are, so I don’t actually need to complete an order. Dell’s web site is also difficult in Iceweasel on occasion, so it’s obviously more demanding than most sites. It might be a good test site for people who work on browsers as it’s both demanding and important.

When I select a URL to be opened in a new window (or when JavaScript does this) the new tab is opened with about:blank as the URL. If the URL is for a PDF file (or something else that is to be downloaded) then the URL entry field is never updated to show the real URL. I believe that this is wrong: either the new tab shouldn’t be opened or it should display the correct URL – there is no benefit in a tab that shows nothing but about:blank in the URL entry field. Also if a URL takes some time to load it may keep about:blank in the URL entry field for a while, which means that if you use the middle mouse button to rapidly open a few new tabs you can’t see what is to be loaded in each one. Sometimes I have several tabs loading and I’m happy to close some unimportant ones if they are slow, but some are worth waiting for.

Overall that’s not too bad. I can use Dell’s site in Iceweasel, so the only critical bug is the cookie handling in Incognito Windows, which makes the Incognito feature almost unusable for some sites.

Securely Killing Processes

Joey Hess wrote on debian-devel about the problem of init scripts not doing adequate checks before using the data from a PID file under /var/run to determine which process to kill [1]. Unfortunately such checks still don’t quite solve the problem: there is a race condition where the process dies while you are doing the checks and is then replaced by another process with the same PID.

Below I have included the source code to a little program that will repeatedly fork() until it finds a particular PID and then have its child call sleep(). So you can run a command such as “kill -9 1234 ; ./a.out 1234” and have this program take over PID 1234.

From testing with this it seems that when a shell has its current working directory set to /proc/1234, then once process 1234 is killed the current directory becomes empty – “ls -l” returns 0 entries. This isn’t surprising, it’s the standard Unix behavior when the working directory is removed.

So if a program (or even a shell script) changes directory to /proc/1234 it can then verify all attributes of the process (its CWD, its root directory, the executable used to run it, its UID, GID, supplemental groups, its SE Linux context, and lots of other things) atomically. The only possibility for confusion is that a process might execute a SETUID or SETGID program, or a program that has a label which triggers a SE Linux domain transition. It might also change some attributes without executing a new process, for example by using the setuid(), setgid(), setgroups(), or other similar system calls. For the purposes of killing a process I don’t think that the possibility of it changing its own attributes or executing a new program is a serious problem – if you want to kill a process then you probably still want to kill it after it has called setuid() etc.
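Here is a rough shell sketch of that approach – the daemon path /usr/sbin/mydaemon and UID 33 are made-up examples, not anything from a real init script:

cd /proc/1234 || exit 1
# if process 1234 has died this directory is now empty and the
# following checks will fail
[ "$(readlink exe)" = "/usr/sbin/mydaemon" ] || exit 1
grep -q '^Uid:[[:space:]]*33[[:space:]]' status || exit 1
# a small race remains between these checks and the kill, which is
# what the /proc/PID/signal idea below would eliminate
kill -TERM 1234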

It seems to me that it would be useful to have a file named /proc/PID/signal (or something similar) to which you could write a number to have the kernel send the signal in question to that process. The commands “kill -9 1234” and “echo 9 > /proc/1234/signal” would give the same result, but you could also run the command “cd /proc/1234” and then some time later run “echo 9 > signal” and know that you would kill the original process 1234 if it was still running – and not some other process that replaced it.

What do you think? Is this worthy of adding a new feature to the proc filesystem?

The Source

Run the following program with a single parameter, which is the PID that you want it to take. It will keep looping through the PID space until it gets the one you specify; if the one you specify isn’t available (EG if you give it “1” as a parameter) then it will run forever. It can take some time to get a result on a slower system – on my P3-900MHz test system it took up to 72 seconds.

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
  if(argc != 2)
  {
    fprintf(stderr, "Specify the desired PID on the command-line\n");
    return 1;
  }
  pid_t wanted = (pid_t)atoi(argv[1]);
  pid_t rc;
  int status;

  /* Keep forking until a child is assigned the wanted PID.  In the
   * parent rc is the child's PID, in the child rc is 0. */
  while((rc = fork()) != wanted && rc != 0)
  {
    if(rc == -1)
    {
      fprintf(stderr, "fork error\n");
      return 1;
    }
    /* Reap the unwanted child so zombies don't accumulate. */
    if(wait(&status) == -1)
      fprintf(stderr, "wait error\n");
  }
  if(rc == 0)
  {
    /* Child: if we got the wanted PID then hold it by sleeping,
     * otherwise exit so the parent can try again. */
    if(getpid() == wanted)
    {
      printf("Got pid %d, sleeping\n", (int)wanted);
      sleep(200);
    }
    return 0;
  }
  /* Parent: wait for the child that got the wanted PID. */
  wait(&status);
  return 0;
}
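To try it out, compile it and race it against a freshly killed process – the file name killrace.c is just my choice here, and a.out is the default gcc output name used in the example command above:

gcc -Wall killrace.c
kill -9 1234 ; ./a.out 1234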

Can SE Linux Implement Traditional Unix Users and Groups?

I was asked by email whether SE Linux could implement traditional Unix users and groups.

The Strictly Literal Answer to that Question

The core of the SE Linux access control is the domain-type model where every process has a domain and every object that a process can access (including other processes) has a type. Domains are not strongly differentiated from types.

It would be possible to create a SE Linux policy with a domain for every combination of UID and GID that is valid for a user shell, given that such combinations are chosen by the sysadmin, who could limit them to some degree. There are about 2^32 possible UIDs and about 2^32 possible GIDs; as every domain is listed in kernel memory we obviously can’t have 2^64 domains, but we could have enough to map a typical system that’s in use. The possible combinations of supplemental groups probably make this impossible for even relatively small systems, but we can use a simpler model that doesn’t emulate supplemental groups.

For files there are more possible combinations because anyone who is a member of a group can create a SETGID directory and let other users create files in it. But in a typical system the number of groups is not much greater than the number of users – the maximum number of groups is typically the number of users plus about 60. So if we had 100 users then the number of combinations of UID and GID would be something like 100*(100+60)=16,000 – it would be possible to have that many domains in a SE Linux policy (but not desirable).

Then all that would be needed is rules specifying that each domain (which is based on a combination of UID and GID) can have certain types of access to certain other types based on them having either the same UID or the same GID.

Such a policy would be large, it would waste a lot of kernel memory, it would need to be regenerated whenever a user is added, and it’s generally something you don’t want to use. No-one has considered implementing such a policy; I merely describe it to illustrate why certain configuration options are not desirable. The rest of this post is about realistic things that you can do with SE Linux policy and how it will be implemented in Debian/Squeeze.

My previous post titled “Is SE Linux Unixish?” addresses this issue at a more conceptual level and also considers MAC vs DAC [1].

The History of Mapping Unix Accounts to SE Linux Access Control

In the early releases of SE Linux (long before it was included in Fedora) every user who could login to a system needed to have their user-name compiled into the policy. The policy specified which roles the user could access, the roles specified which domains could be accessed, and that determined what the user could do. The identity of files on disk was used for two purposes: one was logging (you could see who created a file) and the other was a permission check for the SE Linux patched version of Vixie cron, which would not execute any command on behalf of a user unless the identity on the crontab file in the cron spool matched the identity used to run it. This is analogous to the checks that Vixie cron makes on the Unix UID of the crontab file (some other cron daemons do fewer checks).

Having to recompile the policy source every time you added a user was annoying. So a later development was to allow arbitrary mappings between Unix account names and SE Linux identities (including a default identity), and a still later development was the utility program semanage for mapping particular Unix account and group names to SE Linux identities. This was all done years ago – Fedora Core 5, which was released in 2006, had the modular policy which included these features (apart from mapping Unix groups to SE Linux identities, which is more recent).

Fedora Core 5 also introduced MCS, which comprises a set of categories that a security context may have. The sysadmin configures the set of categories for each account.

A recent development has been a concept named UBAC (User Based Access Control), which basically means that a process running directly on behalf of a regular user (IE with a SE Linux identity that’s not system_u) can only access files that have an identity of system_u or the same identity as the process. This means that you can only access your own files or system files – not the files of other users, which may have inappropriate Unix permissions. So for example if a user with a SE Linux identity of “john” gives their home directory the Unix permission mode 0777, a user with a SE Linux identity of “jane” still can’t access their files. Of course this means that if you have a group of people working together on a project then they probably all need to have the same identity, and in practice you would probably end up with everyone having the same identity. I’ve given up on the idea of using UBAC in Debian.

The Current Plan for Users and SE Linux Access Control in Debian

My plan is to have things work in Squeeze in much the same way as in Lenny.

You have a SE Linux identity assigned to a login session and everything related to it (including cron jobs), based on the Unix account name or possibly the Unix group name (if there are login entries for both the user-name and the group-name then the user-name entry has precedence). The mapping between Unix accounts and SE Linux identities is configured by the sysadmin, and SE Linux identities don’t matter much for the Targeted configuration (which is what most people use).

The identity determines which roles may be used and also places a limit on the MCS categories. The MCS categories are also specified in the login configuration, whose range has to be a sub-set of the range used by the identity record.

So for example the following is the output of a couple of commands run on a Debian/Unstable system. They show that the “test” Unix account is assigned a SE Linux identity of “staff_u” and an MCS range of “s0-s0:c1” (this means it creates files by default at level “s0” and can write to other files at that level, but can also have read/write access to files at the level “s0:c1”). The “staff_u” identity (as shown in the output of “semanage user -l”) can be used with all categories in the set “s0:c0.c1023” (where the dot means the set of categories from c0 to c1023 inclusive), but in the case of the “test” user only one category will be used. The “test” group however (as expressed with “%test”) is given the identity “user_u” and is not permitted to use any categories.

# semanage login -l
Login Name    SELinux User    MLS/MCS Range            
%test         user_u          s0                       
__default__   unconfined_u    s0-s0:c0.c1023           
root          unconfined_u    s0-s0:c0.c1023           
system_u      system_u        s0-s0:c0.c1023           
test          staff_u         s0-s0:c1               

# semanage user -l
             Labeling   MLS/       MLS/                          
SELinux User Prefix     MCS Level  MCS Range         SELinux Roles
root         sysadm     s0         s0-s0:c0.c1023    staff_r sysadm_r system_r
staff_u      staff      s0         s0-s0:c0.c1023    staff_r sysadm_r
sysadm_u     sysadm     s0         s0-s0:c0.c1023    sysadm_r
system_u     user       s0         s0-s0:c0.c1023    system_r
unconfined_u unconfined s0         s0-s0:c0.c1023    system_r unconfined_r
user_u       user       s0         s0                user_r
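For reference, mappings like the ones shown above are created with commands along the following lines (a sketch using the names from the listing, run as root):

# map the Unix account "test" to the identity staff_u with one category
semanage login -a -s staff_u -r s0-s0:c1 test
# map the Unix group "test" (note the % prefix) to user_u with no categories
semanage login -a -s user_u -r s0 %test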

I hope to get the policy written to support multiple user roles in time for the release of Squeeze. If I don’t make it then I will put such a policy on my own web site and try to get it included in an update. The current policy basically works for a Targeted configuration where the users are not confined (apart from MCS).

How MMCS Basically Works

The vast majority of SE Linux users run with the MCS policy rather than the MLS policy. For Debian I have written a modified version of MCS that I call MMCS. MMCS is mandatory (you can’t relabel files to escape it) and it prevents write-down.

If a process has the range s0-s0:c1,c3 then it has full access to files labelled as s0, s0:c1, s0:c3, and s0:c1,c3 – and any files it creates will be labelled as s0.

If a process has the range s0:c1-s0:c1,c3 then it has read-only access to files labelled as s0 and s0:c3 and read-write access to files labelled as s0:c1 and s0:c1,c3. This means that any secret data it accesses that was labelled with category c1 can’t be leaked down to a file that is not labelled with that category.
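Here is a rough demonstration of this (assuming a login session whose range includes category c1, like the “test” account described earlier):

runcon -l s0:c1 bash        # start a shell at the level s0:c1
touch /tmp/project1-notes   # files it creates get the level s0:c1
ls -Z /tmp/project1-notes   # the displayed context ends in s0:c1
echo x >> /tmp/file-at-s0   # appending to a file labelled s0 is a
                            # write-down, so this is denied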

Now MCS currently has no network access controls, so there’s nothing stopping a user from using scp or other network utilities to transfer files. But that’s the way with most usable systems. I don’t think that this is necessarily a problem – the almost total lack of network access controls in a traditional Unix model doesn’t seem to concern most people.

Now to REALLY Answer that Question

SE Linux is a Mandatory Access Control (MAC) system, which makes it inherently different from a Discretionary Access Control (DAC) system such as traditional Unix access controls.

Unix permissions are based on each file having a UID, a GID, and a set of permissions, and each process having a UID, a GID, and a set of supplementary GIDs. If a user runs a setuid or setgid program then the process will have extra privileges. There is also a lot of stuff that most people aren’t aware of, such as real vs effective UIDs, the sticky bit, setgid directories, and lots more – including some quite arbitrary things like making ports <1024 special.

SE Linux is based on every object (process, file, socket, etc) having a single security label which includes an identity, a role, a type, and a sensitivity label (MCS categories or an MLS range). There is no support for an object to have more than one label. The SE Linux equivalent to setuid/setgid files is a file label which triggers a domain transition when the file is executed. This differs from setuid files in that the domain transition is complete (the old privileges can’t be restored) and the transition is generally not to a strict super-set of the access – usually it’s to a different sub-set of the possible access and sometimes to lesser access.

These differences are quite fundamental, so really SE Linux can’t implement traditional Unix access control. What SE Linux is designed to do is provide a second layer of defense and also provide access controls that have different aims from Unix permissions – such as being mandatory and implementing features such as MLS.

Logging in as Root

Martin Meredith wrote a blog post about logging in as root and the people who so strongly advocate against it [1]. The question is whether you should ssh directly to the root account on a remote server or ssh to a non-root account and use sudo or su to gain administrative privileges.

Does sudo/su make your system more secure?

Some years ago the administrator of a SE Linux Play Machine used the same home directory for play users, for administrative logins, and for his own logins – he used newrole to gain administrative access (like su or sudo, but for SE Linux).

His machine was 0wned by one of his friends, who created a shell function named newrole in one of his login scripts that used netcat to send the administrative password out over the net. He didn’t realise that anything was wrong until his friend changed the password and locked him out! This is one example of a system being 0wned in spite of double authentication – of course if he had logged in directly with administrative privs while using the same home directory that the attacker could write to then he would still have lost, but the attacker would have had to do a little more work.

When you login, lots of shell scripts are run on your behalf which have the ability to totally control your environment. If someone has taken over those scripts then they can control everything you see, and when you think you run sudo or something similar they can capture the password. When you ssh in to a server your security relies on the security of the client end-point, the encryption of the ssh protocol (including keeping all keys secure to prevent MITM attacks), and the integrity of all the programs that are executed before you have control of the remote system.

One benefit of using sshd to spawn a session without full privileges is the case where you fear an exploit against sshd and are running SE Linux or some other security system that goes way beyond Unix permissions. It is possible to configure SE Linux in the “strict” configuration to deny administrative rights to any shell that is launched directly by sshd. Someone who cracked sshd would then have to wait until an administrator logged in and ran newrole, and wouldn’t be able to immediately take over the system. If a sshd compromise was suspected then a sysadmin could login through some other method (maybe visit the server and login at the console) to upgrade the sshd. This is however a very unusual scenario, and I suspect that most people who advocate using sudo exclusively don’t use a SE Linux strict configuration.

Does su/sudo improve auditing?

If you have multiple people with root access to one system it can be difficult to determine who did what. If you force everyone to use su or sudo then you will have a record of which Unix account was used to start each root session. Of course if multiple people start root shells via su and leave them running then it can still be difficult to determine which of them made the mistake – but at least that reduces the list of suspects.

If you put “PermitUserEnvironment yes” in /etc/ssh/sshd_config then you have the option of setting environment variables in ssh authorized_keys entries, so you could have an entry such as the following:

environment="ORIG_USER=john@example.com" ssh-rsa AAAAB3Nz[…]/w== john@example.com

Then you could have the .bashrc file (or a similar file for your favorite shell) have code such as the following to log the relevant data to syslogd:
if [ "$SSH_TTY" = "" ]; then
  logger -p auth.info "user $ORIG_USER ran command \"$BASH_EXECUTION_STRING\" as root"
else
  logger -p auth.info "user $ORIG_USER logged in as root on tty $(tty)"
fi

I think that forcing the use of su or sudo might improve the ability to track other sysadmins if the system is not well configured. But it seems obvious that the same level of tracking can be implemented in other ways with a small amount of effort. It took me about 30 minutes to devise the above shell code and configuration options; it should take people who read this blog post about 5 minutes to implement it (or maybe 10 minutes if they use a different shell or have some other combination of Bash configuration that results in non-obvious use of initialisation scripts – EG if you have a .bash_profile file then .bashrc may not be executed).

Once you have the above infrastructure for logging root login sessions it wouldn’t be difficult to run a little script that asks the sysadmin “what is the purpose of your root login” and logs what they type. If several sysadmins are logged in at the same time and one of them describes the purpose of their login as “to reconfigure LDAP” then you know who to talk to if your LDAP server stops working!
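A minimal sketch of such a prompt, building on the $ORIG_USER variable set up above (the wording is mine; it could go at the end of root’s .bashrc for interactive logins):

if [ -n "$SSH_TTY" ]; then
  read -p "What is the purpose of this root login? " PURPOSE
  logger -p auth.info "root login by $ORIG_USER purpose: $PURPOSE"
fi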

Should you run commands with minimum privilege?

It’s generally accepted that running each command with the minimum privilege is a good idea. But if the only reason you login to a server is to do root tasks (restarting daemons, writing to files that are owned by root, etc) then there really isn’t much to be gained by doing so. If you need to use a client for a particular service (EG a web browser to test the functionality of a web server or proxy server) then you can login to a different account for that purpose – the typical sysadmin desktop has a dozen xterms open at once, so using one for root access to do the work and another for non-root access to do the testing is probably a good option.

Can root be used for local access?

Linux Journal has an article about the distribution that used to be known as Lindows (later Linspire), which used root as the default login for desktop use [2]. It suggests using a non-root account because “If someone can trick you into running a program or if a virus somehow runs while you are logged in, that program then has the ability to do anything at all” – of course someone could also trick you into running a program or virus that attempts to run sudo (to see if you enabled it without password checks) and, if that doesn’t work, waits until you run sudo and sniffs the password (using pty interception or X event sniffing). The article does correctly note that you can easily accidentally damage your system as root. Given that the skills of typical Linux desktop users are significantly less than those of typical system administrators, it seems reasonable to assume that certain risks of mistakes which are significant for desktop users aren’t a big deal for skilled sysadmins.

I think that it was a bad decision by the Lindows people to use root for everything, due to the risk of errors. If you make a mistake on a desktop system as non-root then, provided your home directory was backed up recently and you use IMAP or caching IMAP for email access, you probably won’t lose much of note. But if you make a serious mistake as root then the minimum damage is being forced to do a complete reinstall, which is time consuming and annoying even if you have the installation media handy and your Internet connection has enough quota left for the month to complete the process.

Finally there are some services that seek out people who use the root account for desktop use. Debian has some support channels on IRC [3] and I decided to use the root account on my SE Linux Play Machine [4] to see how they react. #debian has banned strings matching root. #linpeople didn’t like me because “Deopped you on channel #linpeople because it is registered with channel services”. #linuxhelp and #help let me in, but nothing seemed to be happening in those channels. Last time I tried this experiment I had a minor debate with someone who repeated a mantra about not using root and didn’t show any interest in reading my explanation of why root:user_r:user_t is safe for IRC.

I can’t imagine what good the #debian people expect to gain from denying people the ability to access that channel with an IRC client that reports itself to be running as root. Doing so precludes the possibility of educating them if you think that they are doing something wrong (such as running a distribution like Lindows/Linspire).

Conclusion

I routinely ssh directly to servers as root. I’ve been doing so for as long as ssh has been around, and I used to telnet to systems as root before that. Logging in to a server as root without encryption is in most cases a really bad idea, but before ssh was invented it was the only option available.

For the vast majority of server deployments I think that there is no good reason to avoid sshing directly as root.

Brother MFC-9120CN Color LASER Printer

I have just bought a Brother MFC-9120CN multi-function color LED LASER printer for a relative. It is a replacement for the Lexmark printer which turned out not to support Linux properly [1].

This printer cost about $545. I bought it from OfficeWorks [2] under their price-matching deal: if you find a better price anywhere else they will beat it by 5%. I went to StaticIce.com.au, found the cheapest online store in Australia that sold the printer, and took the URL of the online store to OfficeWorks on a USB stick. After they verified the price they sold me the printer for 5% less than the online cost plus the delivery cost, which saved my relative a little more than $50.

Craig Sanders convinced me to choose a LASER printer because the toner doesn’t have a short shelf-life, unlike the ink for ink-jet printers. My parents have been using a LASER printer for more than 12 years and each toner cartridge lasts at least 4 years, which is a much better result than all the ink-jet printers I’ve supported, which tend to regularly need expensive new ink. I guess I’ll find out over the next few years whether this printer lives up to the general reputation of LASER printers in this regard.

LED printers use LEDs as the light source instead of a laser, which apparently makes them more reliable and efficient but means that they tend to have a lower resolution, and often the horizontal and vertical resolutions are not equal. The printer I got is listed as 600*2400dpi resolution, but that might end up giving much the same result as a 600*600dpi printer. 600*600dpi should be good enough for a long time anyway. A4 paper (the standard size for office paper in Australia) is 210*297mm, which is about 8.27*11.69 inches or 4961*7015 pixels at 600dpi. Even if we assume that 10% of the width and height is wasted on margins, it would take a 28 megapixel camera to produce a picture that can actually use 600dpi for the most common case where high quality is needed for home use – printing a single photo on an A4 sheet.

The printer ships with 64M of RAM, which was not enough to print some pictures that I sent it. It has a slot for a 144 pin SO-DIMM (laptop RAM) for memory expansion and can take one SO-DIMM of up to 512M capacity that is at least PC-100. I’ve got a spare 256M PC-133 memory module that I will install in it; hopefully that will be enough to print pictures. Buying PC-100/PC-133 RAM nowadays probably isn’t going to be easy, particularly not 512M modules, as many of the laptops which used PC-100/PC-133 RAM didn’t support that capacity (I believe that my ancient Thinkpads which used such memory didn’t support 512M modules).

The requirement was for a printer that could print photos in reasonable quality, make photo-copies, and ideally work as a scanner. I got CUPS to talk to it without much effort: I just installed a PPD file from the Brother Solutions Center web site [3] and it just worked. It occurred to me later that I should have tried configuring it before installing the PPD file – maybe the version of CUPS in Debian/Squeeze supports the Brother printer natively.
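For anyone doing the same thing from the command line, the CUPS configuration can be done along the following lines – the printer name, device URI, and PPD path here are assumptions for illustration rather than the exact values I used:

lpadmin -p MFC9120CN -E -v socket://192.168.0.10:9100 -P /path/to/MFC9120CN.ppd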

So the current state of the printer is that it prints documents very well, it doesn’t print photos yet (but that should be solved when I add more RAM), and I just have to try to get scanning to work. Everyone is happy!

The only down-side is that the printer is huge. It takes a lot of desk space to run it (they will need a new desk in their computer room), and when it’s in its box it’s much larger than most things that you would normally transport by car.

Update: I’ve installed a 256M PC-133 SO-DIMM and can now print full color pictures. Thanks to Rodney Brown for giving me some Thinkpad parts which included RAM.

UBAC and SE Linux in Debian

A recent development in SE Linux policy is the concept of UBAC (User Based Access Control), which prevents SE Linux users (identities) from accessing each other’s files.

SE Linux user identities may map 1:1 to Unix users (as was required in the early versions of SE Linux), you might have unique identities for special users and a default identity for all the other users, or you might have an identity per group – or use some other method of assigning identities to accounts.

The UBAC constraints in the upstream reference policy prevent a process with a SE Linux identity other than system_u from accessing any files with an identity other than system_u or its own. So basically any regular user can access system files but not the files of other users, while system processes (daemons) can access files from all users. Of course this is just one layer of protection: while the UBAC constraint doesn’t prevent a user from accessing any system files, the domain-type access controls may do so.

If you used a unique SE Linux identity for each Unix account then UBAC would prevent any user from accessing a file created by another user.

For my current policy that I am considering uploading to Debian/Unstable I have allowed the identity unconfined_u to access files owned by all identities. This makes unconfined_u an identity for administrators; if I proceed on this path then I will grant the same rights to sysadm_u.

UBAC was not enabled in Fedora last time I checked, so I’m wondering whether there is any point in including it – I don’t feel obliged to copy everything that Fedora does, but there is some benefit in maintaining compatibility across distributions.

For protecting users from each other it seems that MCS (which is mandatory in the Debian policy) is adequate. MCS allows a much finer level of access control. For example I could assign categories c0 to c10 to a set of different projects and allow the person who manages all the projects to be assigned all those categories when they login. That user could then use the command “runcon -l s0:c1 bash” to start a shell for the purpose of managing files from project 1, and any file or process created by that command would have the category c1 and be prevented from writing to a file without that category.

Of course the down-side to removing UBAC is that, since RBAC was removed, there is no other way of separating SE Linux users; while MCS is good for what it does, it wasn’t designed for the purpose of isolating different types of user. So if I remove UBAC I’ll really want to get RBAC reinstated before Squeeze is released.

Regardless of this I will need to get RBAC working on Squeeze eventually anyway. I’ve had a SE Linux Play Machine running with every release of SE Linux for the last 8 years and I don’t plan to stop now.

Links May 2010

AdRevenge is an interesting concept: paying for Google AdSense adverts about how companies suck [1]. If a suitably large group of people pay to warn you about a company then it’s a good signal that the company is actually doing the wrong thing.

A guest post by Mili on Charles Stross’ blog has an interesting analysis of the economics of “Intellectual Property” and concludes that content is a public good [2].

New Age Terrorists Develop Homeopathic Bomb [3] is an amusing satire of medical fraud and security theatre. The site has a lot of other good satire too.

Mark Shuttleworth wrote an interesting post about new window management changes that will soon go into Ubuntu [4]. He points out that the bottom status bar in applications is a throw-back to Windows 3.1 and notes that a large part of the incentive for removing it (and using the title-bar for the status) is the work on the Netbook version of Ubuntu. This is really ironic given that the resolution of current Netbooks is quite similar to that of desktop systems that were current when Windows 3.1 came out.

Omar Ahmad gave an insightful TED talk about the benefit of using a pen and paper to send a letter to a politician [5].

Sebastian Wernicke gave an amusing and informative TED talk about how to give a good TED talk [6]. His talk gives some useful ideas for public speaking that are worth considering.

Catherine Mohr gave a brief and interesting TED talk about how to build an energy efficient house with low embodied energy [7]. Her blog at www.301monroe.com has the details.

Stephen Wolfram (of Mathematica fame) gave an interesting TED talk [8]. He covers a lot of interesting things that can be done with computers, primarily based on the Wolfram Alpha [9] platform which allows natural language queries of a large data set. He also talks about the search for a Theory of Everything.

Esther Duflo gave an interesting TED talk about using social experiments to fight poverty [10]. She describes how scientific tests have been used to determine the effectiveness of various ways of preventing disease and encouraging education in developing countries. One example of the effectiveness of such research is the DeWormTheWorld.org project which was founded after it was discovered that treating intestinal worms was the most cost effective way of getting African children to spend longer at school.

David L. Rosenhan wrote an interesting research paper “On Being Sane In Insane Places” about pseudo-patients admitted to psychiatric hospitals [11]. It seems that psychiatric staff were totally unable to recognise a sane person who was admitted even though other patients could do so. It also documents how psychiatric patients were treated as sub-human. One would hope that things had improved since 1973, but it seems likely that many modern psychiatric hospitals are as bad as was typical in 1973. It’s also worth considering the issue of the treatment in society of people who have been diagnosed with a mental illness, it seems likely that the way people are treated in the community would have similar bad results to that which was documented for treatment in psychiatric hospitals – even the sanest people will act strangely if treated in an insane manner! Also it seems to me that there could be potential for using a panel of patients assembled via the Delphi Method as part of the psychiatric assessment process as it has been demonstrated that patients can sometimes assess other patients more accurately than psychiatrists!

Simon Sinek gave an inspiring TED talk about how great leaders inspire action [12]. Of course the ideas he describes don’t just apply to great leaders, they should apply to ordinary people who just want to convince others to adopt their ideas.

Stephen Collins wrote a good article summarising the main reasons why the proposed great firewall of Australia is a bad idea [13].

Lenore Skenazy, who is famous for letting her 9yo son catch the metro alone in broad daylight on a pre-planned route home, has created a web site about Free Range Kids [14]. She seems to be starting a movement to oppose Helicopter Parenting and has already written a book about her ideas for parenting. The incidence of crime has been steadily decreasing, while the ability of the police to apprehend criminals and recover abducted children has been increasing. There’s no reason for children to be prevented from doing most of the things that children did when I was young!

GM Food and Vaccines

Michael Specter gave an interesting TED talk about the dangers of science-denial [1]. Most of his talk is about the people who oppose vaccines, such as the former Playboy model Jenny McCarthy who thinks that she knows more about medicine than people who do medical research. He notes that a doctor who advocates vaccination has been receiving threats from the anti-vaccine lobby, including threats to his children. A good new development is that Andrew Wakefield (the British ex-doctor behind the discredited research linking Autism and Vaccination) has been barred from practicing by Britain’s General Medical Council [2].

Michael also mentions the opposition to GM food, which has the potential to save many lives in developing countries that have food shortages. This convinced me to reduce my opposition to GM food: it’s really not GM food that I’m opposed to but the poor testing, the bad features (such as the Terminator Gene), and the Intellectual Property controls which allow GM companies to sue farmers who accidentally have GM crops grow on their land due to wind-borne seeds. It’s also a pity that there is no work being done on GM versions of food crops which are only used for feeding poor people. Every GM plant in production is used to provide food for rich people and is essentially a way for farmers in first-world countries to make more money. But GM versions of Cassava (with less of the toxic chemicals, among other things) and Sorghum would improve the situation of many poor people.

One interesting related development is that Craig Venter has just announced the creation of the first synthetic life [3]. This technical development could lead to dramatic changes in the production of basic foods, such as algae that produce proteins with the ideal mixture of all the essential amino acids needed by humans as well as the semi-essential ones that children need. While feeding pond slime to children isn’t going to be glamorous, it would be a lot better than the current situation where a significant number of children in developing countries have their physical and mental development stunted due to malnutrition. Craig mentions the possibility of using his research to develop vaccines much faster, perhaps including the possibility of vaccinating people against fast-evolving viruses such as the common cold!

A School IP Project

The music industry seems fairly aggressive in taking legal action against children when they break the licence terms of copyright material. I think it would be good to teach children how the IP industry really works.

It seems to me that you could have a school project that involves an entire year level (maybe 100 students, depending on the size of the school), each of whom can produce copyright material (everything they do in art and English classes would be suitable as a start). Then they could register their work (make digital photographs and store them in a school database that records the entry date) and sue anyone who infringes their work.

Every student would receive licence fees for their work, but if they are sued for infringement they would have to pay all revenue plus damages. Other students could work as lawyers and take a portion of the proceeds of any successful law suit, and finally some students could run recording companies and spend their time hunting for infringing work for the purpose of launching legal action.

In terms of the licence fees paid, this could be done by just allocating a fixed value per item to each student as a way to get the system running, without regard to the fact that some students just aren’t able to create good art. A large portion of the value could however come from what other students choose to spend: every student gets to “spend” $10 per week on art and they can choose from the database what they want to “buy” copies of. The most popular art could then be printed on every notice-board in the school as an incentive for students to vote with their play-money for something that they don’t mind seeing all the time. It’s obvious that popularity would be a significant factor in the success of some artists, but that’s OK – a casual review of the chart-topping music reveals that it’s quite obviously not created by the world’s best musicians, so it seems that rewarding popularity rather than skill just adds some realism.

One possibility would be to allow the students to elect representatives to create their own IP laws. It would be interesting to see how the IP laws voted on by representatives of the students (who are all in some way involved with the process of creating, buying, selling, and distributing artistic products) differ from those which are foisted upon us in the real world. Another interesting possibility would be to allow corruption in the election process and observe how the results differ from year levels where corruption is not permitted. I expect that teaching children how political corruption works would be a little controversial, but it’s nothing that they can’t learn from reading news reports about what the “entertainment” industry is really doing. Really, being a corrupt politician for a school project shouldn’t be as bad as playing a murderer in a school play!

Naturally this couldn’t be done with real money, but giving higher marks at the end of the year to the students who accumulate the most play money would be quite reasonable. I don’t think that there would be a problem with giving higher marks to a student who succeeded through political corruption – as long as they gave a good written report of how they did so and the implications for society.

Please note that I am not suggesting this for a subject that is used for university entrance, I think it would be a good project for years 8-10 which in Australia have no relevance to university entrance. So the marks would just be letters on a bit of paper that might make parents happy or unhappy and otherwise mean nothing.

I anticipate responses from people who believe that educating children about how the world works is not appropriate for a school. Such people are never going to convince me, but if anyone thinks that they can make a good point to convince some of the readers then I encourage them to write it up in the comments section if it’s short or on their own blog if it’s longer.

Google Chrome and SE Linux

[Image: Google Chrome saying “Aw, Snap!” when it crashes]

[107108.433300] chrome[12262]: segfault at bbadbeef ip 0000000000fbea18 sp 00007fffcf348100 error 6 in chrome[400000+27ad000]

When I first tried running the Google Chrome web browser [1] on SE Linux it recursively displayed the error message in the above picture: it first displayed the error and then displayed another error while trying to display a web page describing the first error. The kernel message log included messages such as the one above; it seems that some pointers are initialised to the value 0xbbadbeef to make debugging easier and more amusing.

V8 error: V8 is no longer usable (v8::V8::AddMessageListener())

When I ran Chrome from the command-line it gave the above error message (which was presumably also somewhere in the 8MB ~/.xsession-errors file generated by a few hours of running a KDE4 session).

type=AVC msg=audit(1274070733.648:145): avc: denied { execmem } for pid=12833 comm="chrome" scontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tclass=process
type=SYSCALL msg=audit(1274070733.648:145): arch=c000003e syscall=9 success=no exit=-131938567594024 a0=7fd863b41000 a1=40000 a2=7 a3=32 items=0 ppid=1 pid=12833 auid=4294967295 uid=1001 gid=1001 euid=1001 suid=1001 fsuid=1001 egid=1001 sgid=1001 fsgid=1001 tty=pts4 ses=4294967295 comm="chrome" exe="/opt/google/chrome/chrome" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)
type=ANOM_ABEND msg=audit(1274070733.648:146): auid=4294967295 uid=1001 gid=1001 ses=4294967295 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 pid=12833 comm="chrome" sig=11

V8 is the Google JavaScript engine which compiles JavaScript code and thus apparently needs read/write/execute access to memory [2]. In /var/log/audit/audit.log I saw the above messages (which would have been in the kernel message log as displayed by dmesg if I didn’t have auditd running). The most relevant parts are that execmem access was requested and that it was requested by system call 9. From linux/x86_64/syscallent.h in the strace source I discovered that system call 9 on the AMD64 architecture is sys_mmap. Does anyone know a good way to discover which system call has a given number on a particular architecture without reading the strace source code?
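One way that seems to work is to grep the kernel headers for the architecture in question (this assumes the libc development headers are installed, and the path may vary by distribution):

# on an AMD64 system:
grep -w 9 /usr/include/asm/unistd_64.h
# prints: #define __NR_mmap 9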

Attempts to strace the Google Chrome process failed: Chrome gave the error “Failed to move to new PID namespace” after clone() failed. clone() was passed the flag 0x20000000, which according to /usr/include/linux/sched.h is CLONE_NEWPID. It seems that programs which create a new PID namespace (as Google Chrome does) can’t be straced, as the clone() call fails. It’s a pity that Chrome doesn’t have an option to run without using this feature; losing the ability to strace it really decreases my ability to find and report bugs in the program – I’m sure that the Google developers want people like me to be able to help them find bugs in their code without undue effort.

Anyway, the solution that allows Chrome to run on the SE Linux Targeted configuration is to run the command “chcon -t unconfined_execmem_exec_t /opt/google/chrome/chrome”, which causes the Chrome browser to run in the domain unconfined_execmem_t, which is allowed to do such things. Of course we don’t really want Chrome processes to run unconfined; I think that the idea I had in 2008 for running Chrome processes in different SE Linux contexts is viable and should be implemented [3].
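Note that a chcon change won’t survive a full filesystem relabel; to make it persistent something like the following should work:

semanage fcontext -a -t unconfined_execmem_exec_t /opt/google/chrome/chrome
restorecon -v /opt/google/chrome/chrome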

As a general rule, if you are running a program from the command-line on SE Linux with the Targeted configuration (the default and most common configuration), then any time you see an execmem failure logged to the kernel message log or the audit subsystem you can change the context of the program to unconfined_execmem_exec_t to make the problem go away. Note that this isn’t necessarily a good thing to do; sometimes it’s best to change the program to not require such access. But it seems that in this case the design of V8 requires write/execute memory access to pre-compile JavaScript code.