Is the PC Dying?

I just read an interesting article about the dispute between Microsoft and Apple about types of PC [1]. Steve Jobs predicted a switch from desktop PCs to portable devices, while Steve Ballmer of Microsoft claimed that the iPad is just a new PC.

Defining a PC

I think that the defining characteristic of the IBM Compatible PC was its open architecture. Right from the start the PC could have its hardware expanded by adding new circuit boards into slots on the motherboard (similar to other personal computer systems of that era such as the Apple II and the S-100 bus). The deal with IBM included Intel sharing all its CPU designs with other manufacturers such as NEC and AMD from the 8086 until the mid-90’s. AMD specialised in chips that were close copies of Intel chips at low prices and higher clock rates while NEC added new instructions.

Compaq started the PC clone market as well as the laptop market. System software for the IBM compatible PCs was primarily available from IBM and Microsoft in the early days, along with less popular variants such as CP/M-86, Novell Netware and others. In the late 80’s there was OS/2 as an alternate OS and Windows as one of several optional GUI environments to run on top of MS-DOS or PC-DOS. In the mid 90’s PCs were used for running protected mode OSs such as Linux and Windows NT.

Now if we look at a system such as a Netbook then it clearly misses some of the defining characteristics of the desktop PC. I can’t upgrade a Netbook in any meaningful way – changing a storage device or adding more RAM does not compare to adding an ISA/MCA/EISA/VL-Bus/PCI/PCIe expansion card. With my EeePC 701 I don’t even have an option of replacing the storage as it is soldered to the motherboard! A laptop allows me to add a PCMCIA or PC-Card device to expand it, but with a maximum of two cards and a high price this isn’t a great option.

What is Best for Home Users?

For a while now my parents have been using 3G net access for their home Internet use [2]. So it seems that a laptop provides greater benefits for their use now than it did previously when they used Cable and ADSL net access. My parents have been considering getting a new monitor (1920*1080 resolution monitors are getting insanely cheap nowadays) and driving such a monitor effectively might require a more capable PC. I recently bought myself a nice refurbished Thinkpad for $796 [3]; it seems likely that I could find a refurbished Thinkpad at auction which is a little older and slower for a lower price, and even buying an old T41p would be a reasonable option. This would give my parents not only the option of using the Internet when on holidays, but also in a different part of their house when they are at home.

The Apple iPad would probably be quite a reasonable Internet platform for my parents if it wasn’t for the fact that it uses DRM. While it’s not a great platform for writing, my parents probably don’t do enough writing for that to be a huge problem for them. So I might look for a less restrictive tablet platform for my parents. At the moment the best resolution for a tablet seems to be 1024*768, but I expect that some tablets (maybe with a hybrid tablet/laptop design like the Always Innovating Smartbook [4]) with a higher resolution will be released soon. I hope that the iPad and other closed devices don’t get any serious market share, but it seems likely that OSs such as Android which are only slightly more open will have a significant market share.

Ultra-Mobile Design vs PC Design

One significant problem with ultra-mobile devices is that they make significant engineering trade-offs to get the small size. For a desktop system there are lots of ways of doing things inefficiently, running the AMD64 or i386 architecture which is wasteful of energy and having lots of unused space inside the box in case you decide to upgrade it. But for a laptop there are few opportunities for being inefficient, and for a tablet or smart phone everything has to be optimised. When the optimisation of a device starts by choosing a CPU that’s unlike most other systems (note that there is a significant range of ARM CPUs that are not fully compatible with each other) it makes it very difficult to produce free software to run it. I can salvage a desktop PC from a rubbish bin and run Linux on it (and I’ve done that many times), but I wouldn’t even bother trying to run Linux on an old mobile phone.

It seems that in the near future my parents (and many other people with similar needs) will be best suited by having a limited device such as a tablet that stores all data on the Internet and not having anything that greatly resembles a PC. In many ways it would be easier for me to support my parents by storing their data in the cloud and then automatically backing it up to removable SATA disks than with my current situation of supporting a fully capable PC and backing it up to a USB device whenever I visit them.

I’m also considering what to do for some relatives who are about to go on a holiday in Europe; they want to be able to send email etc. It might not be possible just yet, but it seems like an ideal way of doing this would be to provide them with something like an iPad that they can use with a local 3G SIM for the country that they stay in, so that they could upload all their best photos to some server that I can back up and send email to everyone they know. An iPad isn’t good for this now as you don’t want to go on holidays in another country while carrying something that is really desirable to thieves.

Ultra Mobile Devices are Killing PCs

It seems to me that Google Android and the Apple iPad/iPhone OS are taking over significant parts of the PC market. The people who are doing traditional PC things are increasingly using Laptops and Netbooks, and the number of people who get the freedom that a PC user did in the 80’s and 90’s is decreasing rapidly.

I predict that by 2012 the majority of Linux systems will be running Google Android on hardware that doesn’t easily allow upgrading to more open software. At the moment probably the majority of Linux systems are wireless routers and other embedded devices that people don’t generally think about. But when iPad type devices running a locked-down Linux installation start replacing Ubuntu and Fedora desktop systems people will take notice.

I don’t think that the death of the PC platform as we know it will kill Linux, but it certainly won’t do us any good. If there were smarter people at Microsoft then they would be trying to work with the Linux community on developing innovative new ways of using desktop PCs. Of all the attempts that Microsoft has made to leave the PC platform the only success has been the X-Box which is apparently doing well.

Tablet devices such as the iPad could work really well in a corporate environment (where MS makes most of its money). On many occasions I’ve been in a meeting and we had to adjourn due to someone needing to go to their desk to look something up. If everyone had an iPad type device at their desk that used a wired network when it was available and encrypted wireless otherwise then for a meeting everyone could take their tablet without its keyboard and be able to consult all the usual sources of data without any interruption.

Could a high-resolution version of the iPad kill MS-Windows in the corporate environment?

Bugs in Google Chrome

I’m currently running google-chrome-beta version 5.0.375.55-r47796 on Debian/Unstable. It’s the fastest web browser I’ve used in recent times – it’s the first time that I’ve run a browser that feels faster than my recollection of running IBM WebExplorer for OS/2 on a 486-66 system! It has a good feature set, and it’s the only browser I’ve used that in a typical configuration will make proper use of the screen space by not having a permanent status bar at the bottom of the screen and by having tabs in the title-bar. But it’s not perfect; here is a list of some bugs:

[Partial screen captures: the Chrome titlebar when maximised, the Chrome titlebar when not maximised, and the right side of the Chrome titlebar when not maximised.]

Above are three partial screen captures of Chrome, the first is when maximised and the second is when the window isn’t maximised. Notice the extra vertical space above the tab in the title bar in the second picture. The third picture shows the right side of the titlebar and you can see a space below the three buttons where you can drag the window around – no matter how many tabs you open that space below the three buttons is reserved. If the Chrome developers had removed the extra vertical space in the titlebar and reserved slightly more horizontal space then you would be able to drag the window around. While an anonymous commentator made a good point that the extra vertical space can be used to drag the window around when the maximum number of tabs are open, it seems that there are other ways of achieving that goal without wasting ~18 vertical pixels. Doing so would be a lot less ugly than what they did with finding text in the page.

When I visit a web site that uses cookies from an Incognito Window (which means that cookies etc aren’t stored) there is no option to say “allow all cookies”. This is really annoying when you get to a web site such as the IBM one which stores 5 cookies when you first load the page and then at least one new cookie write for every page you visit. Given that cookie data will be discarded as soon as the window is closed it seems like a good idea to have an option to allow all cookies for Incognito Windows even if all cookies aren’t allowed for regular windows. Blocking all cookies would be OK too, anything but having to click on Block or Allow multiple times for each page load.

The J and K keys don’t work in a view of Venus version 0~bzr95-2+lenny1 (the latest version in Debian/Lenny).

I once had a situation where I entered a ‘.’ at the end of a domain name (which is quite legal – there is always an implied dot) and Chrome then wouldn’t take note of my request to accept all cookies from the domain. I haven’t been able to reproduce that bug, but I have noticed that it stores the settings for whether cookies should be stored separately for domains that end in ‘.’, so “www.cnn.com.” is different from “www.cnn.com” . Iceweasel seems to just quietly strip the trailing dot. Of course this is better than Konqueror which won’t even load a URL with a dot at the end.

Chrome can be relied on to restore all windows rapidly after a crash, unlike Iceweasel which restores them at its normal load speed (slow) and Konqueror which doesn’t tend to restore windows. This is good as it does seem to crash regularly. In a response to my post about Chrome and SE Linux [1] Ben Hutchings pointed out that the --no-sandbox option to chrome disables the creation of a PID namespace and therefore makes debugging a lot easier; if I get a lot of spare time I’ll try and track down some of the Chrome SEGVs.

Chrome’s JavaScript engine is either buggy, or it lacks the bugs that some sites expect from IE. When using the Dell Australia web site I can’t always order all options. When trying to order a Dell R300 1RU server with hot-plug disks in a hardware RAID array it seems impossible to get all the necessary jumpers – which is a precondition to completing the order – fortunately I only wanted to blog about how cheap Dell servers are so I don’t actually need to complete an order. Dell’s web site is also difficult in Iceweasel on occasion, so it’s obviously more demanding than most sites. It might be a good test site for people who work on browsers as it’s both demanding and important.

When I select a URL to be opened in a new window (or when JavaScript does this) then the new tab is opened with about:blank listed as the URL. If the URL is for a PDF file (or something else that is to be downloaded) then the URL entry field is never updated to give the real URL. I believe that this is wrong, either the new tab shouldn’t be opened or it should have the correct URL on display – there is no benefit with a tab open to show nothing but about:blank in the URL entry field. Also if a URL takes some time to load then it may keep about:blank in the URL entry field for some time. This means that if you use the middle mouse button to rapidly open a few new tabs you won’t be able to see what is to be loaded in each one. Sometimes I have several tabs loading and I’m happy to close some unimportant ones if they are slow but some are worth waiting for.

Overall that’s not too bad. I can use Dell’s site in Iceweasel, so the only critical bug is the cookies issue in Incognito Windows which makes the Incognito feature almost unusable for some sites.

Securely Killing Processes

Joey Hess wrote on Debian-devel about the problem of init scripts not doing adequate checks before using the data from a PID file under /var/run to determine which process to kill [1]. Unfortunately that still doesn’t quite solve the problem, there is still the issue of a race condition causing a process to die while you are doing the checks and then be replaced by another process.

Below I have included the source code to a little program that will repeatedly fork() until it finds a particular PID and then have its child call sleep(). So you can run a command such as “kill -9 1234 ; ./a.out 1234” and then have this program take over the PID 1234.

From testing with this it seems that when you have a shell with its current working directory as /proc/1234 then once process 1234 is killed the current directory is empty, and “ls -l” returns 0 entries. This isn’t surprising, it’s the standard Unix behavior when the working directory is removed.

So if a program (or even a shell script) changes directory to /proc/1234 it can then verify all attributes of the process (its CWD, its root directory, the executable used to run it, its UID, GID, supplemental groups, its SE Linux context, and lots of other things) all atomically. The only possibility for confusion is that a process might execute a SETUID or SETGID program or a program that has a label which triggers a SE Linux domain transition. It might also change some attributes without executing a new process, for example by using the setuid(), setgid(), setgroups(), or other similar system calls. For the purposes of killing a process I don’t think that the possibility of it changing its own attributes or executing a new program are serious problems; if you want to kill a process then you probably want to kill it after it has called setuid() etc.

It seems to me that it would be useful to have a file named /proc/PID/signal (or something similar) to which you could write a number and then have the kernel send the signal in question to that process. So the commands “kill -9 1234” and “echo 9 > /proc/1234/signal” would give the same result. But you could run the command “cd /proc/1234” and then some time later you could run “echo 9 > signal” and know that you would kill the original process 1234 if it was still running and not some other process that replaced it.
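
To make this concrete, here is a rough shell sketch of the idea (PID 1234 is a hypothetical example, and the signal file is the proposed feature – it does not exist in current kernels):

cd /proc/1234 || exit 1     # fails if the process has already exited
readlink exe                # the executable the process is running
readlink cwd                # its current working directory
cat attr/current            # its SE Linux context
grep '^[UG]id:' status      # its UID and GID lines
# with the proposed feature this would signal the original process even if
# PID 1234 had since been reused by something else:
echo 9 > signal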

What do you think? Is this worthy of adding a new feature to the proc filesystem?

The Source

Run the following program with a single parameter which is the PID that you want it to take. It will keep looping through the PID space until it gets the one you specify; if the one you specify isn’t available (e.g. if you give it “1” as a parameter) then it will run forever. It will take some time to get a result on a slower system. On my P3-900MHz test system it took up to 72 seconds to get a result.

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
  if(argc != 2)
  {
    printf("Specify the desired PID on the command-line\n");
    return 1;
  }
  pid_t wanted = (pid_t)atoi(argv[1]);
  int rc;
  int status;
  /* keep forking until a child is created with the wanted PID; the parent
   * reaps each unwanted child and tries again */
  while((rc = fork()) != wanted && rc != 0)
  {
    if(rc == -1)
    {
      printf("fork error\n");
      return 1;
    }
    if(wait(&status) == -1)
      printf("wait error\n");
  }
  if(rc == 0)
  {
    /* child: if we got the wanted PID then hold it for a while, otherwise
     * exit immediately so the parent can try again */
    if(getpid() == wanted)
    {
      printf("Got pid %d, sleeping\n", wanted);
      sleep(200);
    }
    return 0;
  }
  /* parent: the last child has the wanted PID, wait for it to finish */
  wait(&status);
  return 0;
}

Can SE Linux Implement Traditional Unix Users and Groups?

I was asked by email whether SE Linux could implement traditional Unix users and groups.

The Strictly Literal Answer to that Question

The core of the SE Linux access control is the domain-type model where every process has a domain and every object that a process can access (including other processes) has a type. Domains are not strongly differentiated from types.

It would be possible to create a SE Linux policy that created a domain for every combination of UID and GID that is valid for a user shell given that such combinations are chosen by the sysadmin who could limit them to some degree. There are about 2^32 possible UIDs and about 2^32 possible GIDs, as every domain is listed in kernel memory we obviously can’t have 2^64 domains, but we could have enough to map a typical system that’s in use. Of course the possible combinations of supplemental groups probably makes this impossible for even relatively small systems, but we can use a simpler model that doesn’t emulate supplemental groups.

For files there are more possible combinations because anyone who is a member of a group can create a SETGID directory and let other users create files in it. But in a typical system the number of groups is not much greater than the number of users – the maximum number of groups is typically the number of users plus about 60. So if we had 100 users then the number of combinations of UID and GID would be something like 100*(100+60)=16,000 – it should be possible to have that many domains in a SE Linux policy (but not desirable).

Then all that would be needed is rules specifying that each domain (which is based on a combination of UID and GID) can have certain types of access to certain other types based on them having either the same UID or the same GID.

Such a policy would be large, it would waste a lot of kernel memory, it would need to be regenerated whenever a user is added, and it’s generally something you don’t want to use. No-one has considered implementing such a policy, I merely describe it to illustrate why certain configuration options are not desirable. The rest of this post is about realistic things that you can do with SE Linux policy and how it will be implemented in Debian/Squeeze.

My previous post titled “Is SE Linux Unixish?” addresses this issue at a more conceptual level and also considers MAC vs DAC [1].

The History of Mapping Unix Accounts to SE Linux Access Control

In the early releases of SE Linux (long before it was included in Fedora) every user who could login to a system needed to have their user-name compiled into the policy. The policy specified which roles the user could access, the roles specified which domains could be accessed, and therefore what the user could do. The identity of files on disk was used for two purposes: one was logging (you could see who created a file) and the other was a permission check for the SE Linux patched version of Vixie cron, which would not execute any command on behalf of a user unless the identity on the crontab file in the cron spool matched the identity used to run it – this is analogous to the checks that Vixie cron makes on the Unix UID of the crontab file (some other cron daemons do fewer checks).

Having to recompile policy source every time you added a user was annoying. So a later development was to allow arbitrary mappings between Unix account names and SE Linux Identities (which included a default identity) and another later development was to have a utility program semanage to map particular Unix account and group names to SE Linux identities. This was all done years ago. Fedora Core 5 which was released in 2006 had the modular policy which included these features (apart from mapping Unix groups to SE Linux identities which was more recent).

Fedora Core 5 also introduced MCS which was comprised of a set of categories that a security context may have. The sysadmin would configure the set of categories that each account would have.

A recent development has been a concept named UBAC (User Based Access Control) which basically means that a process running directly on behalf of a regular user (i.e. with a SE Linux identity that’s not system_u) can only access files that have an identity of system_u or which have the same identity as the process. This means that you can only access your own files or system files – not files of other users which may have inappropriate Unix permissions. So for example if a user with a SE Linux identity of “john” gives their home directory the Unix permission mode of 0777 then a user with a SE Linux identity of “jane” can’t access their files. Of course this means that if you have a group of people working together on a project then they probably need to all have the same identity, and in practice you would probably end up with everyone having the same identity. I’ve given up on the idea of using UBAC in Debian.

The Current Plan for Users and SE Linux Access Control in Debian

My plan is to have things work in Squeeze in much the same way as in Lenny.

You have a SE Linux identity assigned to a login session and everything related to it (including cron jobs) based on the Unix account name or possibly the Unix group name (if there are login entries for both the user-name and the group-name then the user-name entry has precedence). The mapping between Unix accounts and SE Linux identities is configured by the sysadmin and SE Linux identities don’t matter much for the Targeted configuration (which is what most people use).

The identity determines which roles may be used and also limits the MCS categories. The MCS categories are also specified in the login configuration, and they have to be a sub-set of the categories permitted by the identity record.

So for example the following is the output of a couple of commands run on a Debian/Unstable system. They show that the “test” Unix account is assigned a SE Linux identity of “staff_u” and an MCS range of “s0-s0:c1” (this means it creates files by default at level “s0” and can also write to other files at that level, but can also have read/write access to files at the level “s0:c1”). The “staff_u” identity (as shown in the output of “semanage user -l”) can be used with all categories in the set “s0:c0.c1023” (where the dot means the set of categories from c0 to c1023 inclusive), but in the case of the “test” user only one category will be used. The “test” group however (as expressed with “%test”) is given the identity “user_u” and is not permitted to use any categories.

# semanage login -l
Login Name    SELinux User    MLS/MCS Range            
%test         user_u          s0                       
__default__   unconfined_u    s0-s0:c0.c1023           
root          unconfined_u    s0-s0:c0.c1023           
system_u      system_u        s0-s0:c0.c1023           
test          staff_u         s0-s0:c1               

# semanage user -l
             Labeling   MLS/       MLS/                          
SELinux User Prefix     MCS Level  MCS Range         SELinux Roles
root         sysadm     s0         s0-s0:c0.c1023    staff_r sysadm_r system_r
staff_u      staff      s0         s0-s0:c0.c1023    staff_r sysadm_r
sysadm_u     sysadm     s0         s0-s0:c0.c1023    sysadm_r
system_u     user       s0         s0-s0:c0.c1023    system_r
unconfined_u unconfined s0         s0-s0:c0.c1023    system_r unconfined_r
user_u       user       s0         s0                user_r
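
For reference, mappings like the “test” and “%test” entries above could be created with semanage; this is a sketch based on the example accounts shown in the output:

semanage login -a -s staff_u -r s0-s0:c1 test    # map the "test" account to staff_u
semanage login -a -s user_u -r s0 %test          # map the "test" group to user_u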

I hope to get the policy written to support multiple user roles in time for the release of Squeeze. If I don’t make it then I will put such a policy on my own web site and try to get it included in an update. The policy currently basically works for a Targeted configuration where the users are not confined (apart from MCS).

How MMCS Basically Works

The vast majority of SE Linux users run with the MCS policy rather than the MLS policy. For Debian I have written a modified version of MCS that I call MMCS. MMCS is mandatory (you can’t relabel files to escape it) and it prevents write-down.

If a process has the range s0-s0:c1,c3 then it has full access to files labelled as s0, s0:c1, s0:c3, and s0:c1,c3 – and any files it creates will be labeled as s0.

If a process has the range s0:c1-s0:c1,c3 then it has read-only access to files labelled as s0 and s0:c3 and read-write access to files labelled as s0:c1 and s0:c1,c3. This means that any secret data it accesses that was labelled with category c1 can’t be leaked down to a file that is not labelled with that category.
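
As a rough illustration of how this looks from a shell (a sketch, assuming a login whose range includes category c1, and a hypothetical pre-existing file /tmp/plain.txt at level s0):

runcon -l s0:c1 bash         # start a shell at the single level s0:c1
touch notes-c1.txt           # a new file gets the shell's level, s0:c1
ls -Z notes-c1.txt           # the context shown ends in s0:c1
echo data >> /tmp/plain.txt  # denied - appending to an s0 file would be a write-down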

Now MCS currently has no network access controls, so there’s nothing stopping a user from using scp or other network utilities to transfer files. But that’s the way with most usable systems. I don’t think that this is necessarily a problem, the almost total lack of network access controls in a traditional Unix model doesn’t seem to concern most people.

Now to REALLY Answer that Question

SE Linux is a Mandatory Access Control (MAC) system, this makes it inherently different to a Discretionary Access Control (DAC) system such as traditional Unix access controls.

Unix permissions are based on each file having a UID, a GID, and a set of permissions and each process having a UID, a GID, and a set of supplementary GIDs. If a user runs a setuid or setgid program then the process will have extra privileges. It also has a lot of stuff that most people aren’t aware of such as real vs effective UIDs, the sticky bit, setgid directories, and lots more – including some quite arbitrary things like making ports <1024 special.

SE Linux is based on every object (process, file, socket, etc) having a single security label which includes an identity, a role, a type, and a sensitivity label (MCS categories or an MLS range). There is no support for an object to have more than one label. The SE Linux equivalent to setuid/setgid files is a label for a file which triggers a domain transition when it’s executed. This differs from setuid files in that the domain transition is complete (the old privileges can’t be restored) and the transition is generally not to a strict super-set of the access (usually a different sub-set of possible access and sometimes to lesser access).

These differences are quite fundamental. So really SE Linux can’t implement traditional Unix access control. What SE Linux is designed to do is to provide a second layer of defense and to also provide access controls that have different aims than that of Unix permissions – such as being mandatory and implementing features such as MLS.

Logging in as Root

Martin Meredith wrote a blog post about logging in as root and the people who so strongly advocate against it [1]. The question is whether you should ssh directly to the root account on a remote server or whether you should ssh to a non-root account and use sudo or su to gain administrative privileges.

Does sudo/su make your system more secure?

Some years ago the administrator of a SE Linux Play Machine used the same home directory for play user logins, for administrative logins, and for his own regular logins – he used newrole to gain administrative access (like su or sudo but for SE Linux).

His machine was 0wned by one of his friends who created a shell function named newrole in one of his login scripts that used netcat to send the administrative password out over the net. He didn’t realise that there was a problem until his friend changed the password and locked him out! This is one example of a system being 0wned in spite of double authentication – of course if he had logged in directly with administrative privs while using the same home directory that the attacker could write to then he would still have lost, but the attacker would have had to do a little more work.

When you login you have lots of shell scripts run on your behalf which have the ability to totally control your environment, if someone has taken over those scripts then they can control everything you see, when you think you run sudo or something they can get the password. When you ssh in to a server your security relies on the security of the client end-point, the encryption of the ssh protocol (including keeping all keys secure to prevent MITM attacks), and the integrity of all the programs that are executed before you have control of the remote system.

One benefit of using sshd to spawn a session without full privileges is the case where you fear an exploit against sshd and are running SE Linux or some other security system that goes way beyond Unix permissions. It is possible to configure SE Linux in the “strict” configuration to deny administrative rights to any shell that is launched directly by sshd. Therefore someone who cracks sshd could only wait until an administrator logs in and runs newrole, and they wouldn’t be able to immediately take over the system. If the sysadmin suspected that a sshd compromise was possible then they could login through some other method (maybe visit the server and login at the console) to upgrade the sshd. This is however a very unusual scenario and I suspect that most people who advocate using sudo exclusively don’t use a SE Linux strict configuration.

Does su/sudo improve auditing?

If you have multiple people with root access to one system it can be difficult to determine who did what. If you force everyone to use su or sudo then you will have a record of which Unix account was used to start the root session. Of course if multiple people start root shells via su and leave them running then it can be difficult to determine which of the people who had such shells running made the mistake – but at least that reduces the list of suspects.

If you put “PermitUserEnvironment yes” in /etc/ssh/sshd_config then you have the option of setting environment variables by ssh authorized_keys entries, so you could have an entry such as the following:

environment="ORIG_USER=john@example.com" ssh-rsa AAAAB3Nz[…]/w== john@example.com

Then you could have the .bashrc file (or a similar file for your favorite shell) have code such as the following to log the relevant data to syslogd:
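# log either the ssh forced command or the interactive login, attributed via
# the ORIG_USER variable set by the authorized_keys entry above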
if [ "$SSH_TTY" = "" ]; then
  logger -p auth.info "user $ORIG_USER ran command \"$BASH_EXECUTION_STRING\" as root"
else
  logger -p auth.info "user $ORIG_USER logged in as root on tty $(tty)"
fi

I think that forcing the use of su or sudo might improve the ability to track other sysadmins if the system is not well configured. But it seems obvious that the same level of tracking can be implemented in other ways with a small amount of effort. It took me about 30 minutes to devise the above shell code and configuration options; it should take people who read this blog post about 5 minutes to implement it (or maybe 10 minutes if they use a different shell or have some other combination of Bash configuration that results in non-obvious use of initialisation scripts, e.g. if you have a .bash_profile file then .bashrc may not be executed).

Once you have the above infrastructure for logging root login sessions it wouldn’t be difficult to run a little script that asks the sysadmin “what is the purpose for your root login” and logs what they type. If several sysadmins are logged in at the same time and one of them describes the purpose of their login as “to reconfigure LDAP” then you know who to talk to if your LDAP server stops working!
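
A minimal sketch of such a prompt, appended to the same .bashrc after the logging code above (the ORIG_USER variable again comes from the authorized_keys entry):

if [ -n "$SSH_TTY" ]; then
  read -p "Purpose of this root login: " PURPOSE
  logger -p auth.info "user $ORIG_USER root login purpose: $PURPOSE"
fi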

Should you run commands with minimum privilege?

It’s generally accepted that running each command with the minimum privilege is a good idea. But if the only reason you login to a server is to do root tasks (restarting daemons, writing to files that are owned by root, etc) then there really isn’t a lot of potential for achieving anything by doing so. If you need to use a client for a particular service (e.g. a web browser to test the functionality of a web server or proxy server) then you can login to a different account for that purpose – the typical sysadmin desktop has a dozen xterms open at once, so using one for root access to do the work and another for non-root access to do the testing is probably a good option.

Can root be used for local access?

Linux Journal has an article about the distribution that used to be known as Lindows (later Linspire) which used root as the default login for desktop use [2]. It suggests using a non-root account because “If someone can trick you into running a program or if a virus somehow runs while you are logged in, that program then has the ability to do anything at all” – of course someone could trick you into running a program or virus that attempts to run sudo (to see if you enabled it without password checks) and if that doesn’t work waits until you run sudo and sniffs the password (using pty interception or X event sniffing). The article does correctly note that you can easily accidentally damage your system as root. Given that the skills of typical Linux desktop users are significantly less than those of typical system administrators it seems reasonable to assume that certain risks of mistake which are significant for desktop users aren’t a big deal with skilled sysadmins.

I think that it was a bad decision by the Lindows people to use root for everything due to the risk of errors. If you make a mistake on a desktop system as non-root then if your home directory was backed up recently and you use IMAP or caching IMAP for email access then you probably won’t lose much of note. But if you make a serious mistake as root then the minimum damage is being forced to do a complete reinstall, which is time consuming and annoying even if you have the installation media handy and your Internet connection has enough quota for the month to complete the process.

Finally there are some services that seek out people who use the root account for desktop use. Debian has some support channels on IRC [3] and I decided to use the root account from my SE Linux Play Machine [4] to see how they go. #debian has banned strings matching root. #linpeople didn’t like me because “Deopped you on channel #linpeople because it is registered with channel services“. #linuxhelp and #help let me in, but nothing seemed to be happening in those channels. Last time I tried this experiment I had a minor debate with someone who repeated a mantra about not using root and didn’t show any interest in reading my explanation of why root:user_r:user_t is safe for IRC.

I can’t imagine what good the #debian people expect to gain from denying people the ability to access that channel with an IRC client that reports itself to be running as root. Doing so precludes the possibility of educating them if you think that they are doing something wrong (such as running a distribution like Lindows/Linspire).

Conclusion

I routinely ssh directly to servers as root. I’ve been doing so for as long as ssh has been around and I used to telnet to systems as root before that. Logging in to a server as root without encryption is in most cases a really bad idea, but before ssh was invented it was the only option that was available.

For the vast majority of server deployments I think that there is no good reason to avoid sshing directly as root.

Brother MFC-9120CN Color LASER Printer

I have just bought a Brother MFC-9120CN Multi-Function Color LED LASER Printer for a relative. It was a replacement for the Lexmark printer which turned out not to support Linux properly [1].

This printer cost about $545. I bought it from OfficeWorks [2] under their price-matching deal. If you find a better price anywhere else they will beat it by 5%. I went to StaticIce.com.au and found the cheapest online store in Australia that sold the printer and then took the URL of the online store to OfficeWorks on a USB stick. After they verified the price they sold me the printer for 5% less than the online cost plus the delivery cost, which saved my relative a little more than $50.

Craig Sanders had convinced me to choose a LASER printer because the toner doesn’t have a short shelf-life unlike the ink for ink-jet printers. My parents have been using a LASER printer for more than 12 years and each toner cartridge lasts at least 4 years which is a much better result than all the ink-jet printers I’ve supported which tend to regularly need more expensive ink. I guess I’ll find out over the next few years whether this printer lives up to the general reputation of LASER printers in this regard.

LED printers use LEDs instead of a laser as the light source; this apparently makes them more reliable and efficient but means that they tend to have a lower resolution, and often the horizontal and vertical resolutions are not equal. The printer I got is listed as 600*2400dpi resolution but that might end up giving much the same result as a 600*600dpi printer. But 600*600dpi should be good enough for a long time anyway. A4 paper (the standard size for office paper in Australia) is 210*297mm, that is about 8.27*11.69 inches or 4961*7015 pixels at 600dpi. Even if we assume that 10% of the width and height is wasted on margins it would take a 28 megapixel camera to produce a picture that can actually use 600dpi for the most common case where high quality is needed for home use – printing a single photo on an A4 sheet.

The printer ships with 64M of RAM which was not enough to print some pictures that I sent it, it has a slot for a 144pin SO-DIMM (laptop RAM) for memory expansion, it can take one SO-DIMM of up to 512M capacity that is at least PC-100. I’ve got a spare 256M PC-133 memory module that I will install in it, hopefully that will be enough to print pictures. Buying PC-100/PC-133 RAM nowadays probably isn’t going to be easy, particularly not 512M modules as many of the laptops which used PC-100/PC-133 RAM didn’t support that capacity (I believe that my ancient Thinkpads which used such memory didn’t support 512M modules).

The requirement was for a printer that could print photos in reasonable quality, could make photo-copies, and ideally work as a scanner. I got CUPS to talk to it without much effort, I just installed a PPD file from the Brother Solutions Center web site [3] and it just worked. It occurred to me later that I should have tried configuring it before installing the PPD file – maybe the version of CUPS in Debian/Squeeze supports the Brother printer natively.
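
For anyone setting up a similar printer, something like the following lpadmin command is the usual way to add it to CUPS by hand; the printer name, network address, and PPD path here are hypothetical examples:

lpadmin -p MFC9120CN -E -v socket://192.168.1.50:9100 -P /path/to/MFC9120CN.ppd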

So the current state of the printer is that it prints documents very well, it doesn’t print photos but that should be solved when I add more RAM, and I just have to try and get scanning to work. Everyone is happy!

The only down-side is that the printer is huge. It takes a lot of desk space to run it (they will need a new desk in their computer room), and when it’s in its box it’s much larger than most things that you will normally transport by car.

Update: I’ve installed a 256M PC-133 SO-DIMM and can now print full color pictures. Thanks to Rodney Brown for giving me some Thinkpad parts which included RAM.

UBAC and SE Linux in Debian

A recent development in SE Linux policy is the concept of UBAC (User Based Access Control) which prevents SE Linux users (identities) from accessing each other’s files.

SE Linux user identities may map 1:1 to Unix users (as was required in the early versions of SE Linux), you might have unique identities for special users and a default identity for all the other users, or you might have an identity per group – or use some other method of assigning identities to groups.

The UBAC constraints in the upstream reference policy prevent a process with a SE Linux identity other than system_u from accessing any files with an identity other than system_u. So basically any regular user can access files from the system but not other users and system processes (daemons) can access files from all users. Of course this is just one layer of protection, so while the UBAC constraint doesn’t prevent a user from accessing any system files the domain-type access controls may do so.

If you used a unique SE Linux identity for each Unix account then UBAC would prevent any user from accessing a file created by another user.

For my current policy that I am considering uploading to Debian/Unstable I have allowed the identity unconfined_u to access files owned by all identities. This means that unconfined_u is an identity for administrators, if I proceed on this path then I will grant the same rights to sysadm_u.

UBAC was not enabled in Fedora last time I checked, so I’m wondering whether there is any point in including it – I don’t feel obliged to copy everything that Fedora does, but there is some benefit in maintaining compatibility across distributions.

For protecting users from each other it seems that MCS (which is Mandatory in the Debian policy) is adequate. MCS allows a much better level of access control. For example I could assign categories c0 to c10 to a set of different projects and allow the person who manages all the projects to be assigned all those categories when they login. That user could then use the command “runcon -l s0:c1 bash” to start a shell for the purpose of managing files from project 1 and any file or process created by that command would have the category c1 and be prevented from writing to a file with a different category.

Of course the down-side to removing UBAC is that since RBAC was removed there is no other way of separating SE Linux users, while MCS is good for what it does it wasn’t designed for the purpose of isolating different types of user. So I’ll really want to get RBAC reinstalled before Squeeze is released if I remove UBAC.

Regardless of this I will need to get RBAC working on Squeeze eventually anyway. I’ve had a SE Linux Play Machine running with every release of SE Linux for the last 8 years and I don’t plan to stop now.

Links May 2010

AdRevenge is an interesting concept to pay for Google Adsense adverts about how companies suck [1]. If a suitably large group of people pay to warn you about a company then it’s a good signal that the company is actually doing the wrong thing.

A guest post by Mili on Charles Stross’ blog has an interesting analysis of the economics of “Intellectual Property” and concludes that content is a public good [2].

New Age Terrorists Develop Homeopathic Bomb [3], an amusing satire of medical fraud and security theatre. The site has a lot of other good satire too.

Mark Shuttleworth wrote an interesting post about new window management changes that will soon go into Ubuntu [4]. He points out that the bottom status bar in applications is a throw-back to Windows 3.1 and notes that a large part of the incentive for removing it (and using the title-bar for the status) is the work on the Netbook version of Ubuntu. This is really ironic given that the resolution of current Netbooks is quite similar to that of desktop systems that were current when Windows 3.1 came out.

Omar Ahmad gave an insightful TED talk about the benefit of using a pen and paper to send a letter to a politician [5].

Sebastian Wernicke gave an amusing and informative TED talk about how to give a good TED talk [6]. His talk gives some useful ideas for public speaking that are worth considering.

Catherine Mohr gave a brief and interesting TED talk about how to build an energy efficient house with low embodied energy [7]. Her blog at www.301monroe.com has the details.

Stephen Wolfram (of Mathematica fame) gave an interesting TED talk [8]. He covers a lot of interesting things that can be done with computers, primarily based on the Wolfram Alpha [9] platform which allows natural language queries of a large data set. He also talks about the search for a Theory of Everything.

Esther Duflo gave an interesting TED talk about using social experiments to fight poverty [10]. She describes how scientific tests have been used to determine the effectiveness of various ways of preventing disease and encouraging education in developing countries. One example of the effectiveness of such research is the DeWormTheWorld.org project which was founded after it was discovered that treating intestinal worms was the most cost effective way of getting African children to spend longer at school.

David L. Rosenhan wrote an interesting research paper “On Being Sane In Insane Places” about pseudo-patients admitted to psychiatric hospitals [11]. It seems that psychiatric staff were totally unable to recognise a sane person who was admitted even though other patients could do so. It also documents how psychiatric patients were treated as sub-human. One would hope that things had improved since 1973, but it seems likely that many modern psychiatric hospitals are as bad as was typical in 1973. It’s also worth considering the issue of the treatment in society of people who have been diagnosed with a mental illness, it seems likely that the way people are treated in the community would have similar bad results to that which was documented for treatment in psychiatric hospitals – even the sanest people will act strangely if treated in an insane manner! Also it seems to me that there could be potential for using a panel of patients assembled via the Delphi Method as part of the psychiatric assessment process as it has been demonstrated that patients can sometimes assess other patients more accurately than psychiatrists!

Simon Sinek gave an inspiring TED talk about how great leaders inspire action [12]. Of course the ideas he describes don’t just apply to great leaders, they should apply to ordinary people who just want to convince others to adopt their ideas.

Stephen Collins wrote a good article summarising the main reasons why the proposed great firewall of Australia is a bad idea [13].

Lenore Skenazy who is famous for letting her 9yo son catch the metro alone during broad daylight on a pre-planned route home has created a web site about Free Range Kids [14]. She seems to be starting a movement to oppose Helicopter Parenting and has already written a book about her ideas for parenting. The incidence of crime has been steadily decreasing, while the ability of the police to apprehend criminals and recover abducted children has been increasing. There’s no reason for children to be prevented from doing most of the things that children did when I was young!

GM Food and Vaccines

Michael Specter gave an interesting TED talk about the dangers of science-denial [1]. Most of his talk is about the people who oppose vaccines, such as the former Playboy model Jenny McCarthy who thinks that she knows more about medicine than people who do medical research. He notes that a doctor who advocates vaccination has been receiving threats from the anti-vaccine lobby, including threats to his children. A good new development is that Andrew Wakefield (the British ex-doctor behind the discredited research linking Autism and Vaccination) has been barred from practicing by Britain’s General Medical Council [2].

Michael also mentions the opposition to GM food which has the potential to save many lives in developing countries that have food shortages. This convinced me to reduce my opposition to GM food; it’s really not GM food that I’m opposed to but the poor testing, the bad features (such as the Terminator Gene), and the Intellectual Property controls which allow GM companies to sue farmers who accidentally have GM crops grow on their land due to wind-borne seeds. It’s also a pity that there is no work being done on GM versions of any food crop which is only used for feeding poor people. Every GM plant is one that is used to provide food for rich people and is essentially a way for farmers in first-world countries to make more money. But GM versions of Cassava (with less of the toxic chemicals among other things) and Sorghum would improve the situation of many poor people.

One interesting related development is that Craig Venter has just announced the creation of the first synthetic life [3]. This technical development could lead to dramatic changes in the production of basic foods, such as algae that produce proteins that have the ideal mixture of all the essential amino acids needed for humans as well as the semi-essential ones that children need. While feeding pond slime to children isn’t going to be glamorous it would be a lot better than the current situation where a significant number of children in developing countries have their physical and mental development stunted due to malnutrition. Craig mentions the possibility of using his research to develop vaccines much faster, including perhaps the possibility of vaccinating people against fast evolving viruses such as the common cold!

A School IP Project

The music industry seems fairly aggressive in taking legal action against children when they break the licence terms of copyright material. I think it would be good to teach children about how the IP industry really works.

It seems to me that you could have a school project that involves an entire year level (maybe 100 students depending on the size of the school) each of whom can produce copyright material (everything they do in art and English classes would be suitable as a start). Then they could register their work (make digital photographs and then store them in a school database that records the entry date) and sue anyone who infringes their work.

Every student would receive licence fees for their work, but if they are sued for infringement they would have to pay all revenue plus damages. Other students could work as lawyers and take a portion of the proceeds of any successful law suit, and finally some students could run recording companies and spend their time hunting for infringing work for the purpose of launching legal action.

In terms of the licence fees paid, this could be done by just allocating a fixed value per item to each student as a way to get the system running without regard to the fact that some students just aren’t able to create good art. It could however have a large portion of the value coming from what other students choose to spend, every student gets to “spend” $10 per week on art and they can choose from the database what they want to “buy” copies of. The most popular art could then be printed on every notice-board in the school as an incentive for students to vote with their play-money for something that they don’t mind seeing all the time. It’s obvious that popularity would be a significant factor in the success of some artists, but that’s OK, a casual review of the chart topping music reveals that it’s quite obviously not created by the world’s best musicians so it seems that rewarding popularity rather than skill just adds some realism.

One possibility would be to allow the students to elect representatives to create their own IP laws. It would be interesting to see how the IP laws voted on by representatives of the students (who are all in some way involved with the process of creating, buying, selling, and distributing artistic products) differ from those which we have foisted upon us in the real-world. Also an interesting possibility would be to allow corruption in the election process and observe how the results differ from year levels where corruption is not permitted. I expect that teaching children how political corruption works would be a little controversial, but it’s nothing that they can’t learn from reading news reports about what the “entertainment” industry is really doing. Really being a corrupt politician for a school project shouldn’t be as bad as playing a murderer in a school play!

Naturally this couldn’t be done with real money, but giving higher marks at the end of the year to the students who accumulate the most play money would be quite reasonable. I don’t think that there would be a problem with giving higher marks to a student who succeeded through political corruption – as long as they gave a good written report of how they did so and the implications for society.

Please note that I am not suggesting this for a subject that is used for university entrance, I think it would be a good project for years 8-10 which in Australia have no relevance to university entrance. So the marks would just be letters on a bit of paper that might make parents happy or unhappy and otherwise mean nothing.

I anticipate responses from people who believe that educating children about how the world works is not appropriate for a school. Such people are never going to convince me, but if anyone thinks that they can make a good point to convince some of the readers then I encourage them to write it up in the comments section if it’s short or on their own blog if it’s longer.