Laptop Computer Features

It’s not easy to choose a laptop, and part of the problem is that most people don’t start by considering how the laptop will be used. I believe that the following four categories cover the vast majority of modern uses of mobile computers.

  1. PDA [1] – can be held in one hand and generally uses a touch-screen. PDAs do not resemble a laptop in shape or design.
  2. Subnotebook [2] AKA Netbook [3] – a very small laptop designed to be very portable, typically weighing 1kg or less and making multiple design trade-offs (such as a small screen and a slow CPU) to keep the weight down while still being more powerful than a PDA. The EeePC [4] is a well known example.
  3. Laptop [5] – now apparently defined to mean a medium-sized portable computer, light enough to be carried around but with a big enough screen and keyboard that many people will be happy to use one for 8 hours a day. The word is also used to mean all portable computers that could possibly fit on someone’s lap.
  4. Desktop Replacement [6] – a big heavy laptop that is not carried much.

There is some disagreement about the exact number of categories and which category is most appropriate for each machine. There is a range of machines between the Subnotebook and Laptop categories, and some amount of personal preference is involved in determining which category a machine falls in. For example I find a Thinkpad T series to fit into the “Laptop” category (and I expect that most people would agree with me). But when comparing the weight and height of an average 10 year old child to those of an adult, it seems that a 10 year old would find an EeePC as much effort to carry as a T series Thinkpad is for an adult.

It seems to me that the first thing you need to do when choosing a laptop is to decide which of the above categories is most appropriate. While the boundaries between the categories are blurry and to some extent a matter of personal preference, once you have made a firm decision on the category it’s an easy second step to determine which machines fit it (in your opinion). It’s also possible to choose a half-way point, for example if you wanted something on the border of the “Laptop” and NetBook categories then a Thinkpad X series might do the job.

The next step of course is to determine which OSs and applications you want to run. There are some situations where the choice of OS and/or applications may force you to choose a category that has more powerful hardware (a CPU with more speed or features, more RAM, or more storage). For example a PDA generally won’t run a regular OS well (if at all) due to the limited options available for input devices and the very limited screen resolution. Even a NetBook has limitations as to what software runs well (for example many applications require a minimum resolution of 800×600 and don’t work well on an EeePC 701). Also Xen cannot be used on the low-end CPUs used in some NetBooks which lack PAE.
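If you want to check whether a particular machine’s CPU has PAE (and can therefore run Xen), the CPU flags in /proc/cpuinfo show it. A minimal sketch of such a check:

  # Report whether the CPU advertises PAE (required by Xen on i386).
  if grep -qw pae /proc/cpuinfo ; then
    echo "PAE present - Xen should run on this CPU"
  else
    echo "no PAE - Xen will not run on this CPU"
  fi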

Once you have chosen a category you have to look for features which make sense for that category. A major criterion for a PDA is how fast you can turn it on; it should be possible to go from standby to full use in less than one second. Another major criterion is how long the battery lasts; it should be comparable to a mobile phone (several days on standby and 8 hours of active use). A criterion that is important to some people is the ability to use both portrait and landscape views for different actions (I use portrait for editing and landscape for reading).

A NetBook is likely to be used in many places and needs to have a screen that will work well in adverse lighting conditions (a shiny reflective screen is a really bad thing). It also needs to be reasonably resilient as it is going to get bumped if it is transported a lot (a solid state disk is a good feature). It should also be as light as possible while having enough hardware to run a regular OS (an EeePC 701 with 512M of RAM and 4G of storage is about the minimum hardware for running a regular distribution of Linux).

A desktop replacement needs to have all the features: lots of RAM, a fast CPU and video hardware, and a big screen – it also needs a good keyboard (test by typing your name several times). The “Laptop” category is much the same as the desktop replacement, but a bit smaller, a lot lighter, and with better battery life.

It seems very difficult to give any specific advice as to which laptop to buy when the person who wants the advice has not chosen a category (which is often the case).

Not All Opinions Are Equal

It seems to be a common idea among non-bloggers that the comment they enter on a blog is somehow special and should be taken seriously by the author of the blog (everyone is a legend in their own mind). In a recent discussion one anonymous commenter seemed offended that I didn’t take his comments seriously and didn’t understand why I would take little notice of an anonymous comment while taking note of a later comment on the same issue by the author of the project in question.

In most forums (and I use the term in the broadest way) an anonymous comment is taken with a weight that is close to zero. That doesn’t mean that it will be ignored, it just means that the requirement to provide supporting evidence, or to have a special insight and explain it, is much greater.

One example of this is the comment weighting system used by Slashdot.org (AKA “/.”). The /. FAQ has a question “Why should I log in?” with the answer including “Posting in Discussions at Score:1 instead of Score:0 means twice as many people will see your comments” [1]. /. uses the term “Anonymous Coward” as the identification of users who are not logged in, which gives an idea of how they are regarded.

Advogato uses a rating method for blog posts which shows you only posts from blogs that you directly rank well or which match the trust metric (based on rankings of people you rank) [2].

I believe that the automated systems developed by /. and other online forums emulate to some extent the practices that occur off-line. For any discussion in a public place a comment from someone who does not introduce themselves (or gives an introduction that gives no reason to expect quality) will be treated with much less weight than one from someone who is known. When someone makes a comment their background will be considered by the people who hear it. If a comment is entirely a matter of opinion and can not be substantiated by facts and logical analysis then the acceptance of the comment is based solely on the background of the author (and little things like spelling errors can count against the author).

Therefore if you want your blog comments to be considered by blog authors and readers you need to make sure that you are known. Using your full name is one way of being less anonymous, but most names are not unique on the Internet (I’ve previously described some ways of ensuring that you beat other people with the same name in Google rankings [3]). The person who owns the blog can use the email address that is associated with the comment to identify the author (if it’s a real email address and it’s known by Google). But for other readers the only option is the “Website” field. The most common practice is to use the “Website” field in the comment to store the URL of your blog (most blog comments are written by bloggers). But there is nothing stopping you from using any other URL; if you are not a blogger and want to write comments on blogs you could create a personal web page to use for the comments. If the web page you use for such purposes links to references about your relevant experience then that would help. Someone who has skills in several areas could create a web page for each one and reference the appropriate page in their comment.

One problem we face is that it is very easy to lie on the net. There is no technical obstacle to impersonation on the net; while I haven’t seen any evidence of people impersonating others in an attempt to add credibility to blog comments I expect it’s only a matter of time before that happens (I expect that people do it already but the evidence of them getting caught has not been published anywhere that I’ve read). People often claim university education to add weight to their comments (usually in email but sometimes in blog comments too). One problem with this is that anyone could falsely claim to have a university degree and no-one could disprove their claim without unreasonable effort; another is that a university degree actually doesn’t mean much (lots of people remain stupid after graduating). One way in which adding a URL to a comment adds weight is that the author of a small web site will check a reasonable portion of the sites that link to them, so if someone impersonates me and includes a link to my web site in the comment then there’s a good chance that I will notice.

OpenID [4] has the potential to alleviate this by making it more difficult to forge an association with a web site. One thing that I am working on is enabling OpenID on all the web sites that are directly associated with me. I plan to use a hardware device to authenticate myself with the OpenID server (so I can securely enter blog comments from any location). I expect that it will become standard practice for most blogs to reject comments that are associated with an OpenID enabled URL unless the author of the comment authenticates themselves via OpenID.

Even when we get OpenID enabled everywhere there is still the issue of domain specific expertise. While I am well enough known for my work on SE Linux that most people will accept comments about it simply because I wrote them, the same can not be said for most topics that I write about. When writing about topics where I am not likely to be accepted as an expert I try to substantiate my main points with external web pages. Comments are likely to be regarded as spam if they have too many links, so it seems best to use only one link per comment – which therefore has to be on an issue that is important to the conclusion and which might be doubted if evidence was not provided. The other thing that is needed is a reasonable chain of deduction. Simply stating your opinion means little; listing a series of logical steps that led you to the opinion and are based on provable facts will hold more weight.

These issues are not restricted to blog comments; I believe that they apply (to differing degrees) to all areas of online discussion.

Fixing execmod (textrel) Problems in Lenny

I’ve just updated my repository of SE Linux related packages for Lenny [1] to include a set of ffmpeg packages modified to not need text relocations (execmod access under SE Linux). I haven’t checked to make sure that I fixed all issues in those packages, but I have fixed all the issues that prevented Mplayer from working in a default configuration of SE Linux.

I had to patch the file libswscale/rgb2rgb.c to disable the MMX assembly code as the --disable-mmx option doesn’t work for that file. I changed the build script so that when it generates the code for the shared and cmov targets in i386 mode it adds -DPIC and -DBROKEN_RELOCATIONS to the CFLAGS and also added LIBOBJFLAGS=-fPIC to the ./configure run. There might have been a better way of doing this, but the current implementation basically works.
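For anyone who wants to do something similar, the following is a rough sketch of the build options described above – not the exact change from my packages (the real change lives in the Debian build scripts, where the option spelling and targets may differ):

  # Build the i386 shared libraries without text relocations (illustrative).
  # --disable-mmx avoids most of the problem assembly code, the extra defines
  # and -fPIC deal with the rest (libswscale/rgb2rgb.c still needed a patch).
  ./configure --enable-shared --disable-mmx \
    --extra-cflags="-DPIC -DBROKEN_RELOCATIONS" LIBOBJFLAGS="-fPIC"
  make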

Long term I think that the ideal solution to this would be to have separate versions of the library packages for people who prefer extra security to a possible 15% performance benefit.

While using these libraries on an EeePC 701 (the least powerful of all the machines I own which could be used to play video) I was able to play full-screen video downloaded from ted.com without any glitches so it seems that a 15% performance loss is not a problem.

Noise in Computer Rooms

Some people think that you can recognise a good restaurant by the presence of obscure dishes on the menu or by high prices. The reality is that there are two ways of quickly identifying a good restaurant: one is the Michelin Guide [1] (or a comparable guide – if such a thing exists), the other is how quiet the restaurant is.

By a quiet restaurant I certainly don’t mean a restaurant with no customers (which may become very noisy once customers arrive). I mean a restaurant which when full will still be reasonably quiet. Making a restaurant quiet is not in itself a sufficient criterion for being a good restaurant – but it’s something that is usually done after the other criteria (such as hiring good staff and preparing a good menu) are met.

The first thing to do to make a room quiet is to have good carpet. Floorboards are easy to clean and the ratio of investment to lifetime is very good (particularly for hardwood), but they reflect sound and the movement of chairs and feet makes noise. A thick carpet with a good underlay is necessary to absorb sound. Booths are also good for containing sound if the walls extend above head height. Decorations on the walls such as curtains and thick wallpaper also absorb sound. A quiet environment allows people to talk at a normal volume which improves the dining experience.

It seems to me that the same benefits apply to server rooms and offices, with the benefit being more efficient work. I found it exciting when I first had my desk in a server room (surrounded by tens of millions of pounds worth of computer gear). But as I got older I found it less interesting to work in that type of environment just as I found it less interesting to have dinner in a noisy bar – and for the same reasons.

For a server room there is no escaping the fact that it will be noisy. But if the noise can be minimised then it will allow better communication between the people who are there and less distraction, which should result in higher quality of work – which matters if you want good uptime! One thing I have observed is that physically larger servers tend to make less noise per volume and per compute power. For example a 2RU server with four CPUs seems to always make less noise than two 1RU servers that each have two CPUs. I believe that this is because a fan with a larger diameter can operate at a lower rotational speed which results in less bearing noise, and the larger fans also give less turbulence. It’s obvious that using fewer servers via virtualisation has the potential to reduce noise (both directly through fans and disks and indirectly through the cooling system for the server room [2]). A less obvious way of reducing noise is to swap two 1RU servers for one 2RU server – and my experience is that for machines in a similar price band, a 2RU server often has compute power (in terms of RAM and disk capacity) comparable to three or four 1RU servers.

To reduce noise both directly and indirectly it is a requirement to increase disk IO capacity (in terms of the number of random IOs per second) without increasing the number of spindles (disks). I just read an interesting Sun blog covering some concepts related to using Solid State Disks (SSDs) on ZFS for best performance [3]. It seems that using such techniques is one way of significantly increasing the IO capacity per server (and thus allowing more virtual servers on one physical machine) – it’s a pity that we currently don’t have access to ZFS or a similar filesystem for Linux servers (ZFS has license issues and the GPL alternatives are all in a beta state AFAIK). Another possibility that seems to have some potential is the use of NetApp Filers [4] for the main storage of virtual machines. A NetApp Filer gives a better ratio of IO requests per second to the number of spindles used than most storage array products due to the way they use NVRAM caching and their advanced filesystem features (which also incidentally gives some good options for backups and for detecting and correcting errors). So a set of 2RU servers that have the maximum amount of RAM installed and which use a NetApp Filer (or two if you want redundancy) for the storage with the greatest performance requirements should give the greatest density of virtual machines.

Blade servers also have potential to reduce noise in the server room. The most significant way that they do this is by reducing the number of power supplies: instead of having one PSU per server (or two if you want redundancy) you might have three or five PSUs for a blade enclosure that has 8 or more blades. HP blade enclosures support shutting down some PSUs when the blades are idling and don’t need much power (I don’t know whether blade enclosures from other vendors do this – I expect that some do).

A bigger problem however is the noise in offices where people work. It seems that the main culprit is the cheap cubicles that are used in most offices (and almost all computer companies). More expensive cubicles that are at almost head-height (for someone who is standing) and which have a cloth surface absorb sound better and significantly improve the office environment, and separate offices are better still. One thing I would like to see is more use of shared desktop computers; it’s not difficult to set up a desktop machine with multiple video cards, so with appropriate software support (which is really difficult) you could have one desktop machine for two or even four users, which would save electricity and reduce noise.

Better quality carpet on the floors would also be a good thing. While office carpet wears out fast, adding some underlay would not increase the long-term cost (it can remain as the top layer gets replaced).

Better windows in offices are necessary to provide a quiet working environment. The use of double-glazed windows with reflective plastic film significantly decreases the amount of heating and cooling that is required in the office. This would permit a lower speed of air flow for heating and cooling, which means less noise. Also an office in a central city area will have a noise problem outside the building; again double (or even triple) glazed windows help a lot.

Some people seem to believe that an operations room should have no obstacles (one ops room where I once worked had all desks facing a set of large screens that displayed network statistics, and the desks were like school desks with no dividers). I think that even for an ops room there should be some effort made to reduce the ambient noise. If the room is generally reasonably quiet then it should be easy to shout the news of an outage so that everyone can hear it.

Let’s assume for the sake of discussion that a quieter working environment can increase productivity by 5% (I think this is a conservative assumption). For an office full of skilled people who are doing computer work the average salary may be about $70,000, and it’s widely regarded that to factor in the management costs etc you should double the salary – so the average cost of an employee would be about $140,000. If there are 50 people in the office then the work of those employees has a cost of $7,000,000 per annum. A 5% increase in that would be worth $350,000 per annum – you could buy a lot of windows for that!
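To make the arithmetic explicit (all the figures are of course assumptions):

  # 50 staff, $70,000 salary doubled to cover overheads, 5% productivity gain
  echo $(( 50 * 70000 * 2 * 5 / 100 ))   # prints 350000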

Islamophobia

I recently wasted a bit of time reading some right-wing blogs. One thing I noted was the repeated references to news reports about young women from an Islamic background being beaten (and in some cases killed) by their fathers (and other male relatives) for not conforming to some weird cultural ideas that some people associate with Islam. These are spun as examples of Islam being bad and used to justify opposing immigration policies that allow Muslims into countries identified as “Christendom” or “The West” (never mind the fact that the vast majority of the population in “Christendom” don’t even attend church twice a year and the fact that Australia is directly south of China, Russia, and North Korea).

It seems to me that when young people follow the cultural standards of the country where they live rather than the standards of the country that their parents came from then it’s evidence of “multiculturalism” working. When young Muslim women are beaten by their fathers, whether it’s considered an example of Muslims being bad (who should therefore be excluded) or an example of Muslims as victims who should be protected is a matter of interpretation. It’s not as if there is any shortage of domestic violence cases from any religious or cultural group.

It’s often claimed that fundamentalist Muslims hate our culture, yet strangely the same people seem to claim that our culture will be destroyed by radical Islam. These two ideas seem to conflict: if our culture (the pro-science, free-speech, few inhibitions on clothing standards, do what you want but don’t hurt others culture that most readers of my blog enjoy) can be destroyed by radical Islam then they wouldn’t hate it. I think that the reason why fundamentalist religious people (Christians and Muslims) dislike our culture is because it is so strong. Our culture offers a way of life that is simply better than that which fundamentalist religious groups offer. Any religious person can choose to take a liberal approach to their religion (emphasising the positive aspects of giving to charity, being nice to others, etc) and enjoy our culture. Our culture is based around wide-spread communication, mass media, mobile phones, the Internet, custom clothing design, etc. It can do to religions what the sea does to rocks.

It seems that the strongest efforts at attacking our culture come from Christian groups. For example the Exclusive Brethren [1] runs a high school in my area which, according to a local paper, distinguishes itself by having no students enter a university course! The Exclusive Brethren (and some other radical Christian groups) have a deliberate policy of keeping children stupid with the idea that people who think may decide to change their religion.

Some time ago I had a taxi driver start an unsolicited discussion of religion by telling me how much he hated Muslims. I pointed out the fact that there are Muslims of all races and asked why he thought that I was not a Muslim. After that the rest of the journey was very quiet.

The mainstream media would have us believe that Muslims have some sort of monopoly on terrorism. Noam Chomsky’s paper “Terror and Just Response” [2] is one of many that he has written on this issue. I realise that many people don’t want to acknowledge the involvement of the US government (and its allies such as Australia) in international terrorism. But please read Noam’s position (which is compelling) or read his Wikipedia page which lists his extensive accomplishments [3] (if it’s the background of an author that impresses you).

Execmod and SE Linux – i386 Must Die

I have previously written about the execmod permission check in SE Linux [1] and in a post about SE Linux on the desktop I linked to some bug reports about it [2] (which probably won’t be fixed in Debian).

One thing I didn’t mention is the proof of what this implies. When running a program in the unconfined_t domain on an SE Linux system (the domain for login sessions in a default configuration), if you set the boolean allow_execmod then the following four tests from paxtest will be listed as vulnerable:

Executable bss (mprotect)
Executable data (mprotect)
Executable shared library bss (mprotect)
Executable shared library data (mprotect)

This means that if you have a single shared object which uses text relocations and therefore needs the execmod permission then the range of possible vectors for attack against bugs in the application has just increased by four. This doesn’t necessarily require that the library in question is actually being used either! If a program is linked against many shared objects that it might use, then even if it is not going to ever use the library in question it will still need execmod access to start and thus give extra possibilities to the attacker.
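To check whether a particular shared object uses text relocations (and will therefore need execmod), the TEXTREL flag in its dynamic section gives it away; the library path below is just an example:

  # A TEXTREL entry in the dynamic section means the library needs execmod.
  readelf -d /usr/lib/libswscale.so.0 | grep TEXTREL
  # eu-findtextrel (from elfutils) can report which source files are responsible.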

For reference, when comparing a Debian system that doesn’t run SE Linux (or has SE Linux in permissive mode) to an SE Linux system with execmod enabled, the following tests are reported as vulnerable on the system without SE Linux protection:

Executable anonymous mapping (mprotect)
Executable heap (mprotect)
Executable stack (mprotect)
Writable text segments

If you set the booleans allow_execstack and allow_execheap then you lose those protections. But if you use the default settings of all three booleans then a program running in the unconfined_t domain will be protected against 8 different memory-based attacks.
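For anyone who wants to reproduce these results, the booleans are managed with setsebool and the protections are measured with paxtest; a minimal sketch (run as root):

  # Show the current settings of the three booleans.
  getsebool allow_execmod allow_execstack allow_execheap
  # Enable execmod (-P makes the change persistent across reboots).
  setsebool -P allow_execmod 1
  # Run paxtest and look for the mprotect tests reported as "Vulnerable".
  paxtest blackhat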

Based on discussions with other programmers I get the impression that fixing all the execmod issues on i386 is not going to be possible. The desire for a 15% performance boost (the expected result of using an extra register) is greater than the desire for secure systems among the people who matter most (the developers).

Of course we could solve some of these issues by using statically linked programs and having statically linked versions of the libraries in question, which can use the extra register without any issues. This does of course mean that updates to the libraries (including security updates) will require rebuilding the statically linked applications in question – if a rebuild was missed then this could reduce the security of the system.

To totally resolve that issue we need to have i386 machines (the cause of the problem due to their lack of registers) go away. Fortunately in the mainstream server, desktop, and laptop markets that is already pretty much done. I’m still running a bunch of P3 servers (and I know many other people who have similar servers), but they are not used for tasks that involve running programs that are partially written in assembly code (mplayer etc).

One problem is that there are still new machines being released with the i386 ISA as the only instruction set. For example the AMD Geode CPU [2] is used by the One Laptop Per Child (OLPC) project [3] and the new Intel Atom line of CPUs [4] apparently only supports the AMD64 ISA on the “desktop” models and the versions used in ultra-mobile PCs are i386 only.

I think that these issues are particularly difficult in regard to the OLPC. It’s usually not difficult to run “yum upgrade” or “apt-get dist-upgrade” on an EeePC or similar ultra-mobile PC, but getting an OLPC machine upgraded in some of the remote places where they may be deployed might be more difficult. Past experience has shown that viruses and trojans can get to machines that are supposed to be on isolated networks, so it seems that malware can get access to machines that cannot communicate with servers that contain security patches… One mitigating factor however is that the OLPC OS is based on Fedora, and Fedora seems to be making the strongest effort of any mainstream distribution to improve security; a choice between 15% performance and security seems to be a no-brainer for the Fedora developers.

Efficiency of Cooling Servers

One thing I had wondered was why home air-conditioning systems are more efficient than air-conditioning systems for server rooms. I received some advice on this matter from the manager of a small server room (which houses about 30 racks of very powerful and power hungry servers).

The first issue is terminology: the efficiency of a “chiller” is regarded as the number of Watts of heat energy removed divided by the number of Watts of electricity consumed by the chiller. For example when using a 200% efficient air cooling plant, a 100W light bulb is effectively a 150W load: 100W to run the bulb and 50W used by the cooling plant to remove its heat.

For domestic cooling I believe that 300% is fairly common for modern “split systems” (it’s the specifications for the air-conditioning on my house and the other air-conditioners on display had similar ratings). For high-density server rooms with free air cooling I have been told that a typical efficiency range is between 80% and 110%! So it’s possible to use MORE electricity on cooling than on running the servers!
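To put some numbers on that, here is a quick calculation of the total electricity used for a hypothetical 10kW of servers at various chiller efficiencies:

  # total power = server load + (server load / chiller efficiency)
  awk 'BEGIN { load = 10000
    print "300% chiller:", load + load/3.0, "W total"
    print "110% chiller:", load + load/1.1, "W total"
    print " 80% chiller:", load + load/0.8, "W total" }'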

One difficulty in cooling a server room is that the air often can’t flow freely (unlike a big open space such as the lounge room of your house). Another is the range of temperatures and the density of heat production in some parts (a 2RU server can dissipate 1000W of heat in a small space). These factors can be minimised by extracting hot air at the top and/or rear of racks and forcing cold air in at the bottom and/or the front, and by being very careful when planning where to place equipment. HP offers some services related to designing a server room to increase cooling efficiency, one of which is using computational fluid dynamics to simulate the air-flow in the server-room [1]! CFD is difficult and expensive (the complete package from HP for a small server room costs more than some new cars); I believe that the fact that it is necessary for correct operation of some server rooms is an indication of the difficulty of the problem.

The most effective ways of cooling servers involve tight coupling of chillers and servers. This often means using chilled water or another liquid to extract the heat. Chilled water refrigeration systems largely remove the problem of being unable to extract the heat from the right places, but instead you have some inefficiency in pumping the water and the servers are fixed in place. I have not seen or heard of chilled water being used for 2RU servers (I’m not saying that it doesn’t get used or that it wouldn’t make sense – merely that I haven’t seen it). When installing smaller servers (2RU and below) there is often a desire to move them and attaching a chilled-water cooling system would make such a move more difficult and expensive. When a server weighs a ton or more then you aren’t going to move it in a hurry (big servers have to be mostly disassembled before the shell can be moved, and the shell might require the efforts of four men to move it). Another issue related to water cooling is the weight. Managing a moderate amount of water involves a lot of heavy pipes (a leak would be really bad) and the water itself can weigh a lot. A server room that is based around 20Kg servers might have some issues with the extra weight of water cooling (particularly the older rooms), but a server room designed for a single rack that weighs a ton can probably cope.

I have been told that the cooling systems for low density server rooms are typically as efficient as those used for houses, and may even be more efficient. I expect that when designing an air-conditioner for home use the engineering trade-offs favour a low purchase price. But someone who approves the purchase of an industrial cooling system will be more concerned about the overall cost of operations and will be prepared to spend some extra money up-front and recover it over the course of a few years. The fact that server rooms run 24*7 also gives more opportunity to recover the money spent on the purchase (my home A-C system runs for about 3 months a year and for considerably less than 24 hours a day).

So it seems that the way to cool servers efficiently is to have low density server rooms (to the largest extent possible). One step towards this goal would be to have servers nearer the end users. For example having workgroup servers near the workgroup (instead of in the server room). Of course physical security of those servers would be more challenging – but if all the users have regular desktop PCs that can be easily 0wned then having the server for them in the same room probably doesn’t make things any worse. Modern tower servers are more powerful than rack mounted servers that were available a few years ago while also being very quiet. A typical rack-mounted server is not something you would want near your desk, but one of the quiet tower servers works quite well.

SE Linux in Lenny status – Achieved Level 1

I previously described the goals for SE Linux development in Lenny and assigned numbers to the levels of support [1]. I have just uploaded a new policy to unstable, which I hope will get into Lenny, that solves all the major issues for level 1 of support (default configuration with the unconfined_t domain for all user sessions – much like the old “targeted” policy). The policy in question is in my Lenny SE Linux repository [2] (for those who don’t want to wait for it to get into Unstable or Lenny).

Random Opinions, Expert Opinions, and Facts about AppArmor

My previous post titled AppArmor is Dead [1] has inspired a number of reactions. Some of them have been unsubstantiated opinions; everyone has an opinion so this doesn’t mean much. I believe that the opinions of experts matter more: Crispin responded to my post and made some reasonable points [2] (although I believe that he is overstating the ease of use case). I take Crispin’s response a lot more seriously than most of the responses because of his significant experience in commercial computer security work. The opinion of someone who has relevant experience in the field in question matters a lot more than the opinion of random computer users!

Finally there is the issue of facts. Of the people who don’t agree with me, Crispin seems to be the first to acknowledge that Novell laying off AppArmor developers and adding SE Linux support are both bad signs for AppArmor. The fact that Red Hat and Tresys have been assigning more people to SE Linux development in the same time period that SUSE has been laying people off AppArmor development seems to be a clear indication of the way that things are going.

One thing that Crispin and I understand is the amount of work involved in maintaining a security system. You can’t just develop something and throw it to the distributions. There is ongoing work required in tracking kernel code changes, and when there is application support there is also a need to track changes to application code (and replacements of system programs). Also there is a need to add new features. Currently the most significant new feature development in SE Linux is related to X access controls – this is something that every security system for Linux needs to do (currently none of them do it). It’s a huge amount of work, but the end result will be that compromising one X client that is running on your desktop will not automatically grant access to all the other windows.

The CNET article about Novell laying off the AppArmor developers [3] says ‘“An open-source AppArmor community has developed. We’ll continue to partner with this community,” though the company will continue to develop aspects of AppArmor’ and attributes that to Novell spokesman Bruce Lowry.

Currently there doesn’t seem to be an AppArmor community. The Freshmeat page for AppArmor still lists Crispin as the owner and has not been updated since 2006 [4]; it also links to hosting on Novell’s site. The Wikipedia page for AppArmor also lists no upstream site other than Novell [5].

The AppArmor development list hosted by SUSE has been getting fewer than 10 posts per month recently [6]. The AppArmor general list had a good month in January with a total of 23 messages (including SPAM) [7], but generally gets only a few messages a month.

The fact that Crispin is still listed as the project leader [8] says a lot about how the project is managed at Novell!

So the question is, how can AppArmor’s prospects be improved? A post on linsec.ca notes that Mandriva is using AppArmor, and getting more distribution support would be good [9], but the most important thing in that regard will be contributing patches back and dedicating people to do upstream work (Red Hat does a huge amount of upstream development for SE Linux and a significant portion of my Debian work goes upstream).

It seems to me that the most important thing is to have an active community. Have a primary web site (maybe hosted by Novell, maybe SourceForge or something else) that is accurate and current. Have people giving talks about AppArmor at conferences to promote it to developers. Then try to do something to get some buzz about the technology; my SE Linux Play Machines inspired a lot of interest in the SE Linux technology [10], and if something similar was done with AppArmor then it would get some interest.

I’m not interested in killing AppArmor (I suspect that Crispin’s insinuations were aimed at others). If my posts on this topic inspire more work on AppArmor and Linux security in general then I’m happy. As Crispin notes the real enemy is his employer (he doesn’t quite say that – but it’s my interpretation of his post).

Google Chrome – the Security Implications

Google have announced a new web browser – Chrome [1]. It is not available for download yet, currently there is only a comic book explaining how it will work [2]. The comic is of very high quality and will help in teaching novices about how computers work. I think it would be good if we had a set of comics that explained all the aspects of how computers work.

One noteworthy feature is the process model of Chrome. Most browsers seem to aim to have all tabs and windows in the same process which means that they can all crash together. Chrome has a separate process for each tab so when a web site is a resource hog it will be apparent which tab is causing the performance problem. Also when you navigate from site A to site B they will apparently execute a new process (this will make the back-arrow a little more complex to implement).

A stated aim of the process model is to execute a new process for each site to clear out the memory address space. This is similar to the design feature of SE Linux where a process execution is needed to change security context so that a clean address space is provided (preventing leaks of confidential data and attacks on process integrity). The use of multiple processes in Chrome is just begging to have SE Linux support added. Having tabs opened with different security contexts based on the contents of the site in question and also having multiple stores of cookie data and password caches labeled with different contexts is an obvious development.

Without having seen the code I can’t guess at how difficult it will be to implement such features. But I hope that when a clean code base is provided by a group of good programmers (Google has hired some really good people) the result will be a program that is extensible.

They describe Chrome as having a sandbox based security model (as opposed to the Vista model, which is based on the Biba Integrity Model [3]).

It’s yet to be determined whether Chrome will live up to the hype (although I think that Google has a good record of delivering what they promise). But even if Chrome isn’t as good as I hope, they have set new expectations of browser features and facilities that will drive the market.

Update: Chrome is now released [4]!

Thanks to Martin for pointing out that I had misread the security section. It’s Vista not Chrome that has the three-level Biba implementation.