Can SE Linux Stop a Linux Storm

Bruce Schneier has just written about the Storm Worm [1] which has apparently been quietly 0wning some Windows machines for most of this year (see the Wikipedia page for more information [2]).

I have just been asked whether SE Linux would stop such a worm in a Linux environment. SE Linux does prevent many possible methods of getting local root. If a user who does not have the root password (or is not going to enter it from a user session) has their account taken over by a hostile party then the attacker is not going to get local root (unless there is a kernel vulnerability). Without local root access the activities of the attacker can be seen from other accounts – under the SE Linux targeted policy processes are visible to all user sessions, and both files and processes can be inspected by the sys-admin.
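
As a rough illustration (a minimal sketch – the domain name shown is from the targeted policy and may differ with other policies), the sys-admin can inspect the security context of every process with the -Z option to ps:

$ ps -eZ | grep unconfined_t

Processes started from an ordinary user session will normally run in a domain such as unconfined_t, so the sys-admin can at least see what a compromised account is running even though the trojan itself is not confined.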

If while a user account is 0wned the user runs “su –” (or an equivalent command) then in theory at least the attacker can sniff this and gain local root access (whether enough users do this to make attackers feel that it’s worth their effort to write the code in question is something I couldn’t even guess at). If the user is clueless then the attacker could immediately display a dialogue with some message that sounds urgent and demand the root password – some users would give it. If the user is even moderately smart the attacker could fake the GUI dialogues for installing updated packages (which have been in Red Hat distributions for ages and have appeared in Debian more recently) and tell the user that they need to enter the root password to install an important security update (oh the irony).

In conclusion, if a user is ill-educated enough to run a program that was sent to them in email by a random person then I expect that the program would have a good chance of coercing them into granting it local root access – provided that the user in question has the ability to do so.

Even if a Linux trojan did not have local root access then it could still do a lot of damage. Any server operations that don’t require ports <1024 (which means most things other than running a web, DNS, or mail server) can still be performed and client access will always work (including sending email). The trojan would have access to all of the user’s data (which for a corporate desktop machine usually means a huge network share of secret documents).

If a trojan only attempts to perform actions that SE Linux permits (running programs from the user’s home directory, accessing servers for DNS, HTTP, IRC, SMTP, and other protocols – a reasonable set of options for a trojan) then the default configuration of SE Linux (the targeted policy) won’t stop it or even log anything. This is not a problem with SE Linux, just a direct result of the fact that in every situation a trojan can perform all operations that the user can perform – and if the trojan only wants to receive commands via web and IRC servers and send spam via the user’s regular mail server then it only needs a small sub-set of the operations permitted to the user!

If however the trojan tries more aggressive methods then SE Linux will log some AVC messages about access being denied. If the sys-admin has good procedures for analysing log files they will notice such things, understand what they mean, and be able to contain the damage. Also there have been at least two cases where SE Linux prevented local root exploits.
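
As a minimal sketch of what that log analysis might look like (assuming a Debian/Etch system without auditd, where AVC messages end up in the kernel log), the denials can be found with a simple grep:

$ dmesg | grep 'avc: denied'
$ grep 'avc: denied' /var/log/kern.log

On systems running auditd the same messages go to /var/log/audit/audit.log and can be searched with “ausearch -m avc”.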

Finally, in answer to the original question: SE Linux will stop some of the more aggressive methods that trojans might use. But there are still plenty of things that a trojan could do to cause harm which won’t be stopped or audited by SE Linux policy. When Linux gets more market share among users with a small amount of skill and no competent person to do sys-admin work for them we will see some Linux trojans and more Linux worms. It will be interesting to see what methods the trojan authors decide to use.

Executable Stack and Shared Objects

When running SE Linux you will notice that most applications are not permitted to run with an executable stack. One example of this is libsmpeg0 which is used by the game Freeciv [1]. When you attempt to run the Freeciv client program on a Debian/Etch system with a default SE Linux configuration (as described in my post on how to install SE Linux on Debian in 5 minutes [2]) then you will find that it doesn’t work.

When this happens the following will be logged to the kernel log and is available through dmesg and usually in /var/log/kern.log (Debian/Etch doesn’t include auditd; the same problem on a Fedora, RHEL, or CentOS system in a typical configuration would be logged to /var/log/audit/audit.log):
audit(1191741164.671:974): avc: denied { execstack } for pid=30823 comm="civclient" scontext=rjc:system_r:unconfined_t:s0 tcontext=rjc:system_r:unconfined_t:s0 tclass=process

The relevant parts are the denied execstack permission and the comm="civclient" field which identifies the program. The problem with this message in the log is that you don’t know which shared object caused the problem. As civclient is normally run from the GUI you are given no other information.

So the thing to do is to run it at the command-line (the avc message tells you that civclient is the name of the failing command) and you get the following result:
$ civclient
civclient: error while loading shared libraries: libsmpeg-0.4.so.0: cannot enable executable stack as shared object requires: Permission denied

This makes it clear which shared object is at fault. The next thing to do is to test the object by using execstack to set it to not need an executable stack. The command execstack -q /usr/lib/libsmpeg-0.4.so.0.1.4 will give an “X” as the first character of the output to indicate that the shared object requests an executable stack. The command execstack -c /usr/lib/libsmpeg-0.4.so.0.1.4 will change the shared object to not request an executable stack. After making such a change to a shared object the next thing to do is to test the application and see if it works correctly. In every case that I’ve seen the shared object has not needed such access and the application has worked correctly.
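
Putting those steps together, a typical session looks something like this (a sketch – the query output format may differ slightly between versions, and the library path is the one from the Freeciv example above):

$ execstack -q /usr/lib/libsmpeg-0.4.so.0.1.4
X /usr/lib/libsmpeg-0.4.so.0.1.4
# execstack -c /usr/lib/libsmpeg-0.4.so.0.1.4
$ execstack -q /usr/lib/libsmpeg-0.4.so.0.1.4
- /usr/lib/libsmpeg-0.4.so.0.1.4
$ civclient

An “X” at the start of the query output means that the object requests an executable stack and a “-” means that it does not; the final step is simply to run the application again to confirm that it still works.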

As an aside, there is a bug in execstack in that it will break sym-links. Make sure that the second parameter it is given is the shared object not the sym-link to it which was created by ldconfig. See Debian bug 445594 [3] and CentOS bug 2377 [4].

The correct thing to do is to fix the bug in the source (not just modify the resulting binary). On page 8 of Ulrich Drepper’s document about non-SE Linux security [5] there is a description of both possible solutions to this problem. One is to add a line containing “.section .note.GNU-stack,"",@progbits” to the assembler file in question (which is what I suggested in Debian bug report 445595 [6]). The other is to add “-Wa,--noexecstack” to the compiler command-line so that the flag is passed on to the GNU assembler – of course this doesn’t work if you use a different assembler.
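
For example, the source-level fix is just a matter of adding the note section to each assembler file that lacks it, and the build-time alternative is a single extra compiler flag (a sketch – the file name foo.S is hypothetical):

/* added to foo.S: marks the object as not needing an executable stack */
.section .note.GNU-stack,"",@progbits

$ gcc -c foo.S -Wa,--noexecstack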

In the near future I will establish an apt repository for Debian/Etch i386 packages related to SE Linux. One of the packages will be a libsmpeg0 package compiled to not need an executable stack. But it would be good if bug fixes such as this one could be included in future updates to Etch.

Reducing Automated Attacks

I read the logs from my servers. The amount of time I spend reading log summaries is determined by how important the server is. On the machines that are most important to me I carefully read log summaries and periodically scan the logs for anything that looks unusual.

The amount of time taken is obviously determined by the amount of data in the logs, so it is a benefit to me (in terms of spending less time) to have smaller logs. It’s also a benefit for me (and the other people who depend on those servers) that I spend my time on things that might be important instead of mindless failed attacks.

One thing that I do to reduce the size of my logs is to run sshd on a non-standard port. This requires a Port directive in /etc/ssh/sshd_config on the server, and on the client machines I edit /etc/ssh/ssh_config to include a section such as the following to avoid the need to use the “-p” option for ssh (or the “-P” option for scp):
Host some_server
ForwardX11 no
Protocol 2
HostKeyAlgorithms ssh-rsa
Port 1234
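
The corresponding change on the server side is just the single directive below in /etc/ssh/sshd_config (using the same example port number as in the client section above), followed by restarting sshd:

Port 1234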

Incidentally I disable X11 forwarding explicitly because it’s a dangerous option which usually isn’t needed. I specify the ssh-rsa algorithm not because it’s any better than the other option (ssh-dss) but because having a secondary key type that is not normally used adds the possibility of a MITM [1] attack by an attacker who forces the client to use the non-default type (thus giving an unknown host key message instead of a message about an invalid key).

Note that these settings can go in /etc/ssh/ssh_config to apply to all users or in ~/.ssh/config to apply to only one user (IE if you aren’t root on the machine in question).

The practice of avoiding attacks by using non-standard ports is not Security by Obscurity, in that the security of my systems does not rely on attackers not knowing the port. Attackers can easily scan all ports and discover which one I use. The fact is that any attacker who does so is a more serious threat than all the attackers who scan port 22 and bother someone else when they discover that nothing is listening; such an attacker deserves to have some of my time used to read the log files related to their attempt.

Public Security Cameras

There is ongoing debate about the issue of security cameras, how many should there be, where should they be located, and who should be able to access the data.

I spent about a year living in London, which probably has more security cameras and a greater ratio of cameras to people than any other city. I was never bothered by this. I believe that if implemented correctly security cameras increase public safety without causing any serious problems.

A while ago I witnessed a violent assault (which could potentially have ended up as a manslaughter case – it was merely luck that ~200 people got off a train at the right time to scare the attackers off). AFAIK I was the only person who identified themself to the police and was prepared to stand as a witness; without security camera footage the case would not have gone anywhere (I only saw the attackers from behind as they ran off). Security camera footage allowed the police to identify the attackers, so my testimony was not required and I was never informed as to how the case proceeded – but I know for a fact that the police investigation depended on security camera footage and that they did make progress in the case based on such footage.

There are current plans to increase the scope of security cameras in many cities under the guise of the “war on terror”. The problem is that once a terrorist is involved in an attack it’s too late for security cameras. Security cameras are really only good for catching criminals after an attack; in most cases they will be entirely ineffective against suicide bombers as the issue of catching them afterwards is moot. There have been cases where security cameras have enabled the authorities to identify people with terrorist ideas who were investigating military bases (but I wouldn’t call such lamers “terrorists” as all the available evidence suggests that they would be incapable of succeeding in an attack). However no-one is disputing the fact that military installations need to have good security.

Given that security cameras do provide significant benefits to public safety I don’t think it’s reasonable to oppose them as long as they are implemented in a sensible and responsible manner. Most of the current plans to install security cameras don’t seem to be sensible and have few controls on who can access the data. This makes them good targets for oppressive government actions, organised crime, and even terrorists. The countries that have serious terrorist problems always have problems of terrorists infiltrating government departments and bribing government officials. A centralised system that allows the police to watch anyone at any time would probably do more good for al Qaeda and the Mafia than it would for regular police work.

For the fastest possible response a security camera system needs to have humans able to monitor its output in real-time. Having a control-room where police officers can randomly switch between public cameras to see if a crime appears to be in progress is a good thing (and works well in the UK). Of course the actions of such police need to be monitored to make sure that they are actually doing their job (not checking out hotties on the camera – an ongoing problem with security cameras).

Finally there’s the issue of what level of surveillance can be expected in a public place. I think that most people agree that when you enter a government building it’s reasonable to expect that you will be on camera. Many private buildings have security cameras, with a condition of entry being that you permit yourself to be watched, and no-one seems to be boycotting shopping centres because of this. Significant public spaces such as main roads and public transport also seem like reasonable locations for security cameras.

One location that is widely disputed is that of streets in residential areas. Most people who are happy to be photographed when entering and leaving public buildings such as train stations and shopping centres are not happy to be photographed when entering and leaving their own home.

I think that a reasonable solution to these problems requires the following:

  1. Restrictions on the duration and scope of surveillance in residential areas (EG require police to get court orders for such surveillance that must be periodically renewed).
  2. Restricting the duration for which records may be kept by the police. Keeping any records for longer than the period in question (which would be a few weeks at most) would require a court order.
  3. Prohibiting private organisations from handling surveillance data from government property (including public roads, train stations, etc). There are problems with having a private company aggregate surveillance data from multiple private properties but I don’t think we can address this at the moment.

Ideas for a Home University

There seems to be a recent trend towards home-schooling. The failures of the default school system in most countries are quite apparent, and the violence alone is enough of a reason to keep children away from high-schools, even without considering the education (or lack thereof).

I have previously written about University degrees and whether they are needed [1].

The university I attended (which I won’t name in this context) did an OK job of teaching students. The main thing that struck me was that you would learn as much as you wished to at university. It was possible to get really good marks without learning much (I have seen that demonstrated many times) or to learn lots of interesting things while getting marks that are merely OK (which is what I did). So I have been considering whether it’s possible to learn as much as you would learn at university without attending one, and if so how to go about it.

Here are the ways I learned useful things at university:

  1. I spent a lot of time reading man pages and playing with the various Unix systems in the computer labs. It turned out that sys-admin work was one of my areas of interest (not really surprising given my history of running Fidonet BBS systems). It was unfortunate that my university (like almost all other universities) had no course on system-administration and therefore I was not able to get a sys-admin job until several years after graduating.
  2. I read lots of good text books (university libraries are well stocked).
  3. There were some good lectures that covered interesting material that I would not have otherwise learned (there were also some awful lectures that I could have missed – like the one which briefly covered computer security and mentioned NOTHING other than covert channels – probably the least useful thing that they could cover).
  4. I used to hang out with the staff who were both intelligent and friendly (of which there were unfortunately a small number). If I noticed some students hanging out in the office of one of the staff in question I would join them. Then we would have group discussions about many topics (most of which were related to computers and some of which were related to the subjects that we were taking); this would continue until the staff member decided that he had some work to do and kicked us out. Hanging out with smart students was also good.
  5. I did part-time work teaching at university. Teaching a class forces you to learn more about the subject than is needed to merely complete an assignment. This isn’t something that most people can do.

I expect that children who don’t attend high-school will have more difficulty in getting admitted to a university (the entrance process is designed around the results of high-school). Also if you are going to avoid the public education system then it seems useful to try to avoid it for all education instead of just the worst part. Even for people who weren’t home-schooled I think that there are still potential benefits in some sort of home-university system.

Now a home-university system would not be anything like an Open University. One example of an Open University is Open Universities Australia [2], another is the UK Open University [3]. These are both merely correspondence systems for a regular university degree. So it gives a university degree without the benefit of hanging out with smart people. While they do give some good opportunities for people who can only study part-time, in general I don’t think that they are a good thing (although I have to note that there are some really good documentaries on BBC that came from Open University).

Now I am wondering how people could gain the same benefits without attending university. Here are my ideas on how the main benefits that I believe are derived from university can be achieved without one (for a Computer Science degree anyway):

  1. Computers are cheap, every OS that you would ever want to use (Linux, BSD, HURD, OpenSolaris, Minix, etc) is free. It is quite easy to install a selection of OSs with full source code and manuals and learn as much about them as you desire.
  2. University libraries tend not to require student ID to enter the building. While you can’t borrow books unless you are a student or staff member it is quite easy to walk in and read a book. It may be possible to arrange an inter-library loan of a book that interests you via your local library. Also if a friend is a university student then they can borrow books from the university library and lend them to you.
  3. There are videos of many great lectures available on the net. A recent addition is the YouTube lectures from the University of California Berkeley [4] (I haven’t viewed any of the lectures yet but I expect them to be of better than average quality). Some other sources for video lectures are Talks At Google [5] and TED – Ideas Worth Spreading [6].
  4. To provide the benefits of hanging out with smart people you would have to form your own group. Maybe a group of people from a LUG could meet regularly (EG twice a week or more) to discuss computers etc. Of course it would require that the members of such a group have a lot more drive and ambition than is typical of university students. Such a group could invite experts to give lectures for their members. I would be very interested in giving a talk about SE Linux (or anything else that I work on) to such a group of people who are in a convenient location.
  5. The benefits of teaching others can be obtained by giving presentations at LUG meetings and other forums. Also if a group was formed as suggested in my previous point then at every meeting one or more members could give a presentation on something interesting that they had recently learned.

The end result of such a process should be learning more than you would typically learn at university while having more flexible hours (whatever you can convince a group of like-minded people to agree to for the meetings) that will interfere less with full-time employment (if you want to work while studying). In Australia university degrees don’t seem to be highly regarded so convincing a potential employer that your home-university learning is better than a degree should not be that difficult.

If you do this and it works out then please write a blog post about it and link to this post.

Update:
StraighterLine offers as much tuition as you can handle over the Internet for $99 per month [7]. That sounds really good, but it does miss the benefits of meeting other people to discuss the work. Maybe if a group of friends signed up to StraighterLine [8] at the same time it would give the best result.

Xen Memory Use and Zope

I am currently considering what to do regarding a Zope server that I have converted to Xen. To best manage the servers I want to split the Zope instances into different DomUs based on organisational boundaries. One reason for doing this is so that each sys-admin will only be granted access to the Zope instance that they run, so that they can’t accidentally break anyone else’s configuration. Another reason is to give the same benefit in the situation where one sys-admin runs multiple instances: if a sys-admin is asked to do some work by user A and breaks something else running for user A then I think that user A will understand that when you request changes there is a small risk of things going wrong. If a sys-admin is doing work for user A and accidentally breaks something for user B then they won’t get any great understanding, because user B wanted nothing to be touched!

Some people who are involved with the server are hesitant about my ideas because the machine has limited RAM (12G maximum for the server before memory upgrades become unreasonably expensive) and they believe that Zope needs a lot of RAM and will run inefficiently without it.

Currently it seems that every Zope instance has 100M of memory allocated by a parent process running as root (of which 5.5M is resident) and ~500M allocated by a child process running as user “zope” (of which ~250M is resident). So it seems that each DomU would need a minimum of about 255M of RAM plus the memory required for Apache and other system services, with the ideal being about 600M. This means that I could (in theory at least) have something like 18 DomUs for running Zope instances, with Squid running as a front-end cache for all of them in the Dom0.
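
For reference, a rough sketch of the sort of DomU configuration this implies (the file name, device names, and values are assumptions for illustration, not a tested configuration):

# /etc/xen/zope-groupa.cfg - hypothetical Zope DomU with ~600M of RAM
name = "zope-groupa"
memory = 600
disk = [ 'phy:vg0/zope-groupa,sda1,w' ]
vif = [ '' ]
root = '/dev/sda1 ro'

Apart from the usual kernel/ramdisk or bootloader lines, the memory line is the only per-DomU tuning knob that matters for this question.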

What I am wondering about is how much memory Zope really needs, could I get better performance out of Zope if I allowed it to use more RAM?

The next issue regards Squid. I need to have multiple IP addresses used for the services due to administrative issues (each group wants to have their own IP); having Squid listen on multiple addresses should not be a big deal (but I’ve never set up Squid as a front-end proxy so there may be hidden problems). I also need to have some https operations on the same IP addresses. I am considering giving none of the Xen DomUs public IP addresses and just using Netfilter to DNAT the connections to the right machines (a quick test indicates that if the DomU in question has no publicly visible IP address and routes its packets via the Dom0 then a simple DNAT rule in the PREROUTING chain of the nat table does the job).
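
A sketch of the kind of DNAT rule I mean, run in the Dom0 (the addresses are hypothetical – 192.0.2.10 standing in for one of the public IPs and 10.1.1.2 for the private address of a Zope DomU):

# forward web traffic for one public IP to the corresponding DomU
iptables -t nat -A PREROUTING -d 192.0.2.10 -p tcp --dport 80 -j DNAT --to-destination 10.1.1.2:80
# the Dom0 must also be routing packets for the private network
echo 1 > /proc/sys/net/ipv4/ip_forward

As long as the DomU uses the Dom0 as its default gateway the reply packets are translated back automatically by connection tracking.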

Is there anything else I should be considering when dividing a server for running Zope under Xen?

Is it worth considering a single Apache instance that talks to multiple Zope instances in different DomUs?

LUG Meetings etc

Recently I was talking to an employee at Safeway (an Australian supermarket chain) about Linux etc. He seemed interested in attending a meeting of my local LUG (which incidentally happens on the campus of the university where he studies). I have had a few conversations like that and it seems that it would be good to have some LUG business-cards.

It shouldn’t be difficult to make something that is similar in concept to the Debian business cards [1] for use by a Linux Users Group (LUG). That way when you tell someone about Linux you can hand them a card that has your name and email address along with the web site for your local LUG.

In other news I will be attending a meeting of the Linux Users of Victoria (LUV) [2] this evening and will have some Fair Trade Chocolate [3] to give away to people who arrive early. The chocolate in question is now sold by Safeway for a mere $4 per 100g (not much more expensive than the regular chocolate).

live.com – Useless Search

I’ve just started getting a lot of traffic referred by live.com. It seems that my post Porn for Children [1] is the second link returned from a live.com search for “porn” and my post Porn vs Rape [2] is the third link. These results occur in two of the three settings for “safe search” (the most safe one doesn’t return any matches for a query about “porn”). A query for “porn” and “research” (which would reasonably be expected to match a blog post concerning scientific research) made my page the 8th listing (behind http://www.news.com.au/404, http://www.news.com.au/couriermail/404, and http://www.theaustralian.news.com.au/404). It seems strange that a query which should match my page gives it a lower ranking than three 404 error pages, while a query which shouldn’t match my page (no-one who searches for “porn” on its own wants to read about scientific research) gives it a high ranking.

One very interesting thing about the live.com search is that it doesn’t filter out some of the least effective ways of gaming search engines. For example the URL http://gra.sdsu.edu/research.php has a huge number of links (to porn pages and pages that apparently sell illegal pharmaceuticals) that are not visible (view the page source to see them). The links that I tested were all broken, so it seems that the other sites (including http://www.hcs.harvard.edu/~hraaa, http://base.rutgers.edu/pgforms, http://www.wccs.edu/employees, http://base.rutgers.edu/mirna, http://www.calstatela.edu/faculty/jperezc/students/oalamra, and http://institute.beacon.edu/) were cleaned up long ago. There is probably some money to be made in running a service that downloads all content from a web site and/or a firewall device that sniffs all outgoing content and makes sure that it is what was intended (about half the URLs in question appear to relate to content that is illegal under US law).

As an aside, I did a few other live.com searches for various sites and the word “porn” and found one Australian university running a forum with somewhat broken HTTP authentication that has some interesting posts about porn etc. I’m not going to provide a link because the content didn’t appear to violate Australian law and you expect some off-topic content on a forum.

But to be fair, live.com have significantly improved their service since last time I tested it [3]. Now a search for “bonnie” or “bonnie++” will give me the top two spots which is far better than the previous situation. Although I have to admit that the Google result of not giving me a high ranking for “bonnie” is probably better.

Blogging and Documents

It seems that the majority of blog traffic (at least in the blogs I read) is time-based. It consists of personal diary entries, references to current events, predictions about future events, bug reports, and other things that either become obsolete or for which it’s important to know the date. For such posts it makes sense to have the date be part of the Permalink URL, and in the normal course of events such posts will tend not to be updated after release.

Another type of blog traffic is posts that have ongoing reference value which will (ideally) be actively maintained to keep them current. For such posts it makes sense to have no date stamp in the Permalink – for example if I update a post about installing SE Linux on Etch once Lenny is released (a significant update) I don’t want people ignoring it when it comes up in search engines (or worse having search engines score it down) because the URL indicates that it was written some time before the release of Etch.

WordPress supports Pages as separate entities to Posts, and the names of Pages are direct links under the root of the WordPress installation. However there is no RSS feed for Pages (AFAIK – I may have missed something) and WordPress themes often treat Pages differently (which may not be what you want for timeless posts). Also it is not unreasonable to want both Pages and timeless posts.

I’m thinking of creating a separate WordPress installation for posts that I intend to manage for long periods of time with updates (such as documenting some aspects of software I have written). The management options for a blog server program provide significant benefits over editing HTML files. The other option would be to use a different CMS (a blog server being a sub-category of a CMS) to store such things.

What I want is a clear way of presenting the data with minimal effort from me (an advantage of WordPress for this is that I have already invested a significant amount of effort in learning how it works) and the ability to edit from remote sites (the offline blog editing tools that are just coming out are a positive point for using a blog server – particularly as I could use the same editor for blog posts and documents).

Any suggestions as to how to do this?

Then of course there’s the issue of how to syndicate this. For my document blog (for want of a better term) I am thinking of updating the time-stamp on a post every time I make a significant change. If you subscribe to the document feed then that would be because you want to receive new copies of the documents as they are edited. The other option would be to not change the time-stamp and then include the feed along with my regular blog feed (making two feeds be served as one is not a technical challenge). If I was to update the time stamps then I would have to write posts announcing the release of new documents.

Does anyone know of someone who writes essays or howto documents in a similar manner to Rick Moen [1] or Paul Graham [2] who also does daily blog posts? I’d like to see some examples of how others have solved these problems (if there are any).

Upgraded to WordPress 2.3

I just upgraded to WordPress 2.3. When using Konqueror (my favourite browser) the comment approval is slightly broken (when I tag a comment as spam it usually just turns red and doesn’t disappear from the main Comments tab) and I have to refresh that window more often than usual to make sure I got the result I desired. Also the Sidebar Widget editing is totally broken in Konqueror, so I guess I’ll have to log in with Firefox to get the new tags feature working.

Also I have got a few WordPress errors about the table “$table_prefix” . “post2cat” not existing. The table in question doesn’t exist in any of my blogs.
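
A quick way to check for the missing table (a sketch, assuming the default “wp_” table prefix and a database called wordpress) is to ask MySQL directly:

$ mysql -u wordpress -p wordpress -e "SHOW TABLES LIKE 'wp_post2cat'"

An empty result confirms that the table really is gone. As far as I know the 2.3 upgrade replaced the old category tables with the new taxonomy schema, so these errors are presumably coming from code (a plugin or theme) that still queries post2cat.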

So far this is the worst WordPress upgrade experience I’ve had (I started on 2.0).