Case Sensitivity and Published Passwords

When I first started running a SE Linux Play Machine [1] I used passwords such as “123456”. Then for a while I had “selinux”, but when I created a T-shirt design (see the main Play Machine page for details) I changed the password to “SELINUX” because that is easier to read on a shirt.

Unfortunately the last time I rebuilt the Play Machine I used a password of “selinux”. Some people worked this out and logged in anyway, so I didn’t realise that anything was wrong until a comment was placed on my blog yesterday. This means that for the past three weeks or so some people have been finding themselves unable to login. The password is now “SELINUX” again, sorry for any inconvenience.

It’s a pity that I can’t make sshd a little less case sensitive for passwords. A PAM module to implement a caps-lock mode where the opposite case is tried would be useful for this case and some others too.
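As far as I know no such PAM module exists, but the matching rule is simple enough to sketch. The following Python sketch is purely illustrative (the function name is mine, and SHA-256 hashing stands in for whatever the real password store uses): if the password fails as typed, try it again with the case of every letter swapped, which is what Caps Lock produces.

```python
import hashlib

def check_with_caps_fallback(password: str, stored_hash: str) -> bool:
    """Accept the password as typed, or with the case of every letter
    swapped, which is what Caps Lock produces on an all-letter password."""
    for candidate in (password, password.swapcase()):
        if hashlib.sha256(candidate.encode()).hexdigest() == stored_hash:
            return True
    return False

# "SELINUX" is the stored password; a user with Caps Lock on types "selinux".
stored = hashlib.sha256(b"SELINUX").hexdigest()
print(check_with_caps_fallback("selinux", stored))  # True - caps-lock variant matches
print(check_with_caps_fallback("Selinux", stored))  # False - mixed case is rejected
```

A real implementation would live in C against libpam and retry the stacked authentication with the swapped-case token, but the acceptance rule would be the same.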

SE Linux Lenny Status Update

I previously described four levels of SE Linux support on the desktop [1].

Last night I updated my APT repository of SE Linux packages for Lenny (as described on my document about installing SE Linux [2]). I included a new policy package that supports logging in to a graphical session via gdm in either unconfined_t or user_t. This covers all the functionality I described as target 2 (some restricted users). I have tested this to a moderate degree.

Target 3 was having all users restricted and no unconfined_t domain (the policy module unconfined.pp not being linked into the running policy). I had previously done a large part of the work towards that goal in preparation for running a SE Linux Play Machine (with public root password) [3] on Lenny – but until last night I had not published it. The combination of the policy needed to run with no unconfined_t domain and the policy to allow logging in as user_t via gdm should mean that a desktop system with gdm for graphical login and no unconfined_t domain will work – but I have not tested this. So target 3 is likely to have been achieved; if testing reveals any problems in this regard then I’ll release another policy update.

So now the only remaining target is MLS.

Also I have been setting up a mail server with a MySQL database for user account data and using Courier-Maildrop for delivery, so I’ve written policy for that and also made some other improvements to the policy regarding complex mail servers.

You Have the Right to Not Search My Bag

This afternoon I was in a Safeway/Woolworths store (an Australian supermarket chain) and the lady on the checkout asked to inspect my backpack on the way out. The conversation went as follows:
Checkout Lady: Can I inspect your bag?
Me: Sure. – I put my backpack on the counter
CL: Could you open it for me?
Me: It’s OK, you can do it.
CL: I’m not allowed to open your bag, can you open it?
Me: I don’t mind, you can open it.

We iterated over the last two lines a few times; when it became clear that no progress was going to be made I asked “Can I go now?” and left.

It seems rather pointless to demand to search someone’s bag if you are not permitted to open it. Not that they have any power to search bags anyway. I discussed this with a police officer about 20 years ago and was told that store staff can do nothing other than refuse to allow you into their store in future if you don’t agree to a bag search. Stores claim that it’s a condition of entry that your bag be searched, but apparently that is not enforceable. Of course the law could have changed recently; I guess it would only require a terrorist threat related to supermarket products (baking soda can make your bread rise explosively) to get the law changed.

The last time my bag was searched was when leaving a JB Hi-Fi store. I had a brand new EeePC (purchased from a different store) in one hand and a bag in the other. The EeePC was identical to ones that they had on display and they didn’t even ask about it. It seems hardly worth the effort of searching bags when anyone can carry out expensive gear in their hand without being questioned.

A Police SMS about Fire Risk

My wife and I have each received SMS messages from “Vic.Police” that say:

Extreme weather expected tonight (Monday) & tomorrow. High wind & fire risk. Listen to the ABC local radio for emergency update. Do not reply to this message.

Presumably the police are trying to contact everyone in Victoria. The problem seems to be related to the high wind speed that is forecast; the temperature is only predicted to be 32C (as opposed to the 38C that they were forecasting a few days ago and the temperatures of 46C or more a few weeks ago).

The last reports were that the firefighters were still working on putting out fires, and the unclear news coverage seemed to suggest that some of the fires had been burning since the 7th of February. A day of extreme fire danger that starts without any fires would be bad enough, but starting with some fires that are already out of control is destined to give a very bad result.

Below is the link to my previous post about people trying to take advantage of a tragedy to benefit their own political causes. For anyone who wants to rail against abortion, homosexuality, or the Greens party, please show some decency and do so based on relevant facts and do it at an appropriate time. I suggest that anyone who writes later this week about ways to avoid bushfires should be careful to check their claims for accuracy and scientific evidence (hint – the CSIRO and NASA have published a lot of useful background information).

http://etbe.coker.com.au/2009/02/25/tragedy-and-profit/

Links February 2009

Michael Anissimov writes about the theft of computers from the Los Alamos nuclear weapons lab [1]. He suggests that this incident (and others like it) poses a great risk to our civilisation. He advocates donating towards The Lifeboat Foundation [2] to try to mitigate risks to humanity; they suggest pledging $1000 per year for 25 years.

It’s interesting to note that people in Pakistan pay $8 per month for net access that is better by most objective metrics than what most people in the first world can get [3]. It seems that we need to remove the cartel for the local loop to get good net access: either deregulate it entirely or make it owned by the local government, which is more directly responsive to the residents.

Bruce Schneier wrote a post about a proposed US law to force all mobile phones with cameras to make a “click” sound when taking a picture [4]. The law is largely irrelevant; as it has been law in Japan for a while, most phones are already designed that way. One interesting comment from MarkH was: “But if congress REALLY wishes to benefit the public, I suggest that all guns in the U.S. be required, before each discharge, to make loud sounds (with appropriate time sequencing) simulating the flintlock technology that was common at the beginning of U.S. history, including cocking, use of the ramrod, etc. This would give fair warning of an impending discharge, and would limit firing rates to a few per minute.” ROFL

Brief review of a Google Android phone vs an iPhone [5]. The Android G1 is now on sale in Australia! [6].

LWN has an article about the panel discussion at the LCA Security Mini-conf [7]. Jonathan Corbet has quoted me quite a bit in the article, thanks Jonathan!

Peter Ward gave an interesting TED talk about Hydrogen Sulphide and mass extinctions [8]. The best available evidence is that one of the worst extinctions was caused by H2S in the atmosphere, produced by bacteria that thrive when there is a large amount of CO2 in the atmosphere. It’s yet another reason for reducing CO2 production.

Michael Anissimov has written a good article summarising some of the dangers of space exploration [9]; he suggests colonising the sea, deserts, and Antarctica first (all of which are much easier and safer). “Until we gain the ability to create huge (miles wide or larger) air bubbles in space enclosed by rapidly self-healing transparent membranes, it will be cramped and overwhelmingly boring. You’ll spend even more time on the Internet up there than down here, and your connection will be slow”. A confined space and slow net access – that’s like being on a plane.

Tragedy and Profit

Every time something goes wrong there will be someone who tries to take advantage of the situation. The recent bushfires in Australia that have killed hundreds of people (the count is not known yet) are a good example. Pastor Nalliah of Catch the Fire Ministries [1] claims that it is due to legalising abortion. This is astoundingly wrong.

In a more extreme example representatives of the Westboro Baptist Church were planning to visit Australia to launch a protest in support of the bushfires [2]. I have not yet found any news reports about whether they actually visited Australia or protested – it’s most likely that they decided not to visit due to the Australian laws being very different to US laws regarding the relative importance of freedom of speech and incitement to violence. Apparently the insane Westboro Baptist Church people (who are best known for GodHatesFags.com and GodHatesAmerica.com) believe that God hates Australia and caused the fires (presumably due to Australia not persecuting homosexuals). Danny Nalliah has permanently damaged his own reputation by acting in a similar way to the Westboro Baptist Church. The reputation of Catch The Fire now depends on how quickly they get a new pastor…

Please note well that the vast majority of Christians have nothing in common with Westboro or Catch The Fire. I don’t recall the last time I met an Australian Christian who was strongly opposed to homosexuality or abortion.

Now we do have to try and investigate ways of avoiding future tragedies, and the work to do this needs to begin immediately. John Brumby (the Premier of Victoria) has announced that Victoria will get new strict building codes for fire resistant buildings [3]. There have been many anecdotes of people who claim to have been saved by attaching sprinkler systems to their homes, by building concrete bunkers to hide in while the fire passes, and by using other techniques to save their home or save themselves. Some more research on the most effective ways of achieving such goals would be worthwhile; an increase in funding for the CSIRO to investigate the related issues would be a good thing. The article also has an interesting quote: “As the fallout from the disaster widened, the union representing the nation’s 13,000 firefighters warned both the federal and state governments to take global warming seriously to prevent a repeat of last weekend’s lethal firestorm”. However given that traditionally Australia and the US have been the two nations most opposed to any efforts to mitigate global warming it seems unlikely that anything will change in this regard in a hurry.

The attempts to link bushfires to abortion and homosexuality are offensive, but can be ignored in any remotely serious debate about politics. However there are some other groups trying to profit from the tragedy that make claims which are not as ridiculous.

On the 9th of February the Australian Green party was compelled to release an official statement from Spokesperson Scott Ludlam, Sarah Hanson-Young, Rachel Siewert, Christine Milne, and Bob Brown following some political discussion about Greens policies [4]. There have been politically motivated attempts to blame the Greens for the tragedy, some of which came from groups that traditionally oppose the Greens for other reasons (I’m not going to provide the detail – anyone who is really interested can do google searches on the people in question). On the 16th of February Bob Brown (the leader of the Green party) felt obliged to make another media release reiterating the fact that the Greens support prescribed burn-offs to limit the scope of wild fires [5]; he also decried the hate mongering that has been occurring in the wake of the disaster.

One of the strange memes that seems to be spread by opponents of the Greens is that the Greens are all supposedly from the city and know nothing about the country. To avoid being subject to such attacks I feel obliged to note that on one of the bad fire days I visited my parents. I spent the morning with my father and some friends at a park that was not far from the fire area, and my friends then returned to their home which was also not far from the fire area. I then had lunch with my parents and watched the smoke through the dining room window. After that my friends didn’t respond to email for a while and I was concerned that they might have lost their house or suffered injury or death. I didn’t know them well enough to feel it appropriate to try a dozen different ways of contacting them (I’m sure that many other people were doing so), but I was rather concerned until my wife received an email from them.

But I don’t base my political beliefs on what I personally observe or my connections to people on the edge of the fire zone. I believe in the Green principles of “Peace and Non Violence, Grassroots Democracy, Social and Economic Justice, Ecological Sustainability” and the use of science and statistics to determine the best ways of achieving those goals.

Red Hat, Microsoft, and Virtualisation Support

Red Hat has just announced a deal with MS for support of RHEL virtual machines on Windows Server and Windows virtual machines on RHEL [1]. It seems that this deal won’t deliver anything before “calendar H2 2009” so nothing will immediately happen – but the amount of testing to get these things working correctly is significant.

Red Hat has stated that “the agreements contain no patent or open source licensing components” and “the agreements contain no financial clauses, other than industry-standard certification/validation testing fees” so it seems that there is nothing controversial in this. Of course that hasn’t stopped some people from getting worked up about it.

I think that this deal is a good thing. I have some clients who run CentOS and RHEL servers (that I installed and manage) as well as some Windows servers. Some of these clients have made decisions about the Windows servers that concern me (such as not using ECC RAM, RAID, or backups). It seems to me that if I were to use slightly more powerful hardware for the Linux servers I could run Windows virtual machines for those clients and manage all the backups at the block device level (without bothering the Windows sysadmins). This also has the potential to save the client some costs in terms of purchasing hardware and managing it.

When this deal with MS produces some results (maybe in 6 months time) I will recommend that some of my clients convert CentOS machines to RHEL to take advantage of it. If my clients take my advice in this regard then it will result in a small increase in revenue and market share for RHEL. So Red Hat’s action in this regard seems to be a good business decision for them. If my clients take my advice and allow me to use virtualisation to better protect their critical data that is on Windows servers then it will be a significant benefit for the users.

Lenny Play Machine Online

As Debian/Lenny has been released and the temperatures in my part of the world are no longer insanely hot I have put my SE Linux Play Machine [1] online again. It is running Debian/Lenny and is a Xen DomU on a Debian/Lenny Dom0.

To get this working I had to make a few more fixes to the SE Linux policy and will update my Lenny repository (as mentioned in my document on installing SE Linux on Lenny [2]) in the near future.

I have reformatted most of the text from the thanks.txt file on my Play Machine and put it online on my documents blog [3]. I have also graphed the logins to my Play Machine using Webalizer [4], with 1KB of transfer in the graph meaning one minute of login time. Below is the Perl code I used to convert the output of “last -i” to what looks like an Apache log file. The program takes a single command-line parameter which indicates the year that the data is from (which is not included in last output), takes the output of “last -i” on standard input, and gives a web log on standard output.

#!/usr/bin/perl
use strict;
use warnings;

# Convert "last -i" output to an Apache-style log for Webalizer.
# Usage: last -i | ./last2weblog.pl 2009 > fake-access.log

my @output;

while(<STDIN>)
{
  # Only process root logins on pseudo-terminals.
  next if(not $_ =~ /^root.*pts/);
  $_ =~ s/  +/ /g;
  $_ =~ s/^root pts.[0-9]+ //;
  chomp $_;
  my @arr = split(' ', $_);
  # Sessions that ended in a crash get their own URL.
  my $url = "/";
  if($arr[6] =~ /crash/)
  {
    $url = "/crash";
  }
  # The duration field looks like "(HH:MM)" or "(DAYS+HH:MM)".
  my $t = $arr[7];
  $t =~ s/[()]//g;
  my @times = split(':', $t);
  if($times[0] =~ /\+/)
  {
    my @hours = split('\+', $times[0]);
    $t = $hours[0] * 24 * 60 + $hours[1] * 60 + $times[1];
  }
  else
  {
    $t = $times[0] * 60 + $times[1];
  }
  # One minute of login time becomes 1KB of "transfer" in the graph.
  $t *= 1024;
  if($t == 0)
  {
    $t = 1;
  }
  # Zero-pad single-digit days of the month for the log timestamp.
  if(length($arr[3]) == 1)
  {
    $arr[3] = "0" . $arr[3];
  }
  push @output, "$arr[0] - - [$arr[3]/$arr[2]/$ARGV[0]:$arr[4]:00 +0000] \"GET $url HTTP/1.0\" 200 $t \"-\"\n";
}

# "last" lists the newest sessions first; reverse for chronological order.
print reverse @output;

Xen and Lenny

Debian GNU/Linux 5.0 AKA “Lenny” has just been released [1].

One of the features that is particularly noteworthy is that Xen has been updated and now works fully and correctly on the 2.6.26 kernel (see the Debian Wiki page about Xen for details [2]). This may not sound exciting, but I know that a lot of people put a lot of work into getting this going, and for a long time in Unstable it wasn’t working well. I’ve just upgraded three Xen servers from Etch to Lenny (actually one was Etch kernel with Lenny user-space), and they all worked!

Those three servers were all running the i386 architecture, the next thing to do is to try it out with the AMD64 architecture. One of my plans is to try the latest Debian kernel on the server I use in Germany, but I’ll try on a few other AMD64 machines first.

Do Spammers Target Secondary MX Servers?

Rumour has it that some types of spammer target the secondary MX servers. The concept is that some people have less control over the secondary MX server and less ability to implement anti-spam measures. Therefore if they accept all mail from the secondary then a spammer will have more success by attacking the secondary server.

True secondary servers are becoming increasingly uncommon; the lower priority servers listed in MX records tend to have the same configuration as the primary, so the benefit for the spammer in attacking the secondary server is probably minimal. But it would be good to know whether they do this.

I decided to analyse the logs from a mail server that I run to see if I can find evidence of this. I chose a server that I run for a client which has thousands of accounts and tens of thousands of messages delivered per day; my own server doesn’t get enough traffic to give good results.

I analysed the logs for a week for the primary and secondary MX servers to see if the ratio of spam to ham differed. Now this does have some inherent inaccuracy: some spam will slip past the filters and occasionally a legitimate email will be rejected. But I believe that the accuracy required in a spam filter to avoid making the users scream is vastly greater than that which is required to give a noteworthy result.

I produced totals of the number of messages delivered, the number rejected by SpamAssassin (which has a number of proprietary additions), the number of message delivery attempts that were prevented due to rate limiting (most of which will be due to spammers), and the number of attempts to deliver to unknown accounts (some of which will be due to spammers having bad addresses in their lists).

For each of these rejection criteria I produced a ratio of the number of rejections to the number of delivered messages for each of the servers.

The rate limit count didn’t seem useful. While the primary server had a ratio of 0.75 messages rejected due to rate limiting to every message accepted, the secondary had a ratio of 0.08. It seems that the secondary just didn’t get enough traffic to trigger the limits very often. This is an indication that the more aggressive bots might not be targeting the secondary.

The ratio of messages rejected by SpamAssassin to legitimate mail was 0.76:1 on the primary server; on the secondary server it was 1.24:1. The ratio of messages addressed to unknown users to successful deliveries was 3.05:1 on the primary and 7.00:1 on the secondary! This seems like strong evidence that some spammers are deliberately targeting the secondary server.
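To make the comparison concrete, here is a small Python sketch of the ratio calculation. The per-server counts are invented (the real traffic figures are confidential), chosen so that the ratios come out at the values quoted for this analysis.

```python
# Invented per-server counts (the real traffic figures are confidential),
# chosen so the ratios match the ones reported for this analysis.
counts = {
    "primary":   {"delivered": 10000, "spamassassin": 7600, "unknown_user": 30500},
    "secondary": {"delivered": 1000,  "spamassassin": 1240, "unknown_user": 7000},
}

for server, c in counts.items():
    for reason in ("spamassassin", "unknown_user"):
        ratio = c[reason] / c["delivered"]
        print(f"{server}: {reason} rejections to deliveries = {ratio:.2f}:1")
```

The secondary's rejection ratios are markedly worse on both measures even though its absolute traffic is far lower.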

In this case both the primary and secondary servers are in server rooms hosted by the same major ISP in the same region. The traceroute between the two mail servers is only 7 hops, and there is only one hop between the two server rooms. So it seems unlikely that there would be some connectivity issue that prevents spammers from connecting to the primary.

One other factor that may be relevant is that the secondary server has been in service for some years while the primary is only a few months old. Spammers who store the server IP address with the email address (which happens – change the DNS records to send your mail to a different server and you will see some spam go to the old server) will be sending mail to what is now the secondary server. The difference between the rejected mail volume on the secondary server and the amount that would be rejected if it had the same ratio as the primary amounts to 7% of all mail rejected by SpamAssassin and 14% of all mail addressed to unknown users. I think it’s unlikely that any significant fraction of that would be due to spammers caching the server IP address for months after the DNS records were changed. So it seems most likely that something between 7% and 14% of spam is specifically targeted at the secondary server.
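The excess calculation in the previous paragraph can be written as a small function. The counts in the example call are invented (the real figures are confidential); with the real counts this calculation gave roughly 7% for SpamAssassin rejections and 14% for unknown-user rejections.

```python
def excess_fraction(primary_rejected, primary_delivered,
                    secondary_rejected, secondary_delivered):
    """Fraction of all rejections (across both servers) in excess of what
    the secondary would see if it shared the primary's rejection ratio."""
    expected = secondary_delivered * (primary_rejected / primary_delivered)
    excess = secondary_rejected - expected
    return excess / (primary_rejected + secondary_rejected)

# Invented example: the secondary rejects 124 messages but would only
# reject 76 at the primary's 0.76:1 ratio, so the 48 extra rejections
# are 24% of the 200 total rejections.
print(excess_fraction(76, 100, 124, 100))  # 0.24
```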

While the ratio of spam to ham seems significantly worse on the secondary, it is still a relatively small portion of the overall spam sent to the service. I had been considering setting up secondary mail servers with extra-strict anti-spam measures, but the small portion of the overall spam that is targeted in such a way indicates to me that it is not going to be worth the effort.

Another thing that has occurred to me (which I have not yet had time to investigate) is the possibility that some spammers will send the same messages to all MX servers. If that happens then the ratio of spam to ham would increase every time the number of MX servers is increased. In that case it would make sense to minimise the number of MX servers to reduce the amount of CPU power devoted to running SpamAssassin.

Note that I have intentionally not given any numbers for the amount of mail received by the service as it is a commercial secret.

Update: One thing I realised after publishing this post is that the secondary MX server is also the main server for mail sent between local users. While the number of users who send mail to other users on the service is probably a small portion of the overall traffic (it’s not a really big ISP) it will make a difference to the ratios. Therefore the ratio of spam to ham would be even worse on the secondary MX (assuming for the sake of discussion that local users aren’t spamming each other).