A Police SMS about Fire Risk

My wife and I have each received SMS messages from “Vic.Police” that say:

Extreme weather expected tonight (Monday) & tomorrow. High wind & fire risk. Listen to the ABC local radio for emergency update. Do not reply to this message.

Presumably the police are trying to contact everyone in Victoria. The problem seems to be related to the high wind speed that is forecast; the temperature is only predicted to be 32C (as opposed to the 38C that was forecast a few days ago and the temperatures of 46C or more a few weeks ago).

The last reports were that the firefighters were still working on putting out fires, and the unclear news coverage seemed to suggest that some of the fires had been burning since the 7th of February. A day of extreme fire danger that starts without any fires would be bad enough, but starting with some fires that are already out of control is destined to give a very bad result.

Below is the link to my previous post about people trying to take advantage of a tragedy to benefit their own political causes. For anyone who wants to rail against abortion, homosexuality, or the Greens party, please show some decency and do so based on relevant facts and do it at an appropriate time. I suggest that anyone who writes later this week about ways to avoid bushfires should be careful to check their claims for accuracy and scientific evidence (hint – the CSIRO and NASA have published a lot of useful background information).

http://etbe.coker.com.au/2009/02/25/tragedy-and-profit/

Links February 2009

Michael Anissimov writes about the theft of computers from the Los Alamos nuclear weapons lab [1]. He suggests that this incident (and others like it) poses a great risk to our civilisation. He advocates donating towards The Lifeboat Foundation [2] to try to mitigate risks to humanity. They suggest pledging $1000 per year for 25 years.

It’s interesting to note that people in Pakistan pay $8 per month for net access that is better by most objective metrics than what most people in the first world can get [3]. It seems that we need to remove the cartel control of the local loop to get good net access: either deregulate it entirely or have it owned by local government, which is more directly responsive to the residents.

Bruce Schneier wrote a post about a proposed US law to force all mobile phones with cameras to make a “click” sound when taking a picture [4]. The law is largely irrelevant, as it’s been law in Japan for a while and most phones are already designed that way. One interesting comment from MarkH was: But if congress REALLY wishes to benefit the public, I suggest that all guns in the U.S. be required, before each discharge, to make loud sounds (with appropriate time sequencing) simulating the flintlock technology that was common at the beginning of U.S. history, including cocking, use of the ramrod, etc. This would give fair warning of an impending discharge, and would limit firing rates to a few per minute. ROFL

Brief review of a Google Android phone vs an iPhone [5]. The Android G1 is now on sale in Australia! [6].

LWN has an article about the panel discussion at the LCA Security Mini-conf [7]. Jonathan Corbet has quoted me quite a bit in the article, thanks Jonathan!

Peter Ward gave an interesting TED talk about Hydrogen Sulphide and mass extinctions [8]. The best available evidence is that one of the worst extinctions was caused by H2S in the atmosphere, which was produced by bacteria. The bacteria in question thrive when there is a large amount of CO2 in the atmosphere. It’s yet another reason for reducing CO2 production.

Michael Anissimov has written a good article summarising some of the dangers of space exploration [9]; he suggests colonising the sea, deserts, and Antarctica first (all of which are much easier and safer). “Until we gain the ability to create huge (miles wide or larger) air bubbles in space enclosed by rapidly self-healing transparent membranes, it will be cramped and overwhelmingly boring. You’ll spend even more time on the Internet up there than down here, and your connection will be slow”. A confined space and slow net access: that’s like being on a plane.

Tragedy and Profit

Every time something goes wrong there will be someone who tries to take advantage of the situation. The recent bushfires in Australia that have killed hundreds of people (the count is not known yet) are a good example. Pastor Nalliah of Catch the Fire Ministries [1] claims that it is due to legalising abortion. This is astoundingly wrong.

In a more extreme example representatives of the Westboro Baptist Church were planning to visit Australia to launch a protest in support of the bushfires [2]. I have not yet found any news reports about whether they actually visited Australia or protested – it’s most likely that they decided not to visit due to the Australian laws being very different to US laws regarding the relative importance of freedom of speech and incitement to violence. Apparently the insane Westboro Baptist Church people (who are best known for GodHatesFags.com and GodHatesAmerica.com) believe that God hates Australia and caused the fires (presumably due to Australia not persecuting homosexuals). Danny Nalliah has permanently damaged his own reputation by acting in a similar way to the Westboro Baptist Church. The reputation of Catch The Fire now depends on how quickly they get a new pastor…

Please note well that the vast majority of Christians have nothing in common with Westboro or Catch The Fire. I don’t recall the last time I met an Australian Christian who was strongly opposed to homosexuality or abortion.

Now we do have to try and investigate ways of avoiding future tragedies, and the work to do this needs to begin immediately. John Brumby (the Premier of Victoria) has announced that Victoria will get new strict building codes for fire resistant buildings [3]. There have been many anecdotes of people who claim to have been saved by attaching sprinkler systems to their homes, by building concrete bunkers to hide in while the fire passes, and by using other techniques to save their home or save themselves. Some more research on the most effective ways of achieving such goals would be worthwhile; an increase in funding for the CSIRO to investigate the related issues would be a good thing. The article also has an interesting quote: “As the fallout from the disaster widened, the union representing the nation’s 13,000 firefighters warned both the federal and state governments to take global warming seriously to prevent a repeat of last weekend’s lethal firestorm”. However, given that traditionally Australia and the US have been the two nations most opposed to any efforts to mitigate global warming, it seems unlikely that anything will change in this regard in a hurry.

The attempts to link bushfires to abortion and homosexuality are offensive, but can be ignored in any remotely serious debate about politics. However there are some other groups trying to profit from the tragedy that make claims which are not as ridiculous.

On the 9th of February the Australian Greens were compelled to release an official statement from spokespeople Scott Ludlam, Sarah Hanson-Young, Rachel Siewert, Christine Milne, and Bob Brown following some political discussion about Greens policies [4]. There have been politically motivated attempts to blame the Greens for the tragedy, some of which came from groups that traditionally oppose the Greens for other reasons (I’m not going to provide the detail – anyone who is really interested can do Google searches on the people in question). On the 16th of February Bob Brown (the leader of the Greens) felt obliged to make another media release reiterating the fact that the Greens support prescribed burn-offs to limit the scope of wildfires [5]; he also decried the hate mongering that has been occurring in the wake of the disaster.

One of the strange memes that seems to be spread by opponents of the Greens is that the Greens are all supposedly from the city and know nothing about the country. To avoid being subject to such attacks I feel obliged to note that on one of the bad fire days I visited my parents. I spent the morning with my father and some friends at a park that was not far from the fire area; my friends then returned to their home, which was also close to the fires. I then had lunch with my parents and watched the smoke through the dining room window. After that my friends didn’t respond to email for a while and I was concerned that they might have lost their house or suffered injury or death. I didn’t know them well enough to feel it appropriate to try a dozen different ways of contacting them (I’m sure that many other people were doing so), but I was rather concerned until my wife received an email from them.

But I don’t base my political beliefs on what I personally observe or my connections to people on the edge of the fire zone. I believe in the Green principles of “Peace and Non Violence, Grassroots Democracy, Social and Economic Justice, Ecological Sustainability” and the use of science and statistics to determine the best ways of achieving those goals.

Red Hat, Microsoft, and Virtualisation Support

Red Hat has just announced a deal with MS for support of RHEL virtual machines on Windows Server and Windows virtual machines on RHEL [1]. It seems that this deal won’t deliver anything before “calendar H2 2009” so nothing will immediately happen – but the amount of testing to get these things working correctly is significant.

Red Hat has stated that “the agreements contain no patent or open source licensing components” and “the agreements contain no financial clauses, other than industry-standard certification/validation testing fees” so it seems that there is nothing controversial in this. Of course that hasn’t stopped some people from getting worked up about it.

I think that this deal is a good thing. I have some clients who run CentOS and RHEL servers (that I installed and manage) as well as some Windows servers. Some of these clients have made decisions about the Windows servers that concern me (such as not using ECC RAM, RAID, or backups). It seems to me that if I was to use slightly more powerful hardware for the Linux servers I could run Windows virtual machines for those clients and manage all the backups at the block device level (without bothering the Windows sysadmins). This also has the potential to save the client some costs in terms of purchasing and managing hardware.
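As a sketch of what such block-level backups could look like (assuming the DomU disks are LVM logical volumes; the volume group, logical volume, and destination names below are invented for illustration, this is not a description of any particular setup):

#!/usr/bin/perl
# Sketch only: snapshot the LVM volume backing a Windows DomU and copy it.
# "vg0", "windows-guest", and the destination path are placeholder names.
# The resulting image is crash-consistent (like pulling the power), not
# application-consistent.
use strict;
use warnings;

my $vg   = "vg0";
my $lv   = "windows-guest";
my $dest = "/backup/windows-guest.img.gz";

# take a snapshot so the image doesn't change while it is being copied
system("lvcreate", "-s", "-L", "4G", "-n", "$lv-snap", "/dev/$vg/$lv") == 0
  or die "lvcreate failed";

# copy and compress the snapshot
system("dd if=/dev/$vg/$lv-snap bs=1M | gzip > $dest") == 0
  or warn "copy failed";

# remove the snapshot so it doesn't fill up and slow down writes
system("lvremove", "-f", "/dev/$vg/$lv-snap") == 0
  or die "lvremove failed";

If a cleaner image was needed the DomU could be paused briefly while the snapshot is taken.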

When this deal with MS produces some results (maybe in 6 months’ time) I will recommend that some of my clients convert CentOS machines to RHEL to take advantage of it. If my clients take my advice in this regard then it will result in a small increase in revenue and market share for RHEL, so Red Hat’s action seems to be a good business decision for them. If my clients take my advice and allow me to use virtualisation to better protect their critical data that is on Windows servers then it will be a significant benefit for the users.

Lenny Play Machine Online

As Debian/Lenny has been released and the temperatures in my part of the world are no longer insanely hot I have put my SE Linux Play Machine [1] online again. It is running Debian/Lenny and is a Xen DomU on a Debian/Lenny Dom0.

To get this working I had to make a few more fixes to the SE Linux policy and will update my Lenny repository (as mentioned in my document on installing SE Linux on Lenny [2]) in the near future.

I have reformatted most of the text from the thanks.txt file on my Play Machine and put it online on my documents blog [3]. I have also graphed the logins to my Play Machine using Webalizer [4], with 1KB of transfer in the graph representing one minute of login time. Below is the Perl code I used to convert the output of “last -i” to what looks like an Apache log file. The program takes a single command-line parameter which indicates the year that the data is from (as that is not included in last output), takes the output of “last -i” on standard input, and gives a web log on standard output.

#!/usr/bin/perl
# Convert the output of "last -i" to an Apache-style log for Webalizer.
# Takes the year as the only command-line parameter (it is not included in
# last output) and reads the output of "last -i" on standard input.
use strict;
use warnings;

my @output;

while(<STDIN>)
{
  # only root logins on pseudo-terminals are of interest
  if(not $_ =~ /^root.*pts/)
  {
    next;
  }
  $_ =~ s/  +/ /g;
  $_ =~ s/^root pts.[0-9]+ //;
  chomp $_;
  my @arr = split(' ', $_);
  # sessions that ended in a crash get their own URL
  my $url = "/";
  if($arr[6] =~ /crash/)
  {
    $url = "/crash";
  }
  # $arr[7] is the session duration, eg "(01:23)", or "(2+01:23)" for
  # sessions lasting more than a day
  my $t = $arr[7];
  $t =~ s/[()]//g;
  my @times = split(':', $t);
  if($times[0] =~ /\+/)
  {
    my @hours = split('\+', $times[0]);
    $t = $hours[0] * 24 * 60 + $hours[1] * 60 + $times[1];
  }
  else
  {
    $t = $times[0] * 60 + $times[1];
  }
  # 1KB of "transfer" per minute of login time, with a minimum of 1 byte
  $t *= 1024;
  if($t == 0)
  {
    $t = 1;
  }
  # zero-pad the day of the month
  if(length($arr[3]) == 1)
  {
    $arr[3] = "0" . $arr[3];
  }
  $output[$#output + 1] = "$arr[0] - - [$arr[3]/$arr[2]/$ARGV[0]:$arr[4]:00 +0000] \"GET $url HTTP/1.0\" 200 $t \"-\"\n";
}

# last prints the most recent login first, so reverse the order
my $i;
for($i = $#output; $i > -1; $i--)
{
  print $output[$i];
}
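If the script above is saved as last2weblog.pl (a name chosen here for illustration), the web log can be generated with something like:

last -i | ./last2weblog.pl 2009 > playmachine.log

The resulting file can then be processed by Webalizer as if it were an Apache access log.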

Xen and Lenny

Debian GNU/Linux 5.0 AKA “Lenny” has just been released [1].

One of the features that is particularly noteworthy is that Xen has been updated and now works fully and correctly on the 2.6.26 kernel (see the Debian Wiki page about Xen for details [2]). This may not sound exciting, but I know that a lot of people put a lot of work into getting this going, and for a long time in Unstable it wasn’t working well. I’ve just upgraded three Xen servers from Etch to Lenny (actually one was Etch kernel with Lenny user-space), and they all worked!

Those three servers were all running the i386 architecture; the next thing to do is to try it out with the AMD64 architecture. One of my plans is to try the latest Debian kernel on the server I use in Germany, but I’ll try it on a few other AMD64 machines first.

Do Spammers Target Secondary MX Servers?

Rumour has it that some types of spammer target the secondary MX servers. The concept is that some people have less control over the secondary MX server and less ability to implement anti-spam measures. Therefore if the primary accepts all mail relayed from the secondary then a spammer will have more success if they attack the secondary server.

True secondary servers are becoming increasingly uncommon; the lower-priority servers listed in MX records tend to have the same configuration as the primary, so the benefit for the spammer in attacking the secondary server is probably minimal. But it would be good to know whether they do this.

I decided to analyse the logs from a mail server that I run to see if I can find evidence of this. I chose a server that I run for a client which has thousands of accounts and tens of thousands of messages delivered per day; my own server doesn’t get enough traffic to give good results.

I analysed a week of logs for the primary and secondary MX servers to see if the ratio of spam to ham differed. Now this does have some inherent inaccuracy: some spam will slip past the filters and occasionally a legitimate email will be rejected. But I believe that the accuracy required in a spam filter to avoid making the users scream is vastly greater than the accuracy required to give a noteworthy result here.

I produced totals of the number of messages delivered, the number rejected by SpamAssassin (which has a number of proprietary additions), the number of message delivery attempts that were prevented due to rate limiting (most of which will be due to spammers), and the number of attempts to deliver to unknown accounts (some of which will be due to spammers having bad addresses in their lists).

For each of these rejection criteria I produced a ratio of the number of rejections to the number of delivered messages for each of the servers.

The rate limit count didn’t seem useful. While the primary server had a ratio of 0.75 messages rejected due to rate limiting for every message accepted, the secondary had a ratio of 0.08. It seems that the secondary just didn’t get enough traffic to trigger the limits very often. This is an indication that the more aggressive bots might not be targeting the secondary.

The ratio of messages rejected by SpamAssassin to legitimate mail was 0.76:1 on the primary server and 1.24:1 on the secondary. The ratio of messages addressed to unknown users to successful deliveries was 3.05:1 on the primary and 7.00:1 on the secondary! This seems like strong evidence that some spammers are deliberately targeting the secondary server.
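As a rough illustration of how these ratios are calculated (the absolute counts below are invented, since the real mail volumes are not being published, and extracting such counts from the logs will depend on the MTA in use), the calculation looks something like this:

#!/usr/bin/perl
# Sketch only: compute rejection-to-delivery ratios for each MX server.
# The counts are made-up numbers chosen to match the ratios quoted above;
# in practice they would be extracted from the mail logs of each server.
use strict;
use warnings;

my %counts = (
  primary   => { delivered => 10000, spamassassin => 7600, unknown_user => 30500, rate_limited => 7500 },
  secondary => { delivered => 1000,  spamassassin => 1240, unknown_user => 7000,  rate_limited => 80 },
);

foreach my $server (qw(primary secondary))
{
  my $c = $counts{$server};
  foreach my $reason (qw(spamassassin unknown_user rate_limited))
  {
    printf("%-9s %-13s %.2f rejections per delivered message\n",
           $server, $reason, $c->{$reason} / $c->{delivered});
  }
}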

In this case both the primary and secondary servers are in server rooms hosted by the same major ISP in the same region. The traceroute between the two mail servers is only 7 hops, and there is only one hop between the two server rooms. So it seems unlikely that there would be some connectivity issue that prevents spammers from connecting to the primary.

One other factor that may be relevant is that the secondary server has been in service for some years while the primary is only a few months old. Spammers who store the server IP address with the email address (which happens – change the DNS records to send your mail to a different server and you will see some spam go to the old server) will be sending mail to what is now the secondary server. The difference between the rejected mail volume on the secondary server and the amount that would be rejected if it had the same ratio as the primary amounts to 7% of all mail rejected by SpamAssassin and 14% of all mail addressed to unknown users. I think it’s unlikely that any significant fraction of that is due to spammers caching the server IP address for months after the DNS records were changed. So it seems most likely that somewhere between 7% and 14% of spam is specifically targeted at the secondary server.

While the ratio of spam to ham seems significantly worse on the secondary, it is still a relatively small portion of the overall spam sent to the service. I had been considering setting up secondary mail servers with extra-strict anti-spam measures, but the small portion of the overall spam that is targeted in such a way indicates to me that it is not going to be worth the effort.

Another thing that has occurred to me (which I have not yet had time to investigate) is the possibility that some spammers will send the same messages to all MX servers. If that happens then the ratio of spam to ham would increase every time the number of MX servers is increased. In that case it would make sense to minimise the number of MX servers to reduce the amount of CPU power devoted to running SpamAssassin.

Note that I have intentionally not given any numbers for the amount of mail received by the service as it is a commercial secret.

Update: One thing I realised after publishing this post is that the secondary MX server is also the main server for mail sent between local users. While the number of users who send mail to other users on the service is probably a small portion of the overall traffic (it’s not a really big ISP) it will make a difference to the ratios. Therefore the ratio of spam to ham would be even worse on the secondary MX (assuming for the sake of discussion that local users aren’t spamming each other).

Normalising Wages

John Robb writes about the normalisation of salaries that is driven by the use of the Internet and global corporations [1]. He cites the example of IBM forcing many of its employees to work in developing countries for lower wages.

It seems to me that IBM is leading the field in this regard and many other companies will do the same. The computer industry, which has been very well paid for the last 20 years, seems likely to see some significant reductions in wages. Some computer jobs, such as network cabling, hardware repair, and training, can’t be immediately outsourced to other countries. But they will experience a pay reduction due to competition from software engineers and other people whose primary work is easily outsourced.

For a job hunter it seems that the best thing to do is to look for work that requires site visits and customer contact. One of my long-term clients has a large business in installing wireless devices. For a long time they have had an open offer for me to climb up towers and install wireless devices. If the rates for system administration and software engineering in Australia (and the outsourced work from the US that I sometimes do) drop to Indian pay standards then I might do some of that wireless work.

When recommending that my clients hire people to do software engineering or programming work I am considering seeking out people in low-wage countries that I know through the free software community. I believe that through an organisation like Debian I can find people who are as good as the people I have been finding through my local LUG but who will be happy earning a much lower rate. My clients seem to periodically need PHP work, so if you live in a low-wage country, have good PHP skills, and are well enough known in the free software community that I can feel comfortable skipping personal meetings, then you can email me your CV.

Don Marti has written about the Linux vs Windows situation on Netbooks [2]. He suggests that neither Intel nor Microsoft is set up for a Netbook world. If both Intel and Microsoft were to pay the majority of their employees rates that are only slightly greater than typical wages in India then things might look a little better for them.

During the dot-com boom I was working in Europe (firstly in London and then later in Amsterdam). It was a lot of fun in many ways with large amounts of money, easy work, and parties. The down-side was that I had to work with druggies and other people who were not suitable employees as the companies I worked for felt that they had no choice. Since the dot-com crash the quality of the people I have worked with has increased significantly which was a good compensation for the lower pay. Also I believe that the lack of silly money was one factor that helped Linux and other free software increase market share after the crash. I expect that the economic depression that we are now entering will have a similar effect. I will earn less money, have fewer parties, and work harder. But generally the quality of the staff in the IT industry will improve and the usage of free software will increase.

As an aside, Don suggests that people who create web sites etc will want more expensive machines. I regularly use my EeePC for running servers in several data centers, I do all types of system administration and system programming on it. The screen resolution is not that great, but it shouldn’t be difficult to design a Netbook that can drive a 1920*1200 external display (as is being commonly deployed in new hotel rooms – it seems that all new large-screen TVs have VGA and DVI input). A netbook which can drive a 1920*1200 display and a full-size USB keyboard could allow me to do some very effective work in a hotel room while also allowing me to work when traveling in public transport. Now all I need is for the hotel booking sites such as www.wotif.com to allow me to search for rooms that have such a display. A business hotel could even provide a USB keyboard in the room to allow guests to travel light.

Of course it would be possible to design a slightly larger laptop at a Netbook price point, the extra plastic and metal needed to make a larger frame and keyboard costs almost nothing and given the low prices of large desktop TFT displays I find it difficult to believe that the factor of two difference in price between the cheaper and more expensive Netbooks is due to the display.

My main machine is a Thinkpad T41p; I really don’t need a high-end machine on my desktop. I am considering avoiding the purchase of expensive machines as a matter of principle. Maybe if we try and avoid buying expensive machines we can help drive the market towards Netbooks, where Linux has an advantage.

Bridging and Redundancy

I’ve been working on a redundant wireless network for a client. The network connects two sites via a pair of links (primary and backup) which use dedicated wireless hardware (not 802.11; the devices have a proprietary controller and are not interfaces for a Linux box).

When I first started work the devices were configured in a fully bridged mode, so I decided to use Linux bridging (with brctl) to bridge an Ethernet port connected to the LAN with only one of the two wireless devices. The remote end had a Linux box that would bridge both the wireless devices at its end (there were four separate end-points as the primary and backup links were entirely independent). This meant of course that packets would go over the active link and then return via the inactive link, but the needless data transfer on the unused link didn’t cause any problems.

The wireless devices claimed to implement bridging but didn’t implement STP (Spanning Tree Protocol), and they munged every packet to have the MAC address of the wireless device (unlike a Linux bridge, which preserves the original MAC address). The lack of STP meant that both links couldn’t be bridged at both ends without creating a loop. They also only forwarded IP packets, so I couldn’t use the STP implementations in Linux hosts or switches to prevent loops.

Below (in the part of this post which shouldn’t be in the RSS feed) I have included the script I wrote to manage a failover bridge. It pings the router at the other end while the primary link is in use; if it can’t reach it then it removes the Ethernet device that corresponds to the primary link from the bridge and adds the device for the secondary link. I had an hourly cron job that would flip it back to the primary link if it was on the secondary.

I ended up not using this in production because there were some other routers on the network which couldn’t cope with a MAC address changing and needed a reboot after such changes (even waiting 15 minutes didn’t result in the new MAC being reliably detected). So I’m posting it here for the benefit of anyone who is interested.
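The full script is in the continuation of this post; as a rough outline of the approach described above (a sketch only, with placeholder device names and addresses, not the original script), the failover check could look something like this:

#!/usr/bin/perl
# Sketch only, not the script from this post. br0 bridges the LAN port with
# one of the two wireless devices; eth1 is the primary wireless link, eth2
# the backup, and 10.0.0.1 the router at the far end. All of these names
# are placeholders.
use strict;
use warnings;

my $bridge    = "br0";
my $primary   = "eth1";
my $secondary = "eth2";
my $far_end   = "10.0.0.1";

# return true if the router at the other end answers pings
sub link_up
{
  return system("ping -c 3 -w 5 -q $far_end > /dev/null 2>&1") == 0;
}

# return true if the given device is currently enslaved to the bridge
sub bridge_has
{
  my ($dev) = @_;
  open(my $fh, '-|', "brctl show $bridge") or die "brctl failed";
  my @lines = grep { /\b\Q$dev\E\b/ } <$fh>;
  close($fh);
  return scalar(@lines);
}

# if the primary link is in the bridge but the far end is unreachable,
# swap in the secondary link (a cron job can flip it back later)
if(bridge_has($primary) and not link_up())
{
  system("brctl delif $bridge $primary");
  system("ifconfig $secondary up");
  system("brctl addif $bridge $secondary");
}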
Continue reading Bridging and Redundancy

Employment Packages

Paul Wayper has said that he only wants to work for companies that will send him to LCA [1]. While that criterion is quite reasonable it seems overly specific. Among other things, the varying location of LCA will result in the expense for the employer varying slightly year by year – which employers generally don’t like.

I believe that a better option is to have an employment package that specifies that a certain amount of money (related to the gross income of the employee) be set aside for training, hardware, or other expenses that help the employee (or their colleagues) do their job. Such an option would probably only be available to senior employees, who are best placed to determine the most effective way of spending the money.

For example an employee who earns $100,000 per annum might be permitted to assign 10% of their income ($10,000) to training or hardware that assists their job. Modern PCs are so cheap that any reasonable hardware requirements could fit within that budget with ease.

There are several benefits to such a scheme. On many occasions I have had colleagues who had inadequate hardware to do their work; slow PCs with small screens really impacted their productivity. In such situations buying a $400 PC and a $400 monitor for each person in the team would have made a significant direct improvement to productivity, even before the boost to morale kicked in!

Some years ago Lenovo ran some adverts for Thinkpads which said “demand one at the interview”. That made sense when a Thinkpad was an expensive piece of hardware. While there are still some expensive Thinkpads, there is a good range of cheap models, two of which cost less than $1200AU and another eight of which cost between $1200 and $1800. Now it makes more sense to allow each employee to choose their own hardware (both desktop and portable) and not even bother about issues such as whether the IT department blesses it. As Tom Limoncelli suggested in his LCA keynote, users are going to take more control over their environment whether the IT department likes it or not, so it’s best to work with them rather than fighting them.

For training, a common problem is that management can’t correctly determine which conferences are worth the expense of sending their technical staff. Then when a conference is selected they send everyone. It seems to me that when there are a number of conferences in a region (e.g. AUUG, LCA, OSDC, and SAGE-AU in Australia) there is a benefit in having someone from your team attend each one. Planning at the start of the year which conferences will be attended by each team member is something that appears to be beyond the ability of most managers, as it requires knowing the technical interests and skill areas of most staff. If each employee was granted one week of paid time per year to attend conferences and could determine their own budget allocation then they would be able to work it out themselves in a more effective manner.