Car Drivers vs Mechanics and Free Software

In a comment on my post about Designing Unsafe Cars [1] Noel said “If you don’t know how to make a surgery, you don’t do it. If you don’t know how to drive, don’t drive. And if you don’t know how to use a computer, don’t expect anybody fix your disasters, trojans and viruses.” Later he advocates using a taxi.

Now I agree about surgery – apart from corner cases such as medical emergencies in remote places and large-scale disasters. I also agree that it’s good to avoid driving if you aren’t very good at it (that would be much better than the current fad of sub-standard drivers buying large 4WD vehicles).

But I don’t think that people who lack computer skills should avoid using computers.

When cars were first invented everyone who owned one was either a mechanic or employed one. Driving a car often involved being well out of range of anyone else who might know how to fix it, so either the car owner or their chauffeur had to be able to fix almost any problem. As the car industry has evolved, the level of mechanical knowledge required to own and operate a car has steadily decreased. I expect that a significant portion of drivers don’t know how to top up the oil or radiator water in their car and probably don’t know the correct air pressure for their tires. To a large extent I don’t think this is a problem: owning a car involves regularly taking it to be serviced, where professionals will (or at least should) check every aspect of the car that is likely to fail. If I used my windscreen-washer less frequently I could probably avoid opening the bonnet of the car between scheduled services!

When budgeting for car ownership you just have to include regularly spending a few hundred dollars to pay for an expert to find problems and fix them – with of course the occasional large expense for when something big breaks.

When the computer industry matures I expect that the same practice will occur. Most people will buy computers and plan to spend small amounts of money regularly to pay people to maintain them. Currently most older people seem to plan to have a young relative take care of their PC for them – essentially free mechanic services. The quality of such work will vary of course, and poorly designed OSs that are vulnerable to attack may require more support than can be provided for free.

Due to deficiencies in city design it is almost essential to drive a car in most parts of the US and Australia – as opposed to countries such as the Netherlands where you can survive quite well without ever driving. When a service is essential it has to be usable by people who have little skill in that area. It would be good if driving wasn’t necessary, I would be happy if I never drove a car again.

The need to use computers however will continue to increase. So we need to make them more available to users and to support users who can’t disinfect computers etc. The only skill requirements for using a computer should be the ability to use a keyboard and a mouse!

This requires a new industry in supporting PCs. Geek Squad in the US [2] seems to be the organisation that is most known for this. I expect that there will be multiple companies competing for such work in every region in the near future, just as there are currently many companies competing for the business of servicing cars.

We need support for free software from such companies. Maybe existing free software companies such as Red Hat and Canonical can get into this business. One advantage of having such companies supporting software is that they would have a strong commercial incentive to avoid having it break – unlike proprietary software vendors who have little incentive to do things right.

The next issue is the taxi analogy. Will software as a service with Google subsidising our use of their systems [3] take over any significant part of the market?

Of course the car analogy breaks down when it comes to privacy: no-one does anything remotely private in a taxi, while lots of secret data is stored on a typical home computer. Google is already doing some impressive security development work which will lead towards low maintenance systems [4] as well as protecting the privacy of the users – to the extent that you can trust whoever runs the servers.

My parents use their computer for reading email, browsing the web, and some basic wordprocessing and spreadsheet work. The mail is on my IMAP server, so all I need is some way to store their office documents on a server and they will pretty much have a dataless workstation. Moving their collection of photos and videos of their friends and relatives to a server will be more of a problem – transferring multiple gigabytes of data on a cheap Australian Internet access plan is difficult.

Designing Unsafe Cars

The LA Times has an interesting article about problems with Toyota and Lexus cars [1]. Basically there are problems where the cars have uncontrolled acceleration (there seems to be some dispute about whether it is due to engine management or the floor mat catching the accelerator pedal). When that happens the brakes don’t work (due to the vacuum power-assistance for brakes going away when the engine is at full power) and a terrible crash seems inevitable.

There are suggestions that the driver should shift the car to neutral and discussion about how the Toyota gear selection makes that difficult. Some years ago I was driving an automatic car on a freeway at 100Km/h and the engine stalled (due to a problem with the LPG system). I had become used to never touching the gear lever while driving so the possibility of moving the gear lever one notch to neutral didn’t occur to me. With a dead engine in gear the car slowed rapidly which is quite dangerous when surrounded by 100Km/h traffic. Fortunately I was able to swerve into the emergency lane (across one lane of active traffic) before the car slowed much. That was in a relatively controlled environment with a gear shift mechanism that is a lot simpler than that which is common in some of the more expensive cars.

According to Wikipedia the maximum speed limit in the US is 80M/h [2]. It seems to me that Toyota is being irresponsible by selling cars that can sustain 120M/h. While the probability of surviving a crash at 80M/h is quite low, it seems likely to be a lot greater than the probability of surviving a crash at 120M/h. Also if a car is out of control at 80M/h then the driver will have a lot more time to work out how to put the engine in neutral or turn it off – the lower speed will extend the time available by more than 50% because bends in the road can be better handled at 80M/h.

It seems to me that it would be a feature for the car owner to have the car limited to a speed that is not much greater than the speed limit. According to Wikipedia the highest speed limit in Australia is 130Km/h (in NT), but it’s 110Km/h in all places where I have driven. If my car had a governor to limit the speed to 115Km/h and a switch to change the limit to 135Km/h in case I ever drive to the NT then it would not affect my driving patterns (I rarely drive on roads with a 100Km/h limit and almost never drive on roads with a 110Km/h limit) – but it could reduce the probability of things going horribly wrong. Also one thing to note is that last time I checked car tyres sold in Australia were only required to operate safely at speeds below 190Km/h (118M/h), so a Lexus that went out of control at 120M/h in Australia might risk a tyre blow-out – which admittedly would only make things marginally worse.

A governor for the reverse gear would also be a good feature. Some time ago a granny got her foot stuck on the accelerator in a car park and caused serious damage to her car and a parked car – after passing close by where I was standing. I don’t think that there is a real need to do more than 5Km/h in reverse, limiting the speed would give pedestrians a better chance of escaping parking accidents.

One serious problem with some of the Toyota and Lexus vehicles is that it apparently takes 3 seconds to turn the engine off in an emergency! I’ve been driving for almost 20 years and experienced a number of dangerous situations, all of which were essentially resolved (for better or worse) in significantly less than 3 seconds. A 3 second delay is as good as a 1 hour delay for safety critical systems.

Also if the accelerator and brake pedals are pressed at the same time then the brake should take precedence. It seems quite obvious that whenever both pedals are pressed hard then the driver would probably prefer hard braking to hard acceleration.

If you look at industrial machinery (robots, lathes, etc) you will always see big red buttons (or whatever color is used for emergency stop in your region) that are clearly marked and obvious – to the workers and to bystanders. Escalators have less obvious red buttons but they can still be shut down in an emergency. It seems to me that there are potential benefits to having an emergency shutdown button in a car, maybe in a position that is accessible to the front-seat passenger in case the driver is incapacitated. Such a shutdown button wouldn’t do anything extreme such as fully activating the brakes (which would be very bad on a road that has high-speed traffic), but would prevent acceleration (with some sort of hardware control to avoid software problems) and maintain power to the brakes and the steering.

One thing that needs to be considered is that people tend not to do the most logical things when in an emergency situation. It needs to be possible to do whatever is necessary to save your life without any great deal of thought. Pushing a big red button is easy, holding down the “on” button for 3 seconds or even navigating a gear shift to an uncommon setting is a lot more difficult.

It seems to me that there is also an issue of driver training. If putting an automatic car into neutral and cruising to a stop was part of the test for new drivers then the results of such car problems might not always be so bad.

But I don’t expect there to be any serious changes to driver training or car design. People are too accepting of road deaths.

Don Marti has expressed a plan to never buy a vehicle with an automatic transmission because of this issue [3]. But the number of new vehicles being sold with a manual transmission is steadily decreasing. An automatic transmission allows better performance (F1 cars have used them for ages), better fuel efficiency (you could never make a manual Prius), a more comfortable ride (the Hybrid Lexus keeps winning the Australian Luxury Car of the Year award), and caters for less skillful drivers. Unless Don wants to ride a moped or drive an old car, I expect that he will be forced to get an automatic transmission. Then of course he will still be at risk of other people having car problems (the LA Times article mentions a third party being killed after an out of control car hit them).

Also I expect that the extra safety features that are implemented in luxury vehicles such as the Lexus would save a few lives – enough to outweigh the number that are lost on the rare occasions when a car goes out of control. Other luxury cars such as the Mercedes S class have great safety features and don’t have a history of going wrong in a newsworthy way. A second-hand S Class Mercedes was surprisingly cheap in the UK last time I checked, cheap enough to make it worth considering the importation of one to Australia.

But my solution to these problems is to try and minimise my driving. A 1.5 ton Lexus driving out of control at the maximum speed possible in urban streets won’t do much damage to a 20 ton tram.

Laptop Reliability

Update: TumbleDry has a good analysis of the Square Trade report [0]. It seems that there are significant statistical problems in Square Trade’s analysis and a possible conflict of interest.

Square Trade did a survey of laptop reliability and wrote an interesting article about the results [1]. One thing to keep in mind when reading them is that the usage patterns vary greatly by type of product (netbook vs laptop) and probably by brand.

Their statistics indicate that netbooks are less reliable than laptops, but I think that my actions in taking my EeePC to places such as the beach are probably not uncommon – a netbook is ideal when you might need access to a computer at a moment’s notice. I expect that the reliability of my laptop has increased because I bought a netbook!

Their statistics show that Lenovo is far from the most reliable brand; I wonder what the usage scenarios for such machines are. I’ve been using Thinkpads happily for over 11 years. I have had many warranty repair jobs – I lost count long ago. But I don’t think that this indicates a problem with Thinkpads: my use is very demanding, I have done a lot of traveling, and I have done coding in planes, trains, trams, and taxis in many countries. So instead of criticising IBM/Lenovo for having their machines break, I praise them for repeatedly repairing them no matter how they wear out in my use. The speed and quality of the repair work is very impressive. Based on this I have been strongly recommending Thinkpads to everyone I know who seems likely to wear laptops out through basically doing everything that a laptop is designed to do all day every day! Among people I know the incidence of laptop warranty repairs is probably about 100%, as the number of systems that are never repaired is outweighed by the systems that are repaired multiple times.

Generally Thinkpads seem fairly well built to me, I’ve been surprised at how many times they didn’t break when I expected them to.

When articles like the one from Square Trade are discussed people usually cite personal anecdotes. My above anecdote covers just over 11 years of intensive use of four different Thinkpads. Of course it doesn’t prove much about the inherent reliability of Thinkpads (random selection could have given me four Thinkpads that were significantly more or less reliable than average). But having dealt with the IBM/Lenovo service in a few countries I can confirm the quality of their repair work. Every time they have returned my machine rapidly, they never complained about my policy of giving them a system with the hard drive removed, and in all but one case they completely solved the problem on the first try. One time they forgot to replace a broken rubber foot and to make up for this they sent me a complete set of spare parts by courier – they had repaired the other two faults without problem.

Now comparing the reliability of rack-mount servers would be a lot easier. The vast majority of such systems are stored safely in racks all the time and tend not to be mishandled. My experience with servers from Dell, HP, IBM, and Sun is that apart from routine hard drive failures they all run well if you keep them cool enough. But of course as I haven’t run more than a few dozen of any of those brands at one time I don’t have anything near a statistically significant sample.

Planning Servers for Failure

Sometimes computers fail. If you run enough computers then you encounter failures regularly. If the computers are important then you need to plan for the failure.

An ideal situation is to have redundant servers. However misconfigured clusters can cause more downtime than they prevent, and properly implementing a cluster requires more expensive hardware (you need at least two servers and hardware to allow a good node to kill a bad node) as well as more time (from people who may charge higher rates).

Most companies don’t have redundant servers. So if you have some non-redundant servers there seem to be two reasonable options. The first one is to use more expensive hardware on a support contract. If a server is really important to you then get a 24*7 support contract – it only takes an extra mouse click (and a couple of thousand dollars) when ordering a Dell server. I am not going to debate the relative merits of Dell vs IBM vs HP at this time, but I think that most people will agree that Dell offers significant advantages over a white-box server, both in terms of quality (the low incidence of failure) and support (including 24*7 part replacement).

The second option is to have a cheap server that can be easily replaced. This IS appropriate for some tasks. For example I have installed many cheap desktop systems with two IDE disks in a RAID-1 array that run as Internet gateway systems for small businesses. The requirements were that they be quiet, use little power (due to poorly ventilated server rooms / cupboards), be relatively reliable, and be reasonably cheap. If one of those systems suddenly fails and no replacement hardware is available then someone’s desktop PC can be taken as a replacement, having one person unable to work due to a lack of a PC is better than having everyone’s work impeded by lack of Internet access!

This ability to swap hardware is dependent on the new hardware being reasonably similar. Finding a desktop PC in an office today which can support two IDE disks and which has an Ethernet port on the motherboard and a spare PCI slot is not too difficult. I expect that in the near future such machines will start to disappear which will be an incentive for using systems with SATA disks and USB keyboards as routers.

This evening I had to advise someone who was dealing with a broken server. The system in question is mission critical, was based on white-box hardware, and had four SATA disks in an LVM volume group for the root filesystem. This gave a 600G filesystem with less than 10G in use. If the person who installed it had chosen to use only a single disk (or even better two disks in a RAID-1 array) then there would have been a wide range of systems that could take the disks and be used to keep the company running for business tomorrow. But finding a computer that can handle four SATA disks is a little more tricky.

Of course running a mission critical server that doesn’t use RAID is obviously very wrong. But using four disks in an LVM volume group both increases the probability of a data destroying disk failure and makes it more difficult to replace the computer itself. Some server installations are fractally wrong.
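For comparison, here is a minimal sketch of the sort of layout I would suggest for such a server – a two disk RAID-1 array with LVM on top of it. The device names, sizes, and filesystem type are hypothetical examples, not a definitive recipe:

#!/bin/sh -e
# Create a RAID-1 array from partitions on two disks so that either disk can
# fail (or both disks can be moved to a similar machine) without losing data.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Put LVM on top of the mirror so that filesystems can still be resized easily.
pvcreate /dev/md0
vgcreate vg0 /dev/md0

# A root filesystem of a sensible size - not a 600G span across four disks.
lvcreate -L 20G -n root vg0
mkfs -t ext3 /dev/vg0/root

With a layout like that the two disks could be plugged into almost any machine with two spare SATA ports if the original server died.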

Some Tips for Shell Code that Won’t Destroy Your OS

When writing a shell script you need to take some care to ensure that it won’t run amok. Extra care is needed for shell scripts that run as root, firstly because of the obvious potential for random destruction, and secondly because of the potential for interaction between accounts that can cause problems.

  • One possible first step towards avoiding random destruction is to start your script with “#!/bin/sh -e” instead of “#!/bin/sh”. This means that the script will exit on an unexpected error, which is generally better than continuing merrily along to destroy vast swathes of data. Of course sometimes you will expect an error, in which case you can use “/usr/local/bin/command-might-fail || true” to make the script not abort on a command that might fail.
  • #!/bin/sh -e
    cd /tmp/whatever
    rm -rf *

    #!/bin/sh
    cd /tmp/whatever || exit 1
    rm -rf *

    Instead of using the “-e” switch to the shell you can put “|| exit 1” after a command that really should succeed. For example neither of the above scripts is likely to destroy your system, while the following script is very likely to destroy your system:
    #!/bin/sh
    cd /tmp/whatever
    rm -rf *
  • Also consider using absolute paths. “rm -rf /tmp/whatever/*” is as safe as the above option but also easier to read – avoiding confusion tends to improve the reliability of the system. Relative paths are most useful for humans doing the typing; when a program is running there is no real down-side to using long absolute paths.
  • Shell scripts that cross account boundaries are a potential cause of problems. For example if a script does “cd /home/user1” instead of “cd ~user1” then if someone in the sysadmin team moves the user’s home directory to /home2/user1 (which is not uncommon when disk space runs low) then things can happen that you don’t expect – and we really don’t want unexpected things happening as root! Most shells don’t support “cd ~$1”, but that doesn’t force you to use “cd /home/$1”; instead you can use some shell code such as the following:
    #!/bin/sh
    # Look up the home directory from the passwd file instead of assuming it
    # is under /home. The trailing ":" in the pattern avoids matching
    # "user10" when looking for "user1".
    HOME=`grep "^$1:" /etc/passwd|head -1|cut -f6 -d:`
    if [ "$HOME" = "" ]; then
      echo "no home for $1"
      exit 1
    fi
    cd ~


    I expect that someone can suggest a better way of doing that. My point is not to try and show the best way of solving the problem, merely to show that hard coding assumptions about paths is not necessary. You don’t need to solve a problem in the ideal way; any way that doesn’t have a significant probability of making a server unavailable and denying many people the ability to do their jobs will do. Also consider using different tools: zsh supports commands such as “cd ~$1”.
  • When using a command such as find make sure that you appropriately limit the results, in the case of find that means using options such as -xdev, -type, and -maxdepth (see the sketch after this list). If you mistakenly believe that permission mode 666 is appropriate for all files in a directory then it won’t do THAT much harm. But if your find command goes wrong and starts applying such permissions to directories and crosses filesystem boundaries then your users are going to be very unhappy.
  • Finally when multiple scripts use the same data consider using a configuration file. If you feel compelled to do something grossly ugly such as writing a dozen expect scripts which use the root password then at least make it an entry in a configuration file so that it can be changed in one place. It seems that every time I get a job working on some systems that other people have maintained there is at least one database, LDAP directory, or Unix root account for which the password can’t be changed because no-one knows how many scripts have it hard-coded. It’s usually the most important server, database, or directory too.
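Here is a sketch of the find suggestion above (the path, depth, and mode are hypothetical examples):

#!/bin/sh
# Even if mode 666 is a questionable choice for these files, restricting the
# command to regular files (-type f), a single filesystem (-xdev), and a
# limited depth (-maxdepth) means that directories and other filesystems
# can't be damaged by a mistake.
find /srv/www/uploads -xdev -maxdepth 2 -type f -exec chmod 666 {} \;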
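And a sketch of the configuration file suggestion (the file name and variable names are hypothetical) – if a password is only stored in one file then changing it doesn’t require hunting through every script:

# /etc/local-scripts.conf - shared settings, readable only by root
DBHOST=db.example.com
DBUSER=backup
DBPASS=changeme

# at the top of each script that needs the settings:
. /etc/local-scripts.conf
mysqldump -h "$DBHOST" -u "$DBUSER" -p"$DBPASS" somedb | gzip -9 > /mysql-backup/somedb.gz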

Please note that nothing in this post is theoretical, it’s all from real observations of real systems that have been broken.

Also note that this is not an attempt at making an exhaustive list of ways that people may write horrible scripts, merely enough to demonstrate the general problem and encourage people to think about ways to solve the general problems. But please submit your best examples of how scripts have broken systems as comments.

First Dead Disk of Summer

Last night I was in the middle of checking my email when I found that clicking on a URL link wouldn’t work. It turned out that my web browser had become unavailable due to a read error on the partition for my root filesystem (the usual IDE uncorrectable error thing). My main machine is a Thinkpad T41p; it is apparently possible to replace the CD-ROM drive with a second hard drive to allow RAID-1 but I haven’t felt inclined to spend the money on that. So any hard drive error is a big problem.

Fortunately I had made a backup of /home only a few days ago. I use offline IMAP for my email so that my recent email (the most variable data that matters to me) is stored on a server with RAID-1 as well as on my laptop and my netbook. The amount of other stuff I’ve been working on in my home directory is fairly small, and the amount of that which isn’t on other systems is even smaller (I usually build packages on servers and then scp the relevant files to my laptop for Debian uploads, bug reports, etc).

The first thing I did was to ssh to one of my servers and paste a bunch of text from various open programs into a file there. That included the text from all open programs, the URLs of web pages I was reading, and the contents of an OpenOffice spreadsheet which I couldn’t save directly (it seems that a read-only /tmp will prevent OpenOffice from saving anything). Then I used scp to copy 600M of ted.com videos that I hadn’t backed up; I don’t usually back up such things but I don’t want to download them twice if I can avoid it (I only have a quota of 25G per month).

After that I made new backups of all filesystems starting with /home. I then used tar to backup the root filesystem.

The hard drive in the laptop only had a single bad sector, so I could have re-written that sector to have it remapped (as I have done before with that disk), but I think that on a 5 year old disk it’s probably best to replace it. I had been thinking of installing a larger disk anyway.

For the restore I used a month-old backup of the root filesystem and then used “diff -r” to discover what had changed; it took me less than an hour to merge the changes from the corrupted root filesystem to the restored one.

Now I have lots of free disk space and no data loss!

I am now considering making an automated backup system for /home. My backup method is to make an LVM snapshot of the LV which is used and then copy that – this gets the encrypted data so I can safely store it on USB devices while traveling. I could easily write a cron job that uses scp to transfer a backup to one of my servers at some strange time of the night.
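A minimal sketch of what such a cron job might run – the volume group, LV, snapshot size, and server names are hypothetical, and I use dd piped through ssh rather than scp because the snapshot is a block device:

#!/bin/sh -e
# Snapshot the LV that holds the encrypted /home so the copy is a consistent
# point-in-time image.
lvcreate --snapshot --size 2G --name home-snap /dev/vg0/home

# Copy the raw (still encrypted) snapshot to a server, then remove the snapshot.
dd if=/dev/vg0/home-snap bs=1M | ssh backup@server.example.com \
  "cat > /backup/laptop-home-`date +%Y-%m-%d`.img"
lvremove -f /dev/vg0/home-snap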

The next issue is how many other disks I will lose this summer. I have installed many small mail server and Internet gateway systems running RAID-1, it seems most likely that some of them will have dead disks with the expected record temperatures this summer.

Debian SSH and SE Linux

I have just filed Debian bug report #556644 against the version of openssh-server in Debian/Unstable (Squeeze).  It has a patch that moves the code to set the SE Linux context for the child process before calling chroot. Without this a chroot environment on a SE Linux system can only work correctly if /proc and /selinux are mounted in the chroot environment.

deb http://www.coker.com.au squeeze selinux

I’ve created the above APT repository for Squeeze which has a package that fixes this bug. I will continue to use that repository for a variety of SE Linux patches to Squeeze packages, at the moment it’s packages from Unstable but I will also modify released packages as needed.

The bug report #498684 has a fix for a trivial uninitialised variable bug. The fix is also in my build.

Also I filed the bug report #556648 about the internal version of sftp being incompatible with SE Linux (it doesn’t involve an exec so the context doesn’t change). The correct thing to do is for sshd to refuse to run an internal sftpd at least if the system is in enforcing mode, and probably even in permissive mode.

deb http://www.coker.com.au lenny selinux

Update: I’ve also backported my sshd changes to Lenny at the above APT repository.

Backing up MySQL

I run a number of MySQL databases. The number of mysqld installations that I run is something like 8, but I may have forgotten some – with the number of servers that I run on a “do nothing except when it breaks” basis it’s difficult to remember the details. The number of actual databases that I run would be something like 30; four databases running on a database server (not counting “mysql”) is fairly common. Now I need to maintain some sort of automated backup of these, a fact that became obvious to me a couple of days ago when I found myself trying to recreate blog entries and comments from Google’s cache…

There are two types of database that I run: ones of significant size (more than 1GB) and tiny ones – I don’t think I run any database whose MySQL dump file would be more than 20M and less than 2G in size.

For the machines with small databases I have the following script run daily from a cron job (with db1 etc replaced with real database names and /mysql-backup replaced by something more appropriate). The “--skip-extended-insert” option allows the possibility of running diff on the dump files but at the cost of increasing file size; when the raw file size is less than 20M this overhead doesn’t matter – and gzip should handle the extra redundancy well.

#!/bin/bash -e
# Without pipefail a failed mysqldump would go unnoticed, because the exit
# status of the pipeline is that of gzip.
set -o pipefail
for n in db1 etc ; do
  /usr/bin/mysqldump --skip-extended-insert $n | gzip -9 > /mysql-backup/$n-`date +%Y-%m-%d`.gz
done
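One benefit of “--skip-extended-insert” is that comparing two days of dumps becomes practical, for example (the file names are hypothetical but follow the naming scheme above):

zdiff /mysql-backup/db1-2009-11-20.gz /mysql-backup/db1-2009-11-21.gz | less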

Then I have a backup server running the following script from a cron job to copy all the dump files off the machines.

#!/bin/bash -e
cd /backup-store
# Copy today's dumps from each database server into a per-server directory.
for n in server1 server2 ; do
  mkdir -p $n
  scp $n:/mysql-backup/*-`date +%Y-%m-%d`.gz $n
done

This script relies on being run after the script that generates the dump files, which is a little more tricky than it should be – it’s a pity that cron jobs can’t be set to have UTC run times. I could have run the dumps more frequently and used rsync to transfer the data, but it seems that the risk of losing one day’s worth of data is acceptable. For my blog I can get any posts that I might lose from Planet installations in that time period.
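For example the cron entries might look something like the following – the times and script names are hypothetical, and the fetch is scheduled a few hours later to allow for timezone differences between the servers:

# root crontab on each database server - dump the databases at 02:00 local time
0 2 * * * /usr/local/sbin/mysql-dump-backup

# root crontab on the backup server - fetch the dumps a few hours later so
# that they have definitely finished
0 6 * * * /usr/local/sbin/fetch-mysql-dumps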

For the bigger databases my backup method starts by putting the database and the binary log files on the same filesystem – not /var. This requires some minor hackery of the MySQL configuration. Then I use rsync to copy the contents of an LVM snapshot of the block device. The risks of data consistency problems involved in doing this should be no greater than the risks from an unexpected power fluctuation – and the system should be able to recover from that without any problems.
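A minimal sketch of that approach, assuming the MySQL data and binary logs are on a dedicated LV (the volume group, LV, snapshot size, mount point, and destination are hypothetical):

#!/bin/sh -e
# Snapshot the filesystem that holds the MySQL data and binary logs so the
# copy is a single point-in-time image.
lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql

# Mount the snapshot read-only, rsync its contents to the backup server,
# then clean up.
mkdir -p /mnt/mysql-snap
mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a --delete /mnt/mysql-snap/ backup@server.example.com:/backup/mysql/
umount /mnt/mysql-snap
lvremove -f /dev/vg0/mysql-snap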

My experience with MySQL dumps is that they take too long and too much system resources for large databases so I only use them for backing up small databases (where a dump can be completed in a matter of seconds so even without using a transaction it doesn’t hurt).

What to do When You Break a Server

Everyone who does any significant amount of sysadmin work will break a server. Most people who have any significant experience will have broken several. Anyone who has never broken one should be treated with suspicion by other members of the sysadmin team; they probably haven’t learned the caution that most of us learn from stuffing something up badly.

When you break a server there are some things that you can do to greatly mitigate the scale of the disaster. Firstly, carefully watch what happens. For example don’t type “rm -rf *” and then walk away; watch the command, and if it takes unusually long then press ^C and double-check that you are removing the right files. Fixing a half broken server is often easier than fixing one that has been broken properly and completely!

If you don’t know what to do then do nothing! Doing the wrong thing can make things worse. Seek advice from the most experienced sysadmin who is available. If backups are inadequate then leave the server in a broken state while seeking advice, particularly if you were working at the end of the day and it doesn’t need to be up for a while. There is often time to seek advice by email before doing something.

Do not reboot the system! Even if your backups are perfect there is probably some important data that has been recently modified and can be salvaged. Certain types of corruption (such as filesystem metadata corruption) will leave good data in memory where it can be recovered.

Do not logout! If you do then you may not be able to login again and this may destroy your chances of fixing the system or recovering data.

Do not terminate any programs that you have running. There have been more than a few instances where the first step towards recovering from disaster involved using an open editor session (such as vi or emacs). If the damage prevents an editor from being started (e.g. by removing the editor’s program or a shared object it relies on) then having one already running is very important.

These procedures are particularly important if you are unable to visit the server. For example when using a hosted server at an ISP the more cost effective plans give you no option to ever gain physical access. So plugging the disks into another machine for recovery is not an option.

Does anyone have any other suggestions as to what to do in such a catastrophe?

PS This post is related to the fact that I had to recover the last couple of weeks of blog comments and posts from Google’s cache…

Update: I got my data back, now I have to copy one day of blog stuff from one database to another.

Links November 2009

Credit Writedowns has a populist interpretation of the latest Boom-Bust cycle [1]. It’s an interesting analysis of the way the US economy is working.

Bono writes for the NY Times about Rebranding America [2]. He praises Barack Obama suggesting a different reason to believe that the peace prize is deserved and describes what he believes to be the world’s hope for the US.

IsMyBlogWorking.com is a useful site that analyses your blog [3]. It gives some advice on how to improve some things as well as links to feed validation sites.

Evgeny Morozov gave an interesting TED talk “How the Net Aids Dictatorships” [4]. I don’t agree with his conclusion; he has some evidence to support his claims but I think that a large part of that is due to people not using the Internet well, and I expect things to improve. The one claim that was particularly weak was when he mentioned radio stations in Rwanda as an example of technology being used for bad purposes – the entire point of the first-world discussion about such things is the comparison of radio vs the Internet.

Ray Anderson gave an inspiring talk about “The Business Logic of Sustainability” [5]. He transformed his carpet company, decreasing its environmental impact by 82% and its impact per volume of product by more than 90% while also significantly increasing its profitability. He says that corporate managers who don’t protect the environment should be regarded as criminals. Making his company more environmentally friendly reduced expenses (through efficiency), attracted more skillful employees, and attracted environmentally aware customers. Managers who don’t follow Ray’s example are not only doing the wrong thing for the environment, they are doing the wrong thing for their stockholders! Ray’s company Flor takes carpet orders over the web [6]. They won’t ship a catalogue outside the US, so presumably they only sell carpet to the US too.

Marc Koska gave an interesting TED talk about a new syringe design that prevents re-use [7]. His main aim is to prevent the spread of AIDS in the developing world – where even hospital staff knowingly reuse syringes. It will also do some good in developed countries that try to prohibit drug use.

David Logan gave an interesting TED talk about tribal leadership [8]. His use of the word “tribe” seems rather different from most other uses, and I am a bit dubious about some of his points. But it is definitely a talk worth seeing and considering.

Deirdre Walker is a recently retired Assistant Chief of Police who worked as a police officer for 24 years; she describes in detail her analysis of the flaws in the TSA security checks at US airports [9].

Brian Krebs wrote an article for the Washington Post recommending that Linux Live CDs be used for Internet banking [10]. Windows trojans have been used to take over bank accounts that were accessed by security tokens, that could only be accessed by certain IP addresses, and that required two people to login. It seems that nothing less than a Linux system that is solely used for banking is adequate when a lot of money is at stake.

The NY Times has an interesting review of the book “Ayn Rand and the World She Made” [11]. It seems that Ayn was even madder than I thought.

Gary Murphy has written an interesting analysis of the latest stage in the collapse of the US Republican party [12].

The ABC (AU) Law Report has an interesting article about Evony (of China and the US) attempting to sue Bruce Everiss (of the UK) in Australia [13].

The Guardian has an insightful article about the IEA making bogus claims about the remaining oil reserves [14]. It seems that the experts who work for the IEA estimate that oil is running out rapidly while the US is forcing them to claim otherwise.

Dean Baker of the Center for Economic and Policy Research has written an interesting article about the economic effects of the war in Iraq [15]. Apparently it caused the loss of over 2,000,000 jobs – considerably more than the job losses that could ever result from efforts to combat global warming.