
Links July 2013

Wayne Mcgregor gave an interesting TED talk about the creative processes of a choreographer [1]. The dancing in this talk is really good.

Melissa McEwan wrote an interesting article on whether being an “ally” to members of a disadvantaged group is a state or a process [2]. It seems to me that the word ally is a problem here, maybe a word like supporter would be more useful.

Ken Murray wrote an insightful article How Doctors Die about the end of life choices that people with medical experience make [3]. He makes a good case for rejecting the type of “treatment” which has a low probability of success and a certainty of lowering the quality of life. It would be good if health insurance offered patients with terminal illness an option of $1000 per day in party funds if they chose to reject the expensive and painful treatments that might extend their lives. That might even save enough money to allow cheaper health insurance!

Rick Falkvinge wrote an interesting post about the copyright to translations of the Bible [4]. I used to think that copyright issues with “religious” works were only a problem with cults…

Joshua Foer wrote an interesting article for the New Yorker about John Quijada’s invention of the language Ithkuil, which is designed for maximum precision [5]. It also has a lot of background information on constructed languages and the way that they are used.

Security is Impossible

The Scope of the Problem

Security is inherently complex because of the large number of ways of circumventing it. For example Internet facing servers have been successfully attacked based on vulnerabilities in the OS, the server application, public key generation, DNS, SSL key certificates (and many other programs and algorithms in use), as well as the infrastructure and employees of all companies in the chain. When all those layers work reasonably well (not perfectly but well enough to not obviously be the weakest link) there are attacks on the end user systems that access the servers (such as the trojan horse programs used to attack PCs used for online banking).

My Area of Interest

The area of security that interests me is Linux software development. There are many related areas, such as documentation and default configurations that make it easier for people to secure their systems (instead of insecure systems being the default option), which are all important.

There are also many related fields such as ensuring that all people with relevant access are trustworthy. There are many interesting problems to solve in such areas most of which aren’t a good match for my skills or just require more time than I have available.

I sometimes write blog posts commenting on random security problems in other areas. Sometimes I hope to inspire people to new research, sometimes I hope to just inform users who can consider the issues when implementing solutions to security problems.


On the software development side there are ongoing problems of bugs in code that weaken security. The fact that the main focus for people who are interested in securing systems is fixing bugs is an indication that the problem of software quality needs a lot of work at the moment.

The other area that gets a reasonable amount of obvious work is in access control. Again it’s an area that needs a lot of work, but the fact that we’re not done with that is an indication of how far there is to go in generally improving computer security.

Authenticating Software Releases

There have been cases where source code repositories have been compromised to introduce trojan horse code. The ones I’ve read about were discovered reasonably quickly with little harm done – but there could be some which weren’t discovered. Of course it’s likely that such attacks will be discovered because someone will have the original and the copies can be compared.

Repositories of binaries are a bigger problem; it’s not always possible to recompile a program and get a binary which checks out as being identical (larger programs often include the build time in the binary). Even for build processes which don’t include such data it can be very difficult to verify the integrity of a build. For example, programs compiled with different versions of libraries, header files, or compilers will usually differ slightly.

As most developers frequently change the versions of such software they will often be unable to verify their own binaries and any automated verification of such binaries will be impossible for anyone else. So if a developer’s workstation was compromised without their knowledge it might be impossible for them to later check whether they released trojan binaries – without just running the binaries in question and looking for undesired behavior.

The problem of verifying past binaries is solvable for large software companies, Linux distributions, and all other organisations that have the resources to keep old versions of all binaries and libraries used to build software. For proprietary software companies the verification process would have to start with faith in the vendor of their OS and compiler doing the right thing. For Linux distributions and other organisations based on free software it would start by having the source to everything which can then be verified in theory – although in practice verifying all source for compilers, the OS, and libraries would be a huge undertaking.
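The simplest form of the verification described above is comparing checksums of a released binary against a local rebuild. A minimal sketch follows; the file names are stand-ins created for the demonstration, and real reproducible-build verification involves pinning the entire toolchain, not just a checksum comparison:

```shell
# Create two stand-in "binaries" to compare (placeholders for a vendor
# release and a local rebuild of the same source).
printf 'example binary contents' > /tmp/released.bin
printf 'example binary contents' > /tmp/rebuilt.bin

# Hash both files and compare the digests.
released_sum=$(sha256sum /tmp/released.bin | cut -d' ' -f1)
rebuilt_sum=$(sha256sum /tmp/rebuilt.bin | cut -d' ' -f1)

if [ "$released_sum" = "$rebuilt_sum" ]; then
    echo "MATCH: rebuild is bit-identical to the release"
else
    echo "MISMATCH: investigate the build environment or possible tampering"
fi
```

A mismatch doesn’t prove tampering – as noted above, a different compiler or library version is the more likely cause – but a match is strong evidence that the release corresponds to the source.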


There is a well documented history of military espionage: people who are sworn to secrecy have been subverted by money, by blackmail, and by political beliefs which don’t agree with their government’s. The history of corporate espionage is less well documented, but as corporations perform less stringent background checks than military organisations I think it’s safe to assume that corporate espionage is much more common.

Presumably any government organisation that can have any success at subverting employees of a foreign government can be much more successful in subverting programmers (either in companies such as Microsoft or in the FOSS community). One factor that makes it easier to launch such attacks is the global nature of software development. Government jobs that involve access to secret data have requirements about where the applicant was born and has lived, corporate jobs and volunteer positions in free software development don’t have such requirements.

The effort involved in subverting an existing employee of a software company or contributor to free software or the effort involved in getting an agent accepted in such a project would be quite small when compared to a nuclear weapons program. Therefore I think we should assume that every country which is capable of developing nuclear weapons (even North Korea) can do such things if they wish.

Would the government of such a country want to subvert a major software project that is used by hundreds of millions of people? I can imagine ways that such things could benefit a government and while there would be costs for such actions (both in local politics and international relations) it seems most likely that some governments would consider it to be worth the risk – and North Korea doesn’t seem to have much to lose.


We would like every computer to be like a castle, with a strong wall separating it from the bad things, a wall which can’t be breached in ways that aren’t obvious. But the way things are progressing, with increasingly complex systems depending on more people and other systems, it’s becoming more like biology than engineering. We can think of important government systems as comparable to people with compromised immune systems who are isolated from any risk of catching a disease; the consequences of an infection are worse, so greater isolation measures are required.

For regular desktop PCs getting infected by a trojan is often regarded as being similar to getting a cold in winter. People just accept that their PC will be infected on occasion and don’t bother making any serious effort to prevent it. After an infection is discovered the user (or their management for a corporate PC) tend not to be particularly worried about data loss in spite of some high profile data leaks from companies that do security work and the ongoing attacks against online banking and webcam spying on home PCs. I don’t know what it will take for users to start taking security risks seriously.

I think that secure boot is a good step in the right direction, but it’s a long way from being able to magically solve all security problems. I’ve previously described some of the ways that secure boot won’t save you [1].

The problems of subverting developers don’t seem to be an immediate concern (although we should consider the possibility that it might be happening already without anyone noticing). The ongoing trend is that the value of computers in society is steadily increasing which therefore increases the rewards for criminals and spy agencies who can compromise them. Therefore it seems that we will definitely face the problems of subverted developers if we can adequately address the current technical problems related to flaws in software and inadequate access control. We just need to fix some of the problems which are exploited more easily to force the attackers to use the more difficult and expensive attacks. Note that it is a really good thing to make attacks more difficult, that decreases the number of organisations that are capable of attack even though it won’t stop determined attackers.

For end user systems the major problem seems to be related to running random programs from the Internet without a security model that adequately protects the system. Both Android and iOS make good efforts at protecting a system in the face of random hostile applications, but they have both been shown to fail in practice (it might be a good idea to have a phone for games that is separate from the phone used for phone calls etc). More research into OS security is needed to address this. But in the mean time users need to refrain from playing games and viewing porn on systems that are used for work, Internet banking, and other important things. While PCs are small and cheap enough that having separate PCs for important and unimportant tasks is practical it seems that most users don’t regard the problems as being serious enough to be worth the effort.

Samsung Galaxy Note 2

A few weeks ago I bought a new Samsung Galaxy Note 2 Android phone. As I predicted in my post about Phone and Tablet Sizes [1], the Note 2 with a 5.55″ display is a bit too big for me to have an ideal single-handed side grasp (I estimate that about 5.2″ would be ideal). But I can stretch a bit and move it around in my hand to touch all parts of the screen with my thumb, although when doing that I don’t have a tight grip. With my previous phone, the Samsung Galaxy S3 [2], I could properly wrap my hand around it to grip it tightly while using it with one hand. The Note 2 will be easier for me to drop, and easier for a thief to snatch from me.

While the big screen makes the phone difficult to use with one hand it does allow viewing more data. The ConnectBot SSH client (Play Store link) [3] (F-Droid repository link) [4] works a lot better on a larger screen – I’ve also discovered that the volume control buttons can be used to change the font size in ConnectBot, which is handy as the default is really tiny. Also Klaus Weidner’s Hacker’s Keyboard (Play Store link) [5] (F-Droid repository link) [6] works a lot better with a larger screen. When I tested the Hacker’s Keyboard on a smaller phone I found the 5 row layout too difficult to use, but on the Note 2 it works well. As an aside I wish I could quickly and easily toggle between 4 row mode (good for SMS) and 5 row mode (good for sysadmin work) in the Hacker’s Keyboard.

For less serious use the large screen on the Note 2 is good for watching TV. I’ve got a collection of mp4s of TV shows that I’ve been meaning to watch, now I watch them in bed on my phone.

Another advantage of the Note 2 is the battery life. When playing Ingress and doing all the usual email checks etc my Note 2 will last about as long as my wife’s Galaxy S3 with its power case. So without any extra batteries a Note 2 will probably last about twice as long as a Galaxy S3.

The Note 2 has more RAM than the S3 I used to use (I had the S3 with 1G of RAM) and it also apparently has a faster CPU. The CPU speed hasn’t been an issue for me but the extra RAM is a real benefit: it means that I can usually switch between Ingress and other programs without having to restart Ingress each time. As an aside I think that Google and LG should release a “Nexus Ingress 4” phone or some other device that’s optimised for Ingress, I’m sure it would sell well.

Some people make a big deal about the stylus that comes with the Note 2. It is a neat feature to have the device know when the stylus tip is hovering over the screen, but it’s not very useful for me. If I was going to create art work on a phone (as some people do) then it would interest me, but I’m more interested in email and ssh for fine input and my fingers are generally good enough for those tasks. I have thin fingers, so I think that people with thicker fingers could really benefit from the stylus. I recently bought a batch of stylus pens from Kogan, with a pen at one end and a rubber stylus tip at the other, for fat-fingered people I know who need to use an Android phone but can’t afford a Note or Note 2.

I also like the software build on it which is almost the same as that on the Galaxy S3. It seems that many people prefer the apparently stock features on the Nexus 4 but I like the way Samsung does things.


I’m very happy with my new phone. The bigger screen allows me to see things more clearly, which is good for web browsing and reading email, and now that I can use the Hacker’s Keyboard I can type more effectively. The longer battery life is really good too, although I think that Samsung could have done better – if the phone was 3mm thicker then it could have a much bigger battery and a larger CCD for the camera.

I don’t think that the phone is really different from other phones, at least not for my use. Samsung has promoted it for artistic use and I’ve seen evidence to support their claims. But for most people it’ll just be a phone with a larger screen and a bigger battery. Some people criticise it for being too big, but it’s still smaller than the handsets on most desk phones so it’s not big by the standards of old fashioned phones.

The increased size has not only allowed me to do the same things more effectively but also allowed me to do things I hadn’t tried doing on a phone before such as watching TV. This isn’t because of the phone being particularly special in any way, it’s just that the change in size gives more possibilities for ways of using it.

The Samsung Galaxy Mega is the largest smart phone. The 5.8″ version of the Mega has a resolution of only 960*540 (less than the Note and Note 2 – not good enough IMHO) and the 6.3″ version has the same resolution as the Note 2 of 1280*720. I think that both Mega variants are too big for me, I need to be able to use a phone with one hand. So it seems that the Note 2 is probably the best phone for me right now.

Links June 2013

Cory Doctorow published a letter from a 14yo who had just read his novel “Homeland” [1]. I haven’t had anything insightful to say about Aaron Swartz, so I think that this link will do [2].

Seth Godin gave an interesting TED talk about leading tribes [3]. I think everyone who is active in the FOSS community should watch this talk.

Ron Garrett wrote an interesting post about the risk of being hit by a “dinosaur killer” [4]. We really need to do something about this and the cost of defending against asteroids is almost nothing compared to “defence” spending.

Afra Raymond gave an interesting TED talk about corruption [5]. He focussed on his country Trinidad and Tobago but the lessons apply everywhere.

Wikihouse is an interesting project that is based around sharing designs for houses that can be implemented using CNC milling machines [6]. It seems to be at the early stages but it has a lot of potential to change the building industry.

Here is a TED blog post summarising Dan Pallotta’s TED talk about fundraising for nonprofits [7]. His key point is that moral objections to advertising for charities significantly reduce their ability to raise funds and impacts the charitable mission. I don’t entirely agree with his talk which is very positive towards spending on promotion but I think that he makes some good points which people should consider.

Here is a TED blog post summarising Peter Singer’s TED talk about effective altruism [8]. His focus seems to be on ways of cheaply making a significant difference which doesn’t seem to agree with Dan Pallotta’s ideas.

Patton Oswalt wrote an insightful article about the culture of stand-up comedians which starts with joke stealing and heckling and ends with the issue of rape jokes [9].

Karen Eng wrote an interesting TED blog post about Anthony Vipin’s invention of HAPTIC shoes for blind people [10]. The vibration of the shoes tells the person which way to walk and a computer sees obstacles that need to be avoided.

David Blaine gave an interesting TED talk about how he prepared for a stunt of holding his breath for 17 minutes [11].

Nexus 4

My wife has had an LG Nexus 4 for about 4 months now so it’s time for me to review it and compare it to my Samsung Galaxy S3.

A Sealed Case

The first thing to note about the Nexus 4 is that it doesn’t support changing the battery or using micro-SD storage. The advantage of these design choices is that they allow reduced weight and greater strength compared to what the phone might otherwise be. Such choices would also allow the phone to be slightly cheaper, which is a massive advantage; it’s worth noting that the Nexus 4 is significantly cheaper than any other device I can buy with comparable specs. My wife’s phone has 8G of storage (not RAM – thanks Robin) and cost $369 at the start of the year, while the current price is $349 for the 8G version and $399 for the 16G version. Of course one down-side of this is that if you need 16G of storage then you need to spend an extra $50 on the 16G phone instead of buying the phone with 8G of storage and inserting a 16G micro-SD card which costs $19 from OfficeWorks. Also there’s no option of using a 32G SD card (which costs less than $50) or a 64G SD card.

Battery etc

The battery on the Nexus 4 isn’t nearly big enough; when playing Ingress it lasts about half as long as my Galaxy S3, about 90 minutes to fully discharge. If it was possible to buy a bigger battery from a company like Mugen Power then the lack of battery capacity wouldn’t be such a problem. But as it’s impossible to buy a bigger battery (unless you are willing to do some soldering) the only option is an external battery.

I was unable to find a Nexus 4 case which includes a battery (probably because the Nexus 4 is a lot less common than the Galaxy S3) so my wife had to buy an external battery. If you are serious about playing Ingress with a Nexus 4 then you will end up with a battery in your pocket and a cable going from the battery to your phone, which is a real annoyance. While being a cheap fast phone with a clear screen makes it well suited to Ingress, the issue of having a cable permanently attached is a real down-side.

One significant feature of the Nexus 4 is that it supports wireless charging. I have no immediate plans to use that feature and the wireless charger isn’t even on sale in Australia. But if the USB connector was to break then I could buy a wireless charger from the US and keep using the phone, while for every other phone I own a broken connector would render the phone entirely useless.

Screen Brightness

I have problems with my Galaxy S3 not being bright enough at midday when on “auto” brightness. I have problems with my wife’s Nexus 4 being too bright in most situations other than use at midday. Sometimes at night it’s painfully bright. The brightness of the display probably contributes to the excessive battery use. I don’t know whether all Nexus 4 devices are like this or whether there is some variance. In any case it would be nice if the automatic screen brightness could be tuned so I could make it brighter on my phone and less bright on my wife’s.

According to AndroSensor my Galaxy S3 thinks that the ambient light in my computer room is 28 lux while my wife’s Nexus 4 claims it’s 4 lux. So I guess that part of the problem is the accuracy of the light sensors in the phones.

On-Screen Buttons

I am a big fan of hardware buttons. Hardware buttons work reliably when your fingers are damp and can be used by feel at night. My first Android phone, the Sony-Ericsson Xperia X10, had three hardware buttons for settings, home, and back, as well as buttons for power, changing volume, and taking a photo, which I found very convenient. My Galaxy S3 has hardware buttons for power, home, and volume control. I think that Android phones should have more hardware buttons, not fewer. Unfortunately it seems that Google and the phone manufacturers disagree with me and the trend is towards fewer buttons. Now the Nexus 4 only has hardware buttons for power and volume control.

One significant advantage of the Galaxy S3 over the Nexus 4 is that the S3’s settings and back buttons, while not implemented in hardware, are outside the usable screen area. So the 4.8″ 1280*720 display is all available for application data, while the buttons for home, settings, and back on the Nexus 4 take up space on the screen, so only a subset of the 4.7″ 1280*768 display is usable by applications. While according to the specs the Nexus 4 has a screen almost as big as the Galaxy S3 and a slightly higher resolution, in practice it has an obviously smaller screen with fewer usable pixels.

Also one of the changes related to having the buttons on-screen means that the “settings” button is often in the top right corner which I find annoying. I didn’t like that aspect of the GUI the first time I used a tablet running Android 3.0 and I still don’t like it now.


GPS

My wife’s Nexus 4 seems to be much less accurate than my Galaxy S3 for GPS. I don’t know how much of this is due to phone design and how much is due to random factors in manufacturing. I presume that a large portion of it is due to random manufacturing issues because other people aren’t complaining about it. Maybe she just got unlucky with an inaccurate phone.

Shape and Appearance

One feature that I really like in the Samsung Galaxy S is that it has a significant ridge surrounding the screen. If you place a Galaxy S face-down on a desk that makes it a lot less likely to get a scratch on the screen. The LG U990 Viewty also had a similar ridge. Of course the gel case I have bought for every Android phone has solved this problem, but it would really be nice to have a phone that I consider usable without needing to buy such a case. The Nexus 4 has a screen that curves at the edges which if anything makes the problem worse than merely lacking a ridge around the edge. On the up-side the Nexus 4 looks and feels nice before you use it.

The back of the Nexus 4 sparkles; that’s nice, but when you buy a gel case (which doesn’t seem to be optional with modern design trends) you don’t get to see it.

The Nexus 4 is a very attractive package, it’s really a pity that they didn’t design it to be usable without a gel case.


Conclusion

Kogan is currently selling the Galaxy S3 with 16G of storage for $429. Comparing that to the 16G version of the Nexus 4 at $399, the extra $30 for the S3 buys an SD socket, the option of replacing the battery, one more hardware button, and more screen space. So when comparing the Google offer for the Nexus 4 with the Kogan offers on the Galaxy S3 or the Galaxy Note (which also has 16G of storage and sells for $429) the Google offer doesn’t seem appealing to me.

The Nexus 4 is still a good phone and is working well for my wife, but she doesn’t need as much storage as I do. Also when she got her phone the Galaxy S3 was much more expensive than it is now.

Also Kogan offer the 16G version of the Nexus 4 for $389 which makes it more appealing when compared to the Galaxy S3. It’s surprising that they can beat Google on price.

Generally I recommend the Nexus 4 without hesitation to anyone who wants a very capable phone for less than $400 and doesn’t need a lot of storage. If you need more storage then the Galaxy S3 is more appealing. Also if you need to use a phone a lot then a Galaxy S3 with a power case works well in situations where the Nexus 4 performs poorly.

Links May 2013

Cameron Russell (who works as an underwear model) gave an interesting TED talk about beauty [1].

Ben Goldacre gave an interesting and energetic TED talk about bad science in medicine [2]. A lot of the material is aimed at non-experts, so this is a good talk to forward to your less scientific friends.

Lev wrote a useful description of how to disable JavaScript from one site without disabling it from all sites which was inspired by Snopes [3]. This may be useful some time.

Russ Allbery wrote an interesting post about work and success titled ‘The “Why?” of Work’ [4]. Russ makes lots of good points and I’m not going to summarise them (read the article, it’s worth it). There is one point I disagree with, he says “You are probably not going to change the world”. The fact is that I’ve observed Russ changing the world, he doesn’t appear to have done anything that will get him an entry in a history book but he’s done a lot of good work in Debian (a project that IS changing the world) and his insightful blog posts and comments on mailing lists influence many people. I believe that most people should think of changing the world as a group project where they are likely to be one of thousands or millions who are involved, then you can be part of changing the world every day.

James Morrison wrote an insightful blog post about what he calls “Penance driven development” [5]. The basic concept of doing something good to make up for something you did which has a bad result (even if the bad result was inadvertent) is probably something that most people do to some extent, but formalising it in the context of software development work is a concept I haven’t seen described before.

A 9yo boy named Caine created his own games arcade out of cardboard, when the filmmaker Nirvan Mullick saw it he created a short movie about it and promoted a flash mob event to play games at the arcade [6]. They also created the Imagination Foundation to encourage kids to create things from cardboard [7].

Tanguy Ortolo describes how to use the UDF filesystem instead of FAT for USB devices [8]. This allows you to create files larger than 4G (the FAT32 limit) while still allowing the device to be used on Windows systems. I’ll keep using BTRFS for most of my USB sticks though.

Bruce Schneier gave an informative TED talk about security models [9]. Probably most people who read my blog already have a good knowledge of most of the topics he covers. I think that the best use of this video is to educate less technical people you know.

Blaine Harden gave an informative and disturbing TED talk about the concentration camps in North Korea [10]. At the end he points out the difficult task of helping people recover from their totalitarian government that will follow the fall of North Korea.

Bruce Schneier has an interesting blog post about the use of a motherboard BMC controller (IPMI and similar) to compromise a server [11]. Also some “business class” desktop systems and laptops have similar functionality.

Russ Allbery wrote an insightful article about the failures of consensus decision-making [12]. He compares the Wikipedia and Debian methods so his article is also informative for people who are interested in learning about those projects.

The TED blog has a useful reference article with 10 places anyone can learn to code [13].

Racialicious has an interesting article about the people who take offense when it’s pointed out that they have offended someone else [14].

Nick Selby wrote an interesting article criticising the Symantec response to the NYT getting hacked and also criticising anti-virus software in general [15]. He raises the point that most of us already know: anti-virus software doesn’t do much good. Securing Windows networks is a losing game.

Joshua Brindle wrote an interesting blog post about security on mobile phones and the attempts to use hypervisors for separating data of different levels [16]. He gives lots of useful background information about how to design and implement phone based systems.

SCSI Failures

For a long time SCSI was widely regarded as the interface for all serious drives, suitable for “Enterprise Use” or anything else which requires reliable operation, while IDE was for cheap disks that were only suitable for home use. The SCSI vs IDE issue continues to this day, but now we have SAS and SATA filling the same market niches, with the main difference between the current debate and the debate a decade ago being that a SATA disk can be connected on a SAS bus.

Both SAS and SATA have a single data cable for each disk, which avoids the master/slave configuration of IDE and the issues of bus device ID numbers (0-7 or 0-15) and termination on SCSI.


Termination

When a high speed electrical signal travels through a cable some portion of the signal will be reflected from any cable end point or any point of damage. To prevent the signal reflection from the end of a cable you can have a set of resistors (or some other terminating device) at the end of the cable; see the Terminator (electrical) Wikipedia page [1] for a brief overview. As an aside I think that page could do with some work, if you are an EE with a bit of spare time then improving that page would be a good thing.
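The size of the reflection depends on how well the termination matches the characteristic impedance of the cable: a matched load reflects nothing, an unterminated (open) end reflects the whole signal. A rough illustration of the standard formula, with example impedance values that aren’t specific to any SCSI standard:

```shell
# Reflection coefficient: gamma = (Zl - Z0) / (Zl + Z0)
# Z0 = characteristic impedance of the cable, Zl = load (termination) impedance.
awk 'BEGIN {
    z0 = 100                       # example cable impedance in ohms
    zl = 100                       # matched terminator: no reflection
    printf "matched:  %.2f\n", (zl - z0) / (zl + z0)
    zl = 1e12                      # unterminated (open) end: full reflection
    printf "open end: %.2f\n", (zl - z0) / (zl + z0)
}'
```

This is why a missing or wrong terminator causes intermittent rather than total failure: the reflected signal only corrupts data when it happens to interfere destructively with a later transmission.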

SCSI was always designed to have termination while IDE never was. I presume that this was largely due to the cable length (18″ for IDE vs 1.5m to 25m for SCSI) and the number of devices (2 for IDE vs 7 or 15 for SCSI). I also presume that some of the problems that I’ve had with IDE systems have been related to signal problems that could have been avoided with a terminated bus.

My first encounter with SCSI was when working for a small business that focused on WindowsNT software development. Everyone in the office knew a reasonable amount about computers and was happy to adjust the hardware of their own workstation. A room full of people who didn’t understand termination fiddling with SCSI buses tended to give bad results. On the up-side I learned that a SCSI bus can work most of the time if you have a terminator in the middle of the cable and a hard drive at the end.

There have been two occasions when I’ve been at ground zero for a large deployment of servers from a company I’ll call Moon Computers. In both cases there were two particularly large and expensive servers in a cluster, and one of the cluster servers had data loss from bad SCSI termination. This is particularly annoying as the terminators have different colours; all that was needed to get the servers working was to change the hardware so that the two servers looked the same. As an aside, the company with no backups [2] had one of the servers with bad SCSI termination.


Heat

SCSI disks and now SAS disks tend to be designed for higher performance, which usually means greater heat dissipation. A disk that dissipates a lot of heat won’t necessarily work well in a desktop case with small and quiet fans. This can become a big problem if you have workstations running 24*7 in a hot place (such as any Australian city that’s not in Tasmania) and turn the air-conditioner off on the weekends. One of my clients lost a few disks before they determined that IDE disks are the only option for systems that have to survive Australian heat without any proper cooling.

Differences between IDE/SATA and SCSI/SAS

In 2009 I wrote about vibration and SATA performance [3]. Rumor has it that SCSI/SAS disks are designed to operate in environments where there is a lot of vibration (servers with lots of big fans and fast disks) while IDE/SATA disks are designed for desktop and laptop systems in quiet environments. One thing I’d like to do is to test performance of SATA vs SAS disks in a server that vibrates.

SCSI/SAS disks have apparently been designed for operation in a RAID array and therefore give a faster timeout on a read error (so another disk can return the data), while IDE/SATA disks are designed for non-RAID use and will spend longer trying to read the data.

There are also various claims that the error rates of SCSI/SAS disks are better than those of IDE/SATA disks. But I think that in all cases the error rates are small enough not to be a problem if you use a filesystem like ZFS or BTRFS, yet large enough to be a significant risk at modern data volumes if you use a lesser filesystem.
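To see why the error rates matter at modern data volumes, here’s a rough sketch of the arithmetic. The rates used are typical datasheet figures for the two classes of disk (one unrecoverable read error per 10^14 bits for consumer SATA, one per 10^15 for enterprise SAS), not measurements of any particular model:

```python
# Expected unrecoverable read errors when reading a full disk once,
# using typical datasheet figures: consumer SATA is often specified at
# 1 error per 1e14 bits read, enterprise SAS at 1 per 1e15 bits.

def expected_errors(bytes_read, bit_error_rate):
    """Expected number of unrecoverable read errors for a full read."""
    return bytes_read * 8 * bit_error_rate

TB = 10 ** 12  # disks are sold in decimal terabytes

for name, rate in [("SATA 1e-14", 1e-14), ("SAS  1e-15", 1e-15)]:
    for size_tb in (1, 4):
        e = expected_errors(size_tb * TB, rate)
        print(f"{name}: {size_tb}TB full read -> {e:.3f} expected errors")
```

Reading a 4TB consumer disk end to end gives an expected 0.32 errors, which is exactly the territory where a checksumming filesystem pays off.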

Data Loss from Storage Failure

Of the data loss I’ve personally observed from storage failures, the loss from SCSI problems (termination and heat) is about equal to all the hardware related data loss I’ve seen on IDE disks. Given that the majority of disks I’ve been responsible for have been IDE and SATA, that’s a bad sign for SCSI use in practice.

But all serious data loss that I’ve seen has involved the use of a single disk (no RAID) and inadequate backups. So a basic RAID-1 or RAID-5 installation will solve most hardware related data loss problems.
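To put a rough number on that, a minimal sketch of independent-failure arithmetic. The 5% annual failure rate is an assumption for illustration, and this ignores rebuild windows and correlated failures (such as two disks cooking in the same case):

```python
# Rough annual probability of losing data, assuming each disk fails
# independently with probability p in a year. p=0.05 is an assumption
# for illustration; real failures are often correlated (heat, power).

def loss_single(p):
    return p  # one disk, no redundancy: any failure loses data

def loss_raid1(p):
    return p * p  # both mirrors must fail (ignoring rebuild windows)

p = 0.05
print(f"single disk: {loss_single(p):.4f}")
print(f"RAID-1:      {loss_raid1(p):.4f}")
```

Even with generous assumptions the mirror cuts the annual loss probability from 5% to 0.25%, which is why basic RAID-1 solves most hardware related data loss.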

There was one occasion when heat caused two disks in a RAID-1 to give errors at the same time, but by reading from both disks I managed to get almost all the data back; RAID can save you from some extreme error conditions. That situation would have been ideal for BTRFS or ZFS to recover data.


SCSI and SAS are designed for servers, using them in non-server systems seems to be a bad idea. Using SATA disks in servers can have problems too, but not typically problems that involve massive data loss.

Using technology that is too complex for the people who install it seems risky. That includes allowing programmers to plug SCSI disks into their workstations, and whoever it was from Moon Computers or their resellers who apparently couldn’t properly terminate a SCSI bus. It seems that the biggest advantage of SAS over SCSI is that SAS is simple enough for most people to install correctly.

Making servers similar to the systems that the system administrators use at home seems like a really good idea. I think that one of the biggest benefits of using x86 systems as servers is that skills learned on home PCs can be transferred to administration of servers. Of course it would also be a good idea to have test servers that are identical to servers in production so that the sysadmin team can practice and make mistakes on systems that aren’t mission critical, but companies seem to regard that as a waste of money – apparently the risk of down-time is cheaper.

Noise from Shaving

About 10 years ago I started using an electric shaver. An electric shaver is more convenient to use as it doesn’t require any soap, foam, or water. It is also almost impossible to cut yourself properly with an electric shaver which is a major benefit for anyone who’s not particularly alert in the morning. Generally my experience of electric shavers has been good, although the noise is quite annoying.

Recently a friend told me that an electric shaver is as noisy as a chain-saw. Given the inverse-square law and the fact that the shaver operates within 1cm of my ears that sounds plausible, so the risk of hearing loss is a real concern. Disposable ear plugs are very cheap and can be used multiple times (they don’t get particularly dirty while shaving or get squashed in the short time needed to shave). So for a few weeks I’ve been using ear plugs while shaving, which reduces the noise and presumably saves me from some hearing damage – although after 10 years of using electric shavers I may have already sustained some.

According to Cooper Safety their ear plugs reduce noise by 29dB [1]. I presume that the cheap ones I bought from Bunnings would be good for at least 15dB.

According to Better Hearing Sydney the noise from an electric shaver is typically around 90dB, less than the 100dB that is typical of a chain-saw [2]. So if my ear-plugs are good for 15dB then they would reduce the noise from a typical electric shaver to 75dB which is well below the 85dB that will cause hearing damage. Given that the noise from a typical shaver is only slightly above the damage threshold it seems that I might not need particularly good ear-plugs when shaving.
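The arithmetic above is easy to check: attenuation in decibels subtracts directly, and every 10dB is a factor of 10 in sound power. A sketch using the figures from the paragraphs above:

```python
# Decibel arithmetic for the shaver figures above: ear plug attenuation
# in dB subtracts directly from the source level, and every 10dB
# difference is a 10x difference in sound power.

def attenuated(source_db, plug_db):
    """Sound level at the ear after ear plugs rated at plug_db."""
    return source_db - plug_db

def power_ratio(db_a, db_b):
    """How many times more sound power db_a carries than db_b."""
    return 10 ** ((db_a - db_b) / 10)

shaver, chainsaw, plugs, damage = 90, 100, 15, 85

print(attenuated(shaver, plugs))      # 75dB, below the 85dB damage level
print(power_ratio(chainsaw, shaver))  # a chain-saw is 10x the power
```

The 10dB gap between a chain-saw and a shaver means the chain-saw delivers ten times the sound power, so “as noisy as a chain-saw” overstates it, but not by as much as you might hope.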

A quick scan of shaver reviews indicates that the amount of noise differs by brand and technology. The Hubpages review suggests that rotary shavers tend to make less noise than foil shavers [3], but I’m sure that it varies enough between brands that some rotary shavers are louder than the quietest foil shavers. It seems that the best thing to do when buying a new shaver is to go to a specialised shaver shop (which has many models on offer) and get the staff to demonstrate them to determine which is quietest. If a typical shaver produces 90dB then it seems likely that one of the quieter models would produce less than 85dB.

Another item on my todo list is to buy a noise meter to measure the amount of noise in the places where I spend time. There are some Android apps to measure noise; I’m currently playing with the Smart Tools Co Sound Meter [4] which gives some interesting information. The documentation notes that phone microphones are limited to the typical volume and frequencies of the human voice, so my Galaxy S3 can’t measure anything above 81dB. My wife’s Nexus 4 doesn’t seem to register anything above 74dB. Additionally there is some uncertainty about the accuracy of the microphone; there is a calibration feature but that requires another meter. Anyway the Sound Meter app suggests that my shaver (a Philips HQ7380/B) produces only 71dB at the closest possible range – and drops down to 67dB at the range I would use if I grew sideburns.


Getting a proper noise meter to protect one’s hearing seems like a good idea. An Android app for measuring noise is a good thing to have; even though it’s not going to be accurate it’s convenient and gives an indication.

When buying a shaver one should listen to all the options and choose a quiet one (I might have got a quiet one by luck).

Sideburns seem like a good idea if you value your hearing.

No Backups WTF

Some years ago I was working on a project that involved a database cluster of two Sun E6500 servers that were fairly well loaded. I believe that the overall price was several million pounds. It’s the type of expensive system where it would make sense to spend adequately to do things properly in all ways.

The first interesting thing was the data center where it was running. The front door had a uniformed security guard and a sign threatening immediate dismissal for anyone who left the security door open. The back door was wide open for the benefit of the electricians who were working there. Presumably anyone who had wanted to steal some servers could have gone to the back door and asked the electricians for assistance in removing them.

The system was poorly tested. My colleagues thought that with big important servers you shouldn’t risk damage by rebooting them. My opinion has always been that rebooting a cluster should be part of standard testing and that it’s especially important with clusters which have more interesting boot sequences. But I lost the vote and there was no testing of rebooting.

Along the way there were a number of WTFs in that project. One of which was when the web developers decided to force all users to install the latest beta release of Internet Explorer, a decision that was only revoked when the IE install process broke MS-Office on the PC of a senior manager. Another was putting systems with a default Solaris installation live on the Internet with all default services running, there’s never a reason for a database server to be directly accessible over the Internet.

No Backups At All

But I think that the most significant failing was the decision not to make any backups. This wasn’t merely forgetting to make backups, when I raised the issue I received a negative reaction from almost everyone. As an aside I find it particularly annoying when someone implies that I want backups because I am likely to stuff things up.

There are many ways of proving that there’s a general lack of competence in the computer industry. But I think that one of the best is the number of projects where the person who wants backups has their competence questioned instead of all the people who don’t want backups.

A decision to make no backups relies on one of two conditions, either the service has to be entirely unimportant or you need to have no bugs in the OS or hardware defects that can corrupt data, no application bugs, and a team of sysadmins who never make mistakes. The former condition raises the question of why the service is being run and the latter is impossible.

As I’m more persistent than most people I kept raising the issue via email and adding more people to the CC list until I got a positive reaction. Eventually I CC’d someone who responded with “What the fuck” which I consider to be a reasonable response to a huge and expensive project with no backups. However the managers on the CC list regarded the use of profanity in email to be a much more serious problem. To the best of my knowledge there were never any backups of that system but the policy on email was strongly enforced.

This is only a partial list of WTF incidents that assisted in my decision to leave the UK and migrate to the Netherlands.

Not Doing Much

About a year after leaving I returned to London for a holiday and had dinner with a former colleague. When I asked what he was working on he said “Not much”. It turned out that proximity to the nearest manager determined the amount of work that was assigned. As his desk was a long way from the nearest manager he had spent about 6 months getting paid to read Usenet. That wasn’t really a surprise given my observations of the company in question.

Advice on Buying a PC

A common topic of discussion on computer users’ group mailing lists is advice on buying a PC. I think that most of the offered advice isn’t particularly useful with an excessive focus on building or upgrading PCs and on getting the latest and greatest. So I’ll blog about it instead of getting involved in more mailing-list debates.

A Historical Perspective – the PC as an Investment

In the late 80s a reasonably high-end white-box PC cost a bit over $5,000 in Australia (or about $4,000 without a monitor). That was cheaper than name-brand PCs, which cost upwards of $7,000, but was still a lot of money. $5,000 in 1988 would be comparable to $10,000 in today’s money. That made a PC a rather expensive item which needed to be preserved. There weren’t a lot of people who could just discard such an investment, so a lot of thought was given to upgrading a PC.

Now a quite powerful desktop PC can be purchased for a bit under $400 (maybe $550 if you include a good monitor) and a nice laptop is about the same price as a desktop PC and monitor. Laptops are almost impossible to upgrade apart from adding more RAM or storage but hardly anyone cares because they are so cheap. Desktop PCs can be upgraded in some ways but most people don’t bother apart from RAM, storage, and sometimes a new video card.

If you have the skill required to successfully replace a CPU or motherboard then your time is probably worth enough that squeezing more life out of a PC that was worth $400 when new and maybe $100 a couple of years later isn’t a good investment.

Times have changed and PCs just aren’t worth enough to be bothered upgrading. A PC is a disposable item not an investment.

Buying Something Expensive?

There are a range of things that you can buy. You can spend $200 on a second-hand PC that’s a couple of years old, $400 on a new PC that’s OK but not really fast, or you can spend $1000 or more on a very high end PC. The $1000 PC will probably perform poorly when compared to a PC that sells for $400 next year. The $400 PC will probably perform poorly when compared to the second-hand systems that are available next year.

If you spend more money to get a faster PC then you are only getting a faster PC for a year until newer cheaper systems enter the market.

As newer and better hardware is continually released at prices low enough to make upgrades a bad deal, I recommend not buying expensive systems. For my own use I find that e-waste is a good source of hardware. If I couldn’t do that then I’d buy from an auction site that specialises in corporate sales; they have some nice name-brand systems in good condition at low prices.

One thing to note is that this is more difficult for Windows users due to “anti-piracy” features. With recent versions of Windows you can’t just put an old hard drive in a new PC and have it work. So the case for buying faster hardware is stronger for Windows than for Linux.

That said, $1,000 isn’t a lot of money. So spending more money for a high-end system isn’t necessarily a big deal. But we should keep in mind that it’s just a matter of getting a certain level of performance a year before it is available in cheaper systems. Getting a $1,000 high-end system instead of a $400 cheap system means getting that level of performance maybe a year earlier and therefore at a price premium of maybe $2 per day. I’m sure that most people spend more than $2 per day on more frivolous things than a faster PC.
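The $2 per day figure works out as follows, assuming the extra performance stays ahead of cheap systems for about a year:

```python
# Price premium per day of buying a $1000 high-end PC instead of a $400
# one, assuming cheap systems reach the same performance within a year.

def premium_per_day(expensive, cheap, days_ahead=365):
    """Extra cost per day of being a year ahead of the cheap systems."""
    return (expensive - cheap) / days_ahead

print(f"${premium_per_day(1000, 400):.2f} per day")
```

The $600 difference spread over a year is about $1.64 a day, so “maybe $2 per day” is if anything slightly generous to the high-end system.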

Understanding How a Computer Works

As so many things are run by computers I believe that everyone should have some basic knowledge of how computers work. But a basic knowledge of computer architecture isn’t required when selecting parts to assemble into a system: one can know all about selecting a matching CPU and motherboard without understanding what a CPU does (apart from a vague idea that it’s something to do with calculations). Equally, one can have a good knowledge of how computers work without knowing anything about the part numbers that could be assembled into a working system.

If someone wants to learn about the various parts on sale then sites such as Tom’s Hardware [1] provide a lot of good information that allows people to learn without the risk of damaging expensive parts. In fact the people who work for Tom’s Hardware frequently test parts to destruction for the education and entertainment of readers.

But anyone who wants to understand computers would be better off spending their time using any old PC to read Wikipedia pages on the topic instead of spending their time and money assembling one PC. To learn the basics of computer operation the Wikipedia page for “CPU” is a good place to start. Then the Wikipedia page for “hard drive” is a good start for learning about storage, and the Wikipedia page for “Graphics Processing Unit” for learning about graphics processing. Anyone who reads those three pages as well as a selection of the pages they link to will learn a lot more than they could ever learn by assembling a PC. Of course there are lots of other things to learn about computers, but Wikipedia has pages for every topic you can imagine.

I think that the argument that people should assemble PCs to understand how they work was not well supported in 1990 and ceased to be accurate once Wikipedia became popular and well populated.

Getting a Quality System

There are a lot of arguments about quality and reliability, most without any supporting data. I believe that a system designed and manufactured by a company such as HP, Lenovo, NEC, Dell, etc is likely to be more reliable than a collection of parts uniquely assembled by a home user – but I admit to a lack of data to support this belief.

One thing that is clear however is the fact that ECC RAM can make a significant difference to system reliability as many types of error (including power problems) show up as corrupted memory. The cheapest Dell PowerEdge server (which has ECC RAM) is advertised at $699 so it’s not a feature that’s out of reach of regular users.
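As an illustration of how ECC catches these errors, here is a minimal Hamming(7,4) sketch. Real ECC DIMMs use a wider SECDED code over 64-bit words rather than this toy code, but the principle of locating a flipped bit from failed parity checks is the same:

```python
# Minimal Hamming(7,4) sketch: 4 data bits are stored with 3 parity
# bits so that any single flipped bit can be located and corrected.
# Real ECC RAM uses a wider SECDED code over 64-bit words, but the
# idea of locating the error from failed parity checks is the same.

def encode(d):
    """Encode 4 data bits as a 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4  # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4  # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(word):
    """Fix up to one flipped bit in a 7-bit codeword, in place."""
    # Each parity check covers the positions whose index has that bit
    # set; the failing checks spell out the error position in binary.
    syndrome = 0
    for check in range(3):
        parity = 0
        for pos in range(1, 8):
            if pos & (1 << check):
                parity ^= word[pos - 1]
        if parity:
            syndrome |= 1 << check
    if syndrome:
        word[syndrome - 1] ^= 1  # flip the bad bit back
    return word

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                  # simulate a single-bit memory error
print(correct(word) == encode(data))  # True: the error was corrected
```

An ECC DIMM does this on every access in hardware, which is why a stray bit flip from a power glitch shows up as a logged correction instead of silently corrupted data.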

I think that anyone who makes claims about PC reliability and fails to mention the benefits of ECC RAM (as used in Dell PowerEdge tower systems, Dell Precision workstations, and HP XW workstations among others) hasn’t properly considered their advice.

Also when discussing overall reliability the use of RAID storage and a good backup scheme should be considered. Good backups can do more to save your data than anything else.


I think it’s best to use a system with ECC RAM as a file server. Make good backups. Use ZFS (in future BTRFS) for file storage so that data doesn’t get corrupted on disk. Use reasonably cheap systems as workstations and replace them when they become too old.

Update: I find it rather ironic when a discussion about advice on buying a PC gets significant input from people who are well paid for computer work. It doesn’t take long for such a discussion to take enough time that the people involved could have spent that time working instead, put enough money in a hat to buy a new PC for the user in question, and still have had money left over.