Google and Certbot (Letsencrypt)

Like most people I use Certbot with Let’s Encrypt to create SSL certificates for my sites. It’s a great service, very easy to use, and it generally works well.

Recently the server running www.coker.com.au (among other domains) couldn’t get a Certbot certificate renewed. Here’s the error message:

Failed authorization procedure. mail.gw90.de (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: "mail.gw90.de" was considered an unsafe domain by a third-party API, listen.gw90.de (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: "listen.gw90.de" was considered an unsafe domain by a third-party API

IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: mail.gw90.de
   Type:   unauthorized
   Detail: "mail.gw90.de" was considered an unsafe domain by a third-
   party API

   Domain: listen.gw90.de
   Type:   unauthorized
   Detail: "listen.gw90.de" was considered an unsafe domain by a
   third-party API

It turns out that Google Safe Browsing had listed those two sites. Visit https://listen.gw90.de/ or https://mail.gw90.de/ today (and maybe for some weeks or months in the future) using Google Chrome (or any other browser that uses the Google Safe Browsing database) and it will tell you the site is “Dangerous” and probably refuse to let you in.

One thing to note is that neither of those sites has any real content; I only set them up in Apache to get SSL certificates that are used for other purposes (like mail transfer, as the name suggests). If Google had listed my blog as a “Dangerous” site I wouldn’t be so surprised; WordPress has had more than a few security issues in the past and it’s not implausible that someone could have compromised it and made it serve up hostile content without me noticing. But the two sites in question have a DocumentRoot that is owned by root and was (until a few days ago) entirely empty; now they have an index.html that just says “This site is empty”. It’s theoretically possible that someone could have exploited an RCE bug in Apache to make it serve up content that isn’t in the DocumentRoot, but that seems unlikely (why waste an Apache 0-day on one of my less important personal sites?). It is possible that the virtual machine in question was compromised (a VM on that server has been compromised before [1]), but it seems unlikely that an attacker would host bad content on those web sites if they had.

Now it could be that some other hostname under that domain had something inappropriate (I haven’t yet investigated all possibilities). But if so, Google’s algorithm has a couple of significant problems. Firstly, if they are blacklisting sites related to one that had an issue then it would make more sense to blacklist by IP address (which would mean including some coker.com.au entries on the same IP). In the case of a compromised server it seems more likely to have multiple bad sites on one IP than multiple bad subdomains on different IPs (given that none of the hostnames in question have changed IP address recently, which Google of course knows). The next issue is that extending a blacklisting doesn’t make sense unless there is evidence of hostile intent. I’m pretty sure that Google won’t blacklist all of ibm.com when (not if) a server in that domain gets compromised. I guess they have different policies for sites of different scale.

A friend and I have both reported the sites in question to Google as not being harmful, but that hasn’t changed anything yet. I’m very disappointed in Google: listing sites, not providing any reason why (it could be that a hostname under that domain was compromised, and if so it’s not fixed yet BECAUSE GOOGLE DIDN’T REPORT A PROBLEM), and not removing the listing when it’s totally obvious there’s no basis for it.

While it makes sense for Let’s Encrypt to refuse to issue SSL certificates for malicious sites, it seems that they haven’t chosen a great service for determining which sites are bad.

Anyway the end result was that some of my sites had an expired SSL certificate for a day. I decided not to renew the certificates before they expired, to give Google a better chance of noticing their mistake, and then I was busy when they actually expired. Presumably, now that the sites in question have had an invalid SSL certificate, it will be even harder to convince anyone that they are not hostile.
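
Since I let those certificates run right up to expiry, a quick way to keep an eye on how much slack remains is to check the certificate dates directly. Below is a minimal sketch using just the Python standard library; the hostnames are the ones mentioned above, and an already-expired or unverifiable certificate shows up as a failed check rather than a day count.

#!/usr/bin/python3
# Minimal sketch: report days until certificate expiry for a few hosts.
# An expired or otherwise unverifiable certificate will fail the TLS
# handshake and be reported as a failed check instead.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert['notAfter']),
                                     tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

for host in ('www.coker.com.au', 'mail.gw90.de', 'listen.gw90.de'):
    try:
        print(host, days_until_expiry(host), 'days left')
    except (ssl.SSLError, OSError) as e:
        print(host, 'check failed:', e)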

Forking Mon and DKIM with Mailing Lists

I have forked the “Mon” network/server monitoring system; here is a link to the new project page [1]. There hasn’t been an upstream release since 2010 and I think we need more frequent releases than that. I plan to merge as many useful monitoring scripts as possible and support them well. All Perl scripts will use strict and follow other best practices.

The first release of etbe-mon is essentially the same as the last release of the mon package in Debian. This is because I started work on the Debian package (almost all the systems I want to monitor run Debian) and as I had been accepted as a co-maintainer of the Debian package I put all my patches into Debian.

It’s probably not a common practice for someone to fork upstream of a package soon after becoming a co-maintainer of the Debian package, but I believe that this is in the best interests of the users. I presume that there are other collections of patches out there and I hope to merge them so that everyone can get the benefits of features and bug fixes that have been kept separate due to the lack of upstream releases.

Last time I checked mon wasn’t in Fedora. I believe that mon has some unique features for simple monitoring that would be of benefit to Fedora users and would like to work with anyone who wants to maintain the package for Fedora. I am also interested in working with any other distributions of Linux and with non-Linux systems.

While setting up the mailing list for etbemon I wrote an article about DKIM and mailing lists (primarily Mailman) [2]. It explains how to set up Mailman for correct operation with DKIM and also why that seems to be the only viable option.

802.1x Authentication on Debian

I recently had to setup some Linux workstations with 802.1x authentication (described as “Ethernet authentication”) to connect to a smart switch. The most useful web site I found was the Ubuntu help site about 802.1x Authentication [1]. But it didn’t describe exactly what I needed so I’m writing a more concise explanation.

The first thing to note is that the authentication mechanism works the same way as 802.11 wireless authentication, so it’s a good idea to have the wpasupplicant package installed on all laptops just in case you need to connect to such a network.

The first step is to create a wpa_supplicant config file; I named mine /etc/wpa_supplicant_SITE.conf. The file needs contents like the following:

network={
 key_mgmt=IEEE8021X
 eap=PEAP
 identity="USERNAME"
 anonymous_identity="USERNAME"
 password="PASS"
 phase1="auth=MD5"
 phase2="auth=CHAP password=PASS"
 eapol_flags=0
}

The first difference between what I use and the Ubuntu example is that I’m using “eap=PEAP“; that depends on how the network is configured, and whoever runs your switch can tell you the correct settings. The next difference is that I’m using “auth=CHAP” where the Ubuntu example has “auth=PAP“. The difference between those protocols is that CHAP uses a challenge-response while PAP just sends the password (maybe encrypted) over the network. If whoever runs the network says that they “don’t store unhashed passwords” or makes any similar claim then they are almost certainly using CHAP.

Change USERNAME and PASS to your user name and password.

wpa_supplicant -c /etc/wpa_supplicant_SITE.conf -D wired -i eth0

The above command can be used to test the operation of wpa_supplicant.

Successfully initialized wpa_supplicant
eth0: Associated with 00:01:02:03:04:05
eth0: CTRL-EVENT-EAP-STARTED EAP authentication started
eth0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=25
TLS: Unsupported Phase2 EAP method 'CHAP'
eth0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected
eth0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject=''
eth0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject=''
EAP-MSCHAPV2: Authentication succeeded
EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed
eth0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
eth0: CTRL-EVENT-CONNECTED - Connection to 00:01:02:03:04:05 completed [id=0 id_str=]

Above is the output of a successful test with wpa_supplicant; I replaced the MAC address of the switch with 00:01:02:03:04:05. Strangely it doesn’t like “CHAP” but automatically selects “MSCHAPV2” and works; maybe anything other than “PAP” would do.

auto eth0
iface eth0 inet dhcp
  wpa-driver wired
  wpa-conf /etc/wpa_supplicant_SITE.conf

Above is a snippet of /etc/network/interfaces that works with this configuration.

Google mod_pagespeed

I’ve just downloaded and installed the Debian AMD64 package of the Google Apache Pagespeed module [1].

To see if it made any difference I used the Google PageSpeed Insights tool, which gave my blog a rating of 93% (and 88% for mobile) [2].

After installing mod_pagespeed I received the same scores as before. So it appears that mod_pagespeed isn’t doing any good according to Google’s own analysis!

etbe.coker.com.au 10.11.12.13 - - [13/Oct/2012:05:22:31 +0000] "GET /wp-content/plugins/openid/f/W.openid.css,qver=519.pagespeed.cf.Bbu1gxRjUE.css HTTP/1.0" 200 2165 "http://etbe.coker.com.au/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1" 0
etbe.coker.com.au 10.11.12.13 - - [13/Oct/2012:05:22:31 +0000] "GET /wp-content/themes/atahualpa/js/DD_roundies.js,qver=0.0.2a.pagespeed.jm.4gw5yluag0.js HTTP/1.0" 200 3679 "http://etbe.coker.com.au/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1" 0
etbe.coker.com.au 10.11.12.13 - - [13/Oct/2012:05:22:31 +0000] "GET /wp-includes/js/jquery/jquery.js,qver=1.7.2.pagespeed.jm.XZwfunyK-6.js HTTP/1.0" 200 33587 "http://etbe.coker.com.au/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1" 0
etbe.coker.com.au 10.11.12.13 - - [13/Oct/2012:05:22:33 +0000] "GET /wp-content/themes/atahualpa/images/xlogo.png.pagespeed.ic.ICWmaHBME5.png HTTP/1.0" 200 2267 "http://etbe.coker.com.au/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1" 0

The above sample of web logs shows that the string “pagespeed” is inserted into some URLs along with a hash of the file contents, which apparently allows much longer cache times without making it difficult to change content. So mod_pagespeed is obviously doing something.
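
If you want to see how much of your traffic is served through rewritten URLs, a rough log scan like the following works; the log path is just an example and it assumes the combined log format shown above.

#!/usr/bin/python3
# Rough sketch: count mod_pagespeed rewritten URLs in an Apache access log.
# The log path is an example, adjust it for your own layout.
import re
from collections import Counter

LOG = '/var/log/apache2/access.log'
pattern = re.compile(r'"GET (\S+\.pagespeed\.\S+) HTTP')

counts = Counter()
with open(LOG) as f:
    for line in f:
        m = pattern.search(line)
        if m:
            counts[m.group(1)] += 1

# print the ten most frequently served rewritten URLs
for url, n in counts.most_common(10):
    print(n, url)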

Is Google’s analysis expected to show no change? Note that my speed scores are 93% and 88%, so my site was apparently quite good before mod_pagespeed was installed; maybe the analysis would report a difference on a site that didn’t perform so well. Even if mod_pagespeed benefits real users without affecting Google’s tests, it still means that I won’t get the SEO benefits that Google apparently gives to fast sites.

Also, to make things even more exciting, the W3.org HTML validator [3] now says that there’s an error in my blog. So not only has mod_pagespeed failed to improve performance in a way that Google notices, it has also broken something!

Server Use Per Person

I’ve just read Diego’s response to an ill-informed NYT article about data-center power efficiency [1]. This makes me wonder: how much server use does each person account for?

Google

Almost everyone uses Google, and most of us use it a lot. The main Google product, the search engine, is probably also the most demanding.

In a typical day I probably do about 50 to 100 Google searches. That sounds like a lot, but half of them would probably be for one topic that is difficult to find. I don’t think that I do that many searches overall because I generally know what I’m looking for, and when I find what I need I spend a lot of time reading it. I’m sure that many people do a lot more.

Each Google search takes a few seconds to complete (or maybe more if it’s an image search and I’m on a slow link), but I think it’s safe to assume that more than a few seconds of CPU time are involved. How much work would each Google search take if performed on a single system? Presumably Google uses the RAM of many systems as a cache, which gives a result more like a NUMA system than one regular server working for a longer time, so there is no way of asking how long a Google search would take on a single server. But Google certainly has some ratio of servers to the rate of incoming requests; it’s almost certainly a closely guarded secret, but we can make some guesses. If the core Google user base comprises people who each average 100 searches per day then we can estimate the amount of server use required for each search based on the number of servers Google would run. I think it’s safe to assume that Google doesn’t plan to buy one server for every person on the planet and that they want users to significantly outnumber servers. So even for core users they should be aiming to have each user take only a fraction of the resources that one server adds to the pool.

So each of those 100 searches probably takes more than 1 second of server use, but almost certainly a lot less than 864 seconds (the server use if Google had one server for every 100 daily requests, which would imply one server for each of the heavier users). Maybe it takes 10 seconds of server use (CPU, disk, or network, whichever is the bottleneck) to complete one search request. If the Google network ran at 50% utilisation on average then that would mean 86400*.5/10/100 == 43 users per server for the core user base who average 100 daily requests. If there are 80M core users that would be about 2M servers, and then maybe something like another 4M servers for the rest of the world.
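
To make that arithmetic explicit, here it is as a short script; every input is one of the made-up numbers from this post, not a measurement.

#!/usr/bin/python3
# Back-of-envelope estimate from the paragraph above; every input is a guess.
SECONDS_PER_DAY = 86400
utilisation = 0.5          # assume servers run at 50% utilisation on average
secs_per_search = 10       # guessed server-seconds of work per search
searches_per_user = 100    # daily searches for a "core" user
core_users = 80e6          # guessed number of core users

capacity_per_server = SECONDS_PER_DAY * utilisation              # 43200 usable seconds
users_per_server = capacity_per_server / (secs_per_search * searches_per_user)
servers_for_core = core_users / users_per_server

print(f"users per server: {users_per_server:.0f}")               # ~43
print(f"servers for core users: {servers_for_core/1e6:.1f}M")    # ~1.9M, "about 2M"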

So I could be using 1000 seconds of server time per day on Google searches. I also have a Gmail account which probably uses a few seconds for storing email and giving it to Fetchmail, and I have a bunch of Android devices which use Google calendars, play store, etc. The total Google server use on my behalf for everything other than search is probably a rounding error.

But I could be out by an order of magnitude; if it only took 1 second of server use per search then I would be at 100 server seconds per day and Google would only need one server for every 430 users like me.

Google also serves lots of adverts on web sites that I visit. I presume that serving the adverts doesn’t take many resources by Google’s standards, but accounting for them, paying the people who host content, and detecting fraud probably take significant resources.

Other Big Services

There are many people who spend hours per day using services such as Facebook. No matter how I try to estimate the server requirements it’s probably going to be fairly wrong, but I’ll make a guess at a minute of server time per hour of use. So someone who averages 3 hours of social networking per day (which probably isn’t that uncommon) would be using 180 seconds of server time per day.

Personal Servers

The server that hosts my blog is reasonably powerful and has two other people as core users, so that could count as 33% of a fairly powerful server in my name. But if we are counting server use per USER then most of the resources of my blog server should be divided among the readers. My blog has about 10,000 people casually reading it through Planet syndication, which could mean that each person who casually reads my blog has 1/30,000 of a server allocated to them for that. Another way of dividing up my 33% share is that 10% of the server (8640 seconds per day) is covered by me maintaining the blog and writing posts, 20% is for users who visit my blog directly, and 3% is for the users who just see a Planet feed. That would mean that a Planet reader gets 1/330,000 of a server (about 250ms per day) and someone who reads directly gets 1/50,000 of a server (1.72s per day), as I have about 10,000 people visiting my blog directly in a month.
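
Making that division explicit, using the same guesses:

#!/usr/bin/python3
# Dividing one blog server among its readers, using the guesses above.
SECONDS_PER_DAY = 86400
my_share = 0.10          # running the blog and writing posts
direct_share = 0.20      # readers who visit the site directly
planet_share = 0.03      # readers who only see a Planet feed
planet_readers = 10000
direct_readers = 10000   # direct visitors per month

planet_secs = SECONDS_PER_DAY * planet_share / planet_readers
direct_secs = SECONDS_PER_DAY * direct_share / direct_readers

print(f"per Planet reader: {planet_secs:.2f}s/day")   # ~0.26s, i.e. about 250ms
print(f"per direct reader: {direct_secs:.2f}s/day")   # ~1.73s, the 1.72s figure above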

My mail server is also shared by a dozen or so people (maybe that counts as 5% of a server for me, or 4320 seconds per day). Then there’s the server I use for SE Linux development (including my Play Machine) and a server I use as a DNS secondary and as a shell server for various testing and proxying.

Other People’s Servers

If every reader of a Planet instance like Planet Debian or Planet Linux Australia counts as 1/330,000 of a server for their usage of my blog, then how should I count my own use of blogs? I tend to read blogs written by the type of people who like to run things themselves, so there would be a lot of fairly under-utilised servers running blogs. Through Planet Debian and Planet Linux Australia I could be reading 100 or more blogs which are run in the same manner as mine, and in a typical day I probably directly visit a dozen blogs that are run in such a manner. This could add up to about 50 seconds of server time per day for blog reading.

Home Servers

I have a file server at home which is also a desktop system for my wife. In terms of buying and running systems that doesn’t count as an extra server as she needs to have a desktop system anyway and using bigger disks doesn’t make much difference to the power use (7W is the difference between a RAID-1 server and a single disk desktop system). I also have a PC running as an Internet gateway and firewall.

Running servers at home isn’t making that much of an impact on my computer power use as there is only one dedicated 24*7 server and that is reasonably low power. But having two desktop systems on 24*7 is a significant factor.

Where Power is Used/Wasted

No matter how things are counted or what numbers we make up it seems clear that having a desktop system running 24*7 is the biggest use of power that will be assigned to one person. Making PCs more energy efficient through better hardware design and better OS support for suspending would be the best way of saving energy. Nothing that can be done at the server side can compare.

Running a server that is only really used by three people is a significant waste by the standards of the NYT article. Of course the thing is that Hetzner is really cheap (and I’m not contributing any money) so there isn’t a great incentive to be more efficient in this regard. Even if I allocate some portion of the server use to blog readers then there’s still a significant portion that has to be assigned to me for my choice to not use a managed service. Running a mail server for a small number of users and running a DNS server and a SE Linux development server are all ways of wasting more power. But the vast majority of the population don’t have the skills to run their own server directly, so this sort of use doesn’t affect the average power use for the population.

Nothing else really matters. No matter what Google does in terms of power use, it just doesn’t matter when compared to all the desktop systems running 24*7. Small companies may be less efficient, but that will be due to issues of how to share servers among more people and the fact that below a certain limit you can’t save money by using fewer resources, particularly if you pay people to develop software.

Conclusion

I blame Intel for most of the power waste. Android phones and tablets can do some amazing things, which is hardly surprising as by almost every measure they are more powerful than the desktop systems we were all using 10 years ago and by many measures they beat desktop systems from 5 years ago. The same technology should be available in affordable desktop systems.

I’d like to have a desktop system running Debian based on a multi-core ARM CPU that can drive a monitor at better than FullHD resolution and which uses so little power that it is passively cooled almost all the time. A 64bit ARM system with 8G of RAM, a GPU that can decode video (with full Linux driver support), and a fast SSD should compete well enough with typical desktop systems on performance while being quiet, reliable, and energy efficient.

Finally please note that most of this post relies on just making stuff up. I don’t think that this is wrong given the NYT article that started this. I also think that my estimates are good enough to draw some sensible conclusions.

Changing Phone Prices in Australia

18 months ago when I signed up with Virgin Mobile [1] the data transfer quotas were 200MB on the $29 per month plan and 1.5G on the $39 per month plan. About 4 months ago when I checked the prices the amounts of data had gone up on the same plans (2.25G for $39 per month from memory). Now $39 per month gets only 500MB! It seems that recently Virgin has significantly reduced their value for money.

Virgin does have an option to pay an extra $10 per month to get 2GB of data if you sign up for 24 months. That is reasonably good value; when I first signed up with Virgin I paid $39 per month to get extra data transfer, but now I could use the $29 plan for phone access and spend $10 per month on data with a Wifi gateway device.

On top of this the phone plans aren’t nearly as good value as they used to be. When I signed up with Virgin the Sony Ericsson Xperia X10 was “free” on a $29 plan, and at the time that was a hell of a phone. I believe that the Samsung Galaxy S3 currently occupies a similar market position to the one that the Xperia X10 did 18 months ago, so it shouldn’t be much more expensive. But Virgin are offering the Galaxy S3 for $21 extra per month over 24 months on the $29 plan, a total cost of ($29+$21)*24==$1200, while offering the same amount of calls and data transfer for $19 per month ($19*24==$456) when you don’t get a phone. That makes the price of a Galaxy S3 $1200-$456==$744, while Kogan [2] sells the same phone for $519 + postage!

The cheapest phone that Virgin is offering is a Galaxy S2 for $5 per month on a $29 plan, which when compared to $19 per month for the same plan without a phone makes the phone cost ($5+$10)*24==$360. Kogan sells the Galaxy S2 for $399, so there’s a possibility of a Virgin plan saving some money over buying a phone from Kogan. But given the choice of $360 for a Galaxy S2 from Virgin and the Kogan prices of $839 for a Galaxy Note 2, $349 for a Galaxy Nexus, $469 for a Galaxy Note, $529 for a Galaxy S3, and $219 for an HTC One V, I find it difficult to imagine that anyone would think that the $360 Galaxy S2 is the best option.
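
To make the handset pricing comparison explicit, here it is as a short script; the prices are the ones quoted above and will obviously change over time.

#!/usr/bin/python3
# Effective handset prices from the Virgin plan figures quoted above
# (AUD, 24 month contract).
MONTHS = 24
SIM_ONLY = 19                       # same calls and data on the $19/month SIM-only plan

def handset_cost(plan_with_phone):
    # Extra paid over 24 months compared to the SIM-only plan.
    return (plan_with_phone - SIM_ONLY) * MONTHS

print("Galaxy S3 via Virgin:", handset_cost(29 + 21))   # $744, vs Kogan's outright price
print("Galaxy S2 via Virgin:", handset_cost(29 + 5))    # $360, vs $399 outright at Kogan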

I’ve previously investigated dual-SIM phones for cheap calls and data [3] but they didn’t seem like good value at the time because the “free” phones offered by the telcos used to be a good deal. Now it seems that none of the telcos are offering good deals on phones, so for my needs the way to go would be to buy a Samsung Galaxy S3 or Samsung Galaxy Note 2 from Kogan and get the $19 plan from Virgin, probably with the $10 per month extra fee to get an extra 2GB of data. For my wife the best option would be to keep using the Xperia X10 on a $19 per month plan as she doesn’t have any problem with it that justifies spending hundreds of dollars.

I idly considered getting a portable Wifi-3G device to use a cheap pre-paid 3G data option ($10 per month) and a cheap phone plan without data (maybe $10 per month), but decided that it’s not worth the effort. The Virgin $19 plan gives me free calls to my wife and more calls to other numbers than I can use, and an extra $10 gives me all the data transfer I need. Using a Wifi-3G device would involve buying it and the hassle of carrying and using it; that wouldn’t save money for at least a year and would be annoying.

The sudden decrease in data quotas is a real concern though. It’s an indication that the telco cartel in Australia is pushing prices up, which is not a good sign. LTE is nice, but 3G with better quotas would be more generally useful to me.

What I REALLY Want from the NBN

Generally I haven’t had a positive attitude towards the NBN. It doesn’t seem likely to fulfill the claims of commercial success and would be a really bad thing to privatise anyway, and it hasn’t seemed to offer any great benefits either. The claim that it will enable lots of new technical developments which we can’t even imagine yet, which aren’t possible with 25Mb/s ADSL but which also don’t require more than the 100Mb/s speed of the NBN, has never convinced me.

But one thing it could really do well is to give better Internet access in remote areas, ideally with static or near-static IPv6 addresses (because we have already run out of IPv4 addresses). Currently 3G networks do all sorts of nasty NAT things to deal with the lack of IPv4 addresses, which causes a lot of needless pain if you have a server connected via 3G. One of the NBN plans is for wireless net access to remote homes; with some sanity among the people designing the network, such NBN connections would all have static IPv6 subnets as long as they don’t move.

I’m currently working on a project that involves servers on 3G links. I don’t have a lot of options on implementation due to hardware and software constraints. So if the ISPs using the NBN and the NBN itself (for the wireless part) could just give us all IPv6 static ranges then lots of problems would be solved.

Of course I don’t have high hopes for this. One of the many ways that the NBN has been messed up is in allowing the provision of lower speed connections. As an ADSL2+ speed NBN connection is the cheapest option, a lot of people will choose it. Therefore the organisations providing services will have to assume that most NBN customers have ADSL2+ speed, and thus they won’t provide services that take advantage of higher speeds.

The Most Important Things for Running a Reliable Internet Service

One of my clients is currently investigating new hosting arrangements. It’s a bit of a complex process because there are lots of architectural issues relating to things such as the storage and backup of some terabytes of data and some serious computation on that data. Among other options we are considering cheap servers from Hetzner’s EX range [1], which provide 3TB of RAID-1 storage per server along with reasonable CPU power and RAM, and Amazon EC2 [2]. Hetzner and Amazon aren’t the only companies providing services that can solve my client’s problems, but they both provide good value for what they offer and we have prior experience with them.

To add an extra complication my client did some web research on hosting companies and found that Hetzner wasn’t even in the list of reliable hosting companies (whichever list that was). This is in some ways not particularly surprising: Hetzner offers servers without a full management interface (you can’t see a serial console or a KVM, you merely get access to reset the machine) and the best value servers (the only servers to consider for many terabytes of data) have SATA disks, which presumably have a lower MTBF than SAS disks.

But I don’t think that this is a real problem. Even when hardware that’s designed for the desktop is run in a server room the reliability tends to be reasonable. My experience is that a desktop PC with two hard drives in a RAID-1 array will give a level of reliability in practice that compares very well to an expensive server with ECC RAM, redundant fans, redundant PSUs, etc.

My experience is that the most critical factor for server reliability is management. A server that is designed to be reliable can give very poor uptime if it is poorly maintained or if there is no rapid way of discovering and fixing problems. But a system that is designed to be cheap can give quite good uptime if it is well maintained and problems can be rapidly discovered and fixed.

A Brief Overview of Managing Servers

There are text books about how to manage servers, so obviously I can’t cover the topic in detail in a blog post, but here are some quick points. Note that I’m not claiming that this list includes everything; please add comments about anything particularly noteworthy that you think I’ve missed.

  1. For a server to be well managed it needs to be kept up to date, and it’s probably a good idea for management to have this on the list of things to do. A plan to check for necessary updates and apply them at fixed times (at least once a week) would be a good thing. My experience is that usually managers don’t have anything to do with this and sysadmins apply patches (or not) at their own whim.
  2. It is ideal for someone to know how all the software works. Every piece of software that’s running should either have come from a source that provides some degree of support (e.g. a Linux distribution) or be maintained by someone who knows it well. When you install custom software from people who later become unavailable it puts the reliability of the entire system at risk; if anything breaks then you won’t be able to get it fixed quickly.
  3. It should be possible to rapidly discover problems; having a client phone you to tell you that your web site is offline is a bad thing. Ideally you will have software like Nagios monitoring the network and reporting problems via an SMS gateway service such as ClickaTell.com (a minimal sketch of such a check follows this list). I am not sure that Nagios is the best network monitoring system or that ClickaTell is the best SMS gateway, but they have both worked well in my experience. If you think that there are better options for either of those then please write a comment.
  4. It should be possible to rapidly fix problems. That means that a sysadmin must be available 24*7 to respond to SMS and you must have a backup sysadmin for when the main person takes a holiday, or ideally two backup sysadmins so that if one is on holiday and another has an emergency then problems can still be fixed. Another thing to consider is that an increasing number of hotels, resorts, and cruise ships are providing net access. So you could decrease your need for backup sysadmins if you give a holiday bonus to a sysadmin who uses a hotel, resort, or cruise ship that has good net access. ;)
  5. If it seems likely that there may be some staff changes then it’s a really good idea to hire a potential replacement on a casual basis so that they can learn how things work. There have been a few occasions when I started a sysadmin contract after the old sysadmin ceased being on speaking terms with the company owner, which made it difficult for me to learn what was going on.
  6. If your network is in any way complex (i.e. it needs some skill to manage) then it will probably be impossible to hire someone who has experience in all the areas of technology at a salary you are prepared to pay. So you should assume that whoever you hire will do some learning on the job. This isn’t necessarily a problem but it is something that needs to be considered. If you use some unusual hardware or software and want it to run reliably then you should have a spare system for testing so that the types of mistakes which are typically made in the learning process are not made on your production network.
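
As mentioned in point 3, below is a minimal sketch of the kind of check a monitoring system performs. The URL is just an example and send_alert() is a placeholder for whatever your SMS gateway provides; in practice Nagios (or similar) with a real gateway does this far better than a hand-rolled script.

#!/usr/bin/python3
# Minimal sketch of point 3 above: notice that a service is down and alert
# someone. The URL is an example and send_alert() is a placeholder for your
# SMS gateway or paging service; a real deployment would use Nagios or
# similar rather than a hand-rolled script.
import urllib.error
import urllib.request

SITE = 'https://example.com/'   # example URL

def send_alert(message):
    # Placeholder: hook this up to your SMS gateway's API.
    print('ALERT:', message)

def check(url):
    try:
        with urllib.request.urlopen(url, timeout=15) as resp:
            if resp.status != 200:
                send_alert(f'{url} returned HTTP {resp.status}')
    except (urllib.error.URLError, OSError) as e:
        send_alert(f'{url} is unreachable: {e}')

if __name__ == '__main__':
    check(SITE)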

Conclusion

If you have a business which depends on running servers on the Internet and you don’t do all the things in the above list then the reliability of a service like Hetzner probably isn’t going to be an issue at all.

Good Riddance to Flash

The Age reports that Adobe has ceased development of Flash for mobile systems [1]. This is described as leading to an improvement in the web experience for iPhone and iPad users, but the more important thing is that it will improve the experience for everyone. The Flash plugin has always been a resource hog and has never been properly supported on all the common platforms. Also most sites that use Flash never needed to as there were other ways of getting equal or better results without it.

Now that Flash is officially on the path to obsolescence everyone can move to HTML5.

I use the following configuration directives in my Squid configuration to block Flash, and I selectively enable it for the few web sites which use it for useful things. Blocking Flash in this manner means that desktop systems which have the Flash plugin installed probably won’t be vulnerable to Flash security flaws, as it is unlikely that one of the few sites that I permit to send Flash to my network would end up hosting hostile Flash code.

# URLs ending in "swf" or containing "swf?"
acl swf url_regex swf$ swf\?
# requests with a declared Flash MIME type
acl swftype req_mime_type -i ^application/x-shockwave-flash$
# deny both; "http_access allow" rules for trusted sites go before these
http_access deny swf
http_access deny swftype

Wikipedia has a comparison of HTML5 and Flash. One interesting benefit that is claimed for Flash is that it allows DRM, supports inserting commercials, and in other ways gives the user an experience that they don’t want. It seems that to put some more nails in the Flash coffin we need tools to suck video from Flash sites regardless of DRM and skip the commercials.

Dual SIM Phones vs Amaysim vs Contract for Mobile Phones

Currently Dick Smith is offering two dual-SIM mobile phones for sale in Australia. One is the LG T510 for $99, but it only supports GSM on each SIM. This might be a good phone for someone who needs to receive both work and personal calls and doesn’t want to carry two phones, but the lack of 3G support is a major limit on what can be done with the phone.

The other phone is the Huawei U8520 which supports 3G on one SIM and GSM on the other. It costs $249, runs Android 2.2, has a 320*480 display, and a 3.2 megapixel camera. For comparison the LG Optimus One is a single-SIM phone with similar specs that only costs $179 from TeleChoice, so there is a 40% price premium to pay for a dual-SIM phone.

When I first heard about dual-SIM phones (before they were commonly and cheaply available in Australia) I thought that they would be a good option for using a cheap 3G broadband SIM along with a SIM for voice calls from one of the cheaper pre-paid mobile companies. But the helpful guy at Dick Smith informed me that Amaysim offers good pre-paid deals for voice and data [1]. With 10G of data quota to be used in one year for $100 and reasonable rates on voice calls it should be easy to keep under $200 per annum if you don’t make many calls.

Rene Cunningham has described how to use a pre-paid data-only plan on the Optus network with VOIP for most outbound calls [2]. To do that he pays $30 every 6 months to keep his old number for inbound calls, for which he gets $30 of credit; with Amaysim you can pay $10 every 3 months to get the same result, which is $40 per annum instead of $60. As Amaysim are on the Optus network the result should be the same as long as Amaysim have enough capacity for IP data transfer. Rene uses an iPhone but the same result can be achieved with an Android phone.

If using VOIP reduced the cost of running a phone on Amaysim to something like $160 per annum (with a possibly optimistic aim of $20 per annum for outbound VOIP calls) then over two years that could save $376 over a $29 per month contract. A Virgin $29 contract includes a Sony Ericsson Xperia X10, which is a fairly nice phone if you can deal with the short battery life and the fact that it’s locked to Android 2.1. An Xperia X10 can be bought on Ebay for less than $376, so once the value of the phone is taken into account the hassle of setting up VOIP and Amaysim would be more effort than it’s worth to save perhaps $100 over two years.
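
As a worked example of that two year comparison (with all the caveats about made-up numbers):

#!/usr/bin/python3
# Two year cost comparison using the figures above (AUD, all estimates).
YEARS = 2
virgin_contract = 29 * 12 * YEARS            # $29/month contract: $696
amaysim = (100 + 40 + 20) * YEARS            # data pack + number keeping + VOIP guess: $320

print("Virgin $29 contract:", virgin_contract)
print("Amaysim + VOIP     :", amaysim)
print("Saving             :", virgin_contract - amaysim)   # $376 over two years
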

A couple of my relatives have phone contracts that are about to expire. I’m not going to set them up on VOIP as it’s too much effort for too little benefit and the dual-SIM phone really isn’t an option. I will recommend Virgin contracts with Xperia X10 phones or Amaysim with their existing phones (2yo smart phones that are still quite usable).