QEMU for ARM Processes

I’m currently doing some embedded work on ARM systems. Having a virtual ARM environment is of course helpful. For the i586 class embedded systems that I run it’s very easy to set up a virtual environment: I just run a chroot with systemd-nspawn using the --personality=x86 option. I run it on my laptop for my own development and on a server my client owns so that they can deal with the “hit by a bus” scenario. I also occasionally run KVM virtual machines to test the boot images of i586 embedded systems (they use GRUB etc and are just like any other 32bit Intel system).

ARM systems have a different boot setup, with a U-Boot loader that is fairly tightly coupled with the kernel. ARM systems also tend to have more unusual hardware choices. While the i586 embedded systems I support turned out to work well with standard Debian kernels (even though the reference OS for the hardware has a custom kernel), the ARM systems need a special kernel. I spent a reasonable amount of time playing with QEMU and was unable to make it boot from a U-Boot ARM image. The Google searches I performed didn’t turn up anything that helped me. If anyone has good references for getting QEMU to work for an ARM system image on an AMD64 platform then please let me know in the comments. While I am currently surviving without that facility, it would be a handy thing to have if it was relatively easy to do (my client isn’t going to pay me to spend a week working on this and I’m not inclined to devote that much of my hobby time to it).

QEMU for Process Emulation

I’ve given up on emulating an entire system and now I’m using a chroot environment with systemd-nspawn.

The package qemu-user-static has statically linked programs for emulating various CPUs on a per-process basis. You can run one of these as “/usr/bin/qemu-arm-static ./statically-linked-arm-program“. The Debian package qemu-user-static uses the binfmt_misc support in the kernel to automatically run /usr/bin/qemu-arm-static when an ARM binary is executed. So if you have copied the image of an ARM system to /chroot/arm you can run commands like the following to enter the chroot:

cp /usr/bin/qemu-arm-static /chroot/arm/usr/bin/qemu-arm-static
chroot /chroot/arm /bin/bash

Then you can create a full virtual environment with “/usr/bin/systemd-nspawn -D /chroot/arm” if you have systemd-container installed.
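Before relying on the chroot it’s worth a couple of sanity checks. This is just a sketch: the paths are the example ones from above, and update-binfmts comes from Debian’s binfmt-support package.

```shell
# Confirm the kernel will hand ARM binaries to the emulator
update-binfmts --display qemu-arm

# Run an ARM binary from the chroot; if emulation is broken this fails
chroot /chroot/arm /bin/true && echo "ARM emulation in chroot works"
```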

Selecting the CPU Type

There is a huge range of ARM CPUs with different capabilities. How this compares to the range of x86 and AMD64 CPUs depends on how you are counting (the i5 system I’m using now has 76 CPU capability flags). The default CPU type for qemu-arm-static is armv7l and I need to emulate a system with an armv5tejl. Setting the environment variable QEMU_CPU=pxa250 gives me armv5tel emulation.
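One way to check which CPU is being emulated is to run uname inside the chroot from the previous section (qemu-user reports the emulated machine type, so the output reflects QEMU_CPU):

```shell
# Default CPU type
chroot /chroot/arm /bin/uname -m

# With the override it should report armv5tel as described above
QEMU_CPU=pxa250 chroot /chroot/arm /bin/uname -m
```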

The ARM Architecture Wikipedia page [2] says that in armv5tejl the T stands for Thumb instructions (which I don’t think Debian uses), the E stands for DSP enhancements (which probably isn’t relevant for me as I’m only doing integer maths), the J stands for supporting special Java instructions (which I definitely don’t need) and I’m still trying to work out what L means (comments appreciated).

So it seems clear that the armv5tel emulation provided by QEMU_CPU=pxa250 will do everything I need for building and testing ARM embedded software. The issue is how to enable it. For a user shell I can just put export QEMU_CPU=pxa250 in .login or something, but I want to emulate an entire system (cron jobs, ssh logins, etc).

I’ve filed Debian bug #870329 requesting a configuration file for this [1]. If I put such a configuration file in the chroot everything would work as desired.

To get things working in the meantime I wrote the below wrapper for /usr/bin/qemu-arm-static that calls /usr/bin/qemu-arm-static.orig (the renamed version of the original program). It’s ugly (I would use a config file if I needed to support more than one type of CPU) but it works.

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
  if(setenv("QEMU_CPU", "pxa250", 1))
  {
    printf("Can't set $QEMU_CPU\n");
    return 1;
  }
  execv("/usr/bin/qemu-arm-static.orig", argv);
  printf("Can't execute \"%s\" because of qemu failure\n", argv[0]);
  return 1;
}
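Installing the wrapper is then a matter of renaming the original binary and putting the compiled wrapper in its place. The following is a sketch: the file name wrapper.c and the static build flag are my assumptions, the .orig rename matches what the wrapper expects.

```shell
# Build the wrapper; static linking avoids shared library surprises
gcc -static -O2 -o qemu-arm-wrapper wrapper.c

# Rename the original so the wrapper can execv() it, then install the wrapper
mv /usr/bin/qemu-arm-static /usr/bin/qemu-arm-static.orig
cp qemu-arm-wrapper /usr/bin/qemu-arm-static
chmod 755 /usr/bin/qemu-arm-static
```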

Running a Tor Relay

I previously wrote about running my SE Linux Play Machine over Tor [1] which involved configuring ssh to use Tor.

Since then I have installed a Tor hidden service for ssh on many systems I run for clients. The reason is that it is fairly common for them to allow a server to get a new IP address by DHCP or accidentally set their firewall to deny inbound connections. Without some sort of VPN this results in difficult phone calls talking non-technical people through the process of setting up a tunnel or discovering an IP address. While I can run my own VPN for them I don’t want their infrastructure tied to mine and they don’t want to pay for a 3rd party VPN service. Tor provides a free VPN service and works really well for this purpose.

As I believe in giving back to the community I decided to run my own Tor relay. I have no plans to ever run a Tor Exit Node because that involves more legal problems than I am willing or able to deal with. A good overview of how Tor works is the EFF page about it [2]. The main point of a “Middle Relay” (or just “Relay”) is that it only sends and receives encrypted data from other systems. As the Relay software (and the sysadmin if they choose to examine traffic) only sees encrypted data without any knowledge of the source or final destination the legal risk is negligible.

Running a Tor relay is quite easy to do. The Tor project has a document on running relays [3], which basically involves changing 4 lines in the torrc file and restarting Tor.

If you are running on Debian you should install the package tor-geoipdb to allow Tor to determine where connections come from (and to not whinge in the log files).

ORPort [IPV6ADDR]:9001

If you want to use IPv6 then you need a line like the above with IPV6ADDR replaced by the address you want to use. Currently Tor only supports IPv6 for connections between Tor servers, and only for data transfer, not the directory services.
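For reference, a minimal non-exit relay configuration in torrc looks something like the following (the nickname, contact address, and bandwidth figures are placeholder examples, not recommendations):

```
ORPort 9001
Nickname ExampleRelay
ContactInfo tor-admin@example.com
RelayBandwidthRate 1 MBytes
ExitPolicy reject *:*
```

The ExitPolicy line makes sure the relay never acts as an Exit Node.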

Data Transfer

I currently have 2 systems running as Tor relays, both well connected in a European DC, and each is transferring about 10GB of data per day, which isn’t a lot by server standards. I don’t know if there is a sufficient number of relays around the world that each relay’s share of the load is small, or if there is some geographic dispersion algorithm which determined that there are too many relays in operation in that region.

Apache Mesos on Debian

I decided to try packaging Mesos for Debian/Stretch. I had a spare system with an i7-930 CPU, 48G of RAM, and SSDs to use for building. The i7-930 isn’t really fast by today’s standards, but 48G of RAM and SSD storage mean that overall it’s a decent build system – faster than most systems I run (for myself and for clients) and probably faster than most systems used by Debian Developers for build purposes.

There’s a GitHub issue about the lack of an upstream package for Debian/Stretch [1]. That upstream issue could probably be worked around by adding Jessie sources to the APT sources.list file, but a package for Stretch is what is needed anyway.

Here is the documentation on building for Debian [2]. The list of packages it gives as build dependencies is incomplete, it also needs zlib1g-dev libapr1-dev libcurl4-nss-dev openjdk-8-jdk maven libsasl2-dev libsvn-dev. So BUILDING this software requires Java + Maven, Ruby, and Python along with autoconf, libtool, and all the usual Unix build tools. It also requires the FPM (Fucking Package Management) tool, I take the choice of name as an indication of the professionalism of the author.

Building the software on my i7 system took 79 minutes which includes 76 minutes of CPU time (I didn’t use the -j option to make). At the end of the build it turned out that I had mistakenly failed to install the Fucking Package Management “gem” and it aborted. At this stage I gave up on Mesos, the pain involved exceeds my interest in trying it out.

How to do it Better

One of the aims of Free Software is that bugs are more likely to get solved if many people look at them. There aren’t many people who will devote 76 minutes of CPU time on a moderately fast system to investigate a single bug. To deal with this software should be prepared as components. An example of this is the SE Linux project which has 13 source modules in the latest release [3]. Of those 13 only 5 are really required. So anyone who wants to start on SE Linux from source (without considering a distribution like Debian or Fedora that has it packaged) can build the 5 most important ones. Also anyone who has an issue with SE Linux on their system can find the one source package that is relevant and study it with a short compile time. As an aside I’ve been working on SE Linux since long before it was split into so many separate source packages and know the code well, but I still find the separation convenient – I rarely need to work on more than a small subset of the code at one time.

The requirement of Java, Ruby, and Python to build Mesos could be partly due to language bindings for calling Mesos interfaces from Ruby and Python. One solution to that is to have the C libraries and header files to call Mesos and have separate packages that depend on those libraries and headers to provide the bindings for other languages. Another solution is to have autoconf detect that some languages aren’t installed and just not try to compile bindings for them (this is one of the purposes of autoconf).

The use of a tool like Fucking Package Management means that you don’t get help from experts in the various distributions in making better packages. When there is a FOSS project with a debian subdirectory that makes barely functional packages then you will be likely to have an experienced Debian Developer offer a patch to improve it (I’ve offered patches for such things on many occasions). When there is a FOSS project that uses a tool that is never used by Debian developers (or developers of Fedora and other distributions) then the only patches you will get will be from inexperienced people.

A software build process should not download anything from the Internet. The source archive should contain everything that is needed and there should be dependencies for external software. Any downloads from the Internet need to be protected from MITM attacks, which means that a responsible software developer has to read through the build system and make sure that appropriate PGP signature checks etc are performed. It could be that the files that the Mesos build downloaded from the Apache site had appropriate PGP checks performed – but it would take me extra time and effort to verify this and I can’t distribute software without being sure of this. Also reproducible builds are one of the latest things we aim for in the Debian project; this means we can’t just download files from web sites because the next build might get a different version.

Finally the fpm (Fucking Package Management) tool is a Ruby Gem that has to be installed with the “gem install” command. Any time you specify a gem install command you should include the -v option to ensure that everyone is using the same version of that gem, otherwise there is no guarantee that people who follow your documentation will get the same results. Also a quick Google search didn’t indicate whether gem install checks PGP keys or verifies data integrity in other ways. If I’m going to compile software for other people to use I’m concerned about getting unexpected results with such things. A Google search indicates that Ruby people were worried about such things in 2013 but doesn’t indicate whether they solved the problem properly.
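A version-pinned install looks like the following (the version number here is purely illustrative, I haven’t checked which fpm version Mesos expects):

```shell
# Pin fpm to an exact version so everyone building the package gets the same tool
gem install fpm -v 1.4.0
```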

Forking Mon and DKIM with Mailing Lists

I have forked the “Mon” network/server monitoring system. Here is a link to the new project page [1]. There hasn’t been an upstream release since 2010 and I think we need more frequent releases than that. I plan to merge as many useful monitoring scripts as possible and support them well. All Perl scripts will use strict and use other best practices.

The first release of etbemon is essentially the same as the last release of the mon package in Debian. This is because I started work on the Debian package (almost all the systems I want to monitor run Debian) and as I had been accepted as a co-maintainer of the Debian package I put all my patches into Debian.

It’s probably not a common practice for someone to fork upstream of a package soon after becoming a co-maintainer of the Debian package. But I believe that this is in the best interests of the users. I presume that there are other collections of patches out there and I hope to merge them so that everyone can get the benefits of features and bug fixes that have been kept separate due to a lack of upstream releases.

Last time I checked mon wasn’t in Fedora. I believe that mon has some unique features for simple monitoring that would be of benefit to Fedora users and would like to work with anyone who wants to maintain the package for Fedora. I am also interested in working with any other distributions of Linux and with non-Linux systems.

While setting up the mailing list for etbemon I wrote an article about DKIM and mailing lists (primarily Mailman) [2]. This explains how to setup Mailman for correct operation with DKIM and also why that seems to be the only viable option.

More KVM Modules Configuration

Last year I blogged about blacklisting a video driver so that KVM virtual machines didn’t go into graphics mode [1]. Now I’ve been working on some other things to make virtual machines run better.

I use the same initramfs for the physical hardware as for the virtual machines. So I need to remove modules that are needed for booting the physical hardware from the VMs as well as other modules that get dragged in by systemd and other things. One significant saving from this is that I use BTRFS for the physical machine and the BTRFS driver takes 1M of RAM!

The first thing I did to reduce the number of modules was to edit /etc/initramfs-tools/initramfs.conf and change “MODULES=most” to “MODULES=dep”. This significantly reduced the number of modules loaded and also stopped the initramfs from probing for a non-existent floppy drive, which added about 20 seconds to the boot. Note that this will result in your initramfs not supporting different hardware. So if you plan to take a hard drive out of your desktop PC and install it in another PC this could be bad for you, but for servers it’s OK as that sort of upgrade is uncommon for servers and only done with some planning (such as creating an initramfs just for the migration).
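The change and the rebuild can be done as follows (standard initramfs-tools commands on Debian):

```shell
# Include only the modules the running hardware needs instead of most modules
sed -i 's/^MODULES=most/MODULES=dep/' /etc/initramfs-tools/initramfs.conf

# Rebuild the initramfs for all installed kernels so the change takes effect
update-initramfs -u -k all
```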

I put the following rmmod commands in /etc/rc.local to remove modules that are automatically loaded:
rmmod btrfs
rmmod evdev
rmmod lrw
rmmod glue_helper
rmmod ablk_helper
rmmod aes_x86_64
rmmod ecb
rmmod xor
rmmod raid6_pq
rmmod cryptd
rmmod gf128mul
rmmod ata_generic
rmmod ata_piix
rmmod i2c_piix4
rmmod libata
rmmod scsi_mod

In /etc/modprobe.d/blacklist.conf I have the following lines to stop drivers being loaded. The first line is to stop the video mode being set and the rest are just to save space. One thing that inspired me to do this is that the parallel port driver gave a kernel error when it loaded and tried to access non-existent hardware.
blacklist bochs_drm
blacklist joydev
blacklist ppdev
blacklist sg
blacklist psmouse
blacklist pcspkr
blacklist sr_mod
blacklist acpi_cpufreq
blacklist cdrom
blacklist tpm
blacklist tpm_tis
blacklist floppy
blacklist parport_pc
blacklist serio_raw
blacklist button

On the physical machine I have the following in /etc/modprobe.d/blacklist.conf. Most of this is to prevent loading of filesystem drivers when making an initramfs. I do this because I know there’s never going to be any need for CDs, parallel devices, graphics, or strange block devices in a server room. I wouldn’t do any of this for a desktop workstation or laptop.
blacklist ppdev
blacklist parport_pc
blacklist cdrom
blacklist sr_mod
blacklist nouveau

blacklist ufs
blacklist qnx4
blacklist hfsplus
blacklist hfs
blacklist minix
blacklist ntfs
blacklist jfs
blacklist xfs

SE Linux in Debian/Stretch

Debian/Stretch has been frozen. Before the freeze I got almost all the bugs in policy fixed, both bugs reported in the Debian BTS and bugs that I know about. This is going to be one of the best Debian releases for SE Linux ever.

Systemd with SE Linux is working nicely. The support isn’t as good as I would like and there is still work to be done for systemd-nspawn. But it’s close enough that anyone who needs to use it can use audit2allow to generate the extra rules needed. Systemd-nspawn is not used by default and it’s not something that a new Linux user is going to use. I think that expert users who are capable of using such features are capable of doing the extra work to get them going.

In terms of systemd-nspawn and some other rough edges, the issue is the difference between writing policy for a single system vs writing policy that works for everyone. If you write policy for your own system you can allow access for a corner case without a lot of effort. But if I wrote policy to allow access for every corner case then they might add up to a combination that can be exploited. I don’t recommend blindly adding the output of audit2allow to your local policy (be particularly wary of access to shadow_t and write access to etc_t, lib_t, etc). But OTOH if you have a system that’s running in enforcing mode that happens to have one daemon with more access than is ideal then all the other daemons will still be restricted.

As with previous releases I plan to keep releasing updates to policy packages in my own apt repository. I’m also considering releasing policy source updates that can be applied to existing Stretch systems. So if you want to run the official Debian packages but need updates that came after Stretch then you can get them. Suggestions on how to distribute such policy source are welcome.

Please enjoy SE Linux on Stretch. It’s too late for most bug reports regarding Stretch as most of them won’t be sufficiently important to justify a Stretch update. The vast majority of SE Linux policy bugs are issues of denying wanted access, not permitting unwanted access (so not a security issue), and can be easily fixed by local configuration, so it’s really difficult to make a case for an update to Stable. But feel free to send bug reports for Buster (Stretch+1).

Video Mode and KVM

I recently changed my KVM servers to use the kernel command-line parameter nomodeset for the virtual machine kernels so that they don’t try to go into graphics mode. I do this because I don’t have X11 or VNC enabled and I want a text console to use with the -curses option of KVM. Without the nomodeset KVM just says that it’s in 1024*768 graphics mode and doesn’t display the text.

Now my KVM server running Debian/Unstable has had its virtual machines start going into graphics mode in spite of the nomodeset parameter. It seems that an update to QEMU has added a new virtual display device which recent kernels from Debian/Unstable support with the bochs_drm driver, and that driver apparently doesn’t respect nomodeset.

The solution is to create a file named /etc/modprobe.d/blacklist.conf with the contents “blacklist bochs_drm” and now my virtual machines have a usable plain-text console again! This blacklist method works for all video drivers, you can blacklist similar modules for the other virtual display hardware. But it would be nice if the one kernel option would cover them all.

Is a Thinkpad Still Like a Rolls-Royce

For a long time the Thinkpad has been widely regarded as the “Rolls-Royce of laptops”. Since 2003 one could argue that Rolls-Royce is no longer the Rolls-Royce of cars [1]. The way that IBM sold the Think business unit to Lenovo and the way that Lenovo is producing both Thinkpads and cheaper Ideapads is somewhat similar to the way the Rolls-Royce trademark and car company were separately sold to companies that are known for making cheaper cars.

Sam Varghese has written about his experience with Thinkpads and how he thinks it’s no longer the Rolls-Royce of laptops [2]. Sam makes some reasonable points to support this claim (one of which only applies to touchpad users – not people like me who prefer the Trackpoint), but I think that the real issue is whether it’s desirable to have a laptop that could be compared to a Rolls-Royce nowadays.


The Rolls-Royce car company is known for great reliability and support as well as features that other cars lack (mostly luxury features). The Thinkpad marque (both before and after it was sold to Lenovo) was also known for great support. You could take a Thinkpad to any service center anywhere in the world and if the serial number indicated that it was within the warranty period it would be repaired without any need for paperwork. The Thinkpad service centers never had any issue with repairing a Thinkpad that lacked a hard drive just as long as the problem could be demonstrated. It was also possible to purchase an extended support contract at any time which covered all repairs including motherboard replacement. I know that not everyone had as good an experience as I had with Thinkpad support, but I’ve been using them since 1998 without problems – which is more than I can say for most hardware.

Do we really need great reliability from laptops nowadays? When I first got a laptop hardly anyone I knew owned one. Nowadays laptops are common. Having a copy of important documents on a USB stick is often a good substitute for a reliable laptop; when you are in an environment where most people own laptops it’s usually not difficult to find someone who will let you use theirs for a while. I think that there is a place for a laptop with RAID-1 and ECC RAM. It’s a little-known fact that Thinkpads have a long history of supporting the replacement of a CD/DVD drive with a second hard drive (I don’t know if this is still supported) but AFAIK they have never supported ECC RAM.

My first Thinkpad cost $3,800. In modern money that would be something like $7,000 or more. For that price you really want something that’s well supported to protect the valuable asset. Sam complains about his new Thinkpad costing more than $1000 and needing to be replaced after 2.5 years. Mobile phones start at about $600 for the more desirable models (IE anything that runs Pokemon Go) and the new Google Pixel phones range from $1,079 to $1,419. Phones aren’t really expected to be used for more than 2.5 years. Phones are usually impractical to service in any way so for most of the people who read my blog (who tend to buy the more expensive hardware) they are pretty much a disposable item costing $600+. I previously wrote about a failed Nexus 5 and the financial calculations for self-insuring an expensive phone [3]. I think there’s no way that a company can provide extended support/warranty while making a profit and offering a deal that’s good value to customers who can afford to self-insure. The same applies for the $499 Lenovo Ideapad 310 and other cheaper Lenovo products. Thinkpads (the higher end of the Lenovo laptop range) are slightly more expensive than the most expensive phones but they also offer more potential for the user to service them.


My first Thinkpad was quite underpowered when compared to desktop PCs, it had 32M of RAM and could only be expanded to 96M at a time when desktop PCs could be expanded to 128M easily and 256M with some expense. It had a 800*600 display when my desktop display was 1280*1024 (37% of the pixels). Nowadays laptops usually start at about 8G of RAM (with a small minority that have 4G) and laptop displays start at about 1366*768 resolution (51% of the pixels in a FullHD display). That compares well to desktop systems and also is capable of running most things well. My current Thinkpad is a T420 with 8G of RAM and a 1600*900 display (69% of FullHD), it would be nice to have higher resolution but this works well and it was going cheap when I needed a new laptop.

Modern Thinkpads don’t have some of the significant features that older ones had. The legendary Butterfly Keyboard is long gone, killed by the wide displays that economies of scale and 16:9 movies have forced upon us. It’s been a long time since Thinkpads had some of the highest resolution displays and since anyone really cared about it (you only need pixels to be small enough that you can’t see them).

For me one of the noteworthy features of the Thinkpads has been the great keyboard. Mechanical keys that feel like a desktop keyboard. It seems that most Thinkpads are getting the rubbery keyboard design made popular by Apple. I guess this is due to engineering factors in designing thin laptops and the fact that most users don’t care.

Matthew Garrett has blogged about the issue of Thinkpad storage configured as “RAID mode” without any option to disable it [4]. This is an annoyance (which incidentally has been worked around) and there are probably other annoyances like it. Designing hardware and an OS are both complex tasks. The interaction between Windows and the hardware is difficult to get right from both sides and the people who design the hardware often don’t think much about Linux support. It has always been this way, the early Thinkpads had no Linux support for special IBM features (like fan control) and support for ISA-PnP was patchy. It is disappointing that Lenovo doesn’t put a little extra effort into making sure that Linux works well on their hardware and this might be a reason for considering another brand.

Service Life

I bought my current Thinkpad T420 in October 2013 [5]. It’s more than 3 years old and has no problems even though I bought it refurbished with a reduced warranty. This is probably the longest I’ve had a Thinkpad working well, which seems to be a data point against the case that modern Thinkpads aren’t as good.

I bought a T61 in February 2010 [6], it started working again (after mysteriously not working for a month in late 2013) and apart from the battery lasting 5 minutes and a CPU cooling problem it still works well. If that Thinkpad had cost $3,800 then I would have got it repaired, but as it cost $796 (plus the cost of a RAM upgrade) and a better one was available for $300 it wasn’t worth repairing.

In the period 1998 to 2010 I bought a 385XD, a 600E, a T21, a T43, and a T61 [6]. During that time I upgraded laptops 4 times in 12 years (I don’t have good records of when I bought each one). So my average Thinkpad has lasted 3 years. The first 2 were replaced to get better performance, the 3rd was replaced when an employer assigned me a Thinkpad (and sold it to me when I left), and the 4th and 5th were replaced due to hardware problems that could not be fixed economically given the low cost of replacement.


Thinkpads possibly don’t have the benefits over other brands that they used to have. But in terms of providing value for the users it seems that they are much better than they used to be. Until I wrote this post I didn’t realise that I’ve broken a personal record for owning a laptop. It just keeps working and I hadn’t even bothered looking into the issue. For some devices I track how long I’ve owned them while thinking “can I justify replacing it yet”, but the T420 just does everything I want. The battery still lasts 2+ hours, which is a new record too; with every other Thinkpad I’ve owned the battery life has dropped to well under an hour within a year of purchase.

If I replaced this Thinkpad T420 now it would have cost me less than $100 per year (or $140 per year including the new SSD I installed this year); that’s about 3 times better than any previous laptop! I wouldn’t feel bad about replacing it as I’ve definitely got great value for money from it. But I won’t replace it as it’s doing everything I want.

I’ve just realised that by every measure (price, reliability, and ability to run all software I want to run) I’ve got the best Thinkpad I’ve ever had. Maybe it’s not like a Rolls-Royce, but I’d much rather drive a 2016 Tesla than a 1980 Rolls-Royce anyway.

Another Broken Nexus 5

In late 2013 I bought a Nexus 5 for my wife [1]. It’s a good phone and I generally have no complaints about the way it works. In the middle of 2016 I had to make a warranty claim when the original Nexus 5 stopped working [2]. Google’s warranty support was OK; the call-back was good but unfortunately there was some confusion which delayed the replacement.

Once the confusion about the IMEI was resolved the warranty replacement method was to bill my credit card for a replacement phone and reverse the charge if/when they got the original phone back and found it to have a defect covered by warranty. This policy meant that I got a new phone sooner as they didn’t need to get the old phone first. This is a huge benefit for defects that don’t make the phone unusable as you will never be without a phone. Also if the user determines that the breakage was their fault they can just refrain from sending in the old phone.

Today my wife’s latest Nexus 5 developed a problem. It turned itself off and went into a reboot loop when connected to the charger. Also one of the clips on the rear case had popped out, and other clips popped out when I pushed it back in. It appears (without opening the phone) that the battery may have swollen (which is a common symptom of battery problems). The phone is slightly less than 3 years old, so if I had got the extended warranty then I would have got a replacement.

Now I’m about to buy a Nexus 6P (because the Pixel is ridiculously expensive) which is $700 including postage. Kogan offers me a 3 year warranty for an extra $108. Obviously in retrospect spending an extra $100 would have been a benefit for the Nexus 5. But the first question is whether the new phone is going to have a probability greater than 1/7 of failing due to something other than user error in years 2 and 3. For an extended warranty to provide any benefit the phone has to have a problem that doesn’t occur in the first year (or a problem in a replacement phone after the first phone was replaced). The phone also has to not be lost, stolen, or dropped in a pool by its owner. While my wife and I have a good record of not losing or breaking phones the probability of it happening isn’t zero.

The Nexus 5 that just died can be replaced for 2/3 of the original price. The value of the old Nexus 5 to me is less than 2/3 of the original price as buying a newer better phone is the option I want. The value of an old phone to me decreases faster than the replacement cost because I don’t want to buy an old phone.

For an extended warranty to be a good deal for me I think it would have to cost significantly less than 1/10 of the purchase price due to the low probability of failure in that time period and the decreasing value of a replacement outdated phone. So even though my last choice to skip an extended warranty ended up not paying out I expect that overall I will be financially ahead if I keep self-insuring, and I’m sure that I have already saved money by self-insuring all my previous devices.
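The break-even point can be sketched with a quick calculation using the $700 phone and $108 warranty figures from above:

```shell
# The warranty only pays off if the chance of a covered failure in years 2-3
# exceeds the warranty price as a fraction of the replacement cost
awk 'BEGIN { printf "break-even failure probability: %.3f (about 1 in %.1f)\n", 108/700, 700/108 }'
```

That comes to roughly 0.15, or about 1 in 6.5, close to the 1/7 figure above; the real threshold is lower still once the declining value of a replacement phone is taken into account.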

Improving Memory

I’ve just attended a lecture about improving memory, mostly about mnemonic techniques. I’m not against learning techniques to improve memory and I think it’s good to teach kids a variety of things, many of which won’t be needed until they are older, as you never know which kids will need which skills. But I disagree with the assertion that we are losing valuable skills due to “digital amnesia”.

Nowadays we have programs to check spelling so we can avoid the effort of remembering to spell difficult words like mnemonic, calendar apps on our phones that link to addresses and phone numbers, and the ability to Google the world’s knowledge from the bathroom. So the question is, what do we need to remember?

For remembering phone numbers it seems that all we need is to remember numbers that we might call in the event of a mobile phone being lost or running out of battery charge. That would be a close friend or relative and maybe a taxi company (and 13CABS isn’t difficult to remember).

Remembering addresses (street numbers etc) doesn’t seem very useful in any situation. Remembering the way to get to a place is useful, and it seems to me that the way the navigation programs operate works against this. To remember a route you would want to travel the same way on multiple occasions and use a relatively simple route. The way that Google Maps tends to give more confusing routes (IE routes varying by the day and routes which take all shortcuts) works against this.

I think that spending time improving memory skills is useful, but it will either take time away from learning other skills that are more useful to most people nowadays or take time away from leisure activities. If improving memory skills is fun for you then it’s probably better than most hobbies (it’s cheap and provides some minor benefits in life).

When I was in primary school it was considered important to make kids memorise their “times tables”. I’m sure that memorising the multiplication of all numbers less than 13 is useful to some people, but I never felt a need to do it. When I was young I could multiply any pair of 2 digit numbers as quickly as most kids could remember the result. The big difference was that most kids needed a calculator to multiply any number by 13 which is a significant disadvantage.

What We Must Memorise

Nowadays the biggest memory issue is with passwords (the Correct Horse Battery Staple XKCD comic is worth reading [1]). Teaching mnemonic techniques for the purpose of memorising passwords would probably be a good idea – and would probably get more interest from the audience.

One interesting corner-case of passwords is ATM PINs. The Wikipedia page about PINs states that 4-12 digits can be used [2]. The 4 digit PIN was initially chosen because John Adrian Shepherd-Barron (who is credited with inventing the ATM) was convinced by his wife that 6 digits would be too difficult to memorise. The fact that hardly any banks outside Switzerland use more than 4 digits suggests that Mrs Shepherd-Barron had a point. The fact that this was decided in the 1960s proves that it’s not “digital amnesia”.

We also have to memorise how to use various supposedly user-friendly programs. If you observe an iPhone or Mac being used by someone who hasn’t used one before it becomes obvious that they really aren’t so user friendly and users need to memorise many operations. This is not a criticism of Apple, some tasks are inherently complex and require some complexity of the user interface. The limitations of the basic UI facilities become more obvious when there are operations like palm-swiping the screen for a screen-shot and a double-tap plus drag for a 1 finger zoom on Android.

What else do we need to memorise?