How to Choose a Free Software Mission

Jane McGonigal gave an interesting TED talk about how online gaming can make a better world [1]. One of her points is that there is no unemployment in games such as World of Warcraft; there is always a “world saving” mission available that is just within reach of your skill level – and no-one is assigned a mission that they can’t possibly complete. It seems to me that the free software development community is in a similar position: there are always “missions” available at all skill levels. Our challenge is to find ways to encourage people to accept those missions and to give them appropriate levels of support on their path to an “epic win“. Choosing a suitable mission is a particularly difficult problem as you often don’t know how hard a task will be until you are more than half done.

Jane also makes points about humans being happier when working hard and about the desire for “epic meaning“. She says it’s a problem that gamers believe they can change a virtual world but not the “real world“. But if you change the “virtual world” of software development then that changes the real world too.

Jane cites Herodotus as reporting a kingdom that was gripped by a famine for 18 years, where the king instituted a policy of playing games and eating on alternate days, the aim being that the games would distract people from their hunger. I’m sure that I’m not the only person who’s gone without food or water for a day because of being too busy coding…

She has a lot of other interesting points and I recommend that you read the Institute For The Future [2] web site for more background information.

Now my question is, how can we encourage programmers to start doing Free Software and Open Source development and achieving some Epic Wins? I don’t claim to have good answers and I would appreciate any suggestions. If you blog about this please leave a comment on this post to direct readers to your blog.

Hacker Spaces

When in California last year I visited the NoiseBridge [1] hackerspace. I was very impressed with what I saw: good equipment and very friendly people. The general concept of a “HackerSpace” is that it is an environment to support random creative projects. The first picture shows a sign near the door, clearly visible to anyone who is leaving, which encourages people to be “AWESOME” and “EXCELLENT” by cleaning up after themselves (and maybe others). I think that this demonstrates the core of what is needed to get such a community project going.

Generosity towards others was on display everywhere: there was free fruit on a table as well as a bottle of Port for anyone to drink. Someone had written a note saying that it’s “not an insecure Port” (a computer security joke). Someone had created an artwork that resembled an advert which some idiots had mistaken for a terrorist bomb (the creature displaying the Impudent Finger).

The main (only?) phone in NoiseBridge is apparently a VOIP phone; it is located next to an old pay-phone along with some Magnetix and other toys that can be used by curious people of any age. Magnetix have had repeated safety problems that led to recalls, so maybe such things are best kept in an 18+ environment.

When I visited, about 10 people were working on electronics projects. There were a number of soldering irons in use and some serious test equipment (including a couple of CROs) was available. The people doing the soldering were eager to teach other people about their work. Other equipment that was available included some serious industrial sewing machines and some drill presses. A lot of that equipment is unreasonably expensive to buy for personal use and is also rather bulky to store; having it available in a central location is a great service for the community.

Finally Noisebridge has a lot of space. There are rooms that could be used for giving small lectures and couches in the central area for people to relax and have impromptu meetings. Of course they had wireless net access too.

Australian Hacker Spaces

Kylie Willison has written about the Adelaide Hackerspace which sounds promising [2].

The Connected Community Hackerspace is a new one in Melbourne [3]. It operates out of the homes of members so it’s not nearly as big as Noisebridge (which has a substantial property rented for 24*7 operation). I hope that we can get something running permanently in the Melbourne city area in the near future. The Noisebridge membership dues are $80 per month (or $40 for starving hackers). I would pay that for a comfortable chair in a convenient city location with net access surrounded by cool people!

Photos: a poster telling people that they are AWESOME and EXCELLENT if they clean up; a free bottle of Port with a sign saying “this is not an insecure Port”; the VOIP phone in use (with the pay-phone for decoration) and Magnetix; drill presses and other heavy equipment; parts and CROs for electronics work; a recreation of an advert that some idiots thought was a bomb; an industrial sewing machine; shelves full of random spare parts.

Bose vs Bauhn/Aldi Noise Canceling Headphones

Photos: me wearing the Bauhn headphones; inside the case of the Bauhn noise canceling headphones showing the cable and connectors; the outside of the Bauhn and Bose cases.

Overview

The German supermarket chain Aldi has been operating in Australia for 8 years now [1]. Their standard practice for a long time has been to offer special deals on a few items of consumer electronics every week; my chocolate fridge is one thing I bought from Aldi [2].

Today Aldi started selling noise canceling headphones [3]. These headphones are badged as Bauhn – but that name is apparently applied to random products from cheap manufacturers, so it may be an Aldi brand name that is applied to stuff that they sell. The headphones cost $69AU, which is really cheap, but the deal will probably end in less than a week when stock runs out.

Noise canceling headphones can be used in server rooms and other noisy environments. Every company that has a server room should buy a few sets. One of the features of noise-canceling is that it works best on low frequencies and on regular sounds – it specifically doesn’t block human voice well. In some noisy environments it will be easier to hear people talk if you wear such headphones!

Noise canceling headphones are also very useful to people who are on the autism spectrum and other people who get more annoyed by noise pollution than average. I have been wearing my Bose headphones on public transport and when walking around in the city; this not only stops traffic noise but also helps to avoid people thinking that I want to talk to them.

Features

The first picture shows me wearing the Bauhn NC headphones; it’s taken from the right to show the controls for the built-in MP3 player. I have not yet tested the MP3 functionality. It appears that the controls are a power button, buttons for next/previous track, and buttons for controlling the volume. This is fairly poor for MP3 functionality; ideally you would want a display to see a list of tracks, maybe directories to organise files, etc. I guess this could be a convenient feature on occasion, but you wouldn’t buy the headphones for the MP3 functionality.
The next two pictures show a comparison of the Bauhn headset with the Bose QC-15 headset that I bought last year [4].

The cases of the Bauhn and Bose devices are almost exactly the same size and of a very similar shape; the Bose case is tapered and indented and has a finer weave on its cloth covering – it looks much nicer. Both devices come with an adapter for an airline socket and a detachable cable, and both have pouches attached to the inside of the case with velcro. The Bauhn headphones also come with an adapter for a 6.5mm TRS connector, which could be convenient if you want to plug them into a larger amplifier; the basic connector is 3.5mm in both cases. The Bauhn device uses a standard TRS connector at the headphone end while the Bose QC-15 uses a special connector that matches the shape of the headset and has a TRRS plug (to cater for the high/low volume switch). So a damaged Bauhn cable could be replaced cheaply while a replacement Bose cable would have to be purchased from Bose (presumably at great expense and delay). The Bauhn case also has a velcro-attached pocket for storing business cards (or maybe a name tag or something).

The supplied cable for the Bauhn is described as being 5 feet long – which isn’t quite long enough to reach a tower PC that is sitting on the floor. The Bose has a cable that is about a foot longer (maybe 6 feet total), but due to the non-standard connector you can’t replace it. I presume that I could easily buy a 4 meter cable for the Bauhn headphones, but I could of course buy an extension cable to use with the Bose.

Bose advertise the QC-15 headphones as having 35 hours of battery life from a single AAA battery. Aldi advertise the Bauhn headset as having 5 hours of battery life when NC is turned on – and they use two AAA batteries. It’s widely regarded that rechargeable batteries don’t last as long as the batteries used for estimating the battery life (which presumably are the most expensive long-life batteries available). I’ve found a single rechargeable AAA battery to last well over 5 hours in my Bose headphones, so it seems that battery life is considerably worse for the Bauhn device.

One feature of the Bauhn device is that it can be used without any batteries for playing external music. The Bose headphones can’t be used at all without a battery. So while the Bauhn will use the batteries faster it will at least be usable when the batteries run out. But if you are buying headphones for the purpose of avoiding noise then the Bose headphones are simply better.

Comfort

The Bose headphones have significantly deeper ear wells than the Bauhn – about 23mm vs 18mm. If your ears stick out more than 18mm (as mine apparently do) then this is a good reason for choosing Bose.

The Bose headphones are a tighter fit; the spring that pushes the ear-pieces together is stronger. But they have better padding so this doesn’t cause me any discomfort. The Bose headphones also have better passive noise reduction due to a more snug fit around the ears. I’ve worn my Bose headphones on a flight from the US to Australia with hardly a break and they were quite comfortable – I would never want to do that with the Bauhn headphones.

Noise Reduction

I tested the Bose and Bauhn products against three noise scenarios, external music, an air-conditioner, and a car engine.

The Bose headphones gave a good reduction of the noise from the external music (Numb by Linkin Park) and from the air-conditioner. The Bauhn headphones did little to stop Linkin Park and were not very effective against the air-conditioner noise. I think that this is largely due to the lack of passive noise reduction; the air-conditioner in question produces little vibration noise, and the sound of rushing air is generally immune to active noise cancellation. Both headphones were very effective when in a car with the engine idling. The engine noise of vehicles seems to fall in an ideal frequency range for active cancellation.

Music Quality

When listening to YouTube music played on my Thinkpad I could not notice any quality difference between the two sets of headphones. I did notice that the Bose headphones seemed to have a greater response in the higher frequency range, but that doesn’t necessarily mean that one set is better than the other. Maybe if I was listening to FLAC encoded music that I had personally ripped from a CD then I would notice a difference. But for most people the Bauhn music quality should be good enough.

Design Quality

The Bose product is solidly designed, while the Bauhn product appears cheap in every way. Opening the battery compartment on the Bauhn headphones is difficult, and if you do it wrong you could easily break the lid off; I expect that every set of Bauhn headphones used by children will be broken in a small amount of time – though they should still be fully functional with a broken lid. The matt finish of the Bose headphones should hide minor scratches much better than the shiny Bauhn finish. The Bauhn headphones also have lower quality plastic parts; it appears that the molds were designed cheaply and without adequate care to avoid marking the final product.

The design flaws that affect usage of the Bauhn product are the shallow ear wells, the poor fit of the cushions around the ears (which is probably mostly due to a weak spring pressing the ear cups to the wearer’s head), and the battery compartment lid which is difficult to open and appears prone to breakage. The other flaws are all cosmetic.

I wonder whether the Bauhn product was made by one of the big name manufacturers who deliberately reduced the quality to avoid competing with their more expensive products. It seems that the major flaws could have been corrected at design time with almost no increase in manufacturing costs.

Recommendations

If you can afford the Bose® QuietComfort® 15 Acoustic Noise Cancelling® Headphones then they are really worth the extra expense; I have no regrets at all about spending about $320US (including tax) on my Bose QC-15. The Bauhn product is good for when you want something cheap, for example a set to be used in a server room or by children. I bought a Bauhn headset for a friend who is a pilot; he spent $1,100 on a noise-canceling headset for his plane but had never got around to buying one for recreational use. I expect that he will allow his children to use his new Bauhn headphones – if they get broken it’s only a $69 expense.

The second cheapest NC headphones I’ve seen on offer in Australia are the Philips HN-110 Noise Canceling Headphones that Harvey Norman sells for $100AU [5].

Amazon sells Philips HN 110 Folding Noise-Canceling Headphones for $50US but doesn’t seem to ship them outside the US (at least not to Australia).

JB Hi-Fi also has some NC headphones on sale in Australia [6], but they are more expensive at $219 for AKG and $319 for Sennheiser. Also the models they sell are on-ear, which means they will inherently have very little passive noise reduction – and will also annoy anyone who doesn’t like having their ears squashed.

If I was buying NC headphones for my own use and didn’t want to spend $300US then I would either buy the Philips HN 110 Folding Noise-Canceling Headphones from Amazon and get a friend in the US to post them to me or I would buy them from Harvey Norman.

But the Bauhn product is good if you want cheap headphones to stop engine noise and give reasonable quality when playing music.

Links March 2010

Blaise Aguera y Arcas gave an exciting demonstration of new augmented reality mapping software from Microsoft that combines video (including live video) with static mapping data and pictures [1]. This is a significant advance over current mapping systems such as Google Earth – but it’s not released yet either. It will be interesting to see whether Google or Microsoft gets this released first.

The New York Review of Books has an insightful article by Garry Kasparov about human/computer chess [2]. It’s surprising the degree to which a combination of human and computer chess playing can give a good result. Amateur human chess players plus regular PCs can beat grandmasters with computers or high-end computers with human help. It’s apparently the quality of human-computer interaction that determines the quality of play. The article contains a lot more; I recommend reading it.

Daniel Kahneman gave an interesting TED talk about the difference between experiential and memory happiness [3]. As the experienced moment is so short (about 3 seconds), apparently most people try to optimise their actions for the best memories of being happy. But doing so requires some different strategies. For example a two week vacation gives a memory that’s not much different from a one week vacation. Therefore it seems that you would be better off staying in a five star hotel for a week than a four star hotel for two weeks, and eating dinner at a Michelin starred restaurant at least once per holiday even if it means eating at McDonalds on other occasions due to lack of funds.

Temple Grandin gave an interesting TED talk “The World Needs all Kinds of Minds” [4] which mostly focussed on teaching children who are on the Autism spectrum. She is concerned that autistic children won’t end up where they belong “in Silicon Valley”.

Anupam Mishra gave an interesting TED talk about how the people of India’s Golden desert built structures to harvest and store water [5]. Some of their ideas should be copied in Australia, due to mismanagement and stupidity Australians are failing to survive in much more hospitable places.

Michael Tiemann wrote an insightful and well researched article about the OSI’s rejection of the IIPA’s attacks on Open Source [6]. It is worth reading for anyone who wants to make a business or social case for free software.

Mark Shuttleworth wrote an interesting post about the new visual style for Ubuntu and Canonical [7]. Apparently this includes the creation of a new font set which will be available for free use.

Divorced Before Puberty – an informative New York Times article by Nicholas Kristof about the links between treatment of women and terrorism [8].

The New York Times has an interesting article on “Human Flesh Searches” on the Internet in China [9]. It’s basically crowds targeting people to find private information and harass them (similar to what some griefers are known for doing on the English-language part of the Internet). But they seem more interested in vigilante justice than lulz.

The New York Times has an informative article about the Cult of Scientology (Co$) [10]. Among other interesting news it suggests that the number of cult victims in the US dropped from 55,000 to 25,000 over the 2001-2008 period. Senator Xenophon has called for an inquiry into the crimes committed by the cult and a review of its tax-exempt status [11]. As always Xenu.net is the authoritative source for information on the Cult of Scientology AKA the Church of Scientology.

The New York Times has an interesting article about formally studying the skills related to school teaching [12]. It largely focuses on Doug Lemov’s Taxonomy of Effective Teaching, which describes 49 techniques that improve school results, and some other related research. The article also mentions that increasing teacher salaries is not going to help much due to the large number of teachers; only professions that employ small numbers of people can potentially have their overall skills improved by increasing salaries.

Andy Wingo wrote an interesting article about Julius Caesar [13] based on the book The Assassination of Julius Caesar: A People’s History of Ancient Rome by Michael Parenti. It seems that Caesar was more of a populist than a despot.

Interesting article in The Register about the Large Hadron Collider (LHC) [14]. Apparently one 3.5TeV proton beam has as much energy as a British aircraft carrier running at 8 knots.

Citing Wikipedia

A meme that has been going around is that you can’t cite Wikipedia.

You can’t Cite Wikipedia Academically

Now it’s well known and generally agreed that you can’t cite Wikipedia in a scientific paper or other serious academic work. This makes sense firstly because Wikipedia changes, both in the short term (including vandalism) and in the long term (due to changes in technology, new archaeological discoveries, current events, etc). But you can link to a particular version of a Wikipedia page: just click on the history tab at the top of the screen and then click on the date of the version for which you want a direct permanent link.
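For example, such a permanent link has a form like the following (the oldid value here is a placeholder for the revision number shown in the history):

http://en.wikipedia.org/w/index.php?title=Herodotus&oldid=NNNNNNNN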

The real reason for not linking to Wikipedia articles in academic publications is that you want to reference the original research, not a report on it – which really makes sense. Of course the down-side is that you might reference some data that is in the middle of a 100 page report, in which case you might have to mention the page number as well. Also, often the summary of the data you want simply isn’t available anywhere else; someone might, for example, take facts from 10 different pages of a government document and summarise them neatly in a single paragraph on Wikipedia. This isn’t a huge obstacle, it just takes more time to create your own summary with references.

When Wikipedia is Suitable

The real issue however is how serious the document you are writing is and how much time you are prepared to spend on it. If I’m writing a message to a mailing list or a comment on a blog post then I probably won’t bother reading all the primary sources of Wikipedia pages, it would just waste too much of my time. Wikipedia is adequate for the vast majority of mailing list discussions.

If I’m discussing several choices for software with colleagues we will probably start by reading the Wikipedia pages. If one option doesn’t appear to have the necessary features (according to Wikipedia) then we may ask the vendor whether those features are really missing and, if so, whether they will be added in the next version – or we may decide that we don’t really need the features in question and modify our deployment plans. Many business decisions are made with incomplete data; time is money and there often isn’t time to do everything you want to do. Using Wikipedia as a primary source for business decisions is a way of trading off a little accuracy for a huge time saving. This is significantly better than the old fashioned approach of comparing products by reading their brochures – companies LIE in their advertising!

When writing blog posts the choice of whether to use Wikipedia as a reference depends on the point that you are trying to make and how serious the post is. If the post isn’t really serious or contentious or if the Wikipedia reference is for some facts that are not likely to be disputed then Wikipedia will probably do. For some posts a reference to a primary source will be better.

A blog post that references data behind a pay-wall (as a significant portion of academic papers and news articles are) is in practice less useful than a post that cites Wikipedia. In most cases Wikipedia references free primary sources on the Internet (although it does sometimes refer to dead tree products and data that is behind a pay-wall). In the minority of cases where the primary references for a Wikipedia page are not available for free on the Internet there will be people searching for freely available references to replace the non-free ones. So if you refer to a Wikipedia page with non-free references a future reader might find that someone has added free references to it.

The Annoying People

One thing that often happens is that an Internet discussion contains no references for anything – it’s all just unsupported assertions. Then if anyone cites Wikipedia someone jumps in with “you can’t cite Wikipedia“. If you want to criticise Wikipedia references then please first start by criticising people who state opinions as fact and people who provide numbers without telling anyone where they came from! The Guinness Book of Records (now known as “Guinness World Records”) was devised as a reference to cite in debates in pubs [1]. It seems that most of the people who dismiss references to Wikipedia on the net would prefer that Internet debates have lower requirements for references than a pub debate.

When Wikipedia is cited in an online discussion it is usually a matter of one mouse click to check the references for the data in question. If Wikipedia happens to be wrong then anyone who cares can correct it. Saying “the Wikipedia page you cited had some transcription errors in copying data from primary sources and some of the other data was not attributed, I’ve corrected the numbers and noted that it contains original research” would be a very effective rebuttal to an argument that relies on data in Wikipedia. Saying “you can’t cite Wikipedia” means little, particularly if you happen to be strongly advocating an opposing position while not providing any references.

If one person cites an academic paper and someone else cites Wikipedia then it seems reasonable to assume that the academic paper is the better reference. But when it’s a choice between Wikipedia and no reference then surely Wikipedia should win! Also references to non-free data are not much good for supporting an argument, that’s really just unverified claims as far as most people can determine – therefore the issue becomes how much the person citing the non-free reference can be trusted to correctly understand and summarise the non-free data.

Also it has to be considered that not all primary sources are equal. Opinion pieces should be considered to have a fairly low value and while they are authoritative for representing the opinion of the person who wrote them they often prove little else – unless they happen to cite good references which brings them to the same level as Wikipedia. The main benefit for linking to opinion pieces is that it saves time typing and gives a better product for the readers – it’s sometimes easier to find someone else expressing an opinion well than to express it yourself.

So please, don’t criticise me for citing Wikipedia unless others in the discussion are citing better references. If most people are not citing any references or only citing opinion pieces then a Wikipedia page may be the best reference that is being provided!

Xen and Debian/Squeeze

Ben Hutchings announced that the Debian kernel team are now building Xen flavoured kernels for Debian/Unstable [1]. Thanks to Max Attems and the rest of the kernel team for this and all their other great work! Thanks Ben for announcing it. The same release included OpenVZ, updated DRM, and the kernel mode part of Nouveau – but Xen is what interests me most.

I’ve upgraded the Xen server that I use for my SE Linux Play Machine [2] to test this out.

To get this working you first need to remove xen-tools, as the Testing version of bash-completion has an undeclared conflict with it – see Debian bug report #550590.

Then you need to upgrade to Unstable; this requires upgrading the kernel first as udev won’t upgrade without it.

If you have an existing system you need to install xen-hypervisor-3.4-i386 and purge xen-hypervisor-3.2-1-i386, as the older Xen hypervisor won’t boot the newer kernel. This also requires installing xen-utils-3.4 and removing xen-utils-3.2-1, as the utilities have to match the hypervisor version. You don’t strictly need to remove the old hypervisor and utils packages, as it should be possible to have dual-boot configured with old and new versions of Xen and matching Linux kernels. But this would be painful to manage as update-grub doesn’t know how to match Xen and Linux kernel versions, so you will get Grub entries that are not bootable – it’s best to just make a clean break and keep a non-Xen version of the older kernel installed in case the new one doesn’t initially boot.
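For reference, the package changes described above boil down to roughly the following sequence (a sketch – adjust package names and versions to suit your system):

apt-get remove xen-tools                  # avoid the bash-completion conflict (#550590)
apt-get install xen-hypervisor-3.4-i386 xen-utils-3.4
apt-get purge xen-hypervisor-3.2-1-i386 xen-utils-3.2-1
update-grub                               # regenerate the boot menu entries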

An apt-get dist-upgrade operation will result in installing the grub-pc package. The update-grub2 command doesn’t generate Xen entries; I’ve filed Debian bug report #574666 about this.

Because the Linux kernel doesn’t want to reduce to a small size after boot, I use “xenhopt=dom0_mem=142000” in my GRUB 0.98 configuration so that the kernel doesn’t allocate as much RAM to its internal data structures. In the past I’ve encountered a kernel memory management bug related to significantly reducing the size of the Dom0 memory after boot [3].
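With the old GRUB on Debian this means having lines like the following in /boot/grub/menu.lst (a sketch assuming the standard Debian template, where update-grub reads the commented xenhopt line when generating Xen boot entries), then running update-grub:

## Xen hypervisor options to use with the default Xen boot option
# xenhopt=dom0_mem=142000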

Before I upgraded I had the dom0_mem size set to 122880, but when running Testing that gets me a kernel Out Of Memory condition from udev in the early stages of boot, which prevents LVM volumes from being scanned and therefore prevents swap from being enabled, so the system doesn’t work correctly (if at all). I still had this problem with dom0_mem set to 138000, so I chose 142000 as a safe number. Now I admit that the system would probably boot with less RAM if I disabled SE Linux, but the SE Linux policy size of the configuration I’m using in the Dom0 has dropped from 692K to 619K so it seems likely that the increase in required memory is not caused by SE Linux.

The Xen Dom0 support on i386 in Debian/Unstable seems to work quite well. I wouldn’t recommend it for any serious use, but for something that’s inherently designed for testing (such as a SE Linux Play Machine) then it works well. My Play Machine has been offline for the last few days while I’ve been working on it. It didn’t take much time to get Xen working, it took a bit of time to get the SE Linux policy for Unstable working well enough to run Xen utilities in enforcing mode, and it took three days because I had to take time off to work on other projects.

Maintaining Screen Output

In my post about getting started with KVM I noted the fact that I had problems keeping screen output after the program exits [1].

The following snippet of shell code demonstrates the solution I’ve discovered for this problem. It determines whether SCREEN is the parent process of the shell script and if so it sleeps for 60 seconds before exiting so I can see the KVM error messages. The other option is for the script to call “exec bash” to give me a new shell in the same window. Note that if I start a screen session and then run my KVM script I don’t want it to do anything special on exit as I will return to the command-line in the same window. If I run “exec kvm-unstable” or have a system boot script run “start-stop-daemon -S -c USER --exec /usr/bin/screen -- -S kvm-unstable -d -m /usr/local/bin/kvm-unstable” then on exit I will be able to see what happened.

#!/bin/bash
set -e
# "ETC" is a placeholder for the real kvm options
kvm ETC || echo "KVM gave an error return code"
# rough check: does the parent process appear in ps output as SCREEN?
COUNT=$(ps aux|grep $PPID|grep SCREEN|wc -l)
if [ "$COUNT" = "1" ]; then
  # screen closes the window when the process exits, so pause to allow reading the output
  echo "screen is the parent"
  sleep 60
else
  echo no screen
fi

Update: Thanks to John Slee for suggesting the following:
#!/bin/bash
set -e
kvm ETC || echo "KVM gave an error return code"
# more robust: check the parent's command line directly via /proc
if grep -q SCREEN /proc/$PPID/cmdline ; then
  echo "screen is the parent"
  sleep 60
else
  echo no screen
fi

Starting with KVM

I’ve just bought a new Thinkpad that has hardware virtualisation support and I’ve got KVM running.

HugePages

The Linux-KVM site has some information on using hugetlbfs to allow the use of 2MB pages for KVM [1]. I put “vm.nr_hugepages = 1024” in /etc/sysctl.conf to reserve 2G of RAM for KVM use. The web page notes that it may be impossible to allocate enough pages if it is set some time after boot (the kernel can allocate memory that can’t be paged out, and RAM can become too fragmented to allow allocation). As a test I reduced my allocation to 296 pages and then increased it again to 1024; I was surprised to note that my system ran extremely slowly while reserving the pages – it seems that allocating such pages is efficient when done at boot time but not when done later.
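As a quick sketch, this is how to apply the setting and check that the reservation took effect (same numbers as above):

echo "vm.nr_hugepages = 1024" >> /etc/sysctl.conf   # 1024 * 2MB pages = 2G reserved
sysctl -p                                           # apply now - may be slow or fail on a fragmented system
grep HugePages /proc/meminfo                        # HugePages_Total should report 1024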

hugetlbfs /hugepages hugetlbfs mode=1770,gid=121 0 0

I put the above line in /etc/fstab to mount the hugetlbfs filesystem. The mode of 1770 allows anyone in the group to create files but not unlink or rename each other’s files. The gid of 121 is for the kvm group.
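The mount point has to exist before the fstab entry can be used; something like the following (the gid of 121 is my system’s value – check what the kvm group is on yours):

mkdir /hugepages
mount /hugepages        # uses the hugetlbfs entry from /etc/fstab above
getent group kvm        # verify that the kvm group's gid matches the fstab entry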

I’m not sure how hugepages are used, they aren’t used in the most obvious way. I expected that allocating 1024 huge pages would allow allocating 2G of RAM to the virtual machine, that’s not the case as “-m 2048” caused kvm to fail. I also expected that the number of HugePages free according to /proc/meminfo would reliably drop by an amount that approximately matches the size of the virtual machine – which doesn’t seem to be the case.

I have no idea why KVM with Hugepages would be significantly slower for user and system CPU time but still slightly faster for the overall build time (see the performance section below). I’ve been unable to find any documents explaining in which situations huge pages provide advantages and disadvantages or how they work with KVM virtualisation – the virtual machine allocates memory in 4K pages so how does that work with 2M pages provided to it by the OS?

But Hugepages does provide a slight benefit in performance, and if you have plenty of RAM (I have 5G and can afford to buy more if I need it) you should just set it up as soon as you start.

I have filed Debian bug report #574073 about KVM displaying an error you normally can’t see when it can’t access the hugepages filesystem [6].

Permissions

open /dev/kvm: Permission denied
Could not initialize KVM, will disable KVM support

One thing that annoyed me about KVM is that the Debian/Lenny version will run QEMU emulation instead if it can’t run KVM. I discovered this when a routine rebuild of the SE Linux Policy packages in a Debian/Unstable virtual machine took an unreasonable amount of time. When I halted the virtual machine I noticed that it had displayed the above message on stderr before switching into curses mode (I’m not sure of the correct term for this), so that the message was obscured until the xterm was returned to non-curses mode at program exit. I had to add the user in question to the kvm group. I’ve filed Debian bug report #574063 about this [2].
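Fixing the permission problem is a one-liner on Debian (run as root, with USERNAME as a placeholder for the account that runs kvm); the group change takes effect at the next login:

adduser USERNAME kvm    # grant access to /dev/kvm via the kvm group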

Performance

Below is a table showing the time taken for building the SE Linux reference policy on Debian/Unstable. It compares running QEMU emulation (using the kvm command but without permission to access /dev/kvm), KVM with and without hugepages, Xen, and a chroot. Xen is run on an Opteron 1212 Dell server system with 2*1TB SATA disks in a RAID-1 while the KVM/QEMU tests are on an Intel T7500 CPU in a Thinkpad T61 with a 100G SATA disk [4]. All virtual machines had 512M of RAM and 2 CPU cores. The Opteron 1212 system is running Debian/Lenny and the Thinkpad is running Debian/Lenny with a 2.6.32 kernel from Testing.

Configuration                                        Elapsed  User   System
QEMU on Opteron 1212 with Xen installed              126m54   39m36  8m1
QEMU on T7500                                         95m42   42m57  8m29
KVM on Opteron 1212                                    7m54    4m47  2m26
Xen on Opteron 1212                                    6m54    3m5   1m5
KVM on T7500                                           6m3     2m3   1m9
KVM Hugepages on T7500 with NCurses console            5m58    3m32  2m16
KVM Hugepages on T7500                                 5m50    3m31  1m54
KVM Hugepages on T7500 with 1800M of RAM               5m39    3m30  1m48
KVM Hugepages on T7500 with 1800M and file output      5m7     3m28  1m38
Chroot on T7500                                        3m43    3m11  0m29

I was surprised to see how inefficient KVM is when compared with a chroot on the same hardware. It seems that the system time is the issue. Most of the tests were done with 512M of RAM for the virtual machine; I tried 1800M, which improved performance slightly (less IO means fewer context switches to access the real block device), and redirecting the output of dpkg-buildpackage to /tmp/out and /tmp/err reduced the build time by 32 seconds – it seems that the context switches for networking or console output really hurt performance. But for the default build it seems that it will take about 50% longer in a virtual machine than in a chroot. This is bearable for the things I do (of which building the SE Linux policy is the most time consuming), but if I was to start compiling KDE then I would be compelled to use a chroot.
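The redirection was nothing fancy, just something along these lines inside the virtual machine:

dpkg-buildpackage > /tmp/out 2> /tmp/err    # keep build output off the virtual console and network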

I was also surprised to see how slow it was when compared to Xen, for the tests on the Opteron 1212 system I used a later version of KVM (qemu-kvm 0.11.0+dfsg-1~bpo50+1 from Debian/Unstable) but could only use 2.6.26 as the virtualised kernel (the Debian 2.6.32 kernels gave a kernel Oops on boot). I doubt that the lower kernel version is responsible for any significant portion of the extra minute of build time.

Storage

One way of managing storage for a virtual machine is to use files on a large filesystem for its block devices; this can work OK if you use a filesystem that is well designed for large files (such as XFS). I prefer to use LVM. One thing I have not yet discovered is how to make udev assign the kvm group to all devices that match /dev/V0/kvm-*.
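Something like the following udev rule might do it – this is untested on my part, and it assumes that the LVM device-mapper rules export the DM_VG_NAME and DM_LV_NAME variables:

# /etc/udev/rules.d/91-kvm-lvm.rules (hypothetical)
KERNEL=="dm-*", ENV{DM_VG_NAME}=="V0", ENV{DM_LV_NAME}=="kvm-*", GROUP="kvm"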

Startup

KVM seems to be basically designed to run from a session, unlike Xen which can be started with “xm create” and then run in the background until you feel like running “xm console” to gain access to the console. One way of dealing with this is to use screen. The command “screen -S kvm-foo -d -m kvm WHATEVER” will start a screen session named kvm-foo that will be detached and will start by running kvm with “WHATEVER” as the command-line options. When screen is used for managing virtual machines you can use the command “screen -ls” to list the running sessions and then commands such as “screen -r kvm-unstable” to reattach to screen sessions. To detach from a running screen session you type ^A^D.

The problem with this is that screen will exit when the process ends and that loses the shutdown messages from the virtual machine. To solve this you can put “exec bash” or “sleep 200” at the end of the script that runs kvm.

start-stop-daemon -S -c USERNAME --exec /usr/bin/screen -- -S kvm-unstable -d -m /usr/local/sbin/kvm-unstable

On a Debian system the above command in a system boot script (maybe /etc/rc.local) could be used to start a KVM virtual machine on boot. In this example USERNAME would be replaced by the name of the account used to run kvm, and /usr/local/sbin/kvm-unstable is a shell script to run kvm with the correct parameters. Then as user USERNAME you can attach to the session later with the command “screen -x kvm-unstable“. Thanks to Jason White for the tip on using screen.

I’ve filed Debian bug report #574069 [3] requesting that kvm change its argv[0] so that top(1) and similar programs can be used to distinguish different virtual machines. Currently when you have a few entries named kvm in top’s output it is annoying to match a CPU hogging process to the virtual machine it’s running.

It is possible to use KVM with X or VNC for a graphical display by the virtual machine. I don’t like these options, I believe that Xephyr provides better isolation, I’ve previously documented how to use Xephyr [5].

kvm -kernel /boot/vmlinuz-2.6.32-2-amd64 -initrd /boot/initrd.img-2.6.32-2-amd64 -hda /dev/V0/unstable -hdb /dev/V0/unstable-swap -m 512 -mem-path /hugepages -append "selinux=1 audit=1 root=/dev/hda ro rootfstype=ext4" -smp 2 -curses -redir tcp:2022::22

The above is the current kvm command-line that I’m using for my Debian/Unstable test environment.

Networking

I’m using KVM options such as “-redir tcp:2022::22” to redirect unprivileged ports (in this case 2022) to the ssh port. This works for a basic test virtual machine but is not suitable for production use. I want to run virtual machines with minimal access to the environment, which means not starting them as root.
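With that redirection in place, connecting from the host to the virtual machine’s ssh server looks like this:

ssh -p 2022 localhost    # host port 2022 is forwarded to port 22 in the VM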

One thing I haven’t yet investigated is the vde2 networking system which allows a private virtual network over multiple physical hosts and which should allow kvm to be run without root privs. It seems that all the other networking options for kvm which have appealing feature sets require that the kvm process be started with root privs.

Is KVM worth using?

It seems that KVM is significantly slower than a chroot, so for a basic build environment a secure chroot would probably be a better option. I had hoped that KVM would be more reliable than Xen, which would offset the performance loss – however as KVM and Debian kernel 2.6.32 don’t work together on my Opteron system it seems that I will have reliability issues with KVM comparable to the Xen issues. There are currently no Xen kernels in Debian/Testing, so KVM is usable now with the latest bleeding edge stuff (on my Thinkpad at least) while Xen isn’t.

Qemu is really slow, so Xen is the only option for 32bit hardware. Therefore all my 32bit Xen servers need to keep running Xen.

I don’t plan to switch my 64bit production servers to KVM any time soon. When Debian/Squeeze is released I will consider whether to use KVM or Xen after upgrading my 64bit Debian server. I probably won’t upgrade my 64bit RHEL-5 server any time soon – maybe when RHEL-7 is released. My 64bit Debian test and development server will probably end up running KVM very soon, I need to upgrade the kernel for Ext4 support and that makes KVM more desirable.

So it seems that for me KVM is only going to be seriously used on my laptop for a while.

Generally I am disappointed with KVM. I had hoped that it would give almost the performance of Xen (admittedly it was only 14.5% slower). I had also hoped that it would be really reliable and work with the latest kernels (unlike Xen) but it is giving me problems with 2.6.32 on Opteron. Also it has some new issues such as deciding to quietly do something I don’t want when it’s unable to do what I want it to do.

Thinkpad T61

picture of my new Thinkpad T61

I’ve now had my new Thinkpad T61 [1] for almost a month. The letters on the keyboard are not even starting to wear off which is unusual, either this Thinkpad is built with harder plastic than the older ones or I’m typing more softly.

Memory

The first thing I did after receiving it was to arrange a RAM upgrade. It shipped with two 1GB DDR2 666MHz PC2-5300 SODIMM modules, and as I want to run KVM I obviously need a lot more than that. The Intel Chipset page on Wikipedia is one of the resources that documents the Intel GM965 chipset as supporting up to 8G of RAM. Getting 4G in two 2G modules seemed like a bad idea as that would limit future expansion options and also result in two spare modules, so I decided to get a 4G module for a total of 5G of RAM.

I’ve updated my RAM speed page with the test results of this system [2]: I get 2,823MB/s with a matched pair of DIMMs and 2,023MB/s with a single DIMM. But strangely with a pair of unmatched DIMMs Memtest86+ reported 2,823MB/s – I wonder whether the first 2G of address space is interleaved for best performance and the last 3G runs at 2,023MB/s. In any case I think that losing 29% of the maximum RAM speed is an acceptable trade-off for saving some money, and I can always buy another 4G DIMM later. I ordered a DDR2 800MHz PC2-6400 module because they are cheaper than the PC2-5300 modules and my Thinkpad works equally well with either speed.

I have used the spare 1G SODIMM in my EeePC 701, which takes the same RAM – presumably the EeePC designers found PC2-5300 modules to be cheaper than slower modules (I think that the 701 was, at the time it was released, the slowest PC compatible system selling in quantity). The EeePC gets only 798MB/s out of the same memory. My document about Memtest86+ results has these results and more [2].

I noticed that if I run Memtest86+ booted from a USB flash device then inserting or removing a USB device can cause memory errors, but if I boot Memtest86+ from a CD it seems to work correctly. So it seems that Memtest86+ doesn’t disable some aspect of the USB hardware; this might be considered a bug – or it might just be a “don’t do that” issue.

Misc

To get the hardware virtualisation working (needed to load the kvm_intel kernel module) I had to enable it in the BIOS and then do a hard reset (power off). Telling the BIOS to save and reboot was not adequate. This would be a BIOS bug, it knew that I had changed the virtualisation setting so it should have either triggered a hard reset or instructed me to do so.

The default configuration of Debian/Lenny results in sound not working, I had to run alsaconf as suggested on the Debian Etch on Thinkpad T61 howto [3] which solved it.

Generally I’m happy with this system, the screen resolution is 1680*1050 which has 20% more pixels than the 1400*1050 screen on my Thinkpad T41p, it’s a lot faster for CPU operations and should be a lot faster for video when I get the drivers sorted out (currently it’s a lot slower), and I have virtualisation working again. But when you buy a system that’s much like the last one but 6 years newer you expect it to be better.

Generally the amount of effort involved in the process of buying a new system, upgrading the RAM to the desired specs, installing Linux and tweaking all the options is enough to make me want to wait at least another 6 years before buying another. Part of the reason for this difficulty is that I want to get so much functionality from the machine, a machine with more modest goals (such as a Netbook) takes a lot less time to configure.

Problems

There is Bluetooth hardware which is apparently enabled by default, but a quick search didn’t turn up any information on how to do the basic functions. I would like to just transfer files from my mobile phone in the same way that I transfer files between phones.

The video card is an nVidia Corporation Quadro NVS 140M (rev a1). 3D games seem slow but glxgears reports 300fps. It doesn’t have working Xvideo support, which appears to be the reason why mplayer won’t allow resizing its display area unless run with the -zoom option. It also has performance problems: switching between virtual desktops will interrupt the sound of a movie that mplayer is playing – although when alsaplayer is playing music the sound isn’t interrupted. Also playing a YouTube video at twice the horizontal and vertical resolution takes half of one CPU core. It’s a pity that I didn’t get an Intel video controller.

It seems that Debian is soon going to get the Nouveau NVidia drivers so hopefully video performance will improve significantly when I get them [4].

The next thing I have to do is to get the sound controls working. The older Thinkpads that I used had hardware controls; the T41p that was my previous system had buttons for increasing and decreasing the volume and a mute button that interacted directly with the hardware. The down-side of this was that there was no way for the standard software to know what the hardware was going to do; the up-side was that I could press the mute button and know that it would be silent regardless of what the software wanted. Now I have the same buttons on my T61 but they don’t do anything directly, they just generate key-press events. According to showkey the mute key gives “0x71 0xf1“, the volume down button gives “0x72 0xf2“, and the volume up button gives “0x73 0xf3“. Daniel Pittman has made some suggestions to help me get the keyboard events mapped to actions that can change the volume in software [5] – which I haven’t yet had time to investigate. I wonder if it will ever be possible to change the volume of the system beep.
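One approach (a sketch I haven’t tested – it assumes the buttons generate the usual XF86Audio keysyms under X, and that xbindkeys and alsa-utils are installed) is to bind the keys to amixer commands in ~/.xbindkeysrc:

# ~/.xbindkeysrc - run xbindkeys from the X session startup
"amixer -q set Master toggle"
    XF86AudioMute
"amixer -q set Master 5%-"
    XF86AudioLowerVolume
"amixer -q set Master 5%+"
    XF86AudioRaiseVolume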

The system has an SD card slot, but that doesn’t seem to work. I’m not really worried at the moment but in the future I will probably try and get it going. It has a 100G disk which isn’t that big, adding a 32G SD card at some future time might be the easiest way to upgrade the storage – copying 100G of data is going to be painful and usually a small increment in storage capacity can keep a system viable for a while.

Any advice on getting sound, the SD card, and Bluetooth working would be appreciated. I’ll probably upgrade to Debian/Testing in the near future so suggestions that require testing features won’t be ruled out.

The Yubikey

Picture of Yubikey

Some time ago Yubico were kind enough to send me an evaluation copy of their Yubikey device. I’ve finally got around to reviewing it and making deployment plans for buying some more. Above is a picture of my Yubikey on the keyboard of my Thinkpad T61 for scale. The newer keys apparently have a different color in the center of the circular press area and also can be purchased in white plastic.

The Yubikey is a USB security token from Yubico [1]. It is a use-based token that connects via the USB keyboard interface (see my previous post for a description of the various types of token [2]). The Yubikey is the only device I know of which uses the USB keyboard interface; it seems to be their own innovation. You can see in the above picture that the Yubikey skips the metal shield that surrounds most USB connectors; this probably fails to meet some part of the USB specification but allows them to make the key less than half as thick as it would otherwise be. Mechanically it seems quite solid.

The Yubikey is affordable, unlike the products of some token vendors who don’t even advertise prices (if you need to ask then you can’t afford it), and they have an online sales site. It costs $US25 for a single key, with discounts starting when you buy 10. It seems quite likely that someone who wants such a token will want at least two of them – for different authentication domains, for different users in one home, or as a backup in case one is lost or broken (although my experiments have shown that Yubikeys are very hardy and will not break easily). The discount rate of $20 will apply if you can find four friends who want to use them (assuming two each), or if you support several relatives (as I do). The next discount rate of $15 applies when you order 100 units, and they advise that customers contact their sales department directly if purchasing more than 500 units – so it seems likely that a further discount could be arranged for orders of that size. They accept payment via Paypal as well as credit cards. It seems to me that any Linux Users Group could easily arrange an order for 100 units (that would be 10 people with similar needs to me) and a larger LUG could possibly arrange an order of more than 500 units for a better discount. If an order of 500 can’t be arranged then an order of 200 would be a good way to get half black keys and half white ones – you can only buy a pack of 100 in a single color.

There is a WordPress plugin to use Yubikey authentication [3]. It works, but I would be happier if it had an option to accept a Yubikey OR a password (currently it demands both a Yubikey AND a password). I know that this is less secure, but I believe that it’s adequate for an account that doesn’t have administrative rights.

To operate the Yubikey you just insert it into a USB slot and press the button to have it enter the pass code via the USB keyboard interface. The pass code has a prefix that can be used to identify the user so it can replace both the user-name and password fields – of course it is technically possible to use one Yubikey for authentication with multiple accounts in which case a user-name would be required. Pressing the Yubikey button causes the pass code to be inserted along with the ENTER key, this can take a little getting used to as a slow web site combined with a habit of pressing ENTER can result in a failed login (at least this has happened to me with Konqueror).

As the Yubikey is use-based, it needs a server to track the usage count of each key. Yubico provides source to the server software as well as having their own server available on the net – obviously it might be a bad idea to use the Yubico server for remote root access to a server, but for blog posting that is a viable option and saves some effort.
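For example, validating an OTP against the Yubico server is just an HTTP request (a sketch – CLIENTID is a placeholder for your API client id and the otp parameter is whatever the key types when pressed):

wget -q -O - "http://api.yubico.com/wsapi/verify?id=CLIENTID&otp=OTP_FROM_KEYPRESS"

The response contains a status=OK line if the OTP is valid and has not been used before.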

If you have multiple sites that may be disconnected then you will either need multiple Yubikeys (at a cost of $20 or $15 each) or you will need to have one Yubikey work with multiple servers. Supporting a single key with multiple authentication servers means that MITM attacks become possible.

The full source to the Yubikey utilities is available under the new BSD license. In Debian the base functionality of talking to the Yubikey is packaged as libyubikey0 and libyubikey-dev, the server (for validating Yubi requests via HTTP) is packaged as yubikey-server-c, and the utility for changing the AES key to use your own authentication server is packaged as yubikey-personalization – thanks Tollef Fog Heen for packaging all this!

The YubiPAM project (a PAM module for Yubikey) is licensed under the GPL [4]. It would be good if this could be packaged for Debian (unfortunately I don’t have time to adopt more packages at the moment).

There is a new model of Yubikey that has RFID support. They suggest using it for public transport systems where RFID could be used for boarding and the core Yubikey OTP functionality could be used for purchasing tickets. I don’t think it’s very interesting for typical hobbyist and sysadmin work, but RFID experts such as Jonathan Oxer might disagree with me on this issue.