The Security Token Wikipedia page doesn’t seem to clearly describe the types of token.
Categories of Security Token
It seems to me that the following categories encompass all security tokens:
- Biometric tokens – which seem rather pointless to me. Having a device I control verify my biometric data doesn’t seem to provide a benefit. The only possible benefit seems to be if the biometric token verifies the identity of the person holding it before acting as one of the other types of token.
- Challenge-response devices. The server sends a challenge (usually a random number) and expects a response (usually some sort of cryptographically secure hash of the challenge and a shared secret). A challenge-response device may take a password from the user and combine it with the challenge from the server and the shared secret when calculating the response (a sketch of this calculation, and of the time-based one below, follows this list).
- Time-based tokens (one-time passwords). These provide a new pseudo-random number that changes periodically; a 30 second interval is often used and the number is presumably a cryptographically secure hash of the time and a shared secret. This requires a battery in the token, and the token becomes useless when the battery runs out. It also requires that the server have an accurate clock.
- Use-based tokens. They will give a new pseudo-random number every time a button is pressed (or some other event happens to indicate that a number has been used). These do not work well if you have multiple independent servers and an untrusted network.
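As mentioned above, here is a minimal sketch of both calculations in Python. The use of HMAC-SHA256, the 8 character codes, and the 30 second time step are my illustrative assumptions, not a description of any particular commercial token.

```python
# Sketch of the challenge-response and time-based calculations described
# above.  The secret, code length, and time step are examples only.
import hmac
import hashlib
import os
import time

SHARED_SECRET = b"per-token secret provisioned at manufacture"  # example only

def challenge_response(challenge: bytes, password: str = "") -> str:
    # A calculator-style token would mix the user's password into the
    # response, giving "something you have and something you know".
    msg = challenge + password.encode()
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()[:8]

def time_based_code(step: int = 30) -> str:
    # Time-based tokens hash the current time interval with the secret.
    interval = int(time.time()) // step
    msg = interval.to_bytes(8, "big")
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()[:8]

# Server side of challenge-response: send a fresh random challenge, compute
# the expected response, and compare it with what the token returned.
challenge = os.urandom(16)
expected = challenge_response(challenge, password="1234")
print(expected, time_based_code())
```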
Here is my analysis of the theory of token use; note that I am not sure how the implementations of the various token systems deal with these issues.
- Biometric security seems like a bad idea for most non-government use. I have seen a retina scanner in use at a government office – that made some sense as the people being scanned were in a secure area (they had passed some prior checks) and they were observed (to prevent coercion and the use of fake eyes). Biometric authentication for logging in over the net just seems like a bad idea as you will never know if you can trust the scanner.
- It seems to me that challenge-response devices are by far the most secure option. CR is resistant to replay attacks provided that challenges are never re-used. If the calculation of the response includes a password (as is done on some tokens that resemble pocket calculators) then a CR token will meet the “something you have and something you know” criteria.
One potential problem with CR systems is not including the server or account ID in the calculation. If I were to use a terminal in an insecure location to log in to a server or account whose data is not particularly important, then an attacker who had compromised the terminal could perform a Man In The Middle (MITM) attack against other servers. Of course you are supposed to use a different password for each account; if you do this then a CR token that includes a password will be resistant to this attack – but I expect that people who use tokens are more likely to use one password for multiple accounts.
- Time-based tokens have a weakness in that an attacker who discovers the number used for one connection could immediately log in to other servers. One example of a potential attack using this avenue would be to compromise a terminal in an Internet cafe, steal a hash used for logging in to server A, and then immediately log in to server B. This means that it may not be safe to use the same token for logging in to servers (or accounts) that have different sensitivity levels unless a strong password is used as well – and I expect that people who have hardware tokens tend to use weaker passwords.
Also one factor that makes some MITM attacks a lot easier is that the combination of the hash from the token and the password is valid for a period of time, so an attacker could establish a second connection within the 30 second interval. It seems that only allowing one login with a particular time-coded password is the correct thing to do, but this may be impossible if multiple independent servers use the same token. Time-based tokens expire when the battery runs out; the measures taken to make them tamper-proof may make it difficult or impossible to replace the battery, so a new token must be purchased every few years.
- Use-based tokens are very similar to time-based tokens; it’s just a different number that is used for generating the hash. The difference in the token is that a time-based token needs a battery so that it can track the time while a use-based token needs a small amount of flash memory to store the usage count. The difference at the server end is that for a use-based token the server needs a database of the use count of each token, which is usually not overly difficult for a single server.
One problem is the case of a restore from backup of the server which maintains the use count database. The only secure way of managing this is to either inspect every token (to discover its use count) or to issue a new password (for use with password plus token authentication). Either option would be really painful if you have many users at remote sites. Also the database transaction must be committed to disk before an authentication attempt is acknowledged so that a server crash cannot lose the count – this should be obvious but many people get these things wrong. An additional complication for use-based tokens comes with the case of a token that is used for multiple servers. One server needs to maintain the database of the usage counts and the other servers need to query it over secure links. If a login attempt with use count 100 has been made to server A then server B must not accept a login with a hash that has a use count less than or equal to 100 (a sketch of such a counter check follows). This is firstly to cover the case where a MITM attack is used to log in to server B with credentials that were previously used for server A. The second aim is to cover the case where a token that is temporarily unguarded is used to generate dozens of hashes – while the hashes could be used immediately it is desirable to have them expire as soon as possible, and having the next login update the use count and invalidate such hashes is a good thing. The requirement that all servers know the current use count requires that they all trust a single server. In some situations this may not be possible, so it seems that this only works for servers within a single authentication domain or for access to less important data.
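Here is a rough sketch of that server-side counter check, assuming HMAC-SHA256 over the counter, a small look-ahead window, and a made-up SQLite schema – real products will differ in the details.

```python
# Sketch: server-side validation of a use-based (counter-based) token.
# The counter must only move forward, and the update must be committed
# before the login is acknowledged so a crash can't roll it back.
import hmac
import hashlib
import sqlite3

SHARED_SECRET = b"per-token secret"   # example only

def expected_code(counter: int) -> str:
    msg = counter.to_bytes(8, "big")
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()[:8]

def check_login(db: sqlite3.Connection, token_id: str, code: str,
                lookahead: int = 10) -> bool:
    row = db.execute("SELECT last_count FROM tokens WHERE id = ?",
                     (token_id,)).fetchone()
    if row is None:
        return False
    last_count = row[0]
    # Accept only counts strictly greater than the last one seen, within a
    # small look-ahead window for button presses that were never used.
    for count in range(last_count + 1, last_count + 1 + lookahead):
        if hmac.compare_digest(expected_code(count), code):
            db.execute("UPDATE tokens SET last_count = ? WHERE id = ?",
                       (count, token_id))
            db.commit()   # commit before acknowledging the authentication
            return True
    return False
```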
Methods of Accessing Tokens
It seems that the following are the main ways of accessing tokens.
- Manual entry – the user reads a number from an LCD display and types it in. This is the most portable and often the cheapest – but does require a battery.
- USB keyboard – the token is connected to a PC via the USB port and reports itself as a keyboard. It can then enter a long password when a button is pressed. This is done by the Yubikey [1]; I am not aware of anyone else doing it. It would be possible to have a token connect as a USB keyboard and also have its own keypad for entry of a password and a challenge used for CR authentication.
- USB (non-keyboard) or PCCard/Cardbus (AKA PCMCIA). The token has its own interface and needs a special driver for the OS in question. This isn’t going to work if using an Internet cafe or an OS that the token vendor doesn’t support.
- Bluetooth/RFID – has similar problems to the above but can also potentially be accessed by hostile machines without the owner knowing. I wouldn’t want to use this.
- SmartCard – the card reader connects to the PC via Cardbus or USB and has all the same driver issues. Some SmartCard devices are built into a USB device that looks to the OS like a card reader plus a card, so it’s a USB interface with SmartCard protocols.
To avoid driver issues and allow use on random machines it seems that the USB keyboard and manual entry types of token are best. For corporate Intranet use a SmartCard seems best as it can be used for opening doors as well; you could use a USB keyboard token (such as a Yubikey) to open doors – but it would be slower and there is no off-the-shelf hardware for it.
For low cost and ease of implementation it seems that use-based tokens that connect via the USB keyboard interface are best. For best security it seems that a smart-card or USB interface to a device with a keypad for entering a password is ideal.

Brendan Scott linked to a couple of articles about CAL (the Copyright Agency Limited) [1]. I have previously written about CAL and the way that they charge organisations for the work of others without their consent [2]. My personal dispute with CAL is that they may be charging people to use my work, I have not given them permission to act on my behalf and will never do so. If they ever bill anyone for my work then it will be an act of piracy. The fact that the government through some bad legislation permitted them to do such things doesn’t prevent it from being piracy – you can’t disagree with this claim without supporting the past actions of China and other countries that have refrained from preventing factories from mass-producing unauthorised copies of software.
The first article concerns the fact that last year CAL paid more than $9,400,000 in salary to its employees (including $350,000 to its CEO) while it only paid $9,100,000 directly to the authors [3]. It also spent another $300,000 to send its executives to a junket in Barbados. It did give $76,000,000 to publishers “on the assumption that a proportion of this money will be returned to authors” – of course said publishers could have used the money to have holidays in Barbados. CAL doesn’t bother to check who ends up with shares of the $76,000,000 so it’s anyone’s guess where it ends up.
The second article is by James Bradley who is an author and director of CAL [4]. He claims that “much” of the $76,000,000 was distributed to authors, although I’m not sure how he would have any idea of how much it was – which is presumably why he used the word “much” instead of some other word with a clearer meaning such as “most”. He also notes that CAL invested $1,000,000 in “projects specifically designed to promote the development and dissemination of Australian writing”, which sounds nice until you consider the fact that none of the authors (apart from presumably the few who sit on the CAL board) had any say in the matter. Can I take a chunk of the $9,400,000 that is paid to CAL employees and invest it in something? If not then why not? If they can “invest” money that was owed to other people then why can’t I invest their salaries?
James also says “The issue of how well CAL serves rights-holders – and authors and artists in particular – is a vital one” which is remarkably silly. He is entirely ignoring the fact that some rights holders don’t want to be “served” by CAL at all. The fact that CAL can arbitrarily take money for other people’s work is an infringement on their rights. He further demonstrates his ignorance by saying “Without CAL and the licences we administer, users – educational institutions, government agencies and corporate organisations, to name just a few – would be required to seek permission every time they reproduced copyright material or run the risk of legal action for copyright infringement” – of course any educational institution can use Creative Commons licensed work [5].
I’ve previously written about the CK12 project to develop CC licensed text books for free use [6]. There’s no reason why the same thing can’t be done for university text books. In the discussion following Claudine Chionh’s LUV talk titled “Humanities computing, Drupal and What I did on my holidays” [7] it was suggested that it should be possible to gain credit towards a post-graduate degree based on work done to share information – this could mean setting up a Drupal site and populating the database or it could mean contributing to CC licensed text books. Let’s face it, a good CC text book will be read by many more people than the typical conference proceedings!
James says that CAL is used “Instead of having to track down individual rights-holders every time they want to reproduce copyright material”. The correct solution to this problem would be to change the copyright law such that if a reasonable attempt to discover the rights-holder fails then the work is deemed to be in the public domain. The solution to the problem of tracking down rights-holders is not to deny them their rights entirely and grant CAL the right to sub-license their work!
He also makes the ridiculous claim “Whereas in the age of the physical book schools and universities could have bought fewer books and made up the difference by using photocopies, it is now possible for an organisation to buy a single set of digital materials and reproduce them ad infinitum” which implies that CAL is the only thing saving the profits of authors from unrestricted digital copying. Of course as CAL seems to have no active enforcement mechanisms and they apparently charge a per-student fee they really have no impact on the issue of a single licensed copy being potentially used a million times – extra use apparently won’t provide benefits to the author and use in excess of the licensing scheme won’t be penalised.
He asks the rhetorical question “After all, why go to the expense of creating a textbook (or some form of digital course materials) if you are going to sell only a half-dozen copies to state education departments”. The answer is obvious to anyone who has real-world experience with multiple licensing schemes – you can sell one single copy and make a profit if the price is high enough. The smart thing for the education departments to do would be to pool their resources and pay text book companies for writing CC licensed texts (or releasing previously published texts under the CC). The average author of a text book would probably be very happy to earn $100,000 for their work, and the editorial process probably involves a similar amount of work. So if the government was to offer $300,000 for the entire rights to a text book then I’m sure that there would be more than a few publishers tendering for the contract.
According to the CIA World Fact Book there are 2,871,482 people in Australia aged 0-14 [8], that means about 205,000 per year level. CAL charges $16 for each primary and secondary student so the government is paying about $3,280,000 every year per year level. Even in year 12 the number of text books used is probably not more than 10, so it seems to me that if all the money paid to CAL by schools in a single year was instead used to fund Creative Commons licensed text books then the majority of the school system would be covered! The universities have a much wider range of text books but they also have higher CAL fees of $40 per student. After cutting off the waste of taxpayer money on CAL fees for schools that money could be invested in the production of CC licensed university text books.
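For what it’s worth, the back-of-envelope calculation behind those numbers (using only the figures quoted above) is simply:

```python
# Rough check of the figures quoted above.
students_0_to_14 = 2_871_482              # CIA World Fact Book figure
per_year_level = students_0_to_14 / 14    # spread over roughly 14 year levels
cal_fee = 16                              # dollars per primary/secondary student
print(round(per_year_level))              # 205106, i.e. about 205,000 per year level
print(round(per_year_level * cal_fee))    # 3281694, i.e. about $3,280,000 per year level per year
```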
The Threat
Bruce Schneier’s blog post about the Mariposa Botnet has an interesting discussion in the comments about how to make a secure system [1]. Note that the threat is considered to be remote attackers, which means viruses and trojan horses – including infected files run from USB devices (i.e. you aren’t safe just because you aren’t on the Internet). The threat we are considering is not people who can replace hardware in the computer (people who have physical access to it, which includes people who have access to where it is located or who are employed to repair it). This is the most common case; the risk involved in stealing a typical PC is far greater than whatever benefit might be obtained from the data on it – a typical computer user is at risk of theft only for the resale value of a second-hand computer.
So the question is, how can we most effectively use free software to protect against such threats?
The first restriction is that the hardware in common use is cheap and has little special functionality for security. Systems that have a TPM seem unlikely to provide a useful benefit due to the TPM being designed more for Digital Restrictions Management than for protecting the user – and due to TPM not being widely enough used.
The BIOS and the Bootloader
It seems that the first thing that is needed is a BIOS that is reliable. If an attacker manages to replace the BIOS then it could do exciting things like modifying the code of the kernel at boot time. It seems quite plausible for the real-mode boot loader code to be run in a VM86 session and to then have its memory modified before it switches to protected mode. Every BIOS update is a potential attack. Coreboot replaces the default PC BIOS; it initialises the basic hardware and then executes an OS kernel or boot loader [2] (the Coreboot Wikipedia page has a good summary). The hardest part of the system startup process is initialising the hardware, and Coreboot has that solved for 213 different motherboards.
If engineers were allowed to freely design hardware without interference then probably a significant portion of the computers on the market would have a little switch to disable the write line for the flash BIOS. I heard a rumor that in the days of 286 systems a vendor of a secure OS shipped a scalpel to disable the hardware ability to leave protected mode; cutting a track on the motherboard is probably still an option. Usually once a system is working you don’t want to upgrade the BIOS.
One of the payloads for Coreboot is GRUB. The GRUB Feature Requests page has as its first entry “Option to check signatures of the bootchain up to the cryptsetup/luksOpen: MBR, grub partition, kernel, initramfs” [3]. Presumably this would allow a GPG signature to be checked so that a kernel and initrd would only be used if they came from a known good source. With this feature we could boot only a known good kernel.
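As a rough illustration of the sort of check that feature request asks for (this is not how GRUB implements it, and the file and keyring paths are made-up examples), a boot component would only be used if a detached GPG signature over it verifies against a trusted key:

```python
# Sketch: refuse to use a kernel/initrd unless a detached GPG signature
# verifies against a trusted keyring.  Paths are hypothetical examples.
import subprocess
import sys

BOOT_FILES = ["/boot/vmlinuz", "/boot/initrd.img"]

def verify(path: str) -> bool:
    # gpg returns non-zero if the signature is missing or does not verify
    result = subprocess.run(
        ["gpg", "--no-default-keyring", "--keyring", "/boot/trusted.gpg",
         "--verify", path + ".sig", path],
        capture_output=True,
    )
    return result.returncode == 0

if not all(verify(f) for f in BOOT_FILES):
    sys.exit("boot chain verification failed - refusing to continue")
print("boot chain verified")
```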
How to run User Space
The next issue is how to run the user-space. There has been no shortage of Linux kernel exploits and I think it’s reasonable to assume that there will continue to be a large number of them. Some of the kernel flaws will be known by the bad guys for some time before there are patches, and some of them will have patches which don’t get applied as quickly as desired. I think we have to assume that the Linux kernel will be compromised. Therefore the regular user applications can’t be run against a kernel that has direct hardware access.
It seems to me that the best way to go is to have the Linux kernel run in a virtual environment such as Xen or KVM. That means you have a hypervisor (Xen+Linux or Linux+KVM+QEMU) that controls the hardware and creates the environment for the OS image that the user interacts with. The hypervisor could create multiple virtual machines for different levels of data in a similar manner to the NSA NetTop project; this isn’t really a required part of solving the general secure Internet terminal problem, but as it would be a tiny bit of extra work you might as well do it.
One problem with using a hypervisor is that the video hardware tends to want to use features such as bus-mastering to give best performance. Apparently KVM has IOMMU support so it should be possible to grant a virtual machine enough hardware access to run 3D graphics at full speed without allowing it to break free.
Maintaining the Virtual Machine Image
Google has a good design for the ChromiumOS in terms of security [4]. They are using CGroups [5] to control access to device nodes in jails, RAM, CPU time, and other resources. They also have some intrusion detection which can prompt a user to perform a hardware reset. Some of the features would need to be implemented in a different manner for a full desktop system but most of the Google design features would work well.
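To make the RAM and CPU limits concrete, here is a minimal sketch using the cgroup v2 interface on a current Linux system; the cgroup name and limit values are made-up examples and this is not how ChromiumOS actually configures its jails (device-node restrictions use a separate controller and are not shown).

```python
# Sketch: the kind of per-jail resource limits mentioned above, expressed
# as cgroup v2 settings.  Name and numbers are illustrative only.
import os

CGROUP = "/sys/fs/cgroup/untrusted-app"   # hypothetical cgroup for one jail

def write(name: str, value: str) -> None:
    with open(os.path.join(CGROUP, name), "w") as f:
        f.write(value)

os.makedirs(CGROUP, exist_ok=True)
write("memory.max", str(512 * 1024 * 1024))   # cap the jail at 512MB of RAM
write("cpu.max", "50000 100000")              # at most half of one CPU
write("cgroup.procs", str(os.getpid()))       # move this process into the jail
```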
For an OS running in a virtual machine, when an intrusion is detected it would be best to have the hypervisor receive a message by some defined interface (maybe a line of text printed on the “console”) and then terminate and restart the virtual machine. Dumping the entire address space of the virtual machine would be a good idea too; with typical RAM sizes at around 4G for laptops and desktops and typical storage sizes at around 200G for laptops and 2T for new desktops it should be easy to store a few dumps in case they are needed.
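A minimal sketch of that hypervisor-side reaction, assuming a Linux+KVM host managed through libvirt and a hypothetical guest name and dump directory (the post doesn’t prescribe any particular tooling):

```python
# Sketch: on an intrusion report, dump the guest's memory for later
# forensics, then force a restart from the known-good image.
# Assumes a persistent libvirt/KVM domain; names and paths are examples.
import time
import libvirt

DOMAIN = "desktop-vm"           # hypothetical guest name
DUMP_DIR = "/var/lib/vm-dumps"  # hypothetical location for forensic dumps

def handle_intrusion_report():
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(DOMAIN)
        dump_path = "%s/%s-%d.core" % (DUMP_DIR, DOMAIN, int(time.time()))
        dom.coreDump(dump_path, 0)   # save the guest address space
        dom.destroy()                # hard-stop the compromised guest
        dom.create()                 # boot it again from the read-only image
    finally:
        conn.close()
```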
The amount of data received by a typical ADSL link is not that great. Apart from the occasional big thing (like downloading a movie or listening to Internet radio for a long time) most data transfers are from casual web browsing which doesn’t involve that much data. A hypervisor could potentially store the last few gigabytes of data that were received, which would then permit forensic analysis if the virtual machine was believed to be compromised. With cheap SATA disks in excess of 1TB it would be conceivable to store the last few years of data transfer (with downloaded movies excluded) – but such long-term storage would probably involve risks that outweigh the rewards, so storing no more than 24 hours of data would probably be best.
Finally, in terms of applying updates and installing new software the only way to do this would be via the hypervisor, as you don’t want any part of the virtual machine to be able to write to its own data files or programs. So if the user chooses to install a new application then the request “please install application X” would have to be passed to the hypervisor. After the application is installed a reboot of the virtual machine would be needed to apply the change. This is a common experience for mobile phones (where you even have to reboot if the telco changes some of their network settings) and it’s something that MS-Windows users have become used to – but it would get a negative reaction from the more skilled Linux users.
Would this be Accepted?
The question is, if we built this would people want to use it? The NetTop functionality of having two OSs interchangeable on the one desktop would attract some people. But most users don’t desire greater security and would find some reason to avoid this. They would claim that it lowered the performance (even for aspects of performance where benchmarks revealed no difference) and claim that they don’t need it.
At this time it seems that computer security isn’t regarded as a big enough problem for users. It seems that the same people who will avoid catching a train because one mugging made it to the TV news will happily keep using insecure computers in spite of the huge number of cases of fraud that are reported all the time.
In a comment on my post Shared Objects and Big Applications about memlockd [1] mic said that they use memlockd to lock the entire root filesystem in RAM. Here is a table showing my history of desktop computers with the amounts of RAM, disk capacity, and CPU power available. All systems better than a 386-33 are laptops – a laptop has been my primary desktop system for the last 12 years. The columns for the maximum RAM and disk are the amounts that I could reasonably afford if I used a desktop PC instead of a laptop and used the best available technology of the day – I’m basing disk capacity on having four hard drives (the maximum that can be installed in a typical PC without extra power cables and drive controller cards) and running RAID-5. For the machines before 2000 I base the maximum disk capacity on not using RAID as Linux software RAID used to not be that good (lack of online rebuild for starters) and hardware RAID options have always been too expensive or too lame for my use.
Year | CPU | RAM | Disk | Maximum RAM | Maximum Disk
1988 | 286-12 | 4M | 70M | 4M | 70M
1993 | 386-33 | 16M | 200M | 16M | 200M
1998 | Pentium-M 233 | 96M | 3G | 128M | 6G
1999 | Pentium-2 400 | 256M | 6G | 512M | 40G
2000 | Pentium-2 600 | 384M | 10G | 512M | 150G
2003 | Pentium-M 1700 | 768M | 60G | 2048M | 400G
2009 | Pentium-M 1700 | 1536M | 100G | 8192M | 4500G
2010 | Core 2 Duo T7500 2200 | 5120M | 100G | 8192M | 6000G

The graph generated from this data (see below for how it was produced) shows how modern RAM capacities have overtaken older disk capacities. So it seems that a viable option on modern systems is to load everything that you need to run into RAM. Locking it there will save spinning up the hard drive on a laptop. With a modern laptop it should be possible to lock most of the hard drive contents that are regularly used (i.e. the applications) into RAM and run with /home on an SD flash storage device. Then the hard drive would only need to be used if something uncommon was accessed or if something large (like a movie) was needed. It also shows that there is potential to run diskless workstations that copy the entire contents of their root filesystem when they boot so that they can run independently of the server and only access the server for /home.
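As a rough sketch of the “lock the applications into RAM” idea (this is only the concept behind memlockd, not its actual implementation or configuration format, and the file list is a made-up example):

```python
# Sketch: read a set of files into memory and lock the process address
# space so they stay in RAM.  memlockd itself is configured differently
# and handles this more cleverly (e.g. by mmapping the files).
import ctypes
import ctypes.util

MCL_CURRENT = 1   # Linux mlockall() flags
MCL_FUTURE = 2

FILES_TO_LOCK = ["/bin/bash", "/lib/x86_64-linux-gnu/libc.so.6"]  # examples

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

cached = []
for path in FILES_TO_LOCK:
    with open(path, "rb") as f:
        cached.append(f.read())       # pull the file contents into our memory

if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
    raise OSError(ctypes.get_errno(), "mlockall failed (needs CAP_IPC_LOCK)")
print("locked %d files (%d bytes) in RAM" %
      (len(cached), sum(len(c) for c in cached)))
```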
Note that the size of the RAM doesn’t need to be larger than the disk capacity of older machines (some of the disk was used for swap, /home, etc). But when it is larger it makes it clear that the disk doesn’t need to be accessed for routine storage needs.
I generated the graph with GnuPlot [2]; the configuration files I used are in the directory that contains the images and the command used was “gnuplot command.txt”. I find the GnuPlot documentation difficult to use so I hope that this example will be useful for other people who need to produce basic graphs – I’m not using 1% of the GnuPlot functionality.
The Opera-Mini Dispute
I have just read an interesting article about the Opera browser [1]. The article is very critical of Opera-Mini on the iPhone for many reasons – most of which don’t interest me greatly. There are lots of technical trade-offs that you can make when designing an application for a constrained environment (e.g. a phone with low resolution and low bandwidth).
What does interest me is the criticism of the Opera Mini browser for proxying all Internet access (including HTTPS) through their own servers, which has been getting some traction around the Internet. Now it is obvious that if you have one server sitting on the net that proxies connections to lots of banks then there will be potential for abuse. What apparently isn’t obvious to as many people is the fact that you have to trust the application.
Causes of Software Security Problems
When people think about computer security they usually think about worms and viruses that exploit existing bugs in software and about Trojan horse software that the user has to be tricked into running. These are both significant problems.
But another problem is that of malicious software releases. I think that this is significantly different from Trojan horses because instead of an application which was written for the sole purpose of tricking people (which is closest to the Greek story) you have an application that was written by many people who genuinely want to make a good product, but a single person or small group hijacks it.
Rumor has it that rates well in excess of $10,000 are sometimes paid for previously unknown security vulnerabilities in widely used software. It seems likely that a programmer who was in a desperate financial situation could bolster their salary by deliberately putting bugs in software and then selling the exploits. This would not be a trivial task (making such bugs appear to be genuine mistakes would take some skill) – but there are lots of people who could do it and plausibly deny any accusation other than carelessness. There have been many examples of gambling addicts who have done more foolish things to fund their habit.
I don’t think it’s plausible to believe that every security flaw which has been discovered in widely used software was there purely as the result of a mistake. Given the huge number of programmers who have the skill needed to deliberately introduce a security flaw into the source of a program and conceal it from their colleagues I think it’s quite likely that someone has done so and attempted to profit from it.
Note that even if it could be proven that it was impossible to profit from creating a security flaw in a program that would not be sufficient to prove that it never happened. There is plenty of evidence of people committing crimes in the mistaken belief that it would be profitable for them.
Should We Trust a Proprietary Application or an Internet Server?
I agree with the people who don’t like the Opera proxy idea; I would rather run a web browser on my phone that directly accesses the Internet. But I don’t think that the web browser that is built in to my current smart-phone is particularly secure. It seems usual for a PC to need a security update for the base OS or the web browser at least once a year while mobile phones have a standard service life of two years without any updates. I suspect that there is a lot of flawed code running on smart phones that never gets updated.
It seems to me that the risks with Opera are the single point of failure of the proxy server in addition to the issues of code quality, while the risks with the browser that is on my smart-phone are just the quality of the code. I suspect that Opera may do a better job of updating their software to fix security issues, so this may mitigate the risk from using their proxy.
At the moment China is producing a significant portion of the world’s smart-phones. Some brands like LG are designed and manufactured in China, others are manufactured in China for marketing/engineering companies based in Europe and the US. A casual browse of information regarding Falun Gong makes the character of the Chinese leadership quite clear [2], I think that everything that comes out of China should be considered to be less trustworthy than equivalent products from Europe and the US. So I think that anyone who owns a Chinese mobile phone and rails against the Opera Mini hasn’t considered the issue enough.
I don’t think it’s possible to prove that Opera Mini with its proxy is more or less of a risk than a Chinese smart-phone. I’m quite happy with my LG Viewty [3] – but I wouldn’t use it for Internet banking or checking my main email account.
Also we have to keep in mind that mobile phones are really owned by telephone companies. You might pay for your phone or even get it “unlocked” so you can run it on a different network, but you won’t get the custom menus of your telco removed. Most phones are designed to meet the needs of telcos not users and I doubt that secure Internet banking is a priority for a telco.
Update: You can buy unlocked mobile phones. But AFAIK the Android is the only phone which might be described as not being designed for the needs of the telcos over the needs of the users. So while you can get a phone without custom menus for a telco, you probably can’t get a phone that was specifically designed for what you want to do.
The Scope of the Problem
Mobile phones are not the extent of the problem; I think that anyone who buys a PC from a Chinese manufacturer and doesn’t immediately wipe the hard drive and do a fresh OS install is taking an unreasonable risk. The same thing goes for anyone who buys a PC from a store where it’s handled by low wage employees. I can imagine someone on a minimum income accepting a cash payment to run some special software on every PC before it goes out the door – that wouldn’t be any more difficult or risky than employees copying customer credit card numbers (a reasonably common crime).
It’s also quite conceivable that any major commercial software company could have a rogue employee who is deliberately introducing bugs into its software. That includes Apple. If the iPhone OS was compromised before it shipped then the issue of browser security wouldn’t matter much.
I agree that having the minimum possible number of potential security weak points is a good idea. They should allow Opera Mini users to choose to have HTTPS traffic bypass the proxy. But I don’t think that merely not using a proxy would create a safe platform for Internet banking. In terms of mobile phones most things are done in the wrong way to try and get more money out of the users. Choose whichever phone or browser you want and it will probably still be a huge security risk.
Harald Welte is doing some really good work on developing free software for running a GSM network [4]. But until that project gets to the stage of being widely usable I think that we just have to accept a certain level of security risk when using mobile phones.

Diagnosis
A few weeks ago I was referred to a specialist for the treatment of Carpal Tunnel Syndrome. I first noticed the symptoms in early January; it started happening at night with a partial numbness in the fingers of my left hand. I didn’t think much of it at first as it’s the expected symptom of sleeping in a position that reduces the blood flow. But when it kept happening with my left hand and never happening with my right, and then started getting worse (including happening during the day), I sought medical advice.
The doctor asked me to bend my hand down (as if trying to touch my left elbow with the fingers of my left hand). Within about 10 seconds this caused numbness – this result from bending one’s wrist is a major symptom of CTS.
Treatment
On Thursday I saw a specialist about this; she agreed with the GP’s diagnosis and made a wrist brace for me. She started by cutting off a length of a tube of elastic woven material (similar to a sock) and then cutting a thumb hole, and that became the lining. Then to make the hard part she put a sheet of plastic in an electric saucepan (which had water simmering) until it started to melt and then used a spatula to fish it out. The melting temperature of the plastic wasn’t that high (it was soft at about 50C when she put it on my arm), it wasn’t at all sticky when it was partially melted, and it didn’t seem to conduct heat well.
After wearing the wrist brace non-stop for a few days I have already noticed an improvement. Hopefully I will make a full recovery within a month or so. I will probably have to wear a wrist brace when sleeping for the rest of my life, but that’s no big deal – it’s more comfortable to sleep with a wrist brace than a partially numb hand. I’ve also been prescribed a set of exercises to help remove scar tissue from the nerves. I haven’t done them much yet.
In terms of being annoying, the wrist-brace has 3mm diameter holes in a square grid pattern with a 25mm spacing. This doesn’t let much air through and in warm weather my arm gets sweaty and starts to itch. I’m thinking of drilling some extra holes to alleviate this – the part which makes my arm itch doesn’t need much mechanical strength. The only task which has been really impeded has been making peanut butter sandwiches, maybe it was making sandwiches not typing that caused CTS? ;) In any case I’m not giving up typing but I would consider giving up peanut butter sandwiches.
I really hope to avoid the surgical option, it doesn’t seem pleasant at all.
Other
One final thing to note is that Repetitive Strain Injury (RSI) is entirely different. RSI is a non-specific pain associated with repetitive tasks while CTS is a specific problem in one or two nerves where they go through the wrist. RSI apparently tends to reduce the effective strength of the afflicted limb, while milder cases of CTS (such as mine) cause no decrease in strength – of course a severe case of CTS results in muscle atrophy due to reduced nerve signals, but I shouldn’t ever get that. Many people think that RSI and CTS are essentially the same thing – I used to think that until a few weeks ago when I read the Wikipedia pages in preparation for seeing a doctor about it.
I want to obtain some of the plastic that was used to make my wrist brace, it could be really handy to have something that convenient for making boxes, containers, and supports for various things – among other things it doesn’t appear to generate static. The low melting temperature will prevent certain computer uses (the hot air that comes out of a cooling system for a high-end CPU would probably melt it), but it could probably be used to make the case for an Opteron system with careful design. I’m guessing that the cost of the plastic is a very small portion of the $150 I paid to the specialist so it shouldn’t be that expensive – and I’m sure it would be even cheaper if it wasn’t bought from a medical supply store. If I ever get time to do some work on an Arduino system or something similar then I will definitely try to get some of this plastic for the case.
Also the Wikipedia page has a picture of what appears to be a mass-produced wrist brace. I think that it might be improved by having the picture of the custom one that I wear added to the page. I unconditionally license the picture for free use by Wikipedia and others under the CC by attribution license. So if anyone thinks that a picture of my hand would improve Wikipedia then they can make the change.

The German supermarket chain Aldi recently had a special deal of a “wine-fridge” for $99. A wine fridge really isn’t that specialised for wine; it is merely a fridge that has a heater and is designed for temperatures in the 11C to 18C range. A good wine fridge will have special wood (or plastic if cheap) holders for wine bottles. A particularly cheap wine fridge (such as the one from Aldi) doesn’t even have special holders for wine bottles. However this does make it more convenient for storing other things.
On the hotter days in summer outside temperatures of over 35C are common and it’s possible for an uncommonly hot day to be in excess of 45C. My home air-conditioning system is only able to keep the ambient temperature about 10C cooler than the outside temperature if there are a few hot days in a row.
According to the Wikipedia page the best chocolate is supposed to have type V crystals which melt at 34C [1]. So if the outside temperature is 45C then the temperature inside my home is almost guaranteed to be hot enough to melt chocolate. If I’m not at home (and therefore the air-conditioner is off) during a moderately hot day then it’s common to have a temperature of about 30C inside my house. The Wikipedia page also notes that moving chocolate between extremes of temperature can cause an oily texture and that storing chocolate in the fridge can cause a white discoloration. I’ve experienced these effects and find that they significantly decrease the enjoyment of chocolate.
So now I have a fridge in my computer room set to 16C because according to Wikipedia the ideal temperature range for storing chocolate is from 15C to 17C (the photo was taken shortly after turning it on and it hadn’t reached the correct temperature). Every computer room should have a fridge full of chocolate!
If my stockpile of chocolate reduces I may even put some wine in the fridge (I could probably fit some now if I organised the chocolate in a better way). But that depends on the supermarkets, if they have a special on Green and Black’s “Maya Gold” organic fair-trade chocolate then my fridge will become full again.
Due to the comments on my blog post about Divisive Behavior [1] I’ve been considering the issue of terms of abuse of minority groups – a topic of which racial abuse is only one aspect.
It seems that there are many discussions about which terms are offensive and when they are offensive, most of which are very misguided. Solving this problem would be an almost impossible task, but I have some ideas which may help to improve the situation. I would appreciate any pointers to better ideas in comments or in blog posts that are inspired by this one.
One common mistake seems to be the idea that there are global objective criteria by which a statement can be proven to not be offensive. In any language that is constantly evolving (i.e. any language that’s not dead) this seems impossible. It is particularly difficult with a language like English which is widely used in different countries and cultures. If a member of a minority group claims that you have just offended them then it seems most reasonable to have the default assumption be that you have said or done something which is actually offensive. In such a situation an immediate apology for the misunderstanding should be well accepted, but a debate about whose cultural standards should be used for determining what is offensive probably won’t get a positive reception.
It seems extremely difficult (if not impossible) for a member of the majority group to properly understand what members of a minority group experience. Therefore attempts to understand why certain terms are offensive are likely to be doomed to failure. Sometimes if a certain word is used and a group of people immediately get really angry you just have to accept the fact that it’s not a good word to use. One common example of this is words that are associated with violence – if someone associates a particular term with a threat of serious injury or death then you won’t be able to convince them that it’s not a big deal, but there are more subtle things of a similar nature.
It’s worth trying to understand people and these things can be productively discussed between friends. But a discussion of such things is not viable during the course of a debate. If involved in a debate with someone you really dislike it doesn’t seem like a good strategy to start a meta-discussion about whether they should regard one of your statements as being unreasonably offensive (regarding minority group status) as opposed to the level of offense that is acceptable in debate (something like “that’s the most ridiculous argument I’ve ever heard” may offend the recipient but is not inherently unreasonable).
Words tend to have multiple meanings. Claiming that your intent was significantly different from the way your message was interpreted probably isn’t going to work well unless accompanied by an apology. Even “I’m sorry you were offended” (which is not the best apology) will probably do.
Finally I’m sure that anyone who does Google searches dating back to 2005 (as an arbitrary year that references my previous post) can find examples of me doing things that go against some of these suggestions. I’ve learned things since then, it’s an ongoing process.
Past Sins
Sam Varghese wrote an article about Matthew Garrett’s LCA talk “The Linux community: what is it and how to be a part of it” [1]. On page 2 Sam quotes Martin Krafft as asking how Matthew’s behavior had changed between 2004 and the present, and Sam cites some references for Matthew’s actions in 2005 to demonstrate. I think that this raises the issue of how far back it is reasonable to go in search of evidence of past behavior, something that I think is far more important than the specific details of what Matthew said on mailing lists many years ago and whether he now regrets such email.
If someone did something that you consider to be wrong yesterday and did the same thing five years ago you might consider it to be evidence of a pattern of behavior. If someone’s statements today don’t match their actions yesterday then you should consider it to be evidence of hypocrisy. But if someone did something five years ago which doesn’t match their current statements then in many situations it seems more reasonable to consider it as evidence that they have changed their mind.
Then of course there is the significance of what was done. Flaming some people excessively on some mailing lists is something that can be easily forgiven – and forgotten if Google doesn’t keep bringing it up. But as a counter-example I don’t think that Hans Reiser will be welcome in any part of the Linux community when he gets out of jail.
For the development of the Linux community (and society in general) I think it’s best to not tie people to their past minor mistakes. While it is nice when someone apologises for their past actions, the practical benefits of someone just quietly improving their behavior are almost identical. A particular corner case in this regard is the actions of young people; anyone who was born after about 1980 will have had great access to electronic media for their entire life and will have left a trail. Most people do a variety of silly things when they are young, and the older members of the Linux community were fortunate enough not to have electronic records remaining where Google could find them.
Back to Matthew, I think that if he is to be criticised about such things then evidence that is much more recent than 2005 needs to be used.
Cultural Differences
Sam quotes Matthew as claiming that “the Linux community was largely a Western, English-speaking one, those who participated in it necessarily had to adapt to the norms of this group“.
I don’t believe that there is a single Linux community. There are a number of different communities that are formed around free software, which have significant amounts of overlap. I don’t believe that there’s any reason why a Chinese or Indian Linux mailing list should conform to the same standards as those of an American list. But there will be a trend towards meme propagation through people who are associated with the Linux communities in multiple countries – every time you meet Linux people in another country you are helping to reduce the cultural differences in the Linux community.
Someone from China or India who joins a LUG in Australia will have to adapt to some of the norms of Australian behavior – in the same way as an Australian who migrates to China or India would have to adapt to some local norms.
On page 4 of the discussion Matthew disagrees with Sam’s interpretation [2], maybe Matthew’s opinions on this matter are closer to mine than the way Sam describes them.
Is Division Inherently Bad?
I believe that the word “divisive” is overused. The only way to avoid division is to have everyone agree with the majority, but sometimes the minority will be right. Note that I am using the word “minority” to refer to any group of people who happen to disagree with the majority, among other things that includes people who vote for a political party that isn’t one of the two biggest ones. An entirely separate issue is that of the treatment of “minority groups“, one of the most divisive events in history was the US civil war – it’s good that slavery is outlawed but unfortunate that a war was required to gain that result.
On page 4 of the discussion Matthew says to Sam “Your writing is influenced by members of the Linux community, and in turn it influences the Linux community. The tone of it is entirely relevant to the behavioural standards of the community” [2]. If a community can be easily divided then the real problem probably isn’t the person who triggered a particular division. Also there is the issue that even if you could get a general agreement that certain issues shouldn’t be discussed in certain ways then with the wide range of cultural attitudes and ages of participants you have to expect someone to raise the issue you don’t want raised. Of course it’s impossible to objectively determine whether a division is productive or not, so it doesn’t seem at all viable to have any expectations regarding outsiders not being divisive. Sometimes you just have to deal with the fact that the Internet contains people who disagree with you.
It seems to me that the most divisive issues we face involve people who mostly agree on contentious issues. If someone entirely disagrees with you then it’s easy to ignore them (if you even have a conversation with them), but if they are someone that you communicate with and they are almost “right” in your opinion then there’s the potential for a big argument.
Probably the best way to minimise division in the community is to have the first people who get involved in a dispute take a Rogerian approach [3]. Failing that a good approach is to respond by writing an essay. When an issue is made popular by services such as Twitter that give little explanation then everyone rushes to the barricades.
It seems to me that the unreliability of some blogging platforms is part of the cause of the problem in this regard. I’ve just given up on writing comments on Blogger; I’m not going to write a good comment only to have it eaten by the platform. There are lots of blogs that have problems which discourage the population from writing anything other than a one-sentence response.
The Good that can come from Disputes
On page 5 of the discussion there is a comment from Anirudh – the Indian student who was criticised by Sam (which ended up inspiring part of Matthew’s talk and leading to more disputes) [4]. Here is the start:
I am the person who wrote the ill thought-out post that drew criticism eight months ago. There has been some discussion about that, so I wish to say something.
I am very grateful to Sam Varghese. I say this with utmost sincerity.
Read the rest, it’s educational. I’m sure that there are others who have had similar learning experiences but who don’t want to write about them. I’m sure that there have been many disputes which would appear to the casual observer to have resulted in no good at all, but which would actually have resulted in people learning things and amending their behavior.
The issue of Age
Car rental companies generally don’t do business with men who are less than 25 years old. Life insurance policies don’t offer reasonable rates to males between the ages of about 16 and 25. This is because the insurance companies have good statistical data on the results of the typical actions of people at various ages and know that young men tend to be at a high risk of earning a Darwin Award. The same combination of hormones and life experience that makes a young man a danger on the road will also tend to make him get involved in flame-wars on the net.
If we could figure out how to influence teenagers into being less anti-social then it would be a great achievement. The current young people will become older and more sensible soon enough but will be replaced by a larger number of young people who will do the same things. As things stand I don’t expect the next cohort of young people to learn from Anirudh.
Martin Krafft advocates a model of Internet access where advertisers pay for the Internet connection [1]. The first problem with this idea is the base cost of providing net access – which in most cases is wires to the premises. Every service that involves a cable to someone’s house (Cable TV, Cable/DSL net access, or phone service) has a base minimum monthly fee associated with the expense of installing and maintaining the cable and whatever hardware is at the other end of the cable. For DSL and basic phone service the pair of wires ends at an exchange and takes a share of the DSLAM or a telephone exchange. It seems that the minimum monthly cost for any wire that goes to the house in Australia is about $20. So if an advertiser makes $0.20 per click (which I believe to be higher than the average price for Google Adsense clicks) then the user would have to be clicking on adverts more than 3 times a day. This might be viable if an ISP runs a proxy that inserts adverts into all content (which technically shouldn’t be that difficult). But modifying content in-transit to introduce adverts is something that the net-discrimination crowd can only dream about.
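The arithmetic behind the “more than 3 times a day” figure, using only the numbers assumed above:

```python
# Rough check of the clicks-per-day figure quoted above.
monthly_line_cost = 20.00   # dollars, approximate minimum monthly cost of a wire to the house
revenue_per_click = 0.20    # dollars, assumed advertising revenue per click
clicks_per_month = monthly_line_cost / revenue_per_click   # 100 clicks
print(clicks_per_month / 30)   # about 3.3 clicks per day just to cover the line
```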
3G net access has the lowest per-user costs. Based on current data costs it seems possible for an ISP to run a 3G service with bills for users as low as $15 per annum if they don’t transfer much data. Recouping $15 might be easy but it’s also a small enough amount of money that most users won’t mind paying it. What we really need is to have more competition in the 3G ISP business. When I investigated this issue last month I found that there are few 3G Internet providers in Australia and the cheapest is Dodo at $139 per annum with a 15G limit [2]. With a bit more competition I’m sure that someone would offer a really cheap plan for 1G or 2G of data access in a year.
Martin complains about users paying twice as “users pay to access the network (which is like paying a taxi to get to the market), so that they can visit sites where advertisers make money showing ads to the visitor”. But if the advertisers were to pay then there would be a lot of inefficiency in determining how much each advertiser should pay which would result in extra expenses – and therefore providing the service would cost more. I don’t think that paying for a taxi to get to the market is a bad thing, personally I prefer to save money and use a bus or tram. I think that the best analogy for this is comparing using your own choice of a bus, tram, taxi, etc to get to the market or having the market operator provide taxis for everyone and then make everything really expensive to cover the significant costs of doing so.
Finally there is the issue of video transfer which uses up a lot of bandwidth. According to both industry rumor and traceroute there is a significant mirror of youtube content in Australia. This means that youtube downloads will be cheap local transfers not expensive international transfers. I expect that most multi-national CDNs have nodes in Australia. So for Australia at least video transfer would not be as expensive as many people expect.
I think that to a large extent the concept of having content providers pay to host the content has been tried and found to fail. The Internet model of “you pay for your net access, I pay for mine, and then we transfer data between our computers without limit” has killed off all the closed services. Not only do I think that net-discrimination is a bad idea, I think that it would also turn out to be bad for business.