For best system security you want to apply kernel security patches as soon as possible. For an attacker, gaining root access to a machine is often a two-step process: the first step is to exploit a weakness in a non-root daemon or take over a user account, and the second step is to compromise the kernel to gain root access. So even if a machine is not used for providing public shell access or any other task which involves giving access to potentially hostile people, having a secure kernel is an important part of system security.
One thing that gets little consideration is the effect of applying security updates on overall uptime. Over the last year there have been 14 security-related updates (I count a silent data loss bug along with the security issues) to the main Debian Etch kernel package. Of those 14, it seems that if you don’t use DCCP, NAT for CIFS or SNMP, IA64, or the dialout group, then you will only need to patch for issues 2, 3 (for SMP machines), 4, 5, 7 (sound drivers get loaded on all machines by default), 9, 10, 11, 12, 13, and 14.
This means 11 reboots a year for SMP machines and 10 a year for uni-processor machines. If a reboot takes three minutes (which is an optimistic assumption) then that would be 30 or 33 minutes of downtime a year due to kernel upgrades. In terms of uptime we talk about the number of “nines”, where the ideal is generally regarded as “five nines” or 99.999% uptime. 33 minutes of downtime a year for kernel upgrades means that you get 99.994% uptime (which is “four nines”). If a reboot takes six minutes (which is not uncommon for servers) then it’s 99.987% uptime (“three nines”).
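As a sanity check on those numbers, here is a minimal Python sketch that converts annual downtime into an uptime percentage and a count of “nines”:

```python
import math

MINUTES_PER_YEAR = 365 * 24 * 60  # 525600

def uptime_nines(downtime_minutes):
    """Return the uptime percentage and the number of "nines" for a
    given amount of downtime per year."""
    unavailable = downtime_minutes / MINUTES_PER_YEAR
    return 100 * (1 - unavailable), math.floor(-math.log10(unavailable))

# 10 or 11 reboots at 3 minutes each, and 11 reboots at 6 minutes each
for reboots, minutes in [(10, 3), (11, 3), (11, 6)]:
    pct, nines = uptime_nines(reboots * minutes)
    print(f"{reboots} reboots of {minutes} min: {pct:.3f}% uptime, {nines} nines")
```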
While it doesn’t seem likely to affect the number of “nines” you get, not using SMP has the potential to avoid future security issues. So it seems that when using Xen (or another virtualisation technology), assigning only one CPU to the DomUs that don’t need more could improve uptime for them.
For Xen Dom0s which don’t have local users or daemons, and don’t use DCCP, NAT for CIFS or SNMP, wireless, CIFS, JFFS2, PPPoE, bluetooth, H.323 or SCTP connection tracking, only issue 11 applies. However for “five nines” you need to have 5 minutes of downtime a year or less. It seems unlikely that a busy Xen server can be rebooted in 5 minutes, as all the DomUs need to have their memory saved to disk (writing out the data and reading it back in after a reboot will probably take at least a couple of minutes) or they need to be shut down and booted again after the Dom0 is rebooted (which is a good procedure if the security fix affects both Dom0 and DomU use), and such shutdowns and reboots of DomUs will take a lot of time.
Based on the past year, it seems that a system running as a basic server might get “four nines” if configured for a fast boot (it’s surprising that no-one seems to be talking about recent improvements to the speed of booting as high-availability features), and if the boot is slower then you are looking at “three nines”. For a Xen server, unless you have some sort of cluster, it seems that “five nines” is unattainable due to reboot times if there is one issue a year, but “four nines” should be easy to get.
Now while the 14 issues over the last year for the kernel seems likely to be a pattern that will continue, the one issue which affects Xen may not be representative (small numbers are not statistically significant). I feel confident in predicting a need for between 5 and 20 kernel updates next year due to kernel security issues, but I would not be prepared to bet on whether the number of issues affecting Xen will be 0, 1, or 4 (it seems unlikely that there would be 5 or more).
I will write a future post about some strategies for mitigating these issues.
Here is my summary of the security updates to the Debian kernel package linux-image-2.6.18-6-686 (the Etch kernel) according to its changelog; they are not in chronological order, but in the order of the changelog file:
Today I received a Dell PowerEdge T105 for use by a client. My client had some servers for development and testing hosted in a server room at significant expense. They also needed an offsite backup of critical data. So I suggested that they buy a cheap server-class machine, put it on a fast ADSL connection at their home, and use Xen DomUs on that machine for development, testing, and backup. My client liked the concept but didn’t like the idea of having a server in his home.
So I’m going to run the server from my home. I selected a Dell PowerEdge tower system because it’s the cheapest server-class machine that can be purchased new. I have a slight preference for HP hardware, but HP gear is probably more expensive and they are not a customer focussed company (they couldn’t even give me a price).
So exactly a week after placing my order I received my shiny new Dell system, and it didn’t work. I booted a CentOS CD and ran “memtest” and the machine performed a hard reset. When it booted again it informed me that the event log had a message, and the message was “Uncorrectable ECC Error” with extra data of “DIMM 2,2”. While it sucks quite badly to receive a new machine that doesn’t work, that’s about the best result you can hope for when you have a serious error on the motherboard or the RAM. A machine without ECC memory would probably just randomly crash every so often and maybe lose data (see my previous post on the relative merits of ECC RAM and RAID [1]).
So I phoned up Dell to get technical support (it’s a pity that their “Packing Slip” was a low-quality photocopy which didn’t allow me to read their phone number, and that the shipping box also didn’t include the number, so I had to look them up on the web). Once we had established that by removing the DIMMs and reinserting them I had proved that there was a hardware fault, they agreed to send out a technician with a replacement motherboard and RAM.
I’m now glad that I bought the RAM from Dell. Dell’s business model seems to revolve around low base prices for hardware and then extremely high prices for extras; for example Dell sells 1TB SATA disks for $818.40 while MSY [1] has them for $215 or $233 depending on brand.
When I get the machine working I will buy two 1TB disks from MSY (or another company with similar prices). Not only does that save some money but it also means that I can get different brands of disk. I believe that having different brands of hard disk in a RAID-1 array will decrease the probability of having them both fail at the same time.
One interesting thing about the PowerEdge T105 is that Dell will only sell two disks for it, but it has four SATA connectors on the motherboard; one is used for a SATA DVD drive, so it would be easy to support three disks. Four disks could be installed if a PCIe SATA controller was used (one in the space for a FDD and another in the space for a second CD/DVD drive), and if you were prepared to go without a CD/DVD drive then five internal disks could probably work. But without any special hardware the space for a second CD/DVD drive is just begging to be used for a third hard disk; most servers only use the primary CD/DVD drive for installing the OS and I expect that the demand for two CD/DVD drives in a server is extremely low. Personally I would prefer it if servers shipped with external USB DVD drives for installing the OS. Then when I install a server room I could leave one or two drives there in case a system recovery is needed and use the rest for desktop machines.
One thing that they seem to have messed up is the lack of a filter for the air intake fan at the front of the case. The Opteron CPU has a fan that’s about 11cm wide which sucks in air from the front of the machine, in front of that fan there is a 4cm gap which would nicely fit a little sponge filter. Either they messed up the design or somehow my air filter got lost in transit.
Incidentally, if you want to buy from Dell in Australia then you need to configure your OS to not use ECN (Explicit Congestion Notification) [2], as the Dell web servers used for sales reject all connections from hosts with ECN enabled. It’s interesting that the web servers used for providing promotional information work fine with ECN, and it’s only if you want to buy that it bites you.
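On Linux, ECN for TCP is controlled by the net.ipv4.tcp_ecn sysctl. Here is a minimal sketch of checking and disabling it (my assumption is that you would run this as root and use /etc/sysctl.conf to make the change permanent):

```python
# Check and disable ECN by writing the net.ipv4.tcp_ecn sysctl directly.
ECN = "/proc/sys/net/ipv4/tcp_ecn"

with open(ECN) as f:
    print("current tcp_ecn setting:", f.read().strip())

with open(ECN, "w") as f:  # needs root
    f.write("0\n")         # 0 = don't request ECN on outgoing connections
```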
But in spite of these issues, I am still happy with Dell overall. Their machine was DOA, that happens sometimes and the next day service is good (NB I didn’t pay extra for better service). I expect that they will fix it tomorrow and I’ll buy more of their gear in future.
Update: I forgot to mention that Dell shipped the machine with two power cables. While two power cables is a good thing for the more expensive servers that have redundant PSUs, for a machine with only one PSU it’s a bit of a waste. For some time I’ve been collecting computer power cables faster than I’ve been using them (due to machines dying and due to clients who want machines but already have spare power cables). So I’ve started giving them away at meetings of my local LUG. At the last meeting I gave away a bag of power cables and I plan to keep taking cables to the meetings until people stop accepting them.
I have just watched an interesting lecture by Steven Levitt about car safety for children in the 2-6 age range [1]. The evidence he presents shows that the benefits for children in that age range are at best insignificant and that in some corner cases (EG rear impacts) the child seat may give a worse result than an adult seat belt!
He advocates a 5-point harness [2] for children in the 2-6 age range that is based on a standard adult seat, and seems to be advocating a child “booster seat” integrated into the adult seat (which approximates the booster seats offered by some recent cars such as the VW Passat). He has a picture of a child in a child-sized 5-point harness to illustrate his point. But one thing that should be considered is the benefit of a 5 point harness for adults. Race car drivers use 5 point harnesses; I wonder how the probability of a race car driver dying during the course of their employment compares with the probability of an average adult dying while doing regular driving. I also wonder how a 5 point harness compares to a three point harness with a pre-tensioner; it seems quite possible that a 5 point harness would be cheaper and safer than the 3 point harness with pre-tensioner that is found in the most expensive cars manufactured in the last few years.
He believes (based on tests with crash-test dummies) that part of the problem is that the child seat will move in an accident (it’s attached to a soft seat). It seems that one potential solution to this is to have child seats that firmly attach to some solid part of a vehicle. I had previously suggested that child seats which replace existing seats as an option from the manufacturer would be a good idea [3].
But there is a good option for making better child seats for existing vehicles. It is becoming common in the “people mover” market segment to design vehicles with removable seats. For example the Kia Carnival has three seats in the middle row which are removable and which attach to four steel bars in the floor. It should not be difficult to design a child seat which attaches to those bars and could therefore be plugged in to a Carnival in a matter of minutes. The Carnival is designed to have the mid row seats installed or removed easily and safely by someone who is untrained, while for comparison it is recommended that a regular child seat should only be installed by a trained professional (IE your regular mechanic can’t do it).
I’ve just been considering when it’s best to drive and when it’s best to take public transport to save money. My old car (a 1999 VW Passat) uses 12.8L/100km, which at $1.65 per liter means 21.1 cents per km on fuel. A new set of tires costs $900 and, assuming that they last 20,000km, will cost 4.5 cents per km. A routine service every 10,000km costs about $300, so that’s another 3 cents per km. While it’s difficult to estimate the cost per kilometer of replacing parts that wear out, it seems reasonable to assume that over 100,000km of driving at least $20,000 will be spent on parts and the labor required to install them; this adds another 20 cents per km.
The total then would be 48.6 cents per km. The tax deduction for my car is 70 cents per km of business use, so if my estimates are correct then the tax deductions exceed the marginal costs of running a vehicle (the costs of registration, insurance, and asset depreciation however make the car significantly more expensive than that – see my previous post about the costs of owning a small car for more details [1]). So for business use the marginal cost after tax deductions are counted is probably about 14 cents per km (presumably because a 70 cent deduction at a marginal tax rate near 50% is worth about 35 cents, and 48.6 minus 35 is roughly 14).
Now a 2 hour ride on Melbourne’s public transport costs $2.76 (if you buy a 10 trip ticket). For business use that’s probably equivalent to the cost of 20km of driving. The route I take when driving to the city center is about 8km; that gets me to the nearest edge of the CBD (Central Business District) and doesn’t count the driving needed to find a place to park. This means the absolute minimum distance I would drive when going to the CBD would be 16km. The distance I would drive on a return trip to the furthest part of the CBD would be almost exactly 20km. So on a short visit to the central city area I might save money by using my car if it’s a business trip and I tax-deduct the distance driven. A daily ticket for public transport is equivalent to two 2 hour tickets (if you use a 10 trip ticket outside the two hour period it becomes a daily ticket and uses a second credit). If I could park my car for an out of pocket expense of less than $2.76 then I could possibly save money by driving (while I can tax-deduct private parking, it’s so horribly expensive that it would cost at least $5 after deductions are counted). There were some 4 hour public parking spots that cost $2.
So it seems that for a basic trip to the CBD it’s more expensive to use a car than to take a tram, even when car expenses are tax deductible. For personal use a 5.7km journey would cost as much as a 2 hour ticket for public transport and an 11.4km journey would cost as much as a daily ticket. The fact that public transport is the economical way to travel for such short distances is quite surprising. In the past I had thought of using a tram ticket as an immediate cost while considering a short car drive as costing almost nothing (probably because the expense comes days later for petrol and years later for servicing the car).
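The arithmetic above is simple enough to check with a short script; here is a sketch using the figures from this post:

```python
# Marginal cost per km of running the Passat, using the figures above.
fuel = 12.8 / 100 * 1.65   # 12.8L/100km at $1.65 per liter = $0.211 per km
tires = 900 / 20000        # $900 per set lasting 20,000km = $0.045 per km
service = 300 / 10000      # $300 per 10,000km service = $0.030 per km
parts = 20000 / 100000     # $20,000 in parts over 100,000km = $0.200 per km
cost_per_km = fuel + tires + service + parts
print(f"marginal cost: {cost_per_km * 100:.1f} cents per km")  # 48.6

# The distance at which driving costs as much as a public transport ticket.
for ticket, fare in [("2 hour ticket", 2.76), ("daily ticket", 5.52)]:
    print(f"{ticket}: break even at {fare / cost_per_km:.1f}km")
```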
Also while there has been a lot of media attention recently on petrol prices, it seems that for me at least petrol is still less than half the marginal cost of running a car. Cars are advertised on the basis of how little fuel they use to save money, but cars that require less servicing might actually save more money. There are many cars that use less fuel than a VW Passat, and also many cars that are less expensive to repair. It seems that perhaps the imported turbo-Diesel cars which are becoming popular due to their fuel use may actually be more expensive to run than locally manufactured small cars which have cheap parts.
Update: Changed “Km” to “km” as suggested by Lars Wirzenius.
Paul Graham has recently published an essay titled How To Disagree [1]. One form that he didn’t mention is to claim that a disagreement is a matter of opinion. Describing a disagreement about an issue which can be proved one way or the other as “a matter of opinion” is a commonly used method of avoiding the need to offer any facts or analysis.
Sam Varghese published an article about the Debian OpenSSL issue and quoted me [2].
The Basic AI Drives [3] is an interesting paper about what might motivate an AI and how AIs might modify themselves to better achieve their goals. It also has some insights into addiction and other vulnerabilities in human motivation.
It seems that BeOS [4] is not entirely dead. The Haiku OS project aims to develop an open source OS for desktop computing based on BeOS [5]. It’s not nearly usable for end-users yet, but they have VMware snapshots that can be used for development.
On my Document Blog I have described how to debug POP problems with the telnet command [6]. Some users might read this and help me fix their email problems faster. I know that most users won’t be able to read this, but the number of people who can use it will surely be a lot greater than the number of people who can read the RFCs…
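For those who would rather script the test than type at a telnet prompt, here is a minimal sketch of the same checks using Python’s standard poplib module (the server name and credentials are placeholders):

```python
# Minimal POP3 connection test, doing what the telnet session does by hand.
import poplib

server = "pop.example.com"  # placeholder: your mail server
user = "testuser"           # placeholder credentials
password = "testpass"

conn = poplib.POP3(server, 110, timeout=10)
print(conn.getwelcome().decode())    # the +OK banner shows the server is alive
print(conn.user(user).decode())      # +OK means the user name was accepted
print(conn.pass_(password).decode()) # +OK means the login succeeded
count, size = conn.stat()            # number of messages and total size in bytes
print(f"{count} messages, {size} bytes")
conn.quit()
```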
Singularity tales is an amusing collection of short stories [7] about the Technological Singularity [8].
A summary of the banana situation [9]. Briefly describes how “banana republics” work and the fact that a new variety of the Panama disease is spreading through banana producing countries. Given the links between despotic regimes and banana production it’s surprising that no-one is trying to spread the disease faster. Maybe Panama disease could do for South America what the Boll weevil did for the south of the US [10].
Jeff Dean gives an interesting talk about the Google server architecture [11]. One thing I wonder about is whether they have experimented with increasing the chunk size over the years. It seems that the contiguous IO performance of disks has been steadily increasing while the seek performance has stayed much the same, and the amount of RAM you can get for any given amount of money has increased dramatically over the last few years. So it seems that now it’s possible to read larger chunks of data in the same amount of time and more easily store such large chunks in memory.
Solving Rubik’s Cube by treating disk as RAM: Gene Cooperman gave an interesting talk at Google about how he proved that Rubik’s Cube can be solved in 26 moves and how treating disk as RAM was essential for this. The Google talk is on YouTube [1]. I recommend that you read the ACM paper he wrote with Daniel Kunkle before watching the talk. Incidentally, given the resolution of YouTube it would have been good if the notes had fewer than 10 lines per screen.
Here is the main page for the Rubik’s Cube project with source and math [2]; note that I haven’t been interested enough to read the source but I’m including the link for reference.
The main concept is that modern disks can deliver up to 100MB/s for contiguous IO (I presume that’s from the outer tracks; I suspect that the inner tracks wouldn’t deliver that speed). Get 50 disks running at the same speed and you get 5GB/s for contiguous IO, which is a typical speed for RAM. Of course that RAM speed is for a single system, while getting 50 disks running at that speed will require either a well-tuned system from SGI (who apparently achieved such speeds for a single process on a single system many years ago – but I can’t find a reference) or 5+ machines from anyone else. The configuration that Gene describes apparently involves a network of machines with one disk each; he takes advantage of hardware purchased for other tasks (where the disks are mostly wasted).
I believe that SGI sells Altix machines which can have enough RAM to store all that data. It is NUMA RAM, but even the “slow” access to RAM on another NUMA node should be a lot faster than disk in most cases for sequential access, and when there are seeks the benefits of NUMA RAM over disk will be dramatic. Of course the cost of a large NUMA installation is also significant, while a set of 50 quad-core machines with 500G disks is affordable by some home users.
I recently joined the community based around the TED conference [1]. The TED conference is expensive ($6000US) and has a long waiting list (the 2009 conference is sold out) so it seems quite unlikely that I will ever attend one. But signing up to the web site is easy and might offer some benefit.

One thing that interested me was that part of the sign-up process requests that you select up to 10 words from the list above to describe yourself. Some of the words seem almost mandatory for anyone who is interested in what TED has to offer (I find it difficult to imagine someone declaring that they are not an “activist” or a “change agent” while wanting to be involved with TED in any way). The range of words also seems quite strange; there are some professions mixed with educational status, marital status, and religion. The way it is laid out would tend to encourage people to decide which aspects of their life matter most: is career, marital status, or religion more important?
Given the nature of TED I wonder whether they intentionally did a bad job of that part of the site design to encourage people to think about these issues.
It seems to me that a better way of doing this would be to provide a few suggestions and allow people to fill in text fields with their own values. Even defining marital status can require many choices, and there is no limit to the number of religions and careers. If you try to make a comprehensive list then you will end up doing what British Airways did with their frequent flyer membership application page [2]. Even disregarding the choices of spelling (EG Admiral vs Admiraal and Brig Gen vs Brig General vs Brigadier General) the British Airways list is unreasonably long, and I doubt that anyone who deserves the title “Her Majesty” or “His Holiness” is going to be interested in frequent flyer points.
Also I wonder which of the entries in the TED list would be most commonly accepted by the free software community. It seems that activist and technologist would be quite popular.
Here is the list in text form for those who can’t get the picture above:
If you want a reliable network then you need to determine an appropriate level of redundancy. When servers were small and there was no well accepted virtual machine technology there were always many points at which redundancy could be employed.
A common example is a large mail server. You might have MX servers to receive mail from the Internet, front-end servers to send mail to the Internet, database or LDAP servers (with one server accepting writes and redundant slave servers allowing clients to read data), and some back-end storage. The back-end storage is generally going to lack redundancy to some degree (all the common options involve mail being stored in one location). So the redundancy would start with the routers which direct traffic to redundant servers (typically a pair of routers in a failover configuration – I would use OpenBSD boxes running CARP if I was given a choice in how to implement this [1]; in the past I’ve used Cisco devices).
The next obvious place for redundancy is for the MX servers (it seems that most ISPs have machines with names such as mx01.example.net to receive mail from the Internet). The way that MX records are used in the DNS means that there is no need for a router to direct traffic to a pair of servers, and even a pair of redundant routers is another point of failure so it’s best to avoid them where possible. A smaller ISP might have two MX machines that are used for both sending outbound mail from their users (which needs to go through a load-balancing router) as well as inbound mail. A larger ISP will have two or more machines dedicated to receiving mail and two or more machines dedicated to sending mail (when you scan for viruses on both sent and received mail it can take a lot of compute power).
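As an aside, the reason MX records give redundancy without a router is that sending mail servers sort the MX records by preference and try each host in turn. Here is a minimal sketch of that logic (the host names are made up):

```python
# How a sending MTA uses MX records: sort by preference, try hosts in order.
import random
import smtplib

# Made-up MX records for example.net; equal preferences share the load.
mx_records = [(10, "mx01.example.net"), (10, "mx02.example.net"),
              (20, "backup-mx.example.net")]

def deliver(message, sender, recipient):
    random.shuffle(mx_records)  # spread the load between equal preferences
    for preference, host in sorted(mx_records, key=lambda mx: mx[0]):
        try:
            with smtplib.SMTP(host, timeout=30) as smtp:
                smtp.sendmail(sender, recipient, message)
            return host         # delivered - no load-balancing router needed
        except (OSError, smtplib.SMTPException):
            continue            # host down or unhappy, try the next MX
    raise RuntimeError("no MX host accepted the mail, queue and retry later")
```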
Now the database or LDAP servers used for storing user account data are another possible place for redundancy. While some database and LDAP servers support multi-master operation, a more common configuration is to have a single master and multiple slaves which are read-only. This means that you want to have more slaves than are really required so that you can lose one without impacting the service.
There are several ways of losing a server. The most obvious is a hardware failure. While server-class machines will have redundant PSUs, RAID, ECC RAM, and a generally high quality of hardware design and manufacture, they still have hardware problems from time to time. Then there are a variety of software related ways of losing a server, most of which stem from operator error and bugs in software. Of course the problem with operator errors and software bugs is that they can easily take out all redundant machines. If an operator mistakenly decides that a certain command needs to be run on all machines they will often run it on all machines before realising that it causes things to go horribly wrong. A software bug will usually be triggered by the same thing on all machines (EG I’ve had bad data written to a master LDAP server cause all slaves to crash, and had a mail loop between two big ISPs take out all front-end mail servers).
Now if you have a mail server running on a virtual platform such that the MX servers, the mail store, and the database servers all run on the same hardware then redundancy is very unlikely to alleviate hardware problems. It’s difficult to imagine a situation where a hardware failure takes out one DomU while leaving others running.
It seems to me that if you are running on a single virtual server there is no benefit in having redundancy. However there is benefit in having an infrastructure which supports redundancy. For example if you are going to install new software on one of the servers there is a possibility that the software will fail. Doing upgrades and then having to roll them back is one of the least pleasant parts of sys-admin work, not only is it difficult but it’s also unreliable (new software writes different data to shared files and you have to hope that the old version can cope with them).
To implement this you need to have a Dom0 that can direct traffic to multiple redundant servers for services which only have a single server. Then when you need to upgrade (be it the application or the OS) you can configure a server on the designated secondary address, get it running, and then disable traffic to the primary server. If there are any problems you can direct traffic back to the primary server (which can be done much more quickly than downgrading software). Also if configured correctly you could have the secondary server be accessible from certain IP addresses only. So you could test the new version of the software using employees as test users while customers use the old version.
One advantage of a virtual machine environment for load balancing is that you can have as many virtual Ethernet devices as you desire and you can configure them using software (without changing cables in the server room). A limitation on the use of load-balancing routers is that traffic needs to go through the router in both directions. This is easy for the path from the Internet to the server room and the path from the server room to the customer network. But when going between servers in the server room it’s a problem (which is not insurmountable, merely painful and expensive). Of course there will be a cost in CPU time for all the extra routing. If instead of having a single virtual Ethernet device for all redundant nodes you have a virtual Ethernet device for every type of server and use the Dom0 as a router, you will end up doubling the CPU requirements for networking without even considering the potential overhead of the load balancing router functionality.
Finally there is a significant benefit in virtual machines for reliability of services: the ability to perform snapshot backups. If you have sufficient disk space and IO capacity you could have a snapshot of your server taken every day and store several old snapshots. Of course doing this effectively would require some minor changes to the configuration of machines to avoid unnecessary writes; this would include not compressing old log files and using a ram disk for /tmp and any other filesystem with transient data. When you have snapshots you can then run filesystem analysis tools on the snapshots to detect any silent corruption that may be occurring, with the potential benefit of discovering corruption before it gets severe (but I have yet to see a confirmed report of this saving anyone). Of course similar snapshot facilities are available on almost every SAN and on many NAS devices, but there are many sites that don’t have the budget to use such equipment.
It’s a common practice when hosting email or web space for large numbers of users to group the accounts by the first letter. This is due to performance problems on some filesystems with large directories, and due to the fact that often a 16-bit signed integer is used for the hard link count, so that it is impossible to have more than 32767 subdirectories.
I’ve just looked at a system I run (the Bluebottle anti-spam email service [1]) which has about half a million accounts and counted the incidence of each first letter. It seems that S is the most common at almost 10%, with M and A not far behind. Most of the clients have English as their first language; naturally the distribution of letters would be different for other languages.
Now if you were to have a server with less than 300,000 accounts then you could probably split them based on the first letter. If there were more than 300,000 accounts then you would risk having too many account names starting with S. See the table below for the incidence of each first letter.
The two letter prefix MA comprised 3.01% of the accounts. So if faced with a limit of 32767 sub-directories then if you split by two letters then you might expect to have no problems until you approached 1,000,000 accounts. There were a number of other common two-letter prefixes which also had more than 1.5% of the total number of accounts.
Next I looked at the three character prefixes and found that MAR comprised 1.06% of all accounts. This indicates that splitting on the first three characters will only save you from the 32767 limit if you have 3,000,000 users or less.
Finally I observed that the four character prefix JOHN (which incidentally is my middle name) comprised 0.44% of the user base. That indicates that if you have more than 6,400,000 users then splitting them up among four character prefixes is not necessarily going to avoid the 32767 limit.
It seems to me that the benefits of splitting accounts by the first characters are not nearly as great as you might expect. Having directories for each combination of the first two letters is practical; I’ve seen directory names such as J/O/JOHN or JO/JOHN (or J/O/HN or JO/HN if you want to save directory space). But it becomes inconvenient to have J/O/H/N, and the form JOH/N will have as many as 17,576 subdirectories for the first three letters, which may be bad for performance.
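Here is a minimal Python sketch of the JO/HN style of layout, along with the worst-case arithmetic from above (the percentages are the ones measured on this system):

```python
# Split an account name into a storage path, "JO/HN" style.
def account_path(account, split=2):
    return f"{account[:split]}/{account[split:]}"

print(account_path("john", 1))  # j/ohn
print(account_path("john", 2))  # jo/hn

# Roughly how many accounts each split can hold before the most common
# prefix hits the 32767 hard link limit on subdirectories.
LINK_LIMIT = 32767
worst_prefix = {"s": 9.85, "ma": 3.01, "mar": 1.06}  # percent of all accounts
for prefix, pct in worst_prefix.items():
    limit = int(LINK_LIMIT / (pct / 100))
    print(f"split on {len(prefix)} letters: '{prefix}' overflows at {limit:,} accounts")
```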
This issue is only academic in the sense that most sys-admins won’t ever touch a system with more than a million users. But in terms of how you would provision so many users, in the past the limits of server hardware were reached long before these issues. For example in 2003 I was running some mail servers on 2RU rack mounted systems with four disks in a RAID-5 array (plus one hot-spare) – each server had approximately 200,000 mailboxes. The accounts were split based on the first two letters, but even if they had been split on only one letter it would probably have worked. Since then performance has improved in all aspects of hardware. Instead of a 2RU server having five 3.5″ disks it will have eight 2.5″ disks – and as a rule of thumb increasing the number of disks tends to increase performance. Also the CPU performance of servers has dramatically increased; instead of having two single-core 32bit CPUs in a 2RU server you will often have two quad-core 64bit CPUs – more than four times the CPU performance. 4RU machines can have 16 internal disks as well as four CPUs and therefore could probably serve mail for close to 1,000,000 users.
While for reliability it’s not the best idea to have all the data for 1,000,000 users on internal disks in a single server (which could be the topic of an entire series of blog posts), I am noting that it’s conceivable to do so and provide adequate performance. Also of course if you use one of the storage devices that supports redundant operation (exporting data over NFS, iSCSI, or Fibre Channel) then, if things are configured correctly, you can achieve considerably more performance and therefore have a greater incentive to have the data for a larger number of users in one filesystem.
Hashing directory names is one possible way of alleviating these problems. It would be a little inconvenient for sys-admin tasks as you would have to hash the account name to discover where it was stored, but I guess you could have a shell script or alias to do this.
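Here is a minimal sketch of what a hashed layout could look like; the same function doubles as the lookup tool for sys-admin work:

```python
# Spread accounts evenly by hashing the name instead of using its first letters.
import hashlib

def hashed_path(account, levels=2):
    digest = hashlib.md5(account.encode()).hexdigest()
    # two hex characters per level gives 256 subdirectories per level
    parts = [digest[2 * i:2 * i + 2] for i in range(levels)]
    return "/".join(parts + [account])

# prints two hash-derived directory names followed by the account name,
# evenly distributed regardless of how common the first letters are
print(hashed_path("john"))
```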
Here is the list of frequency of first letters in account names:
First Letter | Percentage
a | 7.65
b | 5.86
c | 5.97
d | 5.93
e | 2.97
f | 2.85
g | 3.57
h | 3.19
i | 2.21
j | 6.09
k | 3.92
l | 3.91
m | 8.27
n | 3.15
o | 1.44
p | 4.82
q | 0.44
r | 5.04
s | 9.85
t | 5.20
u | 0.85
v | 1.90
w | 2.40
x | 0.63
y | 0.97
z | 0.95
There has been a lot of talk recently about the cost of petrol; Colin Charles is one of the few people to consider the issue of wages in this discussion [1]. Unfortunately almost no-one seems to consider the overall cost of running a vehicle.
While I can’t get the figures for Malaysia (I expect Colin will do that) I can get them for Australia. First I chose a car that’s cheap to buy, reasonably fuel efficient (small) and common (cheap parts from the wreckers) – the Toyota Corolla seemed like a good option.