Used Car Prices

There is an interesting article in The Age about the effect of petrol prices on the poorer people in Melbourne [1].

The article claims that people are unable to sell large old cars and buy smaller cars. To investigate that claim I did a price search on Ford Falcons and Holden Commodores on the web site www.drive.com.au. The Ford Falcon and Holden Commodore were for a long time the two leading marques of cars sold in Australia – both of them large cars. It seems that if I wanted to buy a Falcon that is less than 20 years old with an engine of 4.0L (or bigger) then I would have many choices with list prices under $2,500, including some cars in that price range which are as little as 10 years old (the average age of a car that's registered for use in Australia). For Commodores there seems to be less choice: a few with 4.0L engines that are just over 10 years old are advertised for just under $5,000, and a significant number are advertised in the $5,000 to $7,500 range. I don't know whether the higher asking prices for Commodores are due to greater optimism by the owners or to greater demand. One thing to keep in mind is that due to the low price of advertising on the web site and the duration of the advert (which permits changing the price at any time), the sensible strategy is to start with an optimistic price and then gradually drop it if there is little interest from buyers.

There are also some Falcons being auctioned on eBay that are going fairly cheaply; one example is less than $6,000 for a 2000 Falcon with only a few minutes to go.

The HSV (Holden Special Vehicles) cars are listed on Drive as a different make (HSV is just Holden's range of faster vehicles), and surprisingly their prices are quite strong. There is only one vehicle on offer for less than $5,000, and only a few for less than $10,000.

Now when it comes to buying a small car, there are a number of Toyota Corollas on sale on eBay; two 1997 models are on sale for just under $9,000 and just under $10,000. It seems that you could sell a 2000 model Ford Falcon on eBay and not receive enough money to buy a 1997 Toyota Corolla!

Of the Corollas advertised on Drive the majority seem to be advertised for around $15,000, but the volume on sale is great enough that a significant minority are advertised for lower prices. There are 173 Corollas advertised for between $2,500 and $5,000, and it might be possible to find one of those that has no significant problems. So it seems that eBay is not the place to buy a Corolla!

So it seems that the main premise of the article (that you can't sell a second-hand large car and buy a small car) is correct. If you were to sell a 1990s Falcon or Commodore and buy a Corolla of the same age (which cost about half as much as a high-end Falcon or Commodore when new) then you would be lucky to get more than half the Corolla purchase price. Then of course there's a 4% "stamp duty" tax to pay and the risk that a second-hand car you buy might have some hidden problem (the cheaper cars are unlikely to have been well serviced).

On Talking to Police

The lecture by Professor James Duane about why you should not talk to the police (in the US at least) is doing the rounds at the moment. The Google Video site doesn't work for me, so I downloaded it from YouTube with the following video IDs:
part 1 [rVq6N0xAEEM]
part 2 [-Z0bpj3EEHI]
part 3 [44-GSZofXIE]
part 4 [zSvxiaO-TG8]
part 5 [gzYHnWrqfWg]
part 6 [BErNzdOnWGY]

The first thing that struck me about this is that it's the first time I've ever seen someone clearly state the problem with the excessive number and complexity of laws: it's impossible to know whether you are breaking the law. This is obviously wrong, and it should be a legislative priority to reduce both the number of laws and their complexity. Requiring evidence for every law as to whether it helps people or helps preserve the state would be a good start.

The excessive number of laws is not just a problem due to the risk of accidentally breaking a law, but also because a malicious cop could arrest anyone they wished – it would merely be a matter of following them for long enough. In the second half of the lecture Officer George Bruch from the Virginia Beach Police Department gives his side of the story. At one point he cites the example of the road laws, where he can follow a car and find a legitimate reason for booking the driver if he wishes.

Virginian law allows the police to destroy primary evidence. If they tape an interview then they can transcribe it to a written record and destroy the original tape. If the transcription removes some content that carries meaning then the defendant will probably suffer for it. It's often stated that on the net you have to make extra effort to make your meaning clear, as jokes can't be recognised by tone of voice. It seems that the same effort would be needed in a police interview (but without the possibility, which you have in email, of going back and saying "sorry, I was misunderstood").

It’s also legal for police to lie to suspects. While I can imagine some situations where this could be reasonable, it seems that at a minimum there should be some protection for child suspects against being lied to by police (there have been some travesties of justice stemming from children trusting police in an interview room).

It seems that if someone doesn't get a good lawyer and refuse to talk to the police then they are likely to be convicted regardless of whether they are guilty or innocent. Officer Bruch says that he tries to avoid interviewing innocent people, so I guess whether someone ends up in jail will depend to some extent on the opinion of a cop who may or may not interview them.

While I guess a high rate of success at securing convictions is in most cases a good thing, the fact that convicting an innocent person is the absolute best way of letting a criminal escape justice seems to be forgotten.

Another issue is the fact that a witness may have reason to believe that they could be a suspect. The advice that a suspect should never talk to a police officer seems to logically imply that intelligent people will be hesitant about providing witness statements in situations where they may be a suspect.

I wonder how the legal system in Australia compares to the US in regard to these issues. I know that we have far too many laws, and too many complex laws which cannot be understood and obeyed by a reasonable person.

Upgrading SE Linux Policy

When I first packaged the SE Linux policy for Debian the only way to adjust the policy was to edit the source files and recompile. Changes that you might desire often involved changing macros: while it would have been theoretically possible to just tack a few type definitions and allow rules on the end, you usually wanted to change a macro so that the change applied all through the policy. To deal with that I had the policy source installed on all machines, and upgrading the policy package would compile it into a binary form and load it into the kernel.

That raised the issue of merging user changes with changes from a new policy package. For most configuration files on a Unix system you can just leave any files that were modified by the user; not taking the new default configuration might cause the user to miss out on some new features, but presumably they were happy with the way it worked in the past. However due to inter-dependencies this wasn't possible for SE Linux: if one file was not upgraded due to user changes while other files related to it were, the result could be a compile failure.

Another issue was the fact that a newer version of the policy might permit operations that the sys-admin did not desire and therefore not meet their security goals, or it might not permit operations that are essential to the operation of the system and interrupt service.

To solve this I wrote a script that prompted for upgrades to policy source files and allowed the sys-admin to choose which files to upgrade. This worked reasonably well in the early days when the number of files was small, but as the policy increased in size upgrading became increasingly painful, with as many as 100 questions being asked.

The solution to this (as implemented in Fedora Core 5, Debian/Etch, and newer distributions) was to have binary policy modules that maintain their dependencies. Now there are binary policy modules which can be loaded at will (the default install for Debian only installs modules that match the installed programs) and the modules can have optional sections with dependencies. So if you remove a module that defines a domain, and there are other modules which have rules to allow communication with that domain, then the optional sections of policy in those other modules are disabled when the domain becomes undefined. This solves the technical issues related to module inter-dependencies, but the issue of intent and interaction with the rest of the system remains.
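
For example (the module name here is illustrative), the loaded modules can be listed and one of them removed with the following commands:

semodule -l
semodule -r postgresql

Removing the module that defines the postgresql domains then automatically disables any optional policy sections in other modules that reference those domains.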

With Red Hat distributions the solution has been to upgrade the running policy every time the policy package is upgraded, and to be cautious when changing policy. They do a good job of the upgrade process (including relabeling files when the file contexts change), and in terms of policy changes I have not heard complaints from users about that. Users who don't want a newer version of the policy can always put the package on hold.

For the Debian release after Lenny I plan to have a policy upgrade process that relabels files, and a debconf question as to whether package upgrades should upgrade the policy. But for Lenny the freeze is already in progress, so it seems too late to make such changes. Instead I'm going to upload a new version of the selinux-basics package with a program named selinux-policy-upgrade that will upgrade all the policy modules that are in use. This is not the ideal solution, but I think it will keep Lenny users reasonably happy.

Postfix and chroot

I have written a script named postfix-nochroot to disable the chroot functionality of Postfix. I plan to initially include this in the selinux-basics package in Debian, but if the script was adopted by the Postfix package or some other package that seems more appropriate then I would remove it from selinux-basics.

The reason for disabling chroot is that when running SE Linux the actions of the various Postfix processes are already greatly restricted, such that granting chroot access would increase their privileges. Another issue is the creation of the chroot environment: the Postfix package in Debian recreates the files needed for the chroot under /var/spool/postfix when it is started. The first problem with this is that when a package is upgraded the chroot environment won't be upgraded (with the exception of some packages that have special code to restart Postfix), and when the sys-admin edits files under /etc those changes won't be mirrored in the chroot environment either.
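
Postfix controls chroot on a per-service basis via the fifth column of each entry in /etc/postfix/master.cf (in the Postfix versions of this era a "-" in that column means the default, which is "y"). The effective change made by such a script is of this form, shown for a typical smtpd entry:

smtp      inet  n       -       -       -       -       smtpd

becomes:

smtp      inet  n       -       n       -       -       smtpd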

The real problem when running SE Linux is that the chroot requires extra privileges to be granted to the Postfix processes (to be able to call chroot()), while the SE Linux policy places much greater restrictions on the actions of daemons than a chroot would. For example a non-chrooted daemon process running with SE Linux will not be able to see most processes in ps output (it will be able to see that processes exist through entries under /proc, but without the ability to search the subdirectories of /proc related to other processes it won't be able to see what they are).

It would be possible for my script to be used as the first step towards making a Postfix installation automatically use a chroot when SE Linux is disabled or in permissive mode, and not use a chroot when SE Linux is in enforcing mode. I’ve probably done about half the work that is needed if this was the end goal, but I have no great interest in such configuration and no time to work on it. I would be prepared to accept patches from other people who want to go in this direction.

selinux-activate

I have written a script for Debian named selinux-activate which is included in selinux-basics version 0.3.3+nmu1 (which I have uploaded to Debian/Unstable). When run with no parameters the script will change the GRUB configuration to include selinux=1 on the kernel command-line and enable SE Linux support in the PAM modules for login, gdm, and kdm. One issue with this is that if you run the command before installing kdm or gdm then you won't have the PAM configuration changed – but as it's OK to run the script multiple times this shouldn't be a problem.
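
As an illustration (the exact lines will vary between systems; these are assumptions rather than the script's literal output), the changes are along these lines. In /boot/grub/menu.lst the kernel options template gains selinux=1, and the PAM configuration for each login program gains pam_selinux session entries:

# kopt=root=/dev/hda1 ro selinux=1

session required pam_selinux.so close
session required pam_selinux.so open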

The new selinux-basics package will also force a reboot after relabelling all filesystems. I had tested "umount -a ; reboot -f" but discovered that "reboot -f" causes filesystem corruption in some situations (my EeePC running an encrypted LVM volume on an SD card had this problem). So I now use a regular "reboot".

If no-one points out any serious flaws I plan to ask the release team to include this version of selinux-basics in Lenny. I believe that it will make it significantly easier to install SE Linux while also reducing the incidence of systems being damaged due to mistakes. If you edit the GRUB configuration file by hand then there is a risk of a typo making a system unbootable.

The package in question is already in my Lenny repository, see my previous post about Lenny SE Linux for details [1].

Mental Benchmarking

One thing that seems overlooked by most people who discuss productive work habits is the varying mental capacity for performing different types of work. It's well known that alcohol and other substances decrease mental ability, and slightly less well known that sleep deprivation has a similar effect to being drunk [1]. The reference cites the case of medical residents – so the effects of sleep deprivation can hit even people with medical training, who are presumably better qualified than most to recognise and mitigate such problems.

Previously I wrote about increasing efficiency through less work [2], which was mainly based on the total number of hours worked in a week – with some mention of extreme fatigue.

However it seems obvious that things aren't that simple. The ability to perform work can be reduced by temporary issues: poor sleep the previous night (not the same as sleep deprivation, but enough to decrease performance), general health (I find that having a cold really reduces the quality and quantity of my work), and physical state (skipping a meal due to being busy can lead to lower quality work).

For the work I do (system programming and system/network administration) there is a range of tasks that I need to perform, requiring different levels of skill. When I have several tasks that need to be done it makes sense to do the most demanding task that I can do well. The problem is in assessing my own ability to perform such tasks: while I can have a general idea of how alert I feel, it seems likely that self-assessment by subjective criteria will decrease in accuracy at least as fast as my ability to perform the work in question.

So having an ability to assess my mental capacity at any time seems useful in determining when to work on the hard tasks, when to work on easy tasks, and when to just give up and go to bed! ;)

Fortunately I have found a benchmark that seems to give reasonable results. I have found that I can reliably solve the game Codebreaker (based on the board game Mastermind [3]) on the Familiar distribution of Linux running on my iPaQ in under 30 seconds when I’m fully alert. So far my tests have indicated that when I seem less alert (due to finding tasks difficult which should be easy or making mistakes) the amount of time taken to complete the game increases (my worst day was when I couldn’t complete it in under a minute).

It seems likely that someone doing intellectual work could take one day off work a week without a great decrease in productivity, if it was their least productive day. If there was a factor of three difference in productivity between the best and worst days then skipping the worst day might decrease productivity by only 10%, even if the 8 hours in question were spent doing something as demanding as work. If what would have been the least productive day was instead spent relaxing, followed by going to sleep earlier than usual, then it doesn't seem impossible for productivity on the next day to increase enough to provide a net benefit.
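
To put rough numbers on that (purely illustrative): assume a five day working week with daily outputs of 3, 3, 2, 2, and 1 units, so the best day is three times as productive as the worst. Dropping the worst day reduces the weekly total from 11 units to 10, a decrease of about 9%, which is close to the 10% figure above. If the extra rest then lifted the remaining four days by an average of 10% the week would break even.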

One thing I plan to do from now on is to use Codebreaker to help me determine when I should cease work in the evening. If I take more than a minute to complete it (or have difficulty in solving it) then it’s probably best to finish work for the day.

Installing SE Linux on Lenny

Currently Debian/Lenny contains all packages needed to run SE Linux apart from the policy. The policy package is missing because it needs to sit in unstable for a while before migrating to testing (Lenny), and I keep fixing bugs and uploading new versions.

I have set up my own APT repository for SE Linux packages (as I did for Etch [1]). The difference is that it’s working now (for i386 and AMD64) while I released my Etch repository some time after the release of Etch.

gpg --keyserver hkp://subkeys.pgp.net --recv-key F5C75256
gpg -a --export F5C75256 | apt-key add -

To enable the use of my repository you must first run the above two commands to retrieve and install my GPG key (take appropriate measures to verify that you have the correct key).

deb http://www.coker.com.au lenny selinux

Then add the above line to /etc/apt/sources.list and run “apt-get update” to download the list of packages.

Next run the command “apt-get install selinux-policy-default selinux-basics” to install all the necessary packages and then “touch /.autorelabel” to cause the filesystems to be labeled on the next boot. Edit the file /boot/grub/menu.lst and add “selinux=1” to the end of the line which starts with “# kopt=” and then run the command update-grub to apply this change.
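
To summarise, after adding the repository the steps are:

apt-get update
apt-get install selinux-policy-default selinux-basics
touch /.autorelabel
# add selinux=1 to the "# kopt=" line in /boot/grub/menu.lst, then run:
update-grub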

Then reboot and the filesystems will be relabeled. Init will be running in the wrong context, so you will have to reboot again before everything runs correctly (I am thinking of having the autorelabel process automatically do the second reboot).

For future reference please use the page on my documents blog – I will update it regularly as needed [2]. This post will not be changed when it becomes outdated in a few days.

Variable Names

For a long time I have opposed single letter variable names. Often I see code which has a variable for a fixed purpose with a single letter name, EG "FILE *f;". The problem with this is that unless you choose a letter such as 'z' which has a high scrabble score (and probably no relation to what your program is doing), the letter will occur in other variable names and in reserved words of the language in question. A significant part of the time spent coding involves reading code, so even for programmers working on a project a useful amount of time can be saved by using variable names that can easily be found by a search program. Often it's also necessary to read source code just to understand what a system does – that is code reading without any writing.

With most editors and file viewing tools, searching for a variable with a single character name in a function (or in a source file, for a global variable) is going to be difficult. Automated searching is mostly useless; probably the best option is to have your editor highlight every instance and visually scan for the ones which are not surrounded by brackets, braces, parentheses, spaces, commas, or whatever else is not acceptable in a variable name in the language in question.

Of course if you have a syntax highlighting editor then it might parse enough of the language to avoid this. But the heavier editors are not always available. Often I edit code on the system where the crash occurs (it makes it easier to run a debugger), and installing one of the heavier editors is often not an option for such a task (the vim-full Debian/Lenny package for AMD64 has dependencies that involve 27M of package files to download and would take 100M of disk space to install – quite a lot to ask if you just want to edit a single source file). Incidentally I am interested in suggestions for the best combination of features and space in a vi clone (color syntax highlighting is a feature I desire).

But even if you have a fancy editor, there is still the issue of using tools such as less and grep to find uses of variables. Of course for some uses (such as a loop counter) there is little benefit in using grep.

Another issue to consider is the language. If you write in Perl then a search for \$i should work reasonably well.

One of the greatest uses of single letter variable names is the 'i' and 'j' names for loop counters. In the early days of computing FORTRAN was the only compiled language suitable for scientific tasks, and it had no explicit way of declaring variables: if a variable name started with i, j, k, l, m, or n then it was known to be an integer. So i became the commonly used name for a loop counter (the first of the short integer variable names). That habit has been passed on through the years, so now many people who have never heard of FORTRAN use i as the name for a loop counter and j as the name for the counter of the inner loop in nested loops. [I couldn't find a good reference for FORTRAN history – I'll update this post if someone can find one.]

But it seems to me that using idx, index, or even names such as item_count which refer to the meaning of the program might be more efficient overall. Searching for instances of i in a program is going to be difficult at the best of times, even before you have multiple loops (either in separate functions or in the same function) with the same variable name.
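
To illustrate the difference (the file name is invented): a word-boundary search for a descriptive name returns only the lines that use the counter in question, while the equivalent search for i matches every loop in the file that reuses that name.

grep -nw item_count parser.c
grep -nw i parser.c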

So if there is to be a policy for variable names for counters, I think that it makes most sense to have multiple letters in all variable names to allow easy grepping, and to have counter names which describe what is being counted. Some effort to give different index names to different for/while loops would make sense too; having two different for loops with a counter named index is going to make things more difficult for someone who reads the code. Of course there are situations where two loops should share a variable, for example if one loop searches through an array to find a particular item and the next loop goes backward through the array to perform some operation on all preceding items, then it makes sense to use the same variable, as in the sketch below.
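
Here is a minimal C sketch of that last case (the function and variable names are invented for illustration); the counter is shared because the second loop genuinely continues from where the first one stopped:

/* Find the first negative value, then zero all the items before it. */
#include <stddef.h>

void zero_preceding_items(int *items, size_t count)
{
    size_t item_idx;

    /* search forward for the first negative item */
    for (item_idx = 0; item_idx < count; item_idx++)
        if (items[item_idx] < 0)
            break;

    /* item_idx is now the index of the first negative item (or count
     * if there is none); walk backward over everything before it */
    while (item_idx-- > 0)
        items[item_idx] = 0;
}

A search for item_idx finds both loops, which is exactly what you want when they operate on the same position in the array.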

SE Linux in Lenny Status

SE Linux is almost ready to use in Lenny. Currently I am waiting on the packages libsepol1 version 2.0.30-2, policycoreutils 2.0.49-3, and selinux-policy-default version 0.0.20080702-4 to make their way to testing. The first two should get there soon; the policy will take a little longer as I just made a new upload today (to make it correctly depend on libsepol1 and to include some policy fixes).

Update: libsepol1 version 2.0.30-2 and policycoreutils 2.0.49-3 are now in Lenny (testing). Now I’m just waiting for the policy.

Ideally we would be able to pin the apt repositories to take just the packages we want from Unstable (here is a document on how it's supposed to work [1]). That doesn't work, so I also tried setting APT::Default-Release "stable"; in /etc/apt/apt.conf (as suggested on IRC). This gave better results than pinning (which seems not to work at all), but it still wanted to take an unreasonably large number of packages from unstable.
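
For reference, this is the form of pin I tried in /etc/apt/preferences (the priorities are examples):

Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 50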

Currently, to get SE Linux in Lenny (Testing) working you must first upgrade everything to the testing versions, then install libsepol1 from Unstable (this is really important, as until a few hours ago the policy packages in Unstable didn't depend on it). Then you install policycoreutils and finally the policy package, which will be selinux-policy-default for almost everyone – I have not tested the MLS package (selinux-policy-mls) and it's quite likely that it won't work well.
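
Assuming sources.list entries for both testing and unstable, the sequence would be something like:

apt-get -t testing dist-upgrade
apt-get -t unstable install libsepol1
apt-get -t unstable install policycoreutils selinux-policy-default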

The policycoreutils package has a bug related to Python libraries [2] which I don't know how to fix. Any advice would be appreciated. It's obvious that the package name needs to not contain a hyphen, but it's not clear what the name should be or where the files should be stored. The release team have been pretty cooperative with my requests so far to get broken things fixed; hopefully I'll find a solution to this (and the other similar issues) soon enough to avoid any great inconvenience to them. I'm sure that they will agree that significantly broken packages (which have syntax errors in scripts) need to be fixed before release.

There are also some last minute policy issues that need to be fixed. To properly test this I’m now running the server for my blog and mail server on Lenny with SE Linux. I think that I’m only one policy bug away from running in enforcing mode.

While the situation is pretty difficult at the moment (I’ve had a report forwarded to me from an OLS delegate who tried Lenny SE Linux with the older policy packages and got a bad result), I believe that once Lenny is released we will have the best ever support for SE Linux.

The Debian security team recently released an update to the SE Linux policy packages to match the recent updates to BIND [3]. I was grateful that they did this – and without any significant involvement from me. I was asked to advise on the patch that they had written, I confirmed that it looked good (which took hardly any effort), and they did the rest (which appears to be a moderate amount of work). Given the situation it would have been understandable if they had decided that it was something that could be worked around.

I expect that SE Linux on Lenny will get more users than it did on Etch, and that therefore more issues of this nature will be discovered, so I expect to have more interaction with the Debian security team in future.

Biba and BLP for Network Services

Michael Janke has written an interesting article about data flows in networks [1]. He describes how data from the Internet should be considered to have low integrity (he refers to it as "untrusted") and how, as you get closer to the more important parts of the system, data needs to be of higher integrity.

It seems to me that his ideas are very similar in concept to the Biba Integrity Model [2]. The Biba model is based around the idea that a process can only write data to a resource that is of equal or lower integrity and can only read data from a resource that is of equal or higher integrity; this is often summarised as "no read-down and no write-up". In a full implementation of Biba the OS would label all data (including network data) as to its integrity level and prevent any communication that violates the model (except of course for certain privileged programs – for example the file or database that stores user passwords must have high integrity, but any user can run the program to change their password). A full Biba implementation would not work for a typical Internet service, but considering some of the concepts of Biba while designing an Internet service should lead to a much better design (as demonstrated in Michael's post).

While considering the application of Biba to network design it makes sense to also consider the Bell-LaPadula model (BLP) [3]. In computer systems designed for military use a combination of Biba and BLP is not uncommon; while a strict combination of those technologies would be an almost insurmountable obstacle to the development of Internet services, I think it's worth considering the concepts.

BLP is a system that is primarily designed around the goal of protecting data confidentiality. Every process (subject) has a sensitivity label (often called a "clearance") which is comprised of a sensitivity level and a set of categories, and every resource that a process might access (object) also has a sensitivity label (often called a "classification"). If the clearance of the subject dominates the classification of the object (IE the level is equal or greater and the set of categories is a super-set) then read access is permitted; if the clearance of the subject is dominated by the classification of the object then write access is permitted; and the clearance and classification have to be equal for read/write access to be permitted. This is often summarised as "no write-down and no read-up".
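
As a concrete (and entirely illustrative) rendering of those rules, here is a minimal C sketch with levels as integers and category sets as bitmasks:

#include <stdbool.h>
#include <stdint.h>

struct label {
    int level;           /* sensitivity level: higher is more sensitive */
    uint32_t categories; /* one bit per category */
};

/* a dominates b if a's level is equal or greater and a's category set
 * is a super-set of b's */
static bool dominates(struct label a, struct label b)
{
    return a.level >= b.level && (b.categories & ~a.categories) == 0;
}

/* no read-up: reading requires the clearance to dominate the classification */
static bool may_read(struct label clearance, struct label classification)
{
    return dominates(clearance, classification);
}

/* no write-down: writing requires the classification to dominate the clearance */
static bool may_write(struct label clearance, struct label classification)
{
    return dominates(classification, clearance);
}

The Biba checks are the same tests with the direction reversed: reading requires the object's integrity label to dominate the subject's, and writing requires the subject's to dominate the object's.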

SGI has published a lot of documentation for their Trusted Irix (TRIX) product on the net; the section about mandatory access control covers Biba and BLP [4]. I recommend that most people who read my blog not read the description of how Biba and BLP work, it will just give you nightmares.

The complexity of either Biba or BLP (including categories) is probably too great for consideration when designing network services, which have much lower confidentiality requirements (even the loss of a few million credit card numbers is trivial compared to some of the potential results of leaks of confidential military data). But a simpler form of BLP with only levels is worth considering. You might have credit card numbers stored in a database classified as "Top Secret" and not allow less privileged processes to read from it; the data about customers' addresses and phone numbers might be classified as "Secret", and all the other data might merely be "Classified".

One way of using the concepts of Biba and BLP in the design of a complex system would be to label every process and data store in the system according to its integrity and classification/clearance. Then for the situations where data flows to processes with lower clearance, the code could be well designed and audited to ensure that it does not leak data. For situations where data of low integrity (EG data from a web browser) is received by a process of high integrity (EG the login screen), the code would have to be designed and audited to ensure that it correctly parses the data and doesn't allow SQL injection or other potential attacks.

I expect that many people who have experience with Biba and BLP will be rolling their eyes while reading this. The situation that we are dealing with in regard to PHP and SQL attacks over the Internet is quite different to the environments where proper implementations of Biba and BLP are deployed. We need to do what we can to try and improve things, and I think that the best way of improving things in terms of web application security would involve thinking about clearance and integrity as separate issues in the design phase.