A common idea among the less educated people who call themselves “conservative” seems to be that they should oppose tax cuts for themselves and support tax cuts for the rich, because they might become rich one day and want to be prepared for that possibility.
The US census data [1] shows that less than 1% of males aged 15+ earn $250K or more per year. For females it’s less than 0.2%.
On the Wikipedia page about homosexuality [2] it is claimed that 2%-7% of the population are gay (and 12% of Norwegians have at least tried it out). Apparently homosexuality can strike suddenly; you never know when a right-wing politician or preacher will unexpectedly be compelled to hire gay whores (as Ted Haggard [3] did) or come out of the closet (as Jim Kolbe [4] did).
So it seems that based on percentages you are more likely to become gay than to become rich. Therefore it would be prudent to prepare for that possibility and lobby for gay marriage in case your sexual preference ever changes.
But on a serious note, among the people who earn $250K or more (an income level that has been suggested for higher tax rates) there will be a strong correlation with the amount of education received and an early start to a career. Go to a good university and earn more than the median income in your first job, and you will be well on track to earning $250K. A common misconception is that someone who has not had a great education can still be successful by starting their own company. While there are a few people who have done that, the vast majority of small companies fail in the first few years. Working hard doesn’t guarantee success; for a company to succeed you need to have the right product at the right time, and this often depends on factors that you can’t predict (such as the general state of the economy and any new products released by larger companies).
I have previously written about my work packaging the tools to manage Amazon EC2 [1].
First you need to log in and create a certificate (you can upload your own certificate – but this is probably only beneficial if you have two EC2 accounts and want to use the same certificate for both). Download the X509 private key file (named pk-X.pem) and the certificate (named cert-X.pem). My Debian package of the EC2 API tools will look for the key files in the ~/.ec2 and /etc/ec2 directories and will take the first one it finds by default.
To override the certificate location (when using my Debian package), or to just make it work when using the code without my package, set the EC2_PRIVATE_KEY and EC2_CERT environment variables.
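For example, assuming the key files are in ~/.ec2 (with X being the serial number that Amazon put in the file names), something like the following in your ~/.bashrc should do the job:
export EC2_PRIVATE_KEY=~/.ec2/pk-X.pem
export EC2_CERT=~/.ec2/cert-X.pem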
This Amazon page describes some of the basics of setting up the client software and RSA keys [2]. I will describe some of the most important things now:
The command “ec2-add-keypair gsg-keypair > id_rsa-gsg-keypair” creates a new keypair for logging in to an EC2 instance. The public key goes to Amazon and the private key can be used by any ssh client to log in as root when you create an instance. To create an instance with that key you use the “-k gsg-keypair” option, so it seems to be a requirement to use the same working directory when creating all instances. Note that gsg-keypair could be replaced by any other string; if you are doing something really serious with EC2 you might use one account to create instances that are run by different people with different keys, but for most people I think that a single key is all that is required. Strangely they don’t provide a way of getting direct access to the public key; you have to create an instance and then copy its /root/.ssh/authorized_keys file.
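For illustration, one way of retrieving the public key once you have a running instance is to copy it out over ssh (the host name here is obviously just an example):
ssh -i id_rsa-gsg-keypair root@ec2-10-11-12-13.compute-1.amazonaws.com "cat /root/.ssh/authorized_keys" > gsg-keypair.pub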
This Amazon page describes how to set up sample images [3].
The first thing it describes is the command ec2-describe-images -o self -o amazon which gives a list of all images owned by yourself and all public images owned by Amazon. It’s fairly clear that Amazon doesn’t expect you to use their images. The i386 OS images that they have available are Fedora Core 4 (four configurations with two versions of each) and Fedora 8 (a single configuration with two versions) as well as three other demo images that don’t indicate the version. The AMD64 OS images that they have available are Fedora Core 6 and Fedora 8. Obviously if they wanted customers to use the Amazon-provided images (which seems like a really good idea to me) they would provide images of CentOS (or one of the other recompiles of RHEL) and Debian. I have written about why I think that this situation is bad for security [4] – please make sure that you don’t use the ancient Amazon images for anything other than testing!
To test, choose an i386 image from Amazon’s list; i386 is best for testing because it allows the cheapest instances (currently $0.10 per hour).
Before launching an instance, allow ssh access to it with the command “ec2-authorize default -p 22“. Note that this command permits access from the entire world. There are options to limit access to certain IP address ranges, but at this stage it’s best to focus on getting something working. Of course you don’t want to actually use your first attempt at creating an instance; I think that setting up an instance to run in a secure and reliable manner would require many attempts and tests. As all the storage of the instance is wiped when it terminates (as we aren’t using S3 yet) and you won’t have any secret data online, security doesn’t need to be the highest priority.
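If you do want to restrict access at this stage, I believe the -s option takes a CIDR range, so something like the following (with an example range) should limit ssh access to your own network:
ec2-authorize default -p 22 -s 203.0.113.0/24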
A sample command to run an instance is “ec2-run-instances ami-2b5fba42 -k gsg-keypair” where ami-2b5fba42 is a public Fedora 8 image available at this moment. This will give output similar to the following:
RESERVATION r-281fc441 999999999999 default
INSTANCE i-0c999999 ami-2b5fba42 pending gsg-keypair 0 m1.small 2008-11-04T06:03:09+0000 us-east-1c aki-a71cf9ce ari-a51cf9cc
The parameter after the word INSTANCE is the instance ID. The command “ec2-describe-instances i-0c999999” will provide information on the instance; once it is running (which may be a few minutes after you request it) you will see output such as the following:
RESERVATION r-281fc441 999999999999 default
INSTANCE i-0c999999 ami-2b5fba42 ec2-10-11-12-13.compute-1.amazonaws.com domU-12-34-56-78-9a-bc.compute-1.internal running gsg-keypair 0 m1.small 2008-11-04T06:03:09+0000 us-east-1c aki-a71cf9ce ari-a51cf9cc
The command “ssh -i id_rsa-gsg-keypair root@ec2-10-11-12-13.compute-1.amazonaws.com” will then grant you root access. The part of the name such as 10-11-12-13 is the public IP address. Naturally you won’t see 10.11.12.13, you will instead see a public address in the Amazon range – I replaced the addresses to avoid driving bots to their site.
The name domU-12-34-56-78-9a-bc.compute-1.internal is listed in Amazon’s internal DNS and returns the private IP address (in the 10.0.0.0/8 range) which is used for the instance. The instance has no public IP address; all connections (both inbound and outbound) go through some sort of NAT. This shouldn’t be a problem for HTTP, SMTP, and most protocols that are suitable for running on such a service, but for FTP or UDP based services it might be a problem. The part of the name such as 12-34-56-78-9a-bc is the MAC address of the eth0 device.
To halt a service you can run shutdown or halt as root in the instance, or run the ec2-terminate-instances command and give it the instance ID that you want to terminate. It seems to me that the best way of terminating an instance would be to run a script that produces a summary of whatever the instance did (you might not want to preserve all the log data, but some summary information would be useful) and gives all operations that are in progress time to stop before running halt. A script could run on the management system to launch such an orderly shutdown script on the instance and then use ec2-terminate-instances if the instance does not terminate quickly enough.
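Here is a minimal sketch of such a management-side script; the summary script name on the instance is hypothetical and the timing would need tuning:
#!/bin/sh
# usage: stop-instance-nicely INSTANCE-ID HOSTNAME
INSTANCE=$1
HOST=$2
# run a summary/cleanup script on the instance and then halt it
ssh -i id_rsa-gsg-keypair root@$HOST "/usr/local/sbin/summarise-logs ; halt"
# give it a couple of minutes to shut down by itself
sleep 120
# if it is still listed as running then force termination
if ec2-describe-instances $INSTANCE | grep -q running ; then
  ec2-terminate-instances $INSTANCE
fi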
In the near future I will document many aspects of using EC2. This will include dynamic configuration of the host, dynamic DNS, and S3 storage among other things.
The National Council of Churches in the US [1] has produced some advice for Christian voters titled Christian Principles in an Election Year [2]. It starts by saying “Your church, your communion, and the National Council of Churches USA do not endorse any political party or any candidate” (which is in bold in their text) and then lists 10 issues that Christians should consider when voting.
Here are the 10 section headings, the full article has a sentence or two explaining each one. I think that most good people (regardless of religion) will agree with all of this – maybe substitute “God” with the name of some other entity that has not been proven to exist or with “humanity”.
- War is contrary to the will of God.
- God calls us to live in communities shaped by peace and cooperation.
- God created us for each other, and thus our security depends on the well being of our global neighbors.
- God calls us to be advocates for those who are most vulnerable in our society.
- Each human being is created in the image of God and is of infinite worth.
- The earth belongs to God and is intrinsically good.
- Christians have a biblical mandate to welcome strangers.
- Those who follow Christ are called to heal the sick.
- Because of the transforming power of God’s grace, all humans are called to be in right relationship with each other.
- Providing enriched learning environments for all of God’s children is a moral imperative.
The blogger Joy Reid [3] often uses the term “Matthew Christian” to refer to Christians who follow the book of Matthew and act in ways that would be considered good by most people regardless of belief. This is in stark contrast to some of the nasty people who call themselves Christian and who promote hatred, inequality, and war – such people give all Christians a bad reputation (see the comment section of any blog post concerning religion for examples).
John Goerzen’s post titled Politics and the Church (which references the NCCUSA article [4]) is also worth reading. Interestingly his blog post had a Google advert for “Christian Masturbation” when I viewed it. John also has a good post explaining why he is voting for Obama – based on his Christian beliefs and “Traditional Values” [5].
The term Cloud Computing seems to be poorly defined at the moment; as an example, the Wikipedia page about it is rather incoherent [1].
The one point on which all definitions of the term agree is that a service is provided by a varying number of servers whose details are unknown to the people who use the service – including the people who write programs for it.
There seem to be four main definitions of cloud computing. The first one is Distributed Computing [2] (multiple machines in different administrative domains being used to perform a single task or several tasks); this definition does not seem to be well accepted and will probably disappear.
The next one is Software As A Service [3] which is usually based on the concept of outsourcing the ownership and management of some software and accessing it over the Internet. An example is the companies that offer outsourced management of mail servers that are compatible with MS Exchange (a notoriously difficult system to manage). For an annual fee per email address you can have someone else run an Exchange compatible mail server which can talk to Blackberries etc. The main benefit of SAAS is that it saves the risk and expense of managing the software, having the software deployed at a central location is merely a way of providing further cost reductions (some companies that offer SAAS services will also install the software on your servers at greater expense).
The last two definitions are the ones that I consider the most interesting, that is virtual machines (also known as Cloud Infrastructure [4]) and virtual hosting of applications (also known as Cloud Platforms [5]). The Amazon EC2 (Elastic Compute Cloud) service [6] seems to be the leading virtual machine cloud service at the moment. Google App Engine [7] seems to be the leading cloud application hosting service at the moment.
It is also claimed that “Cloud Storage” is part of cloud computing. It seems to me that storing data on servers all over the world is something that was done long before the term “Cloud Computing” was invented, so I don’t think it’s deserving of the term.
One significant dividing factor between mailing lists is the difference between summary lists (where the person who asks a question receives replies off-list and then sends a summary to the list) and the majority of mailing lists which are discussion lists (where every reply goes to the list by default).
I have seen an argument put forward that trusting the answers on a mailing list that operates under the summary list model is inherently risky and that peer review is required.
It could be argued that the process of sending a summary to the list is the peer review. I’m sure that if someone posts a summary which includes some outrageously bad idea then there will be some commentary in response. Of course the down-side to this is that it takes a few days for responses to the question to arrive, and as computer problems commonly need to be solved in hours rather than days, the problem will be solved (one way or another) before the summary message is written. But the idea of peer review in mailing lists seems to fall down in many other ways.
The first problem with the idea of peer review is that the usual expectation on mailing lists is that most people will first ask Google and only ask the list if a reasonable Google search fails (probably most mailing lists would fall apart under the load of repeated questions otherwise). Therefore I expect the majority of such problems to be solved by reading a web page (with no easily accessible peer review). Some of those web pages contain bad advice, and part of the skill involved in solving any problem relates to recognising which advice to follow. Also it’s not uncommon for a question on a discussion list to result in a discussion with two or more radically different points of view being strongly supported. I think that as a general rule there is little benefit in asking for advice if you lack any ability to determine whether the advice is any good, and which of the possible pieces of good advice actually apply to your situation. Sometimes you can recognise good advice by the people who offer it; in a small community such as a mailing list it’s easy to recognise the people who have a history of offering reasonable advice. It seems that the main disadvantage of asking Google when compared to asking a mailing list is that the Google results will in most cases contain links to web sites written by people who you don’t know.
Sometimes the advice is easy to assess, for example if someone recommends a little-known and badly documented command-line option for a utility it’s easy to read the man page and not overly difficult to read the source to discover whether it is a useful solution. Even testing a suggested solution is usually a viable option. Also it’s often the case that doing a Google search on a recommended solution will be very informative (sometimes you see web pages saying “here’s something I tried which failed”). Recommendations based on personal experience are less reliable due to statistical issues (consider the regular disagreements about the reliability of hard disks, where some people claim that RAID is not necessary because they have not seen failures while others claim that RAID-5 is inadequate because it has failed them). There are also issues of different requirements; trivial issues such as the amount of money that can be spent will often determine which (if any) of the pieces of good advice can be adopted.
The fact that a large number of people (possibly the majority of Internet users) regularly forward as fact rumors that are debunked by Snopes.com (the main site for debunking urban legends) seems to indicate that it is always going to be impossible to increase the quality of advice beyond a certain level. A significant portion of the people on the net are either unwilling to spend a small amount of effort in determining the accuracy of information that they send around or are so gullible that they believe such things beyond the possibility of doubt. Consider that the next time you ask for advice on a technical issue, you may receive a response from someone who forwarded a rumor that was debunked by Snopes.
Sometimes technical advice is just inherently dangerous because it is impossible to verify the integrity of some code that is being shared, or because it may be based on different versions of software. In a previous blog post I analysed some issues related to the security of the Amazon EC2 service [1]. While the EC2 service is great in many ways (and implements a good well-documented set of security features on the servers) the unsigned code for managing it and the old versions of the images that they offer to customers raise some serious issues that provide avenues for attack. Getting the EC2 management tools to work correctly on Debian is not trivial, I have released patches but will not release packages for legal reasons. It seems most likely to me that someone will release packages based on my patches (either because they don’t care about the legal issues or they have legal advice suggesting that such things are OK – maybe due to residing in a different jurisdiction). Then people who download such packages will have to determine whether they trust the person who built them. They may also have the issue of Amazon offering a newer version of the software than that which is packaged for Debian (for all I know Amazon released a new version yesterday).
The term integrity, when applied to computers, refers to data being free of both accidental and malicious damage [2]. In the context of mailing list discussions this means both poorly considered advice and acts of malice (which, when you consider spam and undisclosed conflicts of interest, are actually quite common).
If you ask for advice in any forum (and I use the term in its broadest sense to cover web “forums”, IRC, Twitter, etc) then getting a useful result will depend on having the majority of members of the forum possessing sufficient integrity and skill, being able to recognise the people whose advice should be followed, or being able to recognise good advice on its own.
I can think of few forums that I have been involved in where the level of skill was sufficient to provide quality answers (and refutations for bad answers) for all areas of discussion that were on topic. People whose advice should generally be followed will often offer advice on areas where their skills are less well developed; someone whose advice can be blindly followed in regard to topic A may not be a reliable source for advice on topic B – which can cause confusion if the topics in question are closely related.
Finally, a fundamental difference between “peer review” as applied to conferences and academic journals and what happens on mailing lists is that review for conferences and journals is conducted before publication. Not only does the work have to be good enough to pass the review, but the people doing it will never be sure what the threshold is (and will generally want to do more than a minimal effort), so the quality will be quite high. Peer review on mailing lists, by contrast, is mostly based around the presence or absence of flames. A message which doesn’t attract flames will either have some minimal quality or be related to a topic that is not well known (so no-one regards it as being obviously wrong).
Update: The “peer review” process of publishing a post on my blog revealed that I had incorrectly used who’s instead of whose.
I’ve just been setting up jabber.
I followed the advice from System Monitoring on setting up ejabberd [1]. I had previously tried the default jabber server but couldn’t get it working. ejabberd is written in Erlang [2], which has its own daemon that it launches. It seems that Erlang is designed for concurrent and distributed programming, so it has an Erlang Port Mapper Daemon (epmd) to manage communications between nodes. I’ve written SE Linux policy for epmd and for ejabberd, but I’m not sure how well it will work when there are multiple Erlang programs running in different security contexts. It seems that I might be the first person to try running a serious Jabber server on SE Linux. The policy was written a while ago and didn’t support connecting to TCP port 5269 – the standard port for Jabber inter-server communication and the port used by the Gmail jabber server.
ejabberd has a default configuration file that only requires minor changes for any reasonable configuration, and a command-line utility for managing it (adding users, changing passwords, etc). It’s so easy to set up that I got it working and wrote the SE Linux policy for ejabberd in less time than I spent unsuccessfully trying to get jabber to work!
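For example, adding a new user from the command line is as simple as the following (the user name and password are just examples):
ejabberdctl register russell coker.com.au EXAMPLEPASSWORD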
It seems that Jabber clients default to using the domain part of the address to determine which server to talk to (it is possible to change this). So I set up an A record for coker.com.au pointing to my Jabber server, and I’ll have the same machine run a web server to redirect http://coker.com.au to http://www.coker.com.au.
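A minimal Apache configuration for that redirect (assuming Apache is what I end up running there) would be something like:
<VirtualHost *:80>
  ServerName coker.com.au
  Redirect permanent / http://www.coker.com.au/
</VirtualHost>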
For Jabber inter-server communication you need a SRV record [3] in your zone. I used the following line in my BIND configuration:
_xmpp-server._tcp IN SRV 0 5 5269 coker.com.au.
Also for conferencing the default is to use the hostname “conference” in the domain of your Jabber server. So I’ve created conference.coker.com.au to point to my server. This name is used both in Jabber clients and in sample directives in the ejabberd configuration file, so it seemed too difficult to try something different (and there’s nothing wrong with conference as an A record).
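In BIND terms that is just one more record in the zone file (the address is an example):
conference IN A 203.0.113.10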
I tried using the cabber client (a simple text-mode client), but found two nasty bugs within minutes (SEGV when a field is missing from the config file – Debian bug #503424 and not resetting the terminal mode on exit – Debian bug #503422). So I gave up on cabber as a bad idea.
I am now testing Kopete (the KDE IM client) and Pidgin (formerly known as GAIM). One annoying bug in Kopete is that it won’t let me paste in a password (see Debian bug #50318). My wife is using Pidgin on CentOS 5.2 and finding it to work just as well as GAIM has always worked for her. One significant advantage of Pidgin is conferencing – it seems impossible to create a conference in Kopete. Kopete uses one window for each chat, while by default Pidgin uses a single window with a tab for each chat (with an option to change it). I haven’t seen an option in Kopete to change this, so if you want to have a single window for all your chats and conferences with tabs then you might want to use Pidgin.
Another annoying thing about Kopete is that it only offers a wizard-based initial setup. I found it difficult to talk my mother through installing it because I couldn’t get my machine to show the same dialogs that were displayed on her machine. In retrospect I probably should have run “ssh -X test@localhost” to run it under a different account.
I was just reading the latest catalogue from Dick Smith Electronics (a chain of computer stores in Australia).
The first interesting thing that I noticed is that laptops are cheaper than desktops in all categories. For any combination of CPU power and RAM in a desktop system I can see a laptop advertised with similar specs at a lower price. Of course you won’t get such a big display in a laptop, but big displays don’t always work well. I just read an interesting review of LCD display technology [1] which states (among other things) that TN panels (which provide poor colors and a limited viewing angle) are used in all current 22 inch monitors! They state that the Dell 2007WFP (which I own) comes in two versions, I was fortunate to get the one that doesn’t suck. Based on that review I think I’ll refrain from all further monitor purchases until the technology gets sorted out and it becomes possible to reliably buy the better monitors at a decent price. The most expensive desktop system that Dick Smith advertised in their catalogue has a 22 inch monitor.
It seems that with desktop systems being more expensive, an increasing number of home users will use laptops instead, which will of course change the economics of manufacture. Maybe the desktop computer is about to die out and be replaced by laptops, PDAs, and mobile phone type devices (Blackberries etc).
Another interesting thing is an advert for a laser pointer (it seems that they haven’t been banned as “terrorist weapons” yet). Being on special for a mere $27 is not the interesting thing; what is interesting is that the advert claims “projects up to 500m indoors“. I’m sure it will be handy if I ever have to give a presentation at the Airbus factory, but otherwise it seems quite unlikely that I will ever get an opportunity to use a 500m indoor space.
The prices on digital cameras have been dropping consistently for some time. Now they are selling a Samsung S860 (8.1MP with 3* optical zoom) for $98. This is (according to the specs at least) a very powerful camera for a price that most people won’t think twice about. I expect that an increasing number of people will buy new digital cameras every year the way white-box enthusiasts buy new motherboards! Hopefully people will use services such as Freecycle [2] to dispose of all their old cameras, to both avoid pollution and get cameras into the hands of more people.
Very few monitors are being sold with resolutions greater than 2MP (1680*1050 is the highest you can get for a reasonable price). So an 8MP camera allows significant scope for cropping and resizing an image before publishing it on the web. Even the 4MP cameras that were on sale a few years ago (and which are probably being discarded now) are more than adequate for such use.
Here’s a blog post suggesting that anti-depressant drugs such as Prozac may have helped the US mortgage crisis [1]. Apparently such drugs cause poor impulse control, so it wouldn’t be a good idea to attend a house auction while using them.
Here’s an interesting idea about lecturing, give 20 minute talks with something else (practical work or group discussion) in between [2]. Michael Lee wants to “capture the power of that strict time limit, the intensity of a well-crafted 20 minutes”. While I’m not sure that a strict time limit is such a great idea, having talks broken up into sections sounds like it has the potential to offer some benefits.
A Bible from the 4th century has been found and is being digitised [3]. When the digitisation is complete (next year) it will be published on the net so everyone can see how the Bible has changed over the years.
Interesting interview with Jim Gray (of MS Research) about storage [4]. It was conducted in 2003 so technology has moved on, but the concepts remain. His ideas for sharing two terabytes of data by using a courier to deliver an NFS or CIFS file server are interesting, the same thing could be done today with five terabytes for a lower cost.
Techtarget has a white paper sponsored by Intel about the price/performance of data centers in low-density and high-density designs [5]. I don’t think I’ll ever be in a position to design a data center, but the background information in the paper is very useful.
Google has an interesting set of pages describing their efforts to save power in their data centers [6]. They claim to have the most efficient server rooms ever built, and describe how it saves them a lot of money. One of the interesting things that they do is to use evaporative cooling as the primary cooling method. They also have a RE<C (Renewable Energy cheaper than Coal) project [7].
Here’s a Youtube video of an interesting presentation by Andy Thomson (a psychiatrist at the University of Virginia) about male-bonded coalitionary violence [8]. He shows the evidence of it in chimpanzees, humans, and evidence for it being in the common ancestry of chimps and humans (5-6 million years ago). He also shows a link to modern suicide bombing.
Cyrus is widely regarded as the fastest IMAP server. Linux-Magazin.de published an article last year comparing Cyrus, UW-IMAP, Dovecot, and Courier, and the conclusion is that Courier and Dovecot are the winners [9]. I used Google Translate but the results were not particularly good, so I think I missed some of the points that they were trying to make.

Above is a picture of a moth that I found in a packet of shrink wrapped celery from Foodworks (a Melbourne chain of grocery stores).
I took several pictures from different angles, but I found that an almost direct photo captured it best; you can see the reflection of the flash covering part of the moth (showing that the plastic wrap is on top of it).
I opened the packet outside and after some prodding the moth flew off.
I have access to a server in Germany that was running Debian/Etch i386 but needed to be running Xen with the AMD64 version of Debian/Lenny (well it didn’t really need to be Lenny but we might as well get two upgrades done at the same time). Most people would probably do a complete reinstall, but I knew that I could do the upgrade while the machine was in a server room without any manual intervention. I didn’t achieve all my goals (I wanted to do it without having to boot the recovery system – we ended up having to boot it twice) but no dealings with the ISP staff were required.
The first thing to do is to get a 64bit kernel running. Based on past bad experiences I’m not going to use the Debian Xen kernel on a 64bit system (in all my tests it has had kernel panics in the Dom0 when doing any serious disk IO). So I chose the CentOS 5 kernel.
To get the kernel running I copied the kernel files (/boot/vmlinuz-2.6.18-92.1.13.el5xen /boot/System.map-2.6.18-92.1.13.el5xen /boot/config-2.6.18-92.1.13.el5xen) and the modules (/lib/modules/2.6.18-92.1.13.el5xen) from a CentOS machine. I just copied a .tgz archive as I didn’t want to bother installing alien or doing anything else that took time. Then I ran the Debian mkinitramfs program to create the initrd (the 32bit tools for creating an initrd work well with a 64bit kernel). Then I created the GRUB configuration entry (just copied the one from the CentOS box and changed the root= kernel parameter and the root GRUB parameter), crossed my fingers and rebooted. I tested this on a machine in my own computer room to make sure it worked before deploying it in Germany, but there was still some risk.
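A rough sketch of the commands involved (using the version numbers mentioned above):
# on the CentOS machine, archive the kernel files and modules
tar czf centos-xen-kernel.tgz /boot/vmlinuz-2.6.18-92.1.13.el5xen \
  /boot/System.map-2.6.18-92.1.13.el5xen /boot/config-2.6.18-92.1.13.el5xen \
  /lib/modules/2.6.18-92.1.13.el5xen
# on the Debian machine, unpack the archive and build an initrd with the Debian tools
tar xzf centos-xen-kernel.tgz -C /
mkinitramfs -o /boot/initrd.img-2.6.18-92.1.13.el5xen 2.6.18-92.1.13.el5xen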
After rebooting, the arch command reported x86_64 – so the 64bit Xen kernel was running correctly.
The next thing was to create a 64bit Lenny image. I used debootstrap to create a Lenny Beta 2 image (I consulted my blog post about creating Xen images for the syntax [1] – one of the benefits of blogging about how you solve technical problems). Then I used scp to copy a .tgz file of that to the server in Germany. Unfortunately the people who had set up that server had used all the disk space in two partitions, one for root and one for swap. While I can use regular files for Xen images (with performance that will probably suck a bit – Ext3 is not a great filesystem for big files), I can’t use them for a new root filesystem. So I formatted the swap space as ext3.
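Roughly the following commands (the mirror and directory names are examples) cover creating and deploying the image:
# on a machine that can run 64bit code, build the Lenny image and copy it over
debootstrap --arch amd64 lenny /tmp/lenny64 http://ftp.debian.org/debian
tar czf lenny64.tgz -C /tmp/lenny64 .
scp lenny64.tgz root@server:/root/
# on the server, reformat the old swap partition and unpack the image there
swapoff /dev/sda1
mkfs.ext3 /dev/sda1
mount /dev/sda1 /mnt
tar xzf /root/lenny64.tgz -C /mnt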
Then to get it working I merely had to update the /etc/fstab, /etc/network/interfaces, and /etc/resolv.conf files to make it basically functional. Of course ssh access is necessary to do anything with the server once it boots, so I chrooted into the environment and ran “apt-get update ; apt-get install openssh-server udev ; apt-get dist-upgrade“.
I stuffed this up and didn’t allow myself ssh access the first time, so the thing to do is to start sshd in the chroot environment and make sure that you can really log in. Without udev running, an ssh login will probably result in the message “stdin: is not a tty“; that is not a problem. Working around it with the commands ‘ssh root@server “mkdir /dev/pts”‘ and ‘ssh root@server “mount -t devpts devpts /dev/pts”‘ is not a challenge, but installing udev first is a better idea.
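A sketch of how I would now do the chroot preparation and the ssh test (stopping the old system’s sshd first so that the chroot one can bind to port 22 – or you could run it on another port):
# assuming the new root filesystem is mounted on /mnt
mkdir -p /mnt/dev/pts
mount -t proc proc /mnt/proc
mount -t devpts devpts /mnt/dev/pts
/etc/init.d/ssh stop
chroot /mnt /etc/init.d/ssh start
# then verify from another machine that "ssh root@server" really lets you in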
After that I added a new grub entry as the default which used the CentOS kernel and /dev/sda1 (the device formerly used for swap space) as root. I initially used the CentOS Xen hypervisor (all Red Hat based distributions bundle the Xen hypervisor with the Linux kernel – which makes some sense), but the Debian Xen utilities didn’t like that so I changed to the Debian Xen hypervisor.
Once I had this basically working I copied the 64bit installation to the original device and put the 32bit files in a subdirectory named “old” (so configuration could be copied). When I changed the configuration and rebooted it worked until I installed SE Linux. It seems that the Debian init scripts will in many situations quietly work when the root device is incorrectly specified in /etc/fstab. This however requires creating a device node somewhere else for fsck, and SE Linux policy version 2:0.0.20080702-12 was not permitting that. I have since uploaded policy 2:0.0.20080702-13 to fix this bug and requested that the release team allow it in Lenny – I think that a bug which can make a server fail to boot is worthy of inclusion!
Finally to get the CentOS kernel working with Debian you need to load the following modules in the Dom0 (as discussed in my previous post about kernel issues [2]):
blktap
blkbk
netbk
It seems that the Debian Xen kernel has those modules linked in and the Debian Xen utilities expect that.
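With the CentOS kernel as Dom0 you have to load them yourself; the easy way is to list them in /etc/modules so that they are loaded at every boot, for example:
echo blktap >> /etc/modules
echo blkbk >> /etc/modules
echo netbk >> /etc/modules
# load them immediately as well so that xend works without another reboot
modprobe blktap
modprobe blkbk
modprobe netbk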
Currently I’m using Debian kernels 2.6.18 and 2.6.26 for the DomUs. I have considered using the CentOS kernel, but Red Hat decided that /dev/console is not good enough for the console of a DomU and used something else. Gratuitous differences are annoying (every other machine, both real and virtual, has /dev/console). If I find problems with the Debian kernels in DomUs I will change to the CentOS kernel. Incidentally one problem I have had with a CentOS kernel for a DomU (when running on a CentOS Dom0) was that the CentOS initrd seems to have some strange expectations of the root filesystem; when they are not met things go wrong – a common symptom is that the nash process goes into a loop and uses 100% of the CPU time.
One of the problems I had was converting the configuration for the primary network device from eth0 to xenbr0. In my first attempt I had not installed the bridge-utils package and the machine booted up without network access. In future I will set up xenbr1 (a device for private networking that is not connected to an Ethernet device) first and test it; if it works then there’s a good chance that the xenbr0 device (which is connected to the main Ethernet port of the machine) will work.
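For reference, a sketch of the relevant /etc/network/interfaces entries (the addresses are examples); the bridge_ports lines are what need the bridge-utils package, and xenbr1 gets “none” as it isn’t attached to an Ethernet device:
auto xenbr0
iface xenbr0 inet static
    address 192.0.2.2
    netmask 255.255.255.0
    gateway 192.0.2.1
    bridge_ports eth0

auto xenbr1
iface xenbr1 inet static
    address 10.1.1.1
    netmask 255.255.255.0
    bridge_ports none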
After getting the machine going I found a number of things that needed to be fixed with the Xen SE Linux policy. Hopefully the release team will let me get another version of the policy into Lenny (the current one doesn’t work).