Christian Principles in an Election Year

The National Council of Churches in the US [1] has produced some advice for Christian voters titled Christian Principles in an Election Year [2]. It starts by saying “Your church, your communion, and the National Council of Churches USA do not endorse any political party or any candidate” (which is in bold in their text) and then lists 10 issues that Christians should consider when voting.

Here are the 10 section headings; the full article has a sentence or two explaining each one. I think that most good people (regardless of religion) will agree with all of this – maybe substitute “God” with the name of some other entity that has not been proven to exist, or with “humanity”.

  1. War is contrary to the will of God.
  2. God calls us to live in communities shaped by peace and cooperation.
  3. God created us for each other, and thus our security depends on the well being of our global neighbors.
  4. God calls us to be advocates for those who are most vulnerable in our society.
  5. Each human being is created in the image of God and is of infinite worth.
  6. The earth belongs to God and is intrinsically good.
  7. Christians have a biblical mandate to welcome strangers.
  8. Those who follow Christ are called to heal the sick.
  9. Because of the transforming power of God’s grace, all humans are called to be in right relationship with each other.
  10. Providing enriched learning environments for all of God’s children is a moral imperative.

The Blogger Joy Reid [3] often uses the term “Matthew Christian” to refer to Christians who follow the book of Matthew and act in ways that would be considered to be good by most people regardless of belief. This is in stark contrast to some of the nasty people who call themselves Christian and who promote hatred, inequality, and war – such people give all Christians a bad reputation (see the comment section of any blog post concerning religion for examples).

John Goerzen’s post titled Politics and the Church (which references the NCCUSA article [4]) is also worth reading. Interestingly his blog post had a Google advert for “Christian Masturbation” when I viewed it. John also has a good post explaining why he is voting for Obama – based on his Christian beliefs and “Traditional Values” [5].

Types of Cloud Computing

The term Cloud Computing seems to be poorly defined at the moment; as an example, the Wikipedia page about it is rather incoherent [1].

The one area in which all definitions of the term agree is that a service is provided by a varying number of servers whose details are unknown to the people who use the service – including the people who program for it.

There seem to be four main definitions of cloud computing. The first one is Distributed Computing [2] (multiple machines in different administrative domains being used to perform a single task or several tasks). This definition does not seem to be well accepted and will probably disappear.

The next one is Software As A Service [3], which is usually based on the concept of outsourcing the ownership and management of some software and accessing it over the Internet. An example is the companies that offer outsourced management of mail servers that are compatible with MS Exchange (a notoriously difficult system to manage). For an annual fee per email address you can have someone else run an Exchange compatible mail server which can talk to Blackberries etc. The main benefit of SAAS is that it saves the risk and expense of managing the software; having the software deployed at a central location is merely a way of providing further cost reductions (some companies that offer SAAS services will also install the software on your servers at greater expense).

The last two definitions are the ones that I consider the most interesting, that is virtual machines (also known as Cloud Infrastructure [4]) and virtual hosting of applications (also known as Cloud Platforms [5]). The Amazon EC2 (Elastic Compute Cloud) service [6] seems to be the leading virtual machine cloud service at the moment, and Google App Engine [7] seems to be the leading cloud application hosting service.

It is also claimed that “Cloud Storage” is part of cloud computing. It seems to me that storing data on servers all over the world is something that was done long before the term “Cloud Computing” was invented, so I don’t think it’s deserving of the term.

Integrity and Mailing Lists

One significant dividing factor between mailing lists is the difference between summary lists (where the person who asks a question receives replies off-list and then sends a summary to the list) and the majority of mailing lists which are discussion lists (where every reply goes to the list by default).

I have seen an argument put forward that trusting the answers on a mailing list that operates under the summary list model is inherently risky and that peer review is required.

It could be argued that the process of sending a summary to the list is the peer review. I’m sure that if someone posts a summary which includes some outrageously bad idea then there will be some commentary in response. Of course the down-side to this is that it takes a few days for responses to the question to arrive and as it’s common that computer problems need to be solved in hours not days the problem will be solved (one way or another) before the summary message is written. But the idea of peer review in mailing lists seems to fall down in many other ways.

The first problem with the idea of peer review is that the usual expectation on mailing lists is that most people will first ask google and only ask the list if a reasonable google search fails (most mailing lists would probably fall apart under the load of repeated questions otherwise). Therefore I expect the majority of such problems to be solved by reading a web page (with no easily accessible peer review). Some of those web pages contain bad advice, and part of the skill involved in solving any problem relates to recognising which advice to follow. Also it’s not uncommon for a question on a discussion list to result in a discussion with two or more radically different points of view being strongly supported.

I think that as a general rule there is little benefit in asking for advice if you lack any ability to determine whether the advice is any good, and which of the possible pieces of good advice actually apply to your situation. Sometimes you can recognise good advice by the people who offer it; in a small community such as a mailing list it’s easy to recognise the people who have a history of offering reasonable advice. It seems that the main disadvantage of asking google when compared to asking a mailing list is that the google results will in most cases contain links to web sites written by people who you don’t know.

Sometimes the advice is easy to assess, for example if someone recommends a little-known and badly documented command-line option for a utility it’s easy to read the man page and not overly difficult to read the source to discover whether it is a useful solution. Even testing a suggested solution is usually a viable option. Also it’s often the case that doing a google search on a recommended solution will be very informative (sometimes you see web pages saying “here’s something I tried which failed”). Recommendations based on personal experience are less reliable due to statistical issues (consider the regular disagreements about the reliability of hard disks, where some people claim that RAID is not necessary due to not having seen failures while others claim that RAID-5 is inadequate because it has failed them). There are also issues of different requirements; trivial issues such as the amount of money that can be spent will often determine which (if any) of the pieces of good advice can be adopted.

The fact that a large number of people (possibly the majority of Internet users) regularly forward as fact rumors that are debunked by Snopes.com (the main site for debunking urban legends) seems to indicate that it is always going to be impossible to increase the quality of advice beyond a certain level. A significant portion of the people on the net are either unwilling to spend a small amount of effort in determining the accuracy of information that they send around or are so gullible that they believe such things beyond the possibility of doubt. Consider that the next time you ask for advice on a technical issue, you may receive a response from someone who forwarded a rumor that was debunked by Snopes.

Sometimes technical advice is just inherently dangerous because it is impossible to verify the integrity of some code that is being shared, or because it may be based on different versions of software. In a previous blog post I analyse some issues related to security of the Amazon EC2 service [1]. While the EC2 service is great in many ways (and implements a good well-documented set of security features on the servers) the unsigned code for managing it and the old versions of the images that they offer to customers raise some serious issues that provide avenues for attack. Getting the EC2 management tools to work correctly on Debian is not trivial, I have released patches but will not release packages for legal reasons. It seems most likely to me that someone will release packages based on my patches (either because they don’t care about the legal issues or they have legal advice suggesting that such things are OK – maybe due to residing in a different jurisdiction). Then people who download such packages will have to determine whether they trust the person who built them. They may also have the issue of Amazon offering a newer version of the software than that which is packaged for Debian (for all I know Amazon released a new version yesterday).

The term integrity, when applied to computers, refers to protecting data from both accidental and malicious damage [2]. In the context of mailing list discussions this covers both poorly considered advice and acts of malice (which, when you consider spam and undisclosed conflicts of interest, are actually quite common).

If you ask for advice in any forum (and I use the term in its broadest sense to cover web “forums”, IRC, twitter, etc) then getting a useful result will depend on the majority of members of the forum possessing sufficient integrity and skill, on being able to recognise the people whose advice should be followed, or on being able to recognise good advice on its own.

I can think of few examples of forums in which I have been involved where the level of skill was sufficient to provide quality answers (and refutations for bad answers) for all areas of discussion that were on topic. People whose advice should generally be followed will often offer advice on areas where their skills are less well developed; someone whose advice can be blindly followed in regard to topic A may not be a reliable source for advice on topic B – which can cause confusion if the topics in question are closely related.

Finally, a fundamental difference between “peer review” as applied to conferences and academic journals and what passes for it on mailing lists is that review for conferences and journals is conducted before publication. Not only does the work have to be good enough to pass the review, but the people doing it will never be sure what the threshold is (and will generally want to do more than a minimal effort), so the quality will be quite high. Peer review on mailing lists, by contrast, is mostly based around the presence or absence of flames. A message which doesn’t attract flames will either have some minimal quality or be related to a topic that is not well known (so no-one regards it as being obviously wrong).

Update: The “peer review” process of publishing a post on my blog revealed that I had incorrectly used who’s instead of whose.

Jabber

I’ve just been setting up jabber.

I followed the advice from System Monitoring on setting up ejabberd [1]. I had previously tried the default jabber server but couldn’t get it working. ejabberd is written in Erlang [2], which has its own daemon that it launches. It seems that Erlang is designed for concurrent and distributed programming, so it has an Erlang Port Mapper Daemon (epmd) to manage communications between nodes. I’ve written SE Linux policy for epmd and for ejabberd, but I’m not sure how well it will work when there are multiple Erlang programs running in different security contexts. It seems that I might be the first person to try running a serious Jabber server on SE Linux. The policy was written a while ago and didn’t support connecting to TCP port 5269 – the standard port for Jabber inter-server communication and the port used by the Gmail jabber server.

ejabberd has a default configuration file that only requires minor changes for any reasonable configuration, and a command-line utility for managing it (adding users, changing passwords, etc). It’s so easy to set up that I got it working and wrote the SE Linux policy for ejabberd in less time than I spent unsuccessfully trying to get jabber to work!

It seems that Jabber clients default to using the domain part of the address to determine which server to talk to (it is possible to change this). So I set up an A record for coker.com.au pointing to my Jabber server; I’ll have the same machine run a web server to redirect http://coker.com.au to http://www.coker.com.au.

For Jabber inter-server communication you need a SRV record [3] in your zone. I used the following line in my BIND configuration:

_xmpp-server._tcp IN SRV 0 5 5269 coker.com.au.
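For reference, the four values after “SRV” are priority (lower is preferred), weight (for load balancing among records of equal priority), port, and target host. A quick shell sketch of how the fields of the line above break down:

```shell
# split the zone-file line into its whitespace-separated fields
record="_xmpp-server._tcp IN SRV 0 5 5269 coker.com.au."
set -- $record
# $4=priority $5=weight $6=port $7=target
echo "priority=$4 weight=$5 port=$6 target=$7"
```

So a client looking up the XMPP server for this domain ends up connecting to coker.com.au on port 5269.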

Also for conferencing the default is to use the hostname “conference” in the domain of your Jabber server. So I’ve created conference.coker.com.au to point to my server. This name is used both in Jabber clients and in sample directives in the ejabberd configuration file, so it seemed too difficult to try something different (and there’s nothing wrong with conference as an A record).

I tried using the cabber client (a simple text-mode client), but found two nasty bugs within minutes (SEGV when a field is missing from the config file – Debian bug #503424 and not resetting the terminal mode on exit – Debian bug #503422). So I gave up on cabber as a bad idea.

I am now testing kopete (the KDE IM client) and Pidgin (formerly known as GAIM). One annoying bug in Kopete is that it won’t let me paste in a password (see Debian bug #50318). My wife is using Pidgin on CentOS 5.2 and finding it to work just as well as GAIM has always worked for her. One significant advantage of Pidgin is conference support – it seems impossible to create a conference in Kopete. Kopete uses one window for each chat, while by default Pidgin/GAIM uses a single window with a tab for each chat (with an option to change it). I haven’t seen an option in Kopete to change this, so if you want to have a single window for all your chats and conferences with tabs then you might want to use Pidgin/GAIM.

Another annoying thing about Kopete is that it strictly has a wizard-based initial install. I found it difficult to talk my mother through installing it because I couldn’t get my machine to display the same dialogs that were shown on her machine. In retrospect I probably should have run “ssh -X test@localhost” to run it under a different account.

The Latest Dick Smith Catalogue

I was just reading the latest catalogue from Dick Smith Electronics (a chain of computer stores in Australia).

The first interesting thing that I noticed is that laptops are cheaper than desktops in all categories. For any combination of CPU power and RAM in a desktop system I can see a laptop advertised with similar specs at a lower price. Of course you won’t get such a big display in a laptop, but big displays don’t always work well. I just read an interesting review of LCD display technology [1] which states (among other things) that TN panels (which provide poor colors and a limited viewing angle) are used in all current 22 inch monitors! They state that the Dell 2007WFP (which I own) comes in two versions, I was fortunate to get the one that doesn’t suck. Based on that review I think I’ll refrain from all further monitor purchases until the technology gets sorted out and it becomes possible to reliably buy the better monitors at a decent price. The most expensive desktop system that Dick Smith advertised in their catalogue has a 22 inch monitor.

It seems that with desktop systems being more expensive, an increasing number of home users will use laptops instead, which will of course change the economics of manufacture. Maybe the desktop computer is about to die out and be replaced by laptops, PDAs, and mobile phone type devices (blackberries etc).

Another interesting thing is an advert for a LASER pointer (it seems that they haven’t been banned as “terrorist weapons” yet). Being on special for a mere $27 is not the interesting thing; what is interesting is that the advert claims “projects up to 500m indoors”. I’m sure it will be handy if I ever have to give a presentation at the Airbus factory, but otherwise it seems quite unlikely that I will ever get an opportunity for a 500m indoor space.

The prices on digital cameras have been dropping consistently for some time. Now they are selling a Samsung S860 (8.1MP with 3* optical zoom) for $98. This is (according to the specs at least) a very powerful camera for a price that most people won’t think twice about. I expect that an increasing number of people will buy new digital cameras every year the way white-box enthusiasts buy new motherboards! Hopefully people will use services such as Freecycle [2] to dispose of all their old cameras, to both avoid pollution and get cameras into the hands of more people.

Very few monitors are being sold with resolutions greater than 2MP (1680*1050 is the highest you can get for a reasonable price). So an 8MP camera allows significant scope for cropping and resizing an image before publishing it on the web. Even the 4MP cameras that were on sale a few years ago (and which are probably being discarded now) are more than adequate for such use.
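The arithmetic behind that claim, as a quick sanity check (using the resolutions mentioned above):

```shell
# pixels in a 1680*1050 monitor vs an 8.1MP camera sensor
echo $((1680 * 1050))                # prints 1764000 – under 2MP
echo $((8100000 / (1680 * 1050)))    # prints 4 – the camera has over 4 times the pixels
```

Even after cropping away three quarters of a frame, an 8MP image still fills such a monitor.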

Links October 2008

Here’s a blog post suggesting that anti-depressant drugs such as Prozac may have helped the US mortgage crisis [1]. Apparently such drugs cause poor impulse control, so it wouldn’t be a good idea to attend a house auction while using them.

Here’s an interesting idea about lecturing: give 20 minute talks with something else (practical work or group discussion) in between [2]. Michael Lee wants to “capture the power of that strict time limit, the intensity of a well-crafted 20 minutes”. While I’m not sure that a strict time limit is such a great idea, having talks broken up into sections sounds like it has the potential to offer some benefits.

A bible from the 4th century has been found and is being digitised [3]. When the digitisation is complete (next year) it will be published on the net so everyone can see how the bible has changed over the years.

Interesting interview with Jim Gray (of MS Research) about storage [4]. It was conducted in 2003 so technology has moved on, but the concepts remain. His ideas for sharing two terabytes of data by using a courier to deliver an NFS or CIFS file server are interesting, the same thing could be done today with five terabytes for a lower cost.
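To see why the courier wins, here is a back-of-the-envelope calculation (the 24 hour delivery time is my assumption):

```shell
# effective bandwidth of couriering 5TB (decimal terabytes) with an assumed 24 hour delivery
bytes=$((5 * 1000 * 1000 * 1000 * 1000))
seconds=$((24 * 3600))
echo "$((bytes / seconds / 1000000)) MB/s"   # prints "57 MB/s"
```

A sustained 57MB/s was well beyond what most Internet links of the time could deliver.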

Techtarget has a white paper sponsored by Intel about the price/performance of data centers in low-density and high-density designs [5]. I don’t think I’ll ever be in a position to design a data center, but the background information in the paper is very useful.

Google has an interesting set of pages describing their efforts to save power in their data centers [6]. They claim to have the most efficient server rooms ever built, and describe how it saves them a lot of money. One of the interesting things that they do is to use evaporative cooling as the primary cooling method. They also have a RE<C (Renewable Energy cheaper than Coal) project [7].

Here’s a Youtube video of an interesting presentation by Andy Thomson (a psychiatrist at the University of Virginia) about male-bonded coalitionary violence [8]. He shows the evidence of it in chimpanzees, humans, and evidence for it being in the common ancestry of chimps and humans (5-6 million years ago). He also shows a link to modern suicide bombing.

Cyrus is widely regarded as the fastest IMAP server. Linux-Magazin.de published an article last year comparing Cyrus, UW-IMAP, Dovecot, and Courier, and the conclusion is that Courier and Dovecot are the winners [9]. I used Google Translation but the results were not particularly good, so I think I missed some of the points that they were trying to make.

Moth in my Celery

moth in shrink-wrapped celery packet
Above is a picture of a moth that I found in a packet of shrink wrapped celery from Foodworks (a Melbourne chain of grocery stores).

I took several pictures from different angles, but I found that an almost direct photo captured it best; you can see the reflection of the flash covering part of the moth (showing that the plastic wrap is on top of it).

I opened the packet outside and after some prodding the moth flew off.

Upgrading a server to 64bit Xen

I have access to a server in Germany that was running Debian/Etch i386 but needed to be running Xen with the AMD64 version of Debian/Lenny (well, it didn’t really need to be Lenny, but we might as well get two upgrades done at the same time). Most people would probably do a complete reinstall, but I knew that I could do the upgrade while the machine was in a server room without any manual intervention. I didn’t achieve all my goals (I wanted to do it without having to boot the recovery system – we ended up having to boot it twice) but no dealings with the ISP staff were required.

The first thing to do is to get a 64bit kernel running. Based on past bad experiences I’m not going to use the Debian Xen kernel on a 64bit system (in all my tests it has had kernel panics in the Dom0 when doing any serious disk IO). So I chose the CentOS 5 kernel.

To get the kernel running I copied the kernel files (/boot/vmlinuz-2.6.18-92.1.13.el5xen /boot/System.map-2.6.18-92.1.13.el5xen /boot/config-2.6.18-92.1.13.el5xen) and the modules (/lib/modules/2.6.18-92.1.13.el5xen) from a CentOS machine. I just copied a .tgz archive as I didn’t want to bother installing alien or doing anything else that took time. Then I ran the Debian mkinitramfs program to create the initrd (the 32bit tools for creating an initrd work well with a 64bit kernel). Then I created the GRUB configuration entry (just copied the one from the CentOS box and changed the root= kernel parameter and the root GRUB parameter), crossed my fingers and rebooted. I tested this on a machine in my own computer room to make sure it worked before deploying it in Germany, but there was still some risk.
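As a sketch, the GRUB entry ends up looking something like the following (the filenames match the CentOS kernel version above, but the root device and partition are illustrative; a Xen Dom0 boots the hypervisor as the kernel and loads Linux and the initrd as modules):

```
title CentOS Xen (Dom0)
root (hd0,0)
kernel /boot/xen.gz-2.6.18-92.1.13.el5
module /boot/vmlinuz-2.6.18-92.1.13.el5xen root=/dev/sda2 ro
module /boot/initrd.img-2.6.18-92.1.13.el5xen
```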

After rebooting, the command arch reported x86_64 – so the 64bit Xen kernel was running correctly.

The next thing was to create a 64bit Lenny image. I got the Lenny Beta 2 image and used debootstrap to create the image (I consulted my blog post about creating Xen images for the syntax [1] – one of the benefits of blogging about how you solve technical problems). Then I used scp to copy a .tgz file of that to the server in Germany. Unfortunately the people who had set up that server had used all the disk space in two partitions, one for root and one for swap. While I can use regular files for Xen images (with performance that will probably suck a bit – Ext3 is not a great filesystem for big files) I can’t use them for a new root filesystem. So I formatted the swap space as ext3.

Then to get it working I merely had to update the /etc/fstab, /etc/network/interfaces, and /etc/resolv.conf files to make it basically functional. Of course ssh access is necessary to do anything with the server once it boots, so I chrooted into the environment and ran “apt-get update ; apt-get install openssh-server udev ; apt-get dist-upgrade“.

I stuffed this up and didn’t allow myself ssh access the first time, so the thing to do is to start sshd in the chroot environment and make sure that you can really login. Without udev running, an ssh login will probably result in the message “stdin: is not a tty“, which is not a problem. Fixing it with the commands ‘ssh root@server “mkdir /dev/pts”‘ and ‘ssh root@server “mount -t devpts devpts /dev/pts”‘ is not a challenge, but installing udev first is a better idea.

Then after that I added a new grub entry as the default which used the CentOS kernel and /dev/sda1 (the device formerly used for swap space) as root. I initially used the CentOS Xen kernel (all Red Hat based distributions bundle the Xen kernel with the Linux kernel – which makes some sense). But the Debian Xen utilities didn’t like that so I changed to the Debian Xen kernel.

Once I had this basically working I copied the 64bit installation to the original device and put the 32bit files in a subdirectory named “old” (so configuration can be copied). When I changed the configuration and rebooted it worked until I installed SE Linux. It seems that the Debian init scripts will in many situations quietly work when the root device is incorrectly specified in /etc/fstab. This however requires creating a device node somewhere else for fsck, and the SE Linux policy version 2:0.0.20080702-12 was not permitting this. I have since uploaded policy 2:0.0.20080702-13 to fix this bug and requested that the release team allow it in Lenny – I think that a bug which can make a server fail to boot is worthy of inclusion!

Finally to get the CentOS kernel working with Debian you need to load the following modules in the Dom0 (as discussed in my previous post about kernel issues [2]):
blktap
blkbk
netbk

It seems that the Debian Xen kernel has those modules linked in and the Debian Xen utilities expect that.

Currently I’m using Debian kernels 2.6.18 and 2.6.26 for the DomUs. I have considered using the CentOS kernel, but they decided that /dev/console is not good enough for the console of a DomU and used something else. Gratuitous differences are annoying (every other machine, both real and virtual, has /dev/console). If I find problems with the Debian kernels in DomUs I will change to the CentOS kernel. Incidentally, one problem I have had with a CentOS kernel for a DomU (when running on a CentOS Dom0) was that the CentOS initrd seems to have some strange expectations of the root filesystem; when they are not met things go wrong – a common symptom is that the nash process will go in a loop and use 100% CPU time.

One of the problems I had was converting the configuration for the primary network device from eth0 to xenbr0. In my first attempt I had not installed the bridge-utils package and the machine booted up without network access. In future I will set up xenbr1 (a device for private networking that is not connected to an Ethernet device) first and test it; if it works then there’s a good chance that the xenbr0 device (which is connected to the main Ethernet port of the machine) will work.
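For reference, a bridged stanza in /etc/network/interfaces looks something like this (the addresses are illustrative, and the bridge_ports directive only works once the bridge-utils package is installed):

```
auto xenbr0
iface xenbr0 inet static
    address 192.0.2.2
    netmask 255.255.255.0
    gateway 192.0.2.1
    bridge_ports eth0
```

A private xenbr1 would be the same sort of stanza with “bridge_ports none” and an RFC 1918 address.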

After getting the machine going I found a number of things that needed to be fixed with the Xen SE Linux policy. Hopefully the release team will let me get another version of the policy into Lenny (the current one doesn’t work).

Kernel issues with Debian Xen and CentOS Kernels

Last time I tried using a Debian 64bit Xen kernel for Dom0 I was unable to get it to work correctly, it continually gave kernel panics when doing any serious disk IO. I’ve just tried to reproduce that problem on a test machine with a single SATA disk and it seems to be working correctly so I guess that it might be related to using software RAID and LVM (LVM is really needed for Xen and RAID is necessary for every serious server IMHO).

To solve this I am now experimenting with using a CentOS kernel on Debian systems.

There are some differences between the kernels that are relevant; the most significant one is the choice of which modules are linked in to the kernel and which ones have to be loaded with modprobe. The Debian choice is to have the drivers blktap, blkbk, and netbk linked in, while the Red Hat / CentOS choice was to have them as modules. The Debian Xen utilities therefore don’t try to load those modules, so when you use the CentOS kernel without them loaded Xen simply doesn’t work.
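A simple fix (assuming the stock module names above) is to list them in /etc/modules so they are loaded at boot:

```
# /etc/modules – Xen backend drivers the Debian utilities expect to be present
blktap
blkbk
netbk
```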

Error: Device 0 (vif) could not be connected. Hotplug scripts not working.

You will get the above error (after a significant delay) from the command “xm create -c name” if you try and start a DomU that has networking when the driver netbk is not loaded.

XENBUS: Timeout connecting to device: device/vbd/768 (state 3)

You will get the above error (or something similar with a different device number) for every block device from the kernel of the DomU if using one of the Debian 2.6.18 kernels, if using a 2.6.26 kernel then you get “XENBUS: Waiting for devices to initialise“.

Also one issue to note is that when you use a file: block device (i.e. a regular file) Xen will use a loopback device (internally it seems to only like block devices). If you are having this problem and you destroy the DomU (or have it abort after trying for 300 seconds) then it will leave the loopback device enabled (it seems that the code for freeing resources in the error path is buggy). I have filed Debian bug report #503044 [1] requesting that the Xen packages change the kernel configuration to allow more loopback devices, and Debian bug report #503046 [2] requesting that the resources be freed correctly.

Finally the following messages appear in /var/log/daemon.log if you don’t have the driver blktap installed:
BLKTAPCTRL[2150]: couldn’t find device number for ‘blktap0’
BLKTAPCTRL[2150]: Unable to start blktapctrl

It doesn’t seem to cause a problem (in my tests I can’t find something I want to do with Xen that required blktap), but I have loaded the driver – even removing error messages is enough of a benefit.

Another issue is that the CentOS kernel packages include a copy of the Xen kernel, so you have a Linux kernel matching the Xen kernel. So of course it is tempting to try and run that CentOS Xen kernel on a Debian system. Unfortunately the Xen utilities in Debian/Lenny don’t match the Xen kernel used for CentOS 5 and you get messages such as the following in /var/log/xen/xend-debug.log:

sysctl operation failed — need to rebuild the user-space tool set?
Exception starting xend: (13, ‘Permission denied’)

Update: Added a reference to another Debian bug report.

The Next Miserable Failure?

Until very recently I thought that it would be almost impossible to get someone worse than George W Bush as the leader of any significant country. Unfortunately it seems that I was wrong and John McCain and Sarah Palin promise more of the economic, regulatory, and military disasters that are the trademarks of the US Republican party (or at least the dominant Neo-Con branch).

Here are some links about John McCain:

Here’s a good summary of the racial issues in the current US presidential campaign (This is Your Nation on White Privilege) [1].

The Obama campaign is highlighting the connection between John McCain and Charles Keating [2]. McCain was one of the senators helping Keating while his bank (the Lincoln Savings and Loan Association) was going under. In the end 20,000 people lost their savings and the US taxpayers ended up losing $120,000,000,000.

Frank Rich has written an article for the New York Times about the racist attacks on Barack Obama [5]. The current actions of the McCain campaign only barely stop short of calling for an assassination.

The South Florida Times has an interesting article about the McCain family’s history of slave ownership [7]. Now John McCain is not responsible for the actions of his great-great-grandfather in owning slaves, and there’s nothing wrong with having black relatives who are the descendants of some of those slaves (even though there is doubt about whether the female slaves were legally adults or even consented to the sex acts in question). But he should be honest about it. Denying having non-white relatives in the face of the facts seems to be strong evidence of racism. It is however understandable that John doesn’t want to discuss the fact that some of his relatives have announced plans to vote against him.

Rolling Stone magazine published an interesting article about John McCain’s history as a spoiled brat in the navy [10]. It seems that if your father is an admiral you can ignore orders, crash planes, and basically do whatever you like. It also reveals that John was broken by the Viet Cong torturers and provided the name of his ship, the number of raids he had flown, his squadron number and the target of his final raid. I’m not going to criticise John for breaking under torture – I think that the assessment of wing commander John Dramesi (who was tortured by the same Viet Cong torturers but didn’t break) should be accepted. John Dramesi says that McCain “wasn’t exceptional one way or the other” while in captivity. However McCain’s use of his former POW status in propaganda is quite dishonest. John McCain is also documented as having described his wife as a “cunt” and a “trollop“.

Here are some links about Sarah Palin:

Former US Army Brigadier General (retired) Janis L. Karpinski writes about Sarah Palin [3], it’s interesting to hear what an intelligent female soldier has to say about her. One thing that I found noteworthy was the repeated references to “murdering” wild animals, shooting at a defenseless animal is of course quite different from shooting at a person who can shoot back (and different again from commanding an army). Janis also makes reference to Sarah setting the feminist cause back decades – I think that is what Sarah desires though. Also Janis points out the emotional problems for which pit bull terriers are known.

There are many claims that Sarah is a “Maverick” with a record of opposing corruption. This article in the Village Voice documents some of her corrupt activities – including having her home built for free in exchange for assigning the contract to build the Wasilla ice hockey rink [4].

Thomas L. Friedman has written an article about Palin’s Kind of Patriotism [6]. According to Sarah it’s not patriotic to pay taxes, it seems to me that encouraging citizens to disobey the law should disqualify her from being elected without all the other issues. Thomas notes that Sarah is promoting the interests of Saudi Arabia by prolonging the US dependence on oil imports.

The Huffington Post has an interesting article about Sarah Palin’s church [8]. It’s strange how little notice has been taken of Sarah’s former pastor who stated that people who didn’t vote for Bush were likely to go to hell.

The Times has an article about “Troopergate”, some of Sarah Palin’s other corrupt practices, and the role of her husband as a shadow governor [9].

Update: Corrected URL [6].