Suggestions and Thanks

One problem with the blog space is that there is a lot of negativity. Many people seem to think that if they don’t like a blog post then the thing to do is to write a post complaining about it – or, even worse, a complaint that lacks specific details to such an extent that the subject of the complaint would be unable to change their writing in response. The absolute worst thing to do is to post a complaint in a forum that the blog author is unlikely to read – a pointless whinge that benefits no-one.

Of course an alternative way for the recipient to take such complaints is suggested by Paul Graham: “you’re on the right track when people complain that you’re unqualified, or that you’ve done something inappropriate” and “if they’re driven to such empty forms of complaint, that means you’ve probably done something good” (Paul was talking about writing essays rather than blogs, but I’m pretty sure that he intended it to apply to blogs too). If you want to actually get a blog author (or probably any author) to change their material in response to your comments then avoiding empty complaints is a good idea. Another useful point Paul makes in the same essay is ““Inappropriate” is the null criticism. It’s merely the adjective form of “I don’t like it.”” – something that’s worth considering given the common criticism of particular blog content as being “inappropriate” for an aggregation feed that syndicates it. Before criticising blog posts you should consider that badly written criticism may result in more of whatever it is that you object to.

If you find some specific objective problem in the content or presentation of a blog, the first thing to do is to determine the correct way of notifying the author. I believe that it’s a good idea for the author to have an about page which either has a mailto URL or a web form for sending feedback; I have a mailto on my about page (here’s the link). Another possible method of contact is a comment on a blog post; if it’s an issue affecting multiple posts on the blog then a comment on the most recent post will do (unless of course it’s a comment about the comment system being broken). For those who are new to blogging, the blog author has full control over what happens to comments. If they decide that your comment about the blog color scheme doesn’t belong on a post about C programming then they can respond to the comment in the way that they think best (making a change or not, and maybe sending you an email about it) and then delete the comment if they wish.

If there is an issue that occurs on multiple blogs then a good option is to write a post about the general concept, as I did in the case of column width in blogs, where I wrote about one blog as an example of a problem that affects many. I also described how I fixed my own blog in this regard (in sufficient detail to allow others to do the same). Note that most blogs have some degree of support for Linkback, so any time you link to someone else’s blog post they will usually be notified in some way.

On my blog I have a page for future posts where I invite comments from readers on what I plan to write about next. Someone who prefers that I not write about topic A could write a comment requesting that I write about topic B instead. WordPress supports pages as a separate type of item from posts: a post is a dated entry, while pages are not sorted in date order and in most themes are displayed prominently on the front page (mine are displayed at the top). I suggest that other bloggers consider doing something comparable.

One thing I considered is running a wiki page for the future posts. One of the problems with a wiki page is that I would need to maintain my own private list separately, while a page with comments allows only me to edit the page, so I can respond to comments and use the page as my own to-do list. I may experiment with such a wiki page at some future time. One possibility that might be worth considering is a wiki for post requests for any blog that is syndicated by a Planet. For example, a wiki related to Planet Debian might request a post about running Debian on the latest SPARC systems; the first blogger to write a post on this topic could then remove the entry from the wish-list (maybe adding the URL to a list of satisfied requests). If the person who made the original request wanted a more detailed post covering some specific area they could then add such a request to the wish-list page. If I get positive feedback on this idea I’ll create the wiki pages and add a few requests for articles that would interest me to start it up.

Finally, to encourage the production of content that you enjoy reading, I suggest publicly thanking people who write posts that you consider to be particularly good. One way of thanking people is to cite their posts in articles on your own blog (taking care to include a link to at least one page to increase their Technorati rank) or web site. Another is to include a periodic (I suggest monthly at most) links post that contains URLs of blog posts you like along with brief descriptions of the content. If you really like a post then thank the author by not only giving a link with a description (to encourage other people to read it) but also describing why you think it’s a great post. Also if recommending a blog make sure you give a feed URL so that anyone who wants to subscribe can do so as easily as possible (particularly for blogs with a bad HTML layout).

Here are some blogs that I read regularly:

  • Problogger (feed) – I don’t think that I’ll be a full-time blogger in the foreseeable future, but his posts have lots of good ideas for anyone who wants to blog effectively. I particularly appreciate the short posts with simple suggestions.
  • Mega Tokyo (feed) – A manga comic on the web. The amusing portrayal of computer gaming fanatics will probably remind most people in the computer industry of some of their friends.
  • Defence and the National Interest (feed). The most interesting part of this (and the only reason I regularly read it) is the blog of William S. Lind (titled On War). William writes some very insightful posts about military strategy and tactics, but some of his writing on politics will offend most people who aren’t white Christian conservatives.
    It’s a pity that there is not a more traditional blog feed for the data: the individual archives contain all posts, there seems to be no way of viewing the posts for the last month (for people who read it regularly in a browser and don’t use an RSS feed), and there is no built-in search functionality.
  • WorseThanFailure.com (was TheDailyWTF.com) (feed), subtitled Curious Perversions in Information Technology. Many amusing anecdotes that illustrate how IT projects can go wrong. This is useful for education, amusement, and as a threat (if you do THAT then we could submit it to WorseThanFailure.com).
  • XKCD – a stick-figure web comic, often criticised for its drawing quality by people who just don’t get it; some people read comics for amusement and insightful commentary, not drawings. It’s yet another example of content beating presentation when there’s a level playing field.

Finally I don’t read it myself, but CuteOverload.com is a good site to refer people to when they claim that the Internet is too nasty for children – the Internet has lots of pictures of cute animals!

Feedburner Item Link Clicks

For a while I used the Item Link Clicks feature in Feedburner. For those who aren’t aware, Feedburner is a service that proxies access to an RSS feed (you need to either publish the Feedburner URL as the syndication link or use an HTTP redirect to send the requests there – I use an HTTP redirect). When people download the feed they get it from Feedburner, which is fast and reliable (unlike my blog on a bad day) and which also tracks some statistics that can be interesting.
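
To illustrate the redirect method, here is a minimal sketch of the sort of Apache configuration I mean – the feed path and Feedburner URL are examples only, and the user-agent exception (so that Feedburner itself can still fetch the original feed) is an assumption based on common practice rather than my exact setup:

# Hypothetical mod_rewrite rules: send feed requests to Feedburner,
# but let Feedburner itself fetch the real feed (paths are examples)
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} !FeedBurner
RewriteRule ^/blog/feed/?$ http://feeds.feedburner.com/MyBlog [R=302,L]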

The Item Link Clicks feature rewrites the guid URLs to point to a Feedburner URL that will redirect back to the original post (and track clicks along the way). The down-side of doing this is that some people who read blogs via Planet installations just copy the link from the Planet page when citing a blog post instead of actually visiting the blog in question. This causes a potential problem for the person citing the post in that they won’t know whether the URL is valid unless they visit it. So when (not if) people have misconfigured blogs that are widely syndicated, the people who cite them without verifying the links could end up linking to invalid URLs. The problem for the person who is cited is that such Feedburner redirects don’t seem to be counted as part of the Technorati ranking (which is a count of the number of links to a blog in the last 6 months and gives a rough approximation of how important the blog is). The Technorati rating can sometimes be used in negotiations with an advertiser and is often used when boasting about how popular a blog is.

To increase my Technorati ranking I have stopped using the Feedburner URL rewriting feature. For people who view my blog directly or through a Planet installation this will not make any noticeable difference. The problem is for people who use a service that syndicates RSS feeds and then forwards them on by email; such people received two copies of the last 10 items, as the URL (GUID) change means that the posts are seen as new (Planet solves this by deleting the posts which are seen as unavailable and then creating new posts with the new URLs, so no change is visible to the user).

Based on this experience I suggest not using URL rewriting services. They will hurt your Technorati ranking, give little benefit (IMHO), and annoy the small number of RSS-to-email readers. In particular, don’t change your mind about whether to use such a feature – changing the setting regularly would be really annoying. This also means that if you use such a service you should take care that your Feedburner redirection never gets disabled. A minor Apache configuration error corrected a day later could result in sending all the posts in the current feed an extra two times.

Controlling a STONITH and Upgrading a Cluster

One situation that you will occasionally encounter when running a Heartbeat cluster is a need to prevent a STONITH of a node. As documented in my previous post about testing STONITH, the ability to STONITH nodes is very important in an operating cluster. However when the sys-admin is performing maintenance on the system, or when programmers are working on a development or test system, it can be rather annoying.

One example of where a STONITH is undesired is when upgrading packages of software related to the cluster services. If, during a package upgrade, the data files and programs related to the OCF script are not synchronised (e.g. you have two programs that interact and upgrading one requires upgrading the other) at the moment that the status operation is run, then an error may occur which may trigger a STONITH. Another possibility is that if you are using small systems for testing or development (e.g. running a cluster under Xen with minimal RAM assigned to each node) then a package upgrade may cause the system to thrash, which might then cause a timeout of the status scripts (a problem I encounter when upgrading my Xen test instances that have 64M of RAM).

If a STONITH occurs during the process of a package upgrade then you are likely to have consistency problems with the OS due to RPM and DPKG not correctly calling fsync(). This can cause the OCF scripts to always fail the status command, which can cause an infinite loop of the cluster nodes in question being STONITHed. Incidentally, the best way to test for this (given that a STONITH sometimes loses log data) is to boot the node in question without Heartbeat running and then run the OCF status commands manually (I previously documented three ways of doing this).
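
As a rough illustration, running a status operation by hand looks something like the following sketch using the standard IPaddr2 agent – the resource parameters are hypothetical and the agent path may vary between distributions:

# Hypothetical example of manually running an OCF monitor/status operation
export OCF_ROOT=/usr/lib/ocf
export OCF_RESKEY_ip=10.0.0.199
/usr/lib/ocf/resource.d/heartbeat/IPaddr2 monitor
echo $?
# 0 means the resource is running, 7 means cleanly stopped,
# anything else is the sort of error that can trigger a STONITH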

Of course the ideal (and recommended) way of solving this problem is to migrate all services from a node using the crm_resource program. But in a test or development situation you may forget to migrate all services, or simply forget to run the migration before the package upgrade starts. In that case the best thing to do is to remove the ability to call STONITH. For my testing I use Xen and have the nodes ssh to the Dom0 to call STONITH, so all I have to do to remove the STONITH ability is to stop the ssh daemon on the Dom0. For a more serious test network (e.g. using IPMI or an equivalent technology to perform a hardware STONITH as well as ssh for OS level STONITH on a private network) a viable option might be to shut down the switch port used for such operations – shutting down switch ports is not a nice thing to do, but to allow you to continue work on a development environment without hassle it’s a reasonable hack.
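
For reference, migrating a resource before maintenance looks something like the following – the resource and node names are hypothetical and option letters may differ slightly between Heartbeat versions:

# Hypothetical example: move a resource away to node2 before maintenance
crm_resource -M -r my_webserver -H node2
# perform the package upgrade, then remove the migration constraint
crm_resource -U -r my_webserver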

When choosing your method of STONITH it’s probably worth considering what the possibilities are for temporarily disabling it – preferably without having to walk to the server room.

Colorado Software Summit 2007

For about 5 years I attended the Colorado Software Summit conference. The first one I attended was the last conference under the old name (ColoradOS/2), but then, as OS/2 rapidly lost market share and the conference delegates changed their programming interests, it changed to become a Java conference.

The Colorado Software Summit rapidly became known as THE event to really learn about Java. Other conferences are larger and have a higher profile, but the organisers of CSS decided to keep the numbers smaller (600 is usually the maximum number of delegates) to provide better opportunities for the delegates to meet and confer. One of the attractions of CSS is the large number of skilled and experienced people who attend; there are many delegates who can teach you lots of interesting things even though they aren’t on the speaking list. I ended up never doing any serious Java programming, but I still found that I learned enough and had enough fun to justify the expense.

Currently early registration is open, which saves $200 off the full price ($1,795 instead of $1,995); this lasts until the 31st of August. In addition to this the organisers have offered a further $100 discount to the first five readers of my blog who register personally (i.e. an individual, not a corporation, is paying for the ticket). To take advantage of the extra $100 discount you must include the code CSS509907 in your registration.

PS I have no financial interest in this matter. I like the conference organisers, but that largely stems from the fact that they run great conferences that I have enjoyed. I recommend the conference because it’s really good.

A Great Advertising Web Site

The site noonebelongsheremorethanyou.com is an advert for a book of short stories. The web site is funny and quirky (two qualities that are required for a site to become virally popular), works well at all browser window sizes, has a navigation method that is unique (or at least something I don’t recall seeing done so well in 10 years of web surfing), and displays all the needed information.

I feel inclined to buy the book just to support the creation of amusing web sites!

Update: Here is the Wikipedia page for Miranda July, thanks to meneame.net for the link.

ARP

In the IP protocol stack the lowest level protocol is ARP (the Address Resolution Protocol), which is used to request the Ethernet hardware (MAC) address of the host that owns a particular IP address.

# arping 192.168.0.43
ARPING 192.168.0.43
60 bytes from 00:60:b0:3c:62:6b (192.168.0.43): index=0 time=339.031 usec
60 bytes from 00:60:b0:3c:62:6b (192.168.0.43): index=1 time=12.967 msec
60 bytes from 00:60:b0:3c:62:6b (192.168.0.43): index=2 time=168.800 usec
— 192.168.0.43 statistics —
3 packets transmitted, 3 packets received, 0% unanswered

One creative use of this is the arping program, which sends regular ARP request packets for an IP address and gives statistics on the success of getting responses. The above is the result of an arping command, showing that the machine in question responds in 12.9msec or less. One of the features of arping (when compared to the regular ping which uses an ICMP echo) is that it will operate when the interface has no IP address assigned, or when the IP address does not match the netmask of the network in question.

This means that if you have a network which lacks DHCP and you want to find a spare IP address in the range that is used, then you can use arping without assigning yourself an IP address first. If you wanted to use ping in that situation then you would have to first assign an IP address, in which case you may have already broken the network!
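
As a sketch of what that looks like in practice (the candidate address and count are illustrative, and option letters vary between the different arping implementations):

# Hypothetical example: probe a candidate address before using it,
# no replies to 3 requests suggests the address is probably free
arping -c 3 192.168.0.250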

Another useful utility is arpwatch. This program listens to ARP traffic and will notify the sys-admin when new machines appear. The notification message will include the Ethernet hardware address and the name of the manufacturer of the device (if it’s known). When you use arpwatch you can ask “who added the device with the Intel Ethernet card to the network at lunch time?” instead of “who did something recently to the network that made it break?”. The more specific question is more likely to get an accurate answer.
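
Running it is simple; a minimal sketch follows (the interface name is an example, and reports typically go to root by email unless configured otherwise):

# Hypothetical example: watch for new stations appearing on eth0
arpwatch -i eth0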

IT Recruiting Agencies – Advice for Contract Workers

I read an interesting post on Advogato about IT recruiting agencies (along with an interesting preface about medical treatment for broken ribs).

That post closely mirrored my experience in many ways. Here are what I consider to be the main points for a job applicant dealing with recruiters:

  1. Ask for more than you believe you are worth – the worst they can do is say “no” (and you will feel like a fool if the agency ends up paying you less than half what the client pays because you didn’t ask for enough).
  2. Put lots of terms in your CV that will work for grep or other searches. A human who reads your CV will know that if you have 3 years of Linux sys-admin experience then you can do BASH shell scripting and sys-admin work on other versions of Unix. But if a search doesn’t match it then the typical recruiting agent won’t offer you the position. I have idly considered saying things like “Perl (not Pearl) experience” to catch mis-spelled grep operations.
  3. Recruiting agents will frequently demand that you re-write your CV to match a position that they have open; they will say things such as “you claim 3 years of shell scripting and Perl experience but I don’t see that on your CV” and insist that you re-write it to give more emphasis to that area.
  4. Most recruiting agents are compulsive liars and don’t understand computers; you have to accept that to get most of the better paying positions you need to have an incompetent liar represent you. Avoid the stupid liars though. For example I once refused to deal with an agent who told me about his plans for stealing the CV database from the agency he worked for and selling it to another agency – not because he was shifty in every possible way, but because he was stupid enough to boast about such things immediately after meeting me on a train.
  5. Expect that recruiting agents won’t understand the technology. If you politely and subtly offer to assist them in writing a letter to a client recommending you then they will often accept. Why would they go to the effort of assessing your skills and writing a short letter to the client describing how good you are when you can do that for them? On one particularly amusing occasion I was applying for a position with IBM and the recruiting agent had been supplied with a short quiz of technical skills to assess all applicants – they gave me the answer sheet and asked me to self-assess (I got 100% – but it was an easy test and I would have got the same result anyway).
  6. Some levels of stupidity are so great that you should avoid dealing with the agent (and possibly the agency that employs them). Being unable to view a HTML file is one criterion I have used since 1999 (every OS since about 1998 has come with a web browser built in). Another example is an agent who tried to convince me that “.au” is not a valid suffix for an email address (I was applying for a sys-admin job with an ISP). Job adverts that mis-spell terms (such as Perl spelled as Pearl) are also a warning sign.
  7. Gossip is important to your business! Some agencies will pay you what you earn and merely terminate your contract when things go wrong. Other agencies will refuse to pay you when things go bad, or even demand that paid money be returned and threaten legal action. Talk to other contract workers in your region and learn the goss about the bad agencies. Also track agency name changes, when a bad agency changes name don’t be fooled.

When applying for a position advertised by an agency you will ideally start by seeing an advert with a phone number and an email address. The best strategy in that case seems to be to send your CV with a brief cover letter and then about 5 minutes after your mail server sends the message to their mail server you phone them. I found that I got a significantly higher success rate (in terms of having the agent send my CV to the client) if I phoned them when my CV arrived.

Sometimes a fax number is advertised; unless there is some problem that prevents sending a document via email (such as the agency having a broken mail server), do not fax them. A faxed document will have to be faxed on to the client, will look bad after the double-fax operation, and will prevent the agent from grepping it. Rumor has it that agents will often post fake adverts for the purpose of collecting CVs (so that they can boast to clients about the number of candidates they have on file).

In most situations a recruiting agent should insist on meeting you for an interview before sending your CV to a client. The only exception is if you are applying for a job in another country. Meeting an agent at a restaurant or other public place is not uncommon (often they want to meet you while travelling between other locations, and sometimes their main office is not in a good location). I suspect that some agencies start with a “virtual office” and perform all their interviews in public places (this doesn’t mean that they will do a worse job than the more established agencies). If an agent is prepared to recommend you to a client without meeting you then they are not doing their job properly. It used to be that there were enough agencies at least pretending to do their job that you could afford to ignore the agencies that would recommend an unseen candidate, but now an increasing number of agencies do this and if you want a contract you may have to deal with them.

When an agency has a fancy office keep in mind that they paid for it by taking money from people like you! For contract work a recruiting agent is not your friend, they make their money by getting you to accept less money than the client pays them – the less they pay you the more money they make. A common claim is “we only take a fixed percentage of what the client pays”, but when you ask what that percentage is they refuse to answer – I guess that the fixed percentage is 50% or as close to it as they can manage.

Ethernet Bonding and a Xen Bridge

After getting Ethernet Bonding working (see my previous post) I tried to get it going with a bridge for Xen.

I used the following in /etc/network/interfaces to configure the bond0 device and to make the Xen bridge device xenbr0 use the bond device:

iface bond0 inet manual
pre-up modprobe bond0
pre-up ifconfig bond0 up
hwaddress ether 00:02:55:E1:36:32
slaves eth0 eth1

auto xenbr0
iface xenbr0 inet static
pre-up ifup bond0
address 10.0.0.199
netmask 255.255.255.0
gateway 10.0.0.1
bridge_ports bond0

But things didn’t work well. A plain bond device worked correctly in all my tests, but when I had a bridge running over it I had problems every time I tried pulling cables. My test for a bond is to boot the machine with a cable in eth0, then when it’s running switch the cable to eth1. This means there is a few seconds of no connectivity and then the other port becomes connected. In an ideal situation at least one port would work at all times – but redundancy features such as bonding are not for ideal situations! When doing the cable switching test I found that the bond device would often get into a state where every two seconds (the configured ARP ping time for the bond) it would change its mind about the link status, having the link down half the time according to the logs (according to ping results it was down all the time). This made the network unusable.

Now I have decided that Xen is more important than bonding, so I’ll deploy the machine without bonding.

One thing I am considering for the next time I try this is to use bridging instead of bonding. The bridge layer will handle multiple Ethernet devices, and if they are both connected to the same switch then the Spanning Tree Protocol (STP) is designed to work in this way and should handle it. So instead of having a bond of eth0 and eth1 and running a bridge over that, I would just bridge eth0, eth1, and the Xen interfaces.
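
A minimal sketch of what such a configuration might look like in /etc/network/interfaces follows – this is untested (the addresses are the ones from my example above), so treat it as an illustration rather than a known-good configuration:

auto xenbr0
iface xenbr0 inet static
address 10.0.0.199
netmask 255.255.255.0
gateway 10.0.0.1
bridge_ports eth0 eth1
bridge_stp on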

Ethernet Bonding on Debian Etch

I have previously blogged about Ethernet bonding on Red Hat Enterprise Linux. Now I have a need to do the same thing on Debian Etch – to have multiple Ethernet links for redundancy so that if one breaks the system keeps working.

The first thing to do on Debian is to install the package ifenslave-2.6 which provides the utility to manage the bond device. Then create the file /etc/modprobe.d/aliases-bond with the following contents for a network that has 10.0.0.1 as either a reliable host or an important router. Note that this will use ARP to ping the router every 2000ms; you could use a lower value for a faster failover, or a higher value to reduce the amount of ARP traffic:
alias bond0 bonding
options bond0 mode=1 arp_interval=2000 arp_ip_target=10.0.0.1

If you want to monitor link status then you can use the following options line instead, however I couldn’t test this because the MII link monitoring doesn’t seem to work correctly on my hardware (there are many Ethernet devices that don’t work well in this regard):
options bond0 mode=0 miimon=100

Then edit the file /etc/network/interfaces and insert something like the following (as a replacement for the configuration of eth0 that you might currently be using). Note that XX:XX:XX:XX:XX:XX must be replaced by the hardware address of one of the interfaces that are being bonded, or by a locally administered address (see this Wikipedia page for details). If you don’t specify the Ethernet address then it will default to the address of the first interface that is enslaved. This might not sound like a problem; however, if the machine boots and a hardware failure makes the primary Ethernet device not visible to the OS (i.e. the PCI card is dead but not killing the machine) then the hardware address of the bond would change, and this might cause problems with other parts of your network infrastructure:
auto bond0
iface bond0 inet static
pre-up modprobe bond0
hwaddress ether XX:XX:XX:XX:XX:XX
address 10.0.0.199
netmask 255.255.255.0
gateway 10.0.0.1
up ifenslave bond0 eth0 eth1
down ifenslave -d bond0 eth0 eth1

There is some special support for bonding in the Debian ifup and ifdown utilities. The following will give the same result as the above in /etc/network/interfaces:
auto bond0
iface bond0 inet static
pre-up modprobe bond0
hwaddress ether 00:02:55:E1:36:32
address 10.0.0.199
netmask 255.255.255.0
gateway 10.0.0.1
slaves eth0 eth1

The special file /proc/net/bonding/bond0 can be used to view the current configuration of the bond0 device.
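
For example, the following command shows the bonding mode, the currently active slave, and the link status of each slave (the exact fields vary with kernel version):

cat /proc/net/bonding/bond0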

In theory it should be possible to use bonding on a workstation with DHCP, but in my brief attempts I have not got it working – any comments from people who have this working would be appreciated. The first pre-requisite is to use either MII monitoring or broadcast mode (mode 3); I experimented with using options bond0 mode=3 in /etc/modprobe.d/aliases-bond but found that it took too long to get the bond working and dhclient timed out.
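
For reference, the sort of configuration I was attempting looked something like the following – as noted above I did not get it working, so treat this as a sketch of the attempt rather than a working example:

auto bond0
iface bond0 inet dhcp
pre-up modprobe bond0
slaves eth0 eth1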

Thanks to the howtoforge.com article and the linuxhorizon.ro article that helped me discover some aspects of this.

Update: Thanks to Guus Sliepen on the debian-devel mailing list for giving an example of the slaves directive as part of an example of bridging and bonding in response to this question.

Porn For Children

James Purser writes about the current plans for Internet filtering in Australia, concentrating on the technical issues (whether it will degrade the ISP service) and the issue of whose moral standards should be enforced for the entire country.

But the fact is that children have never had any problem accessing porn. When I was in grade 4 at primary school (~9yo) a group of boys decided to walk to the local shopping centre at lunch-time and I joined them. At the shopping centre the other boys read Playboy (that was before such magazines were required to be displayed in sealed plastic bags). I didn’t read Playboy because there were some electronics magazines that were more interesting. When in grade 6 (~11yo) a friend told me about his parents’ video collection which featured fellatio and sodomy. I don’t recall whether he offered to show me the videos, but being a good friend I’m sure he would have done so if I had asked. In the early years of high school some boys ran a black-market for second-hand porn magazines (ick); they also sold new magazines that were significantly more expensive. When I was in year 12, digital porn was just becoming popular and the exchange of porn on floppy disk began.

I’m sure that now children use USB sticks to exchange porn that they get from the Internet or other sources.

When I was in year 10 a female dancing instructor ceased working for the school after an up-skirt picture of her was stuck on a notice-board (I guess that her resignation was related to the picture but can’t be sure).

The evidence that I witnessed while at school is that 15yo boys are prepared to photograph unwilling women and exchange the pictures, and that the exchange and sale of all manner of porn is not uncommon at school (including primary school). I don’t think that the schools I attended were in any way unusual in this regard.

When I was at school cameras were large. Unless you had a polaroid camera (which was even larger) the film had to be developed – and the staff at the photo company were potential witnesses. I expect that these factors significantly decreased the amount of such activity.

Now a significant portion of children have a mobile phone, and it seems that a built-in camera is a standard feature of all new phones. Digital cameras (which have much better quality than phone-cameras) are becoming quite cheap. It’s widely regarded that giving a teenager a mobile phone is good for their safety (and it certainly makes it easier to discover where children claim to be), and it’s also widely regarded that a digital camera is a good toy (babies as young as 2 are often given the old camera when their parents get a new one). We should expect the number of children who have digital cameras to rapidly approach 100% of children who desire them.

Given these factors it seems to me that it would be a good idea to allow teenage boys access to better quality porn than they are able to produce (with either willing or unwilling subjects). It has already been shown that increased access to porn reduces the incidence of rape. I expect that the same also applies to the issue of making porn: people who have good access to porn will be less inclined to make their own.

There is some nasty porn out there. If they were to prevent access only to porn that is illegal under Australian law (i.e. pictures of children, animals, rape, etc.) then I don’t think that anyone would object. But preventing access to soft porn such as Playboy (which is so tame that it’s hardly porn by modern standards) is a really bad idea if it will increase the risk of up-skirt photos and the production of child rape movies.

Let’s be sensible and accept the fact that children who want to see porn will see it and focus our attention on what type of porn will be seen by children and whether the “actors” are consenting adults.

PS I spent several years living in Amsterdam and working as a sys-admin for ISPs there.