Music Videos

I’ve been thinking about music videos recently while compiling a list of my favourite videos of all time. It seems that YouTube has changed things through remixes of videos and the ability for anyone to publish to a mass market (although without the possibility of directly making money from it).

Also, today all new PCs (and most PCs in use) are capable of video editing, and the compute power needed for 80s and 90s quality special effects is commonly available (in most cases good art doesn’t need more technical quality than that). So anyone can produce videos (and a quick search of YouTube reveals that many people are producing videos for their favourite songs).

I think that we need a music video for the Free Software Song. One possibility is to base it on the 1984 Apple advert (because it’s the free software community that is opposing Big Brother not Apple). I think it would be good to have multiple versions of the Free Software Song (with matching videos), there could be the version for young children, the Hip-Hop version, the Punk version, etc. Also I think that there is potential for the creation of other songs for the free software community.

One possible way of doing this would be to have a contest for producing music and videos. Maybe a conference such as LCA or OLS could have the judging for such a contest. I would be prepared to donate some money towards the prize pool and I’m sure that other individuals and organisations would also be prepared to do so. If I get some positive feedback on this idea I’ll investigate how to run such a contest.

Here are my favourite videos of the moment. Please let me know of any videos that you think I would like based on this list.

  • Placebo:
    • Infra-Red – I love the Haxor ants (I Lied to You – We Are the Enemy says the CEO), an idea I first saw in the book City by Clifford D. Simak
    • A Song to Say Goodbye – strange and sad. Like much good art it can be interpreted in several ways.
    • Pure Morning – strange video that seems to have nothing to do with the music, but still good
    • Slave to the Wage – interesting and not strange by Placebo standards. I’ve recently decided that I don’t like working in a corporate environment so I can relate to this.
  • Smashing Pumpkins:
    Ava Adore, interesting way of changing scenes, and a very artistic and strange video (matches the song)
  • Duran Duran (who incidentally named their group after a character in Barbarella: Queen of the Galaxy – strangely the spelling is different though):
    • Come Undone, interesting aquarium scenes
    • Too Much Information – they should re-do this and include a reference to the Internet in the lyrics. ;)
    • Wild Boys – Mad Max 3 as a film clip
  • UNKLE:
    • Eye for an Eye – strange and disturbing, as any serious art that is related to war must be
    • Rabbit in Your Headlights – surprising end, I wonder if anyone was injured trying to emulate this clip
  • Nine Inch Nails:
    Head Like a Hole, strange and a bit bizarre at times. Not the greatest of my favourite clips but the music makes up for it.
  • Queen:
    • I Want to Break Free, strangely amusing and very artistic
  • Chemical Brothers:
    • Let Forever Be – my favourite clip of all time. Fractally weird, you can watch it dozens of times and still be missing things.
    • Setting Sun – the world would be a better place if more cops could dance like that! Also is it just me or does the drummer guy look like a Narn from Babylon 5?
    • Out of Control – surprise ending. I would appreciate it if someone who knows the non-English language (probably Spanish) in the clip could point me to a translation.
    • Star Guitar – a real work of art but no plot and I didn’t enjoy the music, I recommend watching it once
    • The Golden Path – I used to wonder whether office work was really so grim in the 60s and 70s, but then I worked for a financial company recently…
  • Fat Boy Slim:
    Praise You – why can’t reality TV be this good?
  • Falco:
    Rock Me Amadeus – let’s represent two totally different cultures (bikers and Austrian high society) in a film clip, silly but amusing
  • Madonna:
    Like A Prayer – I wonder how many racist organizations banned that
  • A-Ha:
    Take On Me – mixing multiple art forms (in this case film and animation) can work really well. Beat Kill Bill to the idea by a couple of decades.
  • Robert Palmer:
    Simply Irresistible – pity that they didn’t hire more women who can dance or at least put the dancers in front of the models. It’s interesting to note that one of the models appears to be actually playing a guitar.
  • Garbage:
  • Michael Jackson:
    Billie Jean – class is timeless.

When to Use SE Linux

Recently someone asked on IRC whether they should use SE Linux on a web server machine (that is being used for no other purpose) and then went on to add “since the webserver is installed as root anyway”.

If a machine is used to run a single non-root application then the potential benefits of using SE Linux are significantly reduced; the main issue is whether the application could exploit a setuid program to gain root access if SE Linux was not there to prevent it.

The interesting point in this case is that the user notes that the webserver runs as root. It was not made clear whether the entire service ran as root or whether the parent ran as root while child processes ran as a different UID (a typical Apache configuration). In the case where the child processes run as non-root it is still potentially possible for a bug in Apache to be used to exploit the parent process and assume its privileges. So it’s reasonable to consider that SE Linux will protect the integrity of the base OS from a web server running as root – even for the most basic configuration (without cgi-bin scripts). If a root owned process that is confined by SE Linux is compromised then as long as there is no kernel vulnerability the base OS should keep its integrity and the sys-admin should be able to login and discover what happened.
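
As a rough sketch of what this confinement looks like (assuming a policy in which Apache runs in the httpd_t domain; the exact domain and type names depend on the policy you use), you can check that the web server really is confined rather than running as an unconfined root process, and query what the policy allows it to do:

# ps -eZ | grep apache
# sesearch --allow -s httpd_t -t shadow_t -c file

The first command shows the security context of each Apache process; the second (from the setools package) lists any rules allowing the web server domain to access files of the shadow password file type, which on a sensible policy should print nothing.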

If the web server is more complex and runs cgi-bin scripts then there is a further benefit for system integrity in that a cgi-bin script could be compromised but the main Apache process (which runs in a different domain) would run without interruption.

When a daemon that runs as non-root is cracked on a non-SE system it will have the ability to execute setuid programs – some of which may have exploitable bugs. Also on a non-SE system every daemon has unrestricted network access in a typical configuration (there is a Netfilter module to control access by UID and GID, but it is very rarely used and won’t work in the case of multiple programs running with the same UID/GID). With SE Linux a non-root daemon will usually have no access to run setuid programs (and if it can run them it will be without a domain transition so they gain no extra privileges). Also SE Linux permits controls over which network ports an application may talk to. So the ability of a compromised server process to attack other programs is significantly reduced on a SE Linux system.
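
As an example of the port controls (a sketch only; http_port_t is the relevant type in the reference policy and your policy may differ), the ports that a confined web server may use are labelled, and the labels can be listed and extended with semanage:

# semanage port -l | grep http_port_t
# semanage port -a -t http_port_t -p tcp 8080

The second command would let a daemon confined in the web server domain use TCP port 8080 in addition to the standard web ports, without granting it access to any other port.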

In summary the more complex your installation is and the more privileges that are required by various server processes the more potential there is to increase the security of your system by using SE Linux. But even on a simple server running only a single daemon as non-root there is potential for SE Linux to provide real benefits to system security.

Suggestions and Thanks

One problem with the blog space is that there is a lot of negativity. Many people seem to think that if they don’t like a blog post then the thing to do is to write a post complaining about it – or even worse a complaint that lacks specific details to such an extent that the subject of the complaint would be unable to change their writing in response. The absolute worst thing to do is to post a complaint in a forum that the blog author is unlikely to read – which would be a pointless whinge that benefits no-one.

Of course an alternative way for the recipient to take such complaints, as suggested by Paul Graham, is “you’re on the right track when people complain that you’re unqualified, or that you’ve done something inappropriate” and “if they’re driven to such empty forms of complaint, that means you’ve probably done something good” (Paul was talking about writing essays not blogs, but I’m pretty sure that he intended it to apply to blogs too). If you want to actually get a blog author (or probably any author) to make a change in their material in response to your comments then trying to avoid empty complaints is a good idea. Another useful point Paul makes in the same essay is ““Inappropriate” is the null criticism. It’s merely the adjective form of “I don’t like it.”” – something that’s worth considering given the common criticism of particular blog content as being “inappropriate” for an aggregation feed that is syndicating it. Before criticising blog posts you should consider that badly written criticism may result in more of whatever it is that you object to.

If you find some specific objective problem in the content or presentation of a blog the first thing to do is to determine the correct way of notifying the author. I believe that it’s a good idea for the author to have an about page which either has a mailto URL or a web form for sending feedback; I have a mailto on my about page (here’s the link). Another possible method of contact is a comment on a blog post; if it’s an issue for multiple posts on the blog then writing a comment on the most recent post will do (unless of course it’s a comment about the comment system being broken). For those who are new to blogging, the blog author has full control over what happens to comments. If they decide that your comment about the blog color scheme doesn’t belong on a post about C programming then they can respond to the comment in the way that they think best (making a change or not and maybe sending you an email about it) and then delete the comment if they wish.

If there is an issue that occurs on multiple blogs then a good option is to write a post about the general concept as I did in the case of column width in blogs where I wrote about one blog as an example of a problem that affects many blogs. I also described how I fixed my own blog in this regard (in sufficient detail to allow others to do the same). Note that most blogs have some degree of support for Linkback so any time you link to someone else’s blog post they will usually get notified in some way.

On my blog I have a page for future posts where I invite comments from readers as to what I plan to write about next. Someone who prefers that I not write about topic A could write a comment requesting that I write about topic B instead. WordPress supports pages as a separate type of item to posts. A post is a dated entry while pages are not sorted in date order and in most themes are displayed prominently on the front page (mine are displayed at the top). I suggest that other bloggers consider doing something comparable.

One thing I considered is running a wiki page for the future posts. One of the problems with a wiki page is that I would need to maintain a separate private list of my own, while a page with comments means that only I can edit the page, so I can respond to comments and use the page as my own to-do list. I may experiment with such a wiki page at some future time. One possibility that might be worth considering is a wiki for post requests for any blog that is syndicated by a Planet. For example a wiki related to Planet Debian might request a post about running Debian on the latest SPARC systems; the first blogger to write a post on this topic could then remove the entry from the wish-list (maybe adding the URL to a list of satisfied requests). If the person who made the original request wanted a more detailed post covering some specific area they could then add such a request to the wish-list page. If I get positive feedback on this idea I’ll create the wiki pages and add a few requests for articles that would interest me to start it up.

Finally to encourage the production of content that you enjoy reading I suggest publicly thanking people who write posts that you consider to be particularly good. One way of thanking people is to cite their posts in articles on your own blog (taking care to include a link to at least one page to increase their Technorati rank) or web site. Another is to include a periodic (I suggest monthly at most) links post that contains URLs of blog posts you like along with brief descriptions of the content. If you really like a post then thank the author by not only giving a link with a description (to encourage other people to read it) but also describing why you think it’s a great post. Also if recommending a blog make sure you give a feed URL so that anyone who wants to subscribe can do it as easily as possible (particularly for the blogs with a bad HTML layout).

Here are some recent blog posts that I particularly liked:

Here are some blogs that I read regularly:

  • Problogger (feed), I don’t think that I’ll be a full-time blogger in the foreseeable future, but his posts have lots of good ideas for anyone who wants to blog effectively. I particularly appreciate the short posts with simple suggestions.
  • Mega Tokyo (feed) – A manga comic on the web. The amusing portrayal of computer gaming fanatics will probably remind most people in the computer industry of some of their friends.
  • Defence and the National Interest (feed). The most interesting part of this (and the only reason I regularly read it) is the blog of William S. Lind (titled On War). William writes some very insightful posts about military strategy and tactics but some things about politics will offend most people who aren’t white Christian conservatives.
    It’s a pity that there is not a more traditional blog feed for the data; the individual archives contain all posts, there is no apparent way of viewing only the posts for the last month (for people who read it regularly in a browser and don’t use an RSS feed), and there is no built-in search functionality.
  • WorseThanFailure.com (was TheDailyWTF.com) (feed) subtitled Curious Perversions in Information Technology. Many amusing anecdotes that illustrate how IT projects can go wrong. This is useful for education, amusement, and as a threat (if you do THAT then we could submit to WorseThanFailure.com).
  • XKCD – a stick-figure web comic, often criticised for the drawing quality by people who just don’t get it; some people read comics for amusement and insightful commentary not drawings. It’s yet another example of content beating presentation when there’s a level playing field.

Finally I don’t read it myself, but CuteOverload.com is a good site to refer people to when they claim that the Internet is too nasty for children – the Internet has lots of pictures of cute animals!

Feedburner Item Link Clicks

For a while I used the Item Link Clicks feature in Feedburner. For those who aren’t aware Feedburner is a service that proxies access to an RSS feed (you need to either publish the Feedburner URL as the syndication link or use a HTTP redirect to send the requests there – I use a HTTP redirect). Then when people download the feed they get it from Feedburner which is fast and reliable (unlike my blog on a bad day) and which also tracks some statistics which can be interesting.
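
For reference, the redirect can be done with a couple of mod_rewrite directives in the Apache virtual host configuration, roughly like the following (the feed name is hypothetical, and the user-agent check stops FeedBurner’s own fetcher from being redirected back to itself):

RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} !FeedBurner
RewriteRule ^/feed/?$ http://feeds.feedburner.com/example-blog [R=302,L]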

The Item Link Clicks feature rewrites the guid URLs to point to a Feedburner URL that will redirect back to the original post (and track clicks along the way). The down-side of doing this is that some people who read blogs via Planet installations just copy the link from the Planet page when citing a blog post instead of actually visiting the blog in question. This causes a potential problem for the person citing the post in that they won’t know whether the URL is valid unless they visit it. So when (not if) people have misconfigured blogs that are widely syndicated the people who cite them without verifying the links could end up linking to invalid URLs. The problem for the person who is cited is that such Feedburner redirects don’t seem to be counted as part of the Technorati ranking (which is a count of the number of links to a blog in the last 6 months and gives a rough approximation of how important the blog is). The Technorati rating can sometimes be used in negotiations with an advertiser and is often used when boasting about how popular a blog is.

To increase my Technorati ranking I have stopped using the Feedburner URL rewriting feature. For people who view my blog directly or through a Planet installation this will not make any noticeable difference. The problem is for people who use a service that syndicates RSS feeds and then forwards them on by email; such people received two copies of the last 10 items because the URL (GUID) change means that the posts are seen as new (Planet solves this by deleting the posts which are seen as unavailable and then creating new posts with the new URLs, so no change is visible to the user).

Based on this experience I suggest not using URL rewriting services. They will hurt your Technorati ranking, give little benefit (IMHO) and annoy the small number of RSS to email readers. Particularly don’t change your mind about whether to use such a feature or not; changing the setting regularly would be really annoying. Also this means that if you use such a service you should take care not to let your Feedburner redirection ever get disabled. A minor Apache configuration error corrected a day later could end up sending all the posts in the current feed an extra two times.

Controlling a STONITH and Upgrading a Cluster

One situation that you will occasionally encounter when running a Heartbeat cluster is a need to prevent a STONITH of a node. As documented in my previous post about testing STONITH the ability to STONITH nodes is very important in an operating cluster. However when the sys-admin is performing maintenance on the system or programmers are working on a development or test system it can be rather annoying.

One example of where STONITH is undesired is when upgrading packages of software related to the cluster services. If during a package upgrade the data files and programs related to the OCF script are not synchronised (EG you have two programs that interact and upgrading one requires upgrading the other) at the moment that the status operation is run then an error may occur which may trigger a STONITH. Another possibility is that if using small systems for testing or development (EG running a cluster under Xen with minimal RAM assigned to each node) then a package upgrade may cause the system to thrash which might then cause a timeout of the status scripts (a problem I encounter when upgrading my Xen test instances that have 64M of RAM).

If a STONITH occurs during the process of a package upgrade then you are likely to have consistency problems with the OS due to RPM and DPKG not correctly calling fsync(). This can cause the OCF scripts to always fail to run the status command, which can cause an infinite loop of the cluster nodes in question being STONITHed. Incidentally the best way to test for this (given the problems of a STONITH sometimes losing log data) is to boot the node in question without Heartbeat running and then run the OCF status commands manually (I previously documented three ways of doing this).
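
As a sketch of one way to do this (the resource agent and parameter here are just an illustration, substitute whatever your configuration actually uses), the OCF script can be run by hand with the environment variables that the cluster would normally provide:

# export OCF_ROOT=/usr/lib/ocf
# OCF_RESKEY_ip=10.0.0.200 /usr/lib/ocf/resource.d/heartbeat/IPaddr2 monitor
# echo $?

A return code of 0 means the resource is running and 7 means it is cleanly stopped (as per the OCF specification); anything else indicates the sort of error that could trigger a STONITH.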

Of course the ideal (and recommended) way of solving this problem is to migrate all services from a node using the crm_resource program. But in a test or development situation you may forget to migrate all services or simply forget to run the migration before the package upgrade starts. In that case the best thing to do is to remove the ability to call STONITH. For my testing I use Xen and have the nodes ssh to the Dom0 to call STONITH, so all I have to do to remove the STONITH ability is to stop the ssh daemon on the Dom0. For a more serious test network (EG using IPMI or an equivalent technology to perform a hardware STONITH as well as ssh for OS level STONITH on a private network) a viable option might be to shut down the switch port used for such operations – shutting down switch ports is not a nice thing to do, but to allow you to continue work on a development environment without hassle it’s a reasonable hack.
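
For the recommended approach the commands look roughly like this (the resource and node names are hypothetical, and the exact options vary between Heartbeat versions, so check crm_resource --help on your system):

# crm_resource -M -r my_resource -H node2
# apt-get upgrade
# crm_resource -U -r my_resource

The first command moves the resource to the other node before the upgrade and the last one removes the migration constraint afterwards so that the resource can move back when the cluster chooses.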

When choosing your method of STONITH it’s probably worth considering what the possibilities are for temporarily disabling it – preferably without having to walk to the server room.

Colorado Software Summit 2007

For about 5 years I attended the Colorado Software Summit conference. The first one was the last conference under the old name (ColoradOS/2) but then as OS/2 was rapidly losing market share and the conference delegates changed their programming interests it changed to become a Java conference.

The Colorado Software Summit rapidly became known as THE event to really learn about Java; other conferences are larger and have a higher profile but the organisers of CSS decided to keep the numbers smaller (600 is usually the maximum number of delegates) to provide better opportunities for the delegates to meet and confer. One of the attractions of CSS is the large number of skilled and experienced people who attend; there are many delegates who can teach you lots of interesting things even though they aren’t on the speaking list. I ended up never doing any serious Java programming, but I still found that I learned enough and had enough fun to justify the expense.

Currently there is an early registration open which saves $200 off the full price ($1,795 instead of $1,995), this lasts until the 31st of August. In addition to this the organisers have offered a further $100 discount to the first five readers of my blog who register personally (IE an individual not a corporation is paying for the ticket). To take advantage of the extra $100 discount you must include the code CSS509907 in your registration.

PS I have no financial interest in this matter. I like the conference organisers, but that largely stems from the fact that they run great conferences that I have enjoyed. I recommend the conference because it’s really good.

A Great Advertising Web Site

The site noonebelongsheremorethanyou.com is an advert for a book of short stories. The web site is funny and quirky (two qualities that are required for a site to become virally popular), works well at all browser sizes, has a navigation method that is unique (or at least something I don’t recall seeing done so well in 10 years of web surfing) and displays all the needed information.

I feel inclined to buy the book just to support the creation of amusing web sites!

Update: Here is the Wikipedia page for Miranda July, thanks to meneame.net for the link.

ARP

In the IP protocol stack the lowest level protocol is ARP (the Address Resolution Protocol). ARP is used to request the Ethernet hardware (MAC) address of the host which owns a particular IP address.

# arping 192.168.0.43
ARPING 192.168.0.43
60 bytes from 00:60:b0:3c:62:6b (192.168.0.43): index=0 time=339.031 usec
60 bytes from 00:60:b0:3c:62:6b (192.168.0.43): index=1 time=12.967 msec
60 bytes from 00:60:b0:3c:62:6b (192.168.0.43): index=2 time=168.800 usec
--- 192.168.0.43 statistics ---
3 packets transmitted, 3 packets received, 0% unanswered

One creative use of this is the program arping which will send regular ARP request packets for an IP address and give statistics on the success of getting responses. The above is the result of an arping command which shows that the machine in question responds in about 13 msec or less. One of the features of arping (when compared to the regular ping which uses an ICMP echo) is that it will operate when the interface has no IP address assigned or when the IP address does not match the netmask for the network in question.

This means that if you have a network which lacks DHCP and you want to find a spare IP address in the range that is used then you can use arping without assigning yourself an IP address first. If you wanted to use ping in that situation then you would have to first assign an IP address in which case you may have already broken the network!
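
For example, a quick way to look for unused addresses in part of a range is a loop like the following, run as root and without configuring any address on the interface first (the address range is obviously site specific, and a machine that is merely switched off will also show up as free):

for i in $(seq 100 120) ; do
  arping -c 2 192.168.0.$i | grep -q "bytes from" || echo "192.168.0.$i appears to be unused"
done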

Another useful utility is arpwatch. This program listens to ARP traffic and will notify the sys-admin when new machines appear. The notification message will include the Ethernet hardware address and the name of the manufacturer of the device (if it’s known). When you use arpwatch you can say “who added the device with the Intel Ethernet card to the network at lunch time?” instead of “who did something recently to the network that made it break?”. The more specific question is more likely to get an accurate answer.

IT Recruiting Agencies – Advice for Contract Workers

I read an interesting post on Advogato about IT recruiting agencies (along with an interesting preface about medical treatment for broken ribs).

Their report closely mirrored my experience in many ways. Here are what I consider to be the main points for a job applicant dealing with recruiters:

  1. Ask for more than you believe you are worth – the worst they can do is say “no” (and you will feel like a fool if the agency pays you less than half what the client pays because you didn’t ask for enough).
  2. Put lots of terms in your CV that will work for grep or other searches. A human who reads your CV will know that if you describe 3 years of Linux sys-admin experience then you can do Bash shell scripting and sys-admin work on other versions of Unix. But if a search doesn’t match it then the typical recruiting agent won’t offer you the position. I have idly considered saying things like “Perl (not Pearl) experience” to catch mis-spelled grep operations.
  3. Recruiting agents will frequently demand that you re-write your CV to match a position that they have open, they will say things such as “you claim 3 years of shell scripting and Perl experience but I don’t see that on your CV” and insist that you re-write it to give more emphasis to that area.
  4. Most recruiting agents are compulsive liars and don’t understand computers; you have to deal with the fact that to get most of the better paying positions you need to have an incompetent liar represent you. Avoid the stupid liars though. For example I once refused to deal with an agent who told me about his plans for stealing the CV database from the agency he worked for and selling it to another agency – not because he was shifty in every possible way, but because he was so stupid as to boast about such things immediately after meeting me on a train.
  5. Expect that recruiting agents won’t understand the technology. If you politely and subtly offer to assist them in writing a letter to a client recommending you then they will often accept. Why would they go to the effort of assessing your skills and writing a short letter to the client describing how good you are when you can do that for them? On one particularly amusing occasion I was applying for a position with IBM and the recruiting agent had been supplied with a short quiz of technical skills to assess all applicants – they gave me the answer sheet and asked me to self-assess (I got 100% – but it was an easy test and I would have got the same result anyway).
  6. Some levels of stupidity are so great that you should avoid dealing with the agent (and possibly the agency that employs them). Being unable to view a HTML file is one criterion I have used since 1999 (every OS since about 1998 came with a web browser built in). Another example is an agent who tried to convince me that “.au” is not a valid suffix for an email address (I was applying for a sys-admin job with an ISP). Job adverts that mis-spell terms (such as Perl spelled as Pearl) are also a warning sign.
  7. Gossip is important to your business! Some agencies will pay you what you earn and merely terminate your contract when things go wrong. Other agencies will refuse to pay you when things go bad, or even demand that paid money be returned and threaten legal action. Talk to other contract workers in your region and learn the goss about the bad agencies. Also track agency name changes, when a bad agency changes name don’t be fooled.

When applying for a position advertised by an agency you will ideally start by seeing an advert with a phone number and an email address. The best strategy in that case seems to be to send your CV with a brief cover letter and then about 5 minutes after your mail server sends the message to their mail server you phone them. I found that I got a significantly higher success rate (in terms of having the agent send my CV to the client) if I phoned them when my CV arrived.

Sometimes a fax number is advertised; unless there is some problem that prevents sending a document via email (such as the agency having a broken mail server), do not FAX them. A faxed document will have to be faxed on to the client, will look bad after the double-fax operation and will prevent the agent from grepping it. Rumor has it that agents will often post fake adverts for the purpose of collecting CVs (so that they can boast to clients about the number of candidates they have on file).

In most situations a recruiting agent should insist on meeting you for an interview before sending your CV to a client. The only exception is if you are applying for a job in another country. Meeting an agent at a restaurant or other public place is not uncommon (often they want to meet you while travelling between other locations and sometimes their main office is not in a good location). I suspect that some agencies start with a “virtual office” and perform all their interviews in public places (this doesn’t mean that they will do a worse job than the more established agencies). If an agent is prepared to recommend you to a client without meeting you then they are not doing their job properly. It used to be that there were enough agencies pretending to do their job that you could ignore the agencies that will recommend any unseen candidate. But now an increasing number of agencies do this and if you want a contract you may have to deal with them.

When an agency has a fancy office keep in mind that they paid for it by taking money from people like you! For contract work a recruiting agent is not your friend, they make their money by getting you to accept less money than the client pays them – the less they pay you the more money they make. A common claim is “we only take a fixed percentage of what the client pays”, but when you ask what that percentage is they refuse to answer – I guess that the fixed percentage is 50% or as close to it as they can manage.

Ethernet Bonding and a Xen Bridge

After getting Ethernet Bonding working (see my previous post) I tried to get it going with a bridge for Xen.

I used the following in /etc/network/interfaces to configure the bond0 device and to make the Xen bridge device xenbr0 use the bond device:

# bond0 has no address of its own and is brought up by xenbr0 below
iface bond0 inet manual
pre-up modprobe bond0
pre-up ifconfig bond0 up
# fixed MAC address so the bond doesn't take the address of whichever slave comes up first
hwaddress ether 00:02:55:E1:36:32
slaves eth0 eth1

# the Xen bridge uses the bond as its only physical port
auto xenbr0
iface xenbr0 inet static
pre-up ifup bond0
address 10.0.0.199
netmask 255.255.255.0
gateway 10.0.0.1
bridge_ports bond0

But things didn’t work well. A plain bond device worked correctly in all my tests, but when I had a bridge running over it I had problems every time I tried pulling cables. My test for a bond is to boot the machine with a cable in eth0, then when it’s running switch the cable to eth1. This means there is a few seconds of no connectivity and then the other port becomes connected. In an ideal situation at least one port would work at all times – but redundancy features such as bonding are not for an ideal situation! When doing the cable switching test I found that the bond device would often get into a state where every two seconds (the configured ARP ping time for the bond) it would change its mind about the link status and have the link down half the time (according to the logs – according to ping results it was down all the time). This made the network unusable.
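
For reference, the ARP monitoring interval mentioned above is a bonding module parameter. With the modprobe bond0 approach used in the interfaces file it is set with something roughly like this in a file under /etc/modprobe.d/ (using the gateway address as the ARP target is just an example):

alias bond0 bonding
options bonding mode=active-backup arp_interval=2000 arp_ip_target=10.0.0.1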

Now I have decided that Xen is more important than bonding so I’ll deploy the machine without bonding.

One thing I am considering for next time I try this is to use bridging instead of bonding. The bridge layer will handle multiple Ethernet devices, and if they are both connected to the same switch then the Spanning Tree Protocol (STP) is designed for exactly this situation and should handle it. So instead of having a bond of eth0 and eth1 and running a bridge over that I would just bridge eth0, eth1, and the Xen interfaces.
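
A sketch of what that might look like in /etc/network/interfaces (untested, unlike the bonding configuration above; STP is turned on so that the redundant path can be blocked):

auto xenbr0
iface xenbr0 inet static
address 10.0.0.199
netmask 255.255.255.0
gateway 10.0.0.1
bridge_ports eth0 eth1
bridge_stp on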