Here’s an interesting CSPAN interview with Dick Cheney from 1994. It’s amazing how accurate Dick’s analysis of the Iraq situation was when he justified the decision to merely destroy Saddam’s army in Kuwait and not try to occupy Iraq or capture Saddam. It’s a pity that he didn’t stick to that idea.
Since the old CSPAN interview became popular, the MSNBC show Countdown with Keith Olbermann covered it (youtube link). Keith initially said some ridiculous things about heart surgery affecting people’s emotions (that was the medical opinion about 2000 years ago; doctors have learned a lot since then and Keith should learn from them). Then John Nichols of The Beat blog, author of Dick: The Man Who is President (Dick Cheney), makes some interesting comments. John interviewed the professors who taught Dick at university, and their opinion of him matches the current observations – that he believes that the US government can do whatever it wants with no consequences.
When I was about 11 years old I decided that I wanted a career related to computers. My first computer was the TEC-1, a single-board Z80-based kit computer from Talking Electronics magazine (see the photo below). I think that I built it when I was 10.

The computer is 16cm high and 25cm wide. The six seven-segment displays are the only built-in output device (there were optional kits for other output devices). The keypad has the hexadecimal number keys, an “ad” button for entering addresses, a “go” button for executing programs, and “+” and “–” keys for incrementing and decrementing the address. Below the reset button (labelled “R”) you will see the optional function key (whose purpose I can’t remember). Programming this computer required entering the hexadecimal code on the keypad, with the “+” and “–” keys being the main method of editing (the “ad” key was used to jump to a different section of RAM). In editing mode the first four seven-segment displays showed the address (the Z80 could only address 64K of RAM) and the other two showed the memory contents (the word size was one byte). In terms of user-friendliness it was probably about equal to punched cards – apart from the lack of non-volatile storage (unless you built the optional NVRAM kit).
My TEC-1 has 2K of RAM (the 83251R chip is equivalent to an Intel 6116 static RAM chip – 16 kilobits) and 2K of ROM (the chip with the orange sticker labelled Mon1 is a 2716 EPROM – also 16 kilobits).
Not long after that my parents bought the first serious computer for the family, a Microbee – a Z80-based system with a tape drive that used a monochrome monitor of resolution approximately equal to CGA, and which had either 16K or 32K of RAM (I can’t recall). The next family computer was a Microbee Premium Series 128K, which is probably the same model as the one depicted on the Microbee Wikipedia page (a serious omission of the Wikipedia page is that it has no picture of the box containing the PSU and the floppy drives for the Premium Series). My first published article in a computer magazine came when I was about 15 years old: I wrote a long email on a Fidonet echo (mailing list) reviewing a third-party update to the CP/M system for the Premium Series Microbee and was surprised to have it published in the Microbee club magazine (in those days we didn’t bother much about copyright, so no-one asked for my permission before publishing).
I wonder if starting with computers at such an age is typical for people who now contribute to free software development. I think it would be interesting to see some blog posts from other people in the community about how old they were when they started with computers and what type of computer they started with.
I also wonder about the correlation between the age of starting with computers and career success in the computer industry. One significant benefit of starting early was that I could learn things that would be useful for my career in later decades while other children were wasting time studying what teachers told them to study. It also meant that in the later years of high school I could relax, knowing that I could get straight Bs without effort, which was more than was required to enter a CS degree program at that time. Until halfway through year 12 I tried to avoid ever doing homework at home – home time was computer time! Do you think that the age at which you chose your career significantly affected your success? If so, in what way?
If you were asked by parents for advice as to when their child should be given their first computer, what age would you suggest? Unfortunately I usually get asked for advice about such things by people who have children aged 16+ (which is way too late IMHO).
Update: Dbenn recently gave a talk to his son’s primary school about computers and he used the TEC-1 as an example. They are still in use!
It’s interesting to see that Audi is releasing a car with LEDs for all lights, including the headlights. This is being promoted as an environmental benefit, however a quick Google search revealed that my Volkswagen Passat apparently takes 55W headlight globes (giving a total of 110W of electricity used). Even allowing for some inefficiency in the alternator, this would make a very small impact on the fuel use of an engine rated at 140kW. The Audi in question is the R8 (wikipedia link because the Audi web site is badly broken) and has a 300kW engine…
A simple implementation of LED headlights will do some good for plug-in hybrid cars and all-electric vehicles, where saving power is more important – when the technology filters down to cheaper vehicles. One possible use for the technology is to dim the headlights by turning off some of the LEDs in the bank (according to the LED Wikipedia page it is currently impossible to create a single LED that takes more than 1W of power, so a bank of LEDs would be used). Currently you have a choice of using “parking lights” or “headlights” when driving, and when driving just before sunset or at night in the city (where the street lights are bright) you need headlights to allow other drivers to clearly see you, but you don’t need them as bright as they have to be when driving at night in the country. So a range of luminosity levels could be used to increase efficiency in some situations and increase light levels in others.
According to the Luminous efficacy Wikipedia page, current LEDs are up to three times as efficient as quartz halogen incandescent globes, and future developments are likely to increase that to six times the efficiency. Combine that with more effective use of headlights to provide light at the location and level that’s needed (one sixth of the power for the same light output, multiplied by a lower average light requirement) and the result could be using as little as 10% of the electricity for headlights on average!
Another thing that I would like to see is the Adaptive Headlights feature of the better BMWs (which I referenced in a previous post about the BMW 5 and 7 series) implemented in a cheaper and more reliable manner. The feature in question is that the headlights turn when driving around a corner to show the road ahead instead of just shining off the edge of the corner. Implementing such a feature with incandescent lights is difficult because they have to be physically turned, and moving parts tend to break (which increases maintenance costs and decreases the overall reliability of the vehicle). An obvious alternative design is to have a set of LEDs pointing in different directions, where the choice of which LEDs get power determines where the light goes (this would also react faster than physically moving a light); a sketch of the idea appears below. Once LED headlights become common the Adaptive Headlights feature could be implemented in the cheapest cars on the road with minimal extra cost – currently it’s a feature that would be expensive to implement and would increase the sale price of a small car, and probably the service price too.
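As a purely hypothetical sketch (there is no real automotive API here, and all the names and numbers are invented for illustration), choosing which LEDs in a bank to power based on steering angle and a brightness level might look like this:

```c
/* Hypothetical sketch of selecting LEDs in a headlight bank.
   None of this reflects a real automotive API; it just illustrates
   steering the light by powering a subset of a bank of LEDs. */
#include <stdio.h>
#include <stdint.h>

#define NUM_LEDS 16  /* imaginary bank of 16 LEDs per headlight */

/* Return a bitmask of LEDs to power, given steering angle in degrees
   (negative = left) and a brightness level from 0 (parking) to 3 (full). */
static uint16_t headlight_mask(int steering_deg, int level)
{
    /* shift the centre of the lit group towards the corner */
    int centre = NUM_LEDS / 2 + steering_deg / 10;
    if (centre < 0) centre = 0;
    if (centre >= NUM_LEDS) centre = NUM_LEDS - 1;

    int width = 2 * level + 1;          /* brighter = more LEDs lit */
    uint16_t mask = 0;
    for (int i = centre - width / 2; i <= centre + width / 2; i++)
        if (i >= 0 && i < NUM_LEDS)
            mask |= (uint16_t)1 << i;
    return mask;
}

int main(void)
{
    printf("straight ahead, city dim: %04x\n", headlight_mask(0, 1));
    printf("hard left, full beam:     %04x\n", headlight_mask(-40, 3));
    return 0;
}
```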
A question that is often asked is whether to use SE Linux or a chroot to restrict a program.
In Unix, chroot is a way of running a program with a restricted set of directories available (it used to be merely a sub-tree, but with bind mounts it can be any arbitrary set of directory trees). A chroot can be implemented in a daemon (it can call the chroot(2) system call before it drops its privileges) or by a shell script (through the chroot(8) utility). The disadvantages of a chroot are that root can escape from it, a chrooted process can see the existence of non-chroot processes (ps and similar programs work in the same way in all chroot environments), and inter-process communication is not prevented. One solution to this is an enhanced chroot environment (which typically requires a kernel patch) where the chrooted processes can not see other processes via ps and have other limits applied to what they are permitted to do (there are several kernel patches that implement such restrictions). In the early days of SE Linux development I implemented similar functionality in SE Linux policy (here is the paper I presented at Linux Kongress 2002).
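As an illustration of the daemon-side approach, here is a minimal sketch of the pattern: call chroot(2) while still root, then drop privileges. The path, UID/GID, and daemon name are examples only, and real code would need more error handling:

```c
/* Minimal sketch of a daemon chrooting itself and then dropping root.
   The path, UID/GID, and daemon name are examples only. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <grp.h>

int main(void)
{
    /* chroot(2) requires root, so it must be done before dropping privileges */
    if (chroot("/var/chroot/mydaemon") != 0 || chdir("/") != 0) {
        perror("chroot");
        return 1;
    }
    /* drop supplementary groups, then GID, then UID (order matters) */
    if (setgroups(0, NULL) != 0 || setgid(65534) != 0 || setuid(65534) != 0) {
        perror("drop privileges");
        return 1;
    }
    /* from here the process only sees the chroot tree; a process that
       kept root could still escape, which is the weakness noted above */
    execl("/bin/mydaemon", "mydaemon", (char *)NULL);
    perror("execl");
    return 1;
}
```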
Configuring a chroot environment is inconvenient. If it is configured in the traditional manner (copying files to the chroot instead of bind mounting the directories) then old versions may exist in the chroot after new versions with security fixes have been installed in the main environment.
SE Linux provides better security than a typical chroot environment by controlling all interaction between processes. It provides more flexibility than an enhanced chroot environment by being configured entirely by policy and not requiring a kernel recompile to change the way it works.
I believe that the correct thing to do is to cease using chroot entirely and use SE Linux instead.
I’ve been thinking about music videos recently while compiling a list of my favourite videos of all time. It seems that YouTube has changed things through the re-mixes of videos and the ability of anyone to publish for a mass market (although without the possibility of directly making money from it).
Also today all new PCs (and most PCs that are in use) are capable of being used for video editing, and the compute power needed for ’80s and ’90s-quality special effects is also commonly available (in most cases good art doesn’t need more technical quality than that). So anyone can produce videos (and a quick search of YouTube reveals that many people are producing videos for their favourite songs).
I think that we need a music video for the Free Software Song. One possibility is to base it on the 1984 Apple advert (because it’s the free software community that is opposing Big Brother not Apple). I think it would be good to have multiple versions of the Free Software Song (with matching videos), there could be the version for young children, the Hip-Hop version, the Punk version, etc. Also I think that there is potential for the creation of other songs for the free software community.
One possible way of doing this would be to have a contest for producing music and videos. Maybe a conference such as LCA or OLS could have the judging for such a contest. I would be prepared to donate some money towards the prize pool and I’m sure that other individuals and organisations would also be prepared to do so. If I get some positive feedback on this idea I’ll investigate how to run such a contest.
Here are my favourite videos of the moment. Please let me know of any videos that you think I would like based on this list.
- Placebo:
- Infra-Red – I love the Haxor ants (“I Lied to You – We Are the Enemy”, says the CEO); I first saw that idea in Clifford D. Simak’s book City
- A Song to Say Goodbye – strange and sad. Like much good art it can be interpreted in several ways.
- Pure Morning – strange video that seems to have nothing to do with the music, but still good
- Slave to the Wage – interesting and not strange by Placebo standards. I’ve recently decided that I don’t like working in a corporate environment so I can relate to this.
- Smashing Pumpkins:
- Ava Adore – interesting way of changing scenes, and a very artistic and strange video (matches the song)
- Duran Duran (who incidentally named their group after a character in Barbarella: Queen of the Galaxy – strangely the spelling is different though):
- Come Undone – interesting aquarium scenes
- Too Much Information – they should re-do this and include a reference to the Internet in the lyrics. ;)
- Wild Boys – Mad Max 3 as a film clip
- UNKLE:
- Eye for an Eye – strange and disturbing, as any serious art that is related to war must be
- Rabbit in Your Headlights – surprising end, I wonder if anyone was injured trying to emulate this clip
- Nine Inch Nails:
- Head Like a Hole – strange and a bit bizarre at times. Not the greatest of my favourite clips but the music makes up for it.
- Queen:
- I Want to Break Free – strangely amusing and very artistic
- Chemical Brothers:
- Let Forever Be – my favourite clip of all time. Fractally weird, you can watch it dozens of times and still be missing things.
- Setting Sun – the world would be a better place if more cops could dance like that! Also is it just me or does the drummer guy look like a Narn from Babylon 5?
- Out of Control – surprise ending. I would appreciate it if someone who knows the non-English language (probably Spanish) in the clip could point me to a translation.
- Star Guitar – a real work of art, but it has no plot and I didn’t enjoy the music; I recommend watching it once
- The Golden Path – I used to wonder whether office work was really so grim in the 60s and 70s, but then I worked for a financial company recently…
- Fatboy Slim:
- Praise You – why can’t reality TV be this good?
- Falco:
- Rock Me Amadeus – let’s represent two totally different cultures (bikers and Austrian high society) in a film clip, silly but amusing
- Madonna:
- Like A Prayer – I wonder how many racist organizations banned that
- A-Ha:
- Take On Me – mixing multiple art forms (in this case film and animation) can work really well. It beat Kill Bill to the idea by a couple of decades.
- Robert Palmer:
- Simply Irresistible – pity that they didn’t hire more women who can dance, or at least put the dancers in front of the models. It’s interesting to note that one of the models appears to be actually playing a guitar.
- Garbage:
- Michael Jackson:
- Billie Jean – class is timeless.
Recently someone asked on IRC whether they should use SE Linux on a web server machine (that is being used for no other purpose) and then went on to add “since the webserver is installed as root anyway”.
If a machine is used to run a single non-root application then the potential benefits of using SE Linux are significantly reduced; the main issue is whether the application could exploit a setuid program to gain root access if SE Linux was not there to prevent it.
The interesting point in this case is that the user notes that the webserver runs as root. It was not made clear whether the entire service ran as root or whether the parent ran as root while child processes ran as a different UID (a typical Apache configuration). In the case where the child processes run as non-root it is still potentially possible for a bug in Apache to be used to exploit the parent process and assume its privileges. So it’s reasonable to expect that SE Linux will protect the integrity of the base OS from a web server running as root – even for the most basic configuration (without cgi-bin scripts). If a root-owned process that is confined by SE Linux is compromised then, as long as there is no kernel vulnerability, the base OS should keep its integrity and the sys-admin should be able to login and discover what happened.
If the web server is more complex and runs cgi-bin scripts then there is a further benefit for system integrity in that a cgi-bin script could be compromised but the main Apache process (which runs in a different domain) would run without interruption.
When a daemon that runs as non-root is cracked on a non-SE system it will have the ability to execute setuid programs – some of which may have exploitable bugs. Also on a non-SE system every daemon has unrestricted network access in a typical configuration (there is a Netfilter module to control access by UID and GID, but it is very rarely used and won’t work in the case of multiple programs running with the same UID/GID). With SE Linux a non-root daemon will usually have no access to run setuid programs (and if it can run them it will be without a domain transition, so they gain no extra privileges). Also SE Linux permits control over which network ports an application may talk to. So the ability of a compromised server process to attack other programs is significantly reduced on an SE Linux system.
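To make the domain concept a little more concrete, here is a small sketch (assuming a system with SE Linux enabled and the libselinux development files installed) that prints the security context, including the domain, of whatever process runs it:

```c
/* Print the SE Linux security context (including the domain) of the
   current process; build with: cc getcon.c -lselinux */
#include <stdio.h>
#include <selinux/selinux.h>

int main(void)
{
    char *con = NULL;

    if (!is_selinux_enabled()) {
        fprintf(stderr, "SE Linux is not enabled on this system\n");
        return 1;
    }
    if (getcon(&con) != 0) {
        perror("getcon");
        return 1;
    }
    /* on a typical policy an Apache parent process runs in a domain
       such as httpd_t while cgi-bin scripts run in a separate domain
       (e.g. httpd_sys_script_t), which is what limits the damage */
    printf("%s\n", con);
    freecon(con);
    return 0;
}
```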
In summary: the more complex your installation is and the more privileges are required by the various server processes, the more potential there is to increase the security of your system by using SE Linux. But even on a simple server running only a single daemon as non-root there is potential for SE Linux to provide real benefits to system security.
One problem with the blog space is that there is a lot of negativity. Many people seem to think that if they don’t like a blog post then the thing to do is to write a post complaining about it – or even worse a complaint that lacks specific details to such an extent that the subject of the complaint would be unable to change their writing in response. The absolute worst thing to do is to post a complaint in a forum that the blog author is unlikely to read – which would be a pointless whinge that benefits no-one.
Of course an alternative way for the recipient to take such complaints is suggested by Paul Graham: “you’re on the right track when people complain that you’re unqualified, or that you’ve done something inappropriate” and “if they’re driven to such empty forms of complaint, that means you’ve probably done something good” (Paul was talking about writing essays, not blogs, but I’m pretty sure that he intended it to apply to blogs too). If you want to actually get a blog author (or probably any author) to make a change in their material in response to your comments then trying to avoid empty complaints is a good idea. Another useful point Paul makes in the same essay is “‘Inappropriate’ is the null criticism. It’s merely the adjective form of ‘I don’t like it.’” – something that’s worth considering given the common criticism of particular blog content as being “inappropriate” for an aggregation feed that is syndicating it. Before criticising blog posts you should consider that badly written criticism may result in more of whatever it is that you object to.
If you find some specific objective problem in the content or presentation of a blog, the first thing to do is to determine the correct way of notifying the author. I believe that it’s a good idea for the author to have an about page which has either a mailto URL or a web form for sending feedback; I have a mailto link on my about page (here’s the link). Another possible method of contact is a comment on a blog post; if it’s an issue affecting multiple posts on the blog then writing a comment on the most recent post will do (unless of course it’s a comment about the comment system being broken). For those who are new to blogging: the blog author has full control over what happens to comments. If they decide that your comment about the blog colour scheme doesn’t belong on a post about C programming then they can respond to the comment in the way that they think best (making a change or not, and maybe sending you an email about it) and then delete the comment if they wish.
If there is an issue that occurs on multiple blogs then a good option is to write a post about the general concept as I did in the case of column width in blogs where I wrote about one blog as an example of a problem that affects many blogs. I also described how I fixed my own blog in this regard (in sufficient detail to allow others to do the same). Note that most blogs have some degree of support for Linkback so any time you link to someone else’s blog post they will usually get notified in some way.
On my blog I have a page for future posts where I invite comments from readers as to what I plan to write about next. Someone who prefers that I not write about topic A could write a comment requesting that I write about topic B instead. WordPress supports pages as a separate type of item to posts. A post is a dated entry while pages are not sorted in date order and in most themes are displayed prominently on the front page (mine are displayed at the top). I suggest that other bloggers consider doing something comparable.
One thing I considered is running a wiki page for the future posts. One of the problems with a wiki page is that I would need to maintain my own separate private list, while a page with comments allows only me to edit the page in response to comments and to use the page as my own to-do list. I may experiment with such a wiki page at some future time. One possibility that might be worth considering is a wiki of post requests for any blog that is syndicated by a Planet. For example, a wiki related to Planet Debian might request a post about running Debian on the latest SPARC systems; the first blogger to write a post on this topic could then remove the entry from the wish-list (maybe adding the URL to a list of satisfied requests). If the person who made the original request wanted a more detailed post covering some specific area, they could then add such a request to the wish-list page. If I get positive feedback on this idea I’ll create the wiki pages and add a few requests for articles that would interest me to start it up.
Finally, to encourage the production of content that you enjoy reading, I suggest publicly thanking people who write posts that you consider to be particularly good. One way of thanking people is to cite their posts in articles on your own blog (taking care to include a link to at least one page to increase their Technorati rank) or web site. Another is to include a periodic (I suggest monthly at most) links post that contains URLs of blog posts you like along with brief descriptions of the content. If you really like a post then thank the author by not only giving a link with a description (to encourage other people to read it) but also describing why you think it’s a great post. Also, if recommending a blog, make sure you give a feed URL so that anyone who wants to subscribe can do so as easily as possible (particularly for blogs with a bad HTML layout).
Here are some recent blog posts that I particularly liked:
Here are some blogs that I read regularly:
- Problogger (feed). I don’t think that I’ll be a full-time blogger in the foreseeable future, but his posts have lots of good ideas for anyone who wants to blog effectively. I particularly appreciate the short posts with simple suggestions.
- Mega Tokyo (feed) – A manga comic on the web. The amusing portrayal of computer gaming fanatics will probably remind most people in the computer industry of some of their friends.
- Defence and the National Interest (feed). The most interesting part of this (and the only reason I regularly read it) is the blog of William S. Lind (titled On War). William writes some very insightful posts about military strategy and tactics, but some of what he writes about politics will offend most people who aren’t white Christian conservatives. It’s a pity that there is not a more traditional blog feed for the data: the individual archives contain all posts, there seems to be no way of viewing the posts for the last month (for people who read it regularly in a browser and don’t use an RSS feed), and no search functionality is built in.
- WorseThanFailure.com (was TheDailyWTF.com) (feed) subtitled Curious Perversions in Information Technology. Many amusing anecdotes that illustrate how IT projects can go wrong. This is useful for education, amusement, and as a threat (if you do THAT then we could submit to WorseThanFailure.com).
- XKCD – a stick-figure web comic, often criticised for the drawing quality by people who just don’t get it; some people read comics for amusement and insightful commentary, not drawings. It’s yet another example of content beating presentation when there’s a level playing field.
Finally I don’t read it myself, but CuteOverload.com is a good site to refer people to when they claim that the Internet is too nasty for children – the Internet has lots of pictures of cute animals!
For a while I used the Item Link Clicks feature in Feedburner. For those who aren’t aware, Feedburner is a service that proxies access to an RSS feed (you need to either publish the Feedburner URL as the syndication link or use an HTTP redirect to send the requests there – I use an HTTP redirect). Then when people download the feed they get it from Feedburner, which is fast and reliable (unlike my blog on a bad day) and which also tracks some statistics which can be interesting.
The Item Link Clicks feature rewrites the guid URLs to point to a Feedburner URL that will redirect back to the original post (and track clicks along the way). The down-side of doing this is that some people who read blogs via Planet installations just copy the link from the Planet page when citing a blog post instead of actually visiting the blog in question. This causes a potential problem for the person citing the post, in that they won’t know whether the URL is valid unless they visit it. So when (not if) people have misconfigured blogs that are widely syndicated, the people who cite them without verifying the links could end up linking to invalid URLs. The problem for the person who is cited is that such Feedburner redirects don’t seem to be counted as part of the Technorati ranking (a count of the number of links to a blog in the last 6 months, which gives a rough approximation of how important the blog is). The Technorati rating can sometimes be used in negotiations with an advertiser and is often used when boasting about how popular a blog is.
To increase my Technorati ranking I have stopped using the Feedburner URL rewriting feature. For people who view my blog directly or through a Planet installation this will make no noticeable difference. The problem is for people who use a service that syndicates RSS feeds and then forwards them on by email; such people received two copies of the last 10 items, as the URL (GUID) change means that the posts are seen as new (Planet solves this by deleting the posts which are seen as unavailable and then creating new posts with the new URLs, so no change is visible to the user).
Based on this experience I suggest not using URL rewriting services. They will hurt your Technorati ranking, give little benefit (IMHO), and annoy the small number of RSS-to-email readers. In particular, don’t change your mind about whether to use such a feature; changing the setting regularly would be really annoying. This also means that if you use such a service you should take care not to ever have your Feedburner redirection get disabled – a minor Apache configuration error corrected a day later could end up sending all the posts in the current feed an extra two times.
One situation that you will occasionally encounter when running a Heartbeat cluster is a need to prevent a STONITH of a node. As documented in my previous post about testing STONITH the ability to STONITH nodes is very important in an operating cluster. However when the sys-admin is performing maintenance on the system or programmers are working on a development or test system it can be rather annoying.
One example of where STONITH is undesired is when upgrading packages of software related to the cluster services. If, during a package upgrade, the data files and programs related to the OCF script are not synchronised (e.g. you have two programs that interact and upgrading one requires upgrading the other) at the moment that the status operation is run, then an error may occur which may trigger a STONITH. Another possibility is that when using small systems for testing or development (e.g. running a cluster under Xen with minimal RAM assigned to each node) a package upgrade may cause the system to thrash, which might then cause a timeout of the status scripts (a problem I encounter when upgrading my Xen test instances that have 64M of RAM).
If a STONITH occurs during the process of a package upgrade then you are likely to have consistency problems with the OS due to RPM and DPKG not correctly calling fsync(). This can cause the OCF scripts to always fail to run the status command, which can cause an infinite loop of the cluster nodes in question being STONITHed. Incidentally the best way to test for this (given the problems of a STONITH sometimes losing log data) is to boot the node in question without Heartbeat running and then run the OCF status commands manually (I previously documented three ways of doing this).
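For reference, here is a minimal sketch (the file name and content are examples) of the write/fsync/rename pattern whose absence causes the problem: without the fsync(), a sudden power cycle such as a STONITH can leave a zero-length or partially written file on disk.

```c
/* Sketch of durably replacing a file: write a temp file, fsync it,
   then atomically rename it over the old version. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

static int write_file_durably(const char *path, const char *data)
{
    char tmp[4096];
    int fd;

    snprintf(tmp, sizeof(tmp), "%s.tmp", path);
    fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    /* short writes are ignored here for brevity */
    if (write(fd, data, strlen(data)) < 0 || fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    if (close(fd) != 0)
        return -1;
    /* rename(2) is atomic: after a crash the file is either the old
       version or the complete new version, never a mix */
    return rename(tmp, path);
}

int main(void)
{
    return write_file_durably("example.conf", "key = value\n") ? 1 : 0;
}
```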
Of course the ideal (and recommended) way of solving this problem is to migrate all services from a node using the crm_resource program. But in a test or development situation you may forget to migrate all services, or simply forget to run the migration before the package upgrade starts. In that case the best thing to do is to remove the ability to call STONITH. For my testing I use Xen and have the nodes ssh to the Dom0 to call STONITH, so all I have to do to remove the STONITH ability is to stop the ssh daemon on the Dom0. For a more serious test network (e.g. using IPMI or an equivalent technology to perform a hardware STONITH as well as ssh for OS-level STONITH on a private network) a viable option might be to shut down the switch port used for such operations – shutting down switch ports is not a nice thing to do, but to allow you to continue work on a development environment without hassle it’s a reasonable hack.
When choosing your method of STONITH it’s probably worth considering what the possibilities are for temporarily disabling it – preferably without having to walk to the server room.
For about 5 years I attended the Colorado Software Summit conference. The first one I attended was the last conference under the old name (ColoradOS/2), but then, as OS/2 was rapidly losing market share and the conference delegates changed their programming interests, it changed to become a Java conference.
The Colorado Software Summit rapidly became known as THE event at which to really learn about Java; other conferences are larger and have a higher profile, but the organisers of CSS decided to keep the numbers smaller (600 is usually the maximum number of delegates) to provide better opportunities for the delegates to meet and confer. One of the attractions of CSS is the large number of skilled and experienced people who attend; there are many delegates who can teach you lots of interesting things even though they aren’t on the speaking list. I ended up never doing any serious Java programming, but I still found that I learned enough and had enough fun to justify the expense.
Currently early registration is open, which saves $200 off the full price ($1,795 instead of $1,995); this lasts until the 31st of August. In addition to this, the organisers have offered a further $100 discount to the first five readers of my blog who register personally (i.e. an individual, not a corporation, is paying for the ticket). To take advantage of the extra $100 discount you must include the code CSS509907 in your registration.
PS I have no financial interest in this matter. I like the conference organisers, but that largely stems from the fact that they run great conferences that I have enjoyed. I recommend the conference because it’s really good.