ATI ES1000 Video on Debian/Squeeze

The Problem

I've just upgraded my Dell PowerEdge T105 [1] from Debian/Lenny to Debian/Squeeze. Unfortunately after the upgrade everything in the X display had a strong green tint, while the console display looked normal.

I asked for advice on the LUV mailing list [2] and received a lot of good suggestions, particularly from Daniel Pittman.

The first suggestion was to check the gamma levels. The program xgamma displays the relative levels of Red, Green, and Blue (the primary colors for monitors), where all three usually have the value 1.0. This turned out not to be the problem, but it's worth noting for future instances of such problems. It's also worth noting the potential use of xgamma to work around failing display hardware: I've had two Thinkpads turn red towards the end of their lives due to display hardware problems, and I now realise I could have compensated with xgamma.
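
As a hedged example, checking the current levels and compensating for a display that has turned red might look like the following (the 0.8 is a hypothetical value that would need to be tuned by eye):

# display the current Red/Green/Blue gamma values (normally all 1.0)
xgamma
# possible correction for a reddening panel: pull the red channel down
xgamma -rgamma 0.8 -ggamma 1.0 -bgamma 1.0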

He also suggested that an ICC color profile might be responsible; the command "xprop -root | grep -i icc" would display something if that were the case. I'm still not sure what ICC is about, but I know it's not set on my system.

The next suggestion was to use the VESA display driver to discover whether the problem was a bug in the ATI driver. The VESA driver did solve the problem, and I was tempted to keep using it until I realised that it has a maximum resolution of 1280*1024, which isn't suitable for a 1680*1050 display.

After reviewing my Xorg configuration file Daniel noted that a frame buffer depth of 16 bits per pixel is regarded as unusual by today's standards and probably isn't well tested. As 24bpp is generally implemented with 32 bits per pixel, it takes twice the frame-buffer storage of 16bpp (both in the X server and in some applications) as well as twice the memory bandwidth to move data around, so I generally use 16bpp on my systems to make them run a little faster.
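
For reference, the relevant parts of the Xorg configuration for these tests look something like this (a minimal sketch with arbitrary identifiers – swap the Driver value between "radeon" and "vesa" to compare the two drivers):

Section "Device"
    Identifier "Card0"
    Driver     "radeon"   # or "vesa" to test whether the ATI driver is at fault
EndSection

Section "Screen"
    Identifier   "Screen0"
    Device       "Card0"
    DefaultDepth 16
    SubSection "Display"
        Depth 16
        Modes "1680x1050"
    EndSubSection
EndSection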

(II) RADEON(0): Not using mode "1680x1050" (mode requires too much memory bandwidth)

I tried using a depth of 24bpp and then saw messages such as the above in /var/log/Xorg.0.log. It seems that the display hardware in my ATI ES1000 (the on-motherboard video card in the Dell server) doesn't have the memory bandwidth to support 1680*1050 at 24bpp. I tried using the gtf utility to generate new mode lines, but it seems that there is no 24bpp mode with a refresh rate low enough to stay within the memory bandwidth but high enough for the monitor to get a signal lock.
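
For anyone repeating this, gtf takes the resolution and refresh rate and prints a modeline that can be pasted into the Monitor section of xorg.conf, for example (60Hz is just an illustrative value):

# generate an Xorg modeline for 1680x1050 at a 60Hz refresh rate
gtf 1680 1050 60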

The Solution

My current solution is to use 15bpp mode, which gives almost the same quality as 16bpp and uses the same small amount of memory bandwidth, and it seems that 15bpp doesn't trigger the display driver bug. One down-side is that the default KDE4 desktop background in Debian seems perfectly optimised to make 15bpp modes look ugly: it has a range of shades of blue that look chunky at that depth.

What I really want is a better video card. Among other things I want to get a 1920*1080 resolution monitor in the near future; Dell is selling such monitors at very low prices and there are a bunch of affordable digital cameras that record video at that resolution. Even if I can get the ES1000 to work at 1920*1080 it won't support playing Full HD video – I can barely play YouTube videos with it!

I've previously described my experience with the awful Computers and Parts Land (CPL) store [3], where they insisted that a PCIe*16 graphics card would work in my PCIe*8 system and then claimed to be doing me a favor by giving me a credit note for the full value (not a refund). That experience put me off trying to buy such a card for the past year, but now it seems that I will be forced to buy one.

What I Want to Buy

I want a PCIe*8 video card that supports 1920*1080 resolution at 24bpp and has good enough performance with free Linux drivers to support Full HD video playback. A PCIe*4 card would also do, and I'm prepared to compromise on Full HD video support. Basically anything better than the ES1000 will do.

Does anyone know how I can buy such a card? I would prefer an ATI card but will take an NVidia if necessary.

Note that I have no plans to cut one of the PCIe sockets on my motherboard (it’s an expensive system and I’m not going to risk breaking it). I will consider cutting the excess pins off a video card as a last resort. But I would rather just buy a PCIe*8 video card.

Note that I am not going to pay cash in advance to a random person who reads my blog. Anyone who wants to sell me a second-hand card must either have a good reputation in the Linux community or ship the card to me on the condition that I pay after it passes the tests.

Update: The first version of this post said that I upgraded TO Lenny, not FROM it.

Debian/Testing and KDE4

I’ve just upgraded my Thinkpad (which I use for most of my work) to Debian/testing with KDE4.

Improvements

KDE 3.5 (from Debian/Lenny) didn't properly display the applets in a vertical task bar. I want a vertical task bar because my screen resolution is 1680*1050 and I find that a workspace closer to square is best for my usage patterns.

In my previous post about my Thinkpad T61 I described how the sound controls weren't working [1]. These problems were fixed as part of the upgrade; KDE now just does the right thing. When I press the buttons to increase or decrease the volume the ALSA settings are changed and a small window is briefly displayed in the center of the screen to show the new volume.

Sounds are now made when I plug or unplug the power cable; this was configured in KDE 3.5 but just didn't work.

Problems

If I have a maximised Konqueror window and I use the middle mouse button to open a link in a new window, the new window is also maximised; previously the new window was not maximised. I often want to open several links from a web page in different windows, and if they open as non-maximised windows I can click the title-bar or the bottom status-bar of the parent window to bring it back to the foreground. Probably the ideal solution for this use-case would be an option for the middle mouse button to open new windows in the background or minimised.

I can't figure out how to configure accelerator keys for window controls. In particular I like to use ALT-F9 to minimise a window (part of the 1989 CUA standard). The upgrade from KDE 3.5 to KDE 4 lost this setting and I can't get it back.

I want an icon on my panel to launch a Konqueror session. I don't want a large amount of space taken up by a launcher for several different Konqueror options, just a regular Konqueror for web browsing available with a single click. There didn't seem to be an option for this: KDE 3.5 had an option in the add-widgets dialogue to add icons for applications, but I have just discovered that in KDE 4 the only way to do this is to go through the menu structure and then click the secondary mouse button. Having two ways to do something is often a good thing, particularly when one of them is the way that was most obvious in the previous version!

It was annoying that the font choices for my Konsole sessions were lost in the KDE 4 upgrade; it's not a complex setting. Also the option to resize a Konsole session to a common size (such as 80*25) seems to have been lost.

I had to spend at least 30 minutes configuring kmail to get it to display mail in much the same manner as it used to. You have to use the "Select View Appearance (Theme)" icon at the right of the "Search" box and select "Classic", then go to "Select Aggregation Mode" (immediately to the left) and select "Flat Date View". I'm happy for KDE 4 to default to exciting new things when run for the first time, but when upgrading from KDE 3.5 it should try to act like KDE 3.5.

I decided to use Kopete for Jabber, just to preempt the GNOME people adding Mono support to Pidgin. I had to install the libqca2-plugin-ossl and qca-tls packages to enable SSL connections; missing either of them gives an incomprehensible error that even strace doesn't clarify much. Given that it's generally agreed that sending passwords unencrypted over the Internet is a bad idea, and that Jabber servers have a configuration option to reject non-SSL connections, it seems to me that the Kopete package should depend on the packages that are needed for SSL support. Failing that, it would be good for Kopete to display a big visible warning when they are missing.
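
For anyone who hits the same incomprehensible error, the fix on Debian is just to install those two packages:

# QCA plugins that Kopete needs to make SSL/TLS connections
apt-get install libqca2-plugin-ossl qca-tls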

I use the KDE 2 theme, and the right side of the title bar of each window is a strange dappled pattern. I'm not sure why, and I have more important problems to fix.

Parts of KDE crash too often. I’ll start filing bug reports soon.

The management of the Desktop folder has changed. In previous versions of KDE the directory ~/Desktop had its contents displayed in iconic form on the root window; now by default it doesn't. It is possible to change this, but it's another case where the default on upgrade should be to act like previous versions. To enable the previous functionality, go to the desktop settings (click the secondary mouse button on the background and select "Desktop Settings"), then under "Desktop Activity" change the "Type:" to "Folder View" and specify the directory below.

The facility to have different background colors or pictures for each of the virtual desktops seems to have been removed – either that or the KDE configuration system doesn’t have enough functionality to let me discover how to configure it.

When the panel at the left of my screen crashes, everything next to it gets dragged to the left, including maximised windows which get widened. When the panel restarts (which, with luck, happens automatically) it pushes windows back, but any icons that were moved left are simply obscured.

When using Konqueror to browse a directory full of pictures it doesn't generate thumbnail icons. When I middle-click on an icon for a picture it is opened in Konqueror, not the image viewer that was used in KDE 3.5. The image viewer from KDE 3.5 had fewer options and therefore left more screen space for the picture. Also the Konqueror window that opens has a navigation panel at the left which I can't permanently remove.

When I use Konqueror my common action is to perform a Google search and then use the middle button to open a search result in a new window. Most of my Google searches return pages that have more than one screen-full of data so shortly after opening a window with a search result I press PgDn to see the next page. That press of PgDn for some reason takes me back to the Google search. It seems that when a web page is opened in a new window the keyboard focus will be in the URL entry field, and pressing PgDn in that field takes you to the previous web page. This combination is really annoying for me.

Conclusion

Getting the sound working correctly is a great feature! Lots of little things are fancier and generally the upgrade is a benefit. The lack of thumbnails when displaying a folder of JPG files is really annoying though.

The time taken to configure things is also annoying. I support four relatives who are just users, which probably means at least an hour of configuration work and training for each one, so KDE 4 is going to cost me at least half a day.

Web Server Performance

We Have to Make Our Servers Faster

Google have just announced that they have made site speed part of their ranking criteria for search results [1]. This means that we now need to put a lot of effort into making our servers run faster.

I’ve just been using the Page Speed Firefox Plugin [2] (which incidentally requires the Firebug Firefox Plugin [3]) to test my blog.

Image Size

One thing that Page Speed recommends is to specify the width and height of images in the img tag so the browser doesn’t have to change the layout of the window every time it loads a picture. The following script generates the HTML that I’m now using for my blog posts. I run “BASE=http://www.coker.com.au/blogpics/2010 jpeg.sh foo.jpg bar.jpg” and it generates HTML code that merely needs the data for the alt tag to be added. Note that this script relies on a scheme where there are files like foo-big.jpg that have maximum resolution and foo.jpg which has the small version. Anyone with some shell coding skills can change this of course, but I expect that some people will change the naming scheme that they use for new pictures.

#!/bin/bash
# usage: BASE=http://www.coker.com.au/blogpics/2010 jpeg.sh foo.jpg bar.jpg
set -e
while [ "$1" != "" ]; do
  # identify from ImageMagick prints the geometry as WIDTHxHEIGHT
  RES=$(identify -format '%wx%h' "$1")
  # HTML width/height attributes take a bare pixel count, no "px" suffix
  WIDTH=$(echo "$RES" | cut -f1 -dx)
  HEIGHT=$(echo "$RES" | cut -f2 -dx)
  # foo.jpg links to the full-resolution foo-big.jpg (dot escaped for sed)
  BIG=$(echo "$1" | sed -e 's/\.jpg$/-big.jpg/')
  echo "<a href=\"$BASE/$BIG\"><img src=\"$BASE/$1\" width=\"$WIDTH\" height=\"$HEIGHT\" alt=\"\" /></a>"
  shift
done

Thanks to Brett Pemberton for the tip about using identify from imagemagick to discover the resolution.

Apache and Cache Expiry

Page Speed complained that my static URLs didn't specify a cache expiry time. This didn't affect my own system, as my Squid server forcibly caches some things without being told to, but it would be a problem for some other users. I first ran the command "a2enmod expires ; a2enmod headers" to enable the expires and headers Apache modules, then created a file named /etc/apache2/conf.d/expires with the following contents:

ExpiresActive On
ExpiresDefault "access plus 1 day"
ExpiresByType image/gif "access plus 1 month"
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType text/css "access plus 1 day"
# cache media files for 1 year (forever?)
<FilesMatch "\.(flv|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav)$">
  ExpiresDefault "access plus 1 year"
  Header append Cache-Control "public"
</FilesMatch>
# cache image and Flash files for 1 month
<FilesMatch "\.(gif|jpg|jpeg|png|swf)$">
  ExpiresDefault "access plus 1 month"
  Header append Cache-Control "public"
</FilesMatch>
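
To confirm that the new headers are actually being sent you can do a quick check with curl (www.example.com is a placeholder for the real site):

# fetch only the headers and look for the Expires and Cache-Control fields
curl -I http://www.example.com/blogpics/2010/foo.jpg | grep -iE 'expires|cache-control'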

DNS Lookups

Page Speed complains about DNS names that are used for only one URL. One example of this was the Octofinder service [4], a service to find blogs based on tags; I don't seem to get any traffic from it so I just turned it off. In this case having a single URL from their web site was the only sensible option, but I had been considering removing the Octofinder link for a while anyway. As an aside, I will be interested to see if there are comments from anyone who has found Octofinder to be useful.

I've also disabled the widget that used to display my score from Technorati.com. It wasn't doing what it used to do, the facility for letting someone list my blog as a favorite didn't seem to provide any benefit, and it cost extra DNS lookups and data transfers. I might put something from Technorati on my blog again in future as they used to be useful.

Cookies

If you have static content (such as images) on a server that uses cookies then the cookie data is sent with every request; this means transferring more data and breaks caching. So I modified the style-sheet for my theme to reference icons on a different web server, which will supposedly save about 4K of data transfer per page load while also giving better caching.
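
As an illustration (static.example.com and the selector are placeholders, not my real setup), the style-sheet change is just a matter of pointing image URLs at a host that sets no cookies:

/* before: url(images/rss.png) was served from the cookied blog host */
/* after: the icon comes from a cookie-free host, so requests are smaller and cache well */
.rss-icon { background-image: url(http://static.example.com/theme/rss.png); }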

The down-side of this is that I have my static content on a different virtual server, so updating my WordPress theme now requires updating two servers. This isn't a problem for the theme (which doesn't get updated often) but would be a problem if I did the same with plugins.

Conclusion

The end result is that my blog now gets a rating of 95% for Page Speed when previously it got a rating of 82%. Now most of the top references that are flagged by Page Speed come from Google, although there is still work for me to do.

Also it seems that Australia, given its network distance from most of the world's users, is now generally unsuitable for hosting web sites aimed at other countries. I will advise all my clients who do international business to consider hosting in the US or the EU.

Health and Status Monitoring via Smart Phone

Health Monitoring

Eric Topol gave an interesting TED talk about wireless medical monitoring devices [1]. The potential for helping diabetics and other people who need ongoing treatment is obvious as is the potential for helping pregnant women and other people who might suddenly need medical treatment at short notice.

One significant positive potential for this is integration with multiple services. For example, Eric's talk showed a graph of sleep levels on a mobile phone where the deep sleep (which is apparently the most restorative) occurred in sections significantly shorter than an hour. I often receive SMS messages about network problems during the night; the vast majority of them aren't that important and can be delayed without any problem. If my phone could determine that I was in deep sleep and delay sending me a NAGIOS report for up to 30 minutes then it would help me sleep without significantly impacting my work of repairing server problems – it's not as if I would be immediately productive if woken from deep sleep anyway.

Status and Attention Monitoring

Eric Horvitz, Carl Kadie, Tim Paek, and David Hovel of Microsoft Research wrote an interesting paper titled “Models of Attention in Computing and Communication: From Principles to Applications” [2]. In that paper they describe various methods for tracking user attention and filtering messages so that the user won’t be needlessly distracted by unimportant messages when they are busy. The next logical step is to integrate that with a smart phone (maybe Android would be good for this) to screen calls, unknown callers could be automatically directed to voice-mail and known good callers could be given an automated prompt as to whether they think that their call is important enough to be worth the distraction.

It seems to me that combining health and status monitoring would be the sensible thing to do. If a bio-sensor array indicates that someone is more stressed than usual then decreasing their availability for phone calls would make sense. It would also be trivial to analyse calls and determine which callers are likely to cause stress and block their calls at inconvenient times.

What we Need

Of course there are lots of security implications in this. Having multiple networked devices tracking and collating information on health and all activity (including video-monitoring in the Microsoft model) has a lot of potential for malicious use. This is one of many reasons that we need to generally improve computer security.

We also need to have free software implementations of such things. We don't want Microsoft to get a monopoly on status monitoring. It also seems that smart Nike shoes that work with the iPhone [3] are the leading mass-market implementation of health monitoring; everything related to health-care should be as open as possible. I'm happy for Nike to make their own special products, but we need to have them work with open systems and make the data available as widely as possible. There is the real potential for people to needlessly die if health data is not available in open formats! According to the Wikipedia page Nike has been shipping products using a proprietary 2.4GHz wireless network since 2006 [4]; it would be a really good hardware project to devise a way of making compatible sensors and readers.

We also need some free software for monitoring the user status to avoid interrupting them.

Finally we need some software to integrate all the data. Canonical's Desktop Experience team are doing some interesting work on managing desktop notifications that will be in Ubuntu Lucid [5]; they appear to have some basic support for masking notifications based on priority. Please keep up the great work Canonical people, and please consider working on advanced status monitoring next!

Slavery vs Child Porn

Slavery Still Exists!

We all like to think of slavery as a problem from the 19th century, but it still exists and is a significant problem! Kevin Bales gave an interesting TED talk about how to combat modern slavery [1]. Currently there are an estimated 27,000,000 slaves in the world; that's a lot, but it's a smaller proportion of the world population than at any time in history. Also the price of slaves is lower than ever before: instead of being a capital asset a slave is now a disposable item (which is really bad for the slaves).

The estimated average cost to liberate a slave is $US400. This involves rescuing them from slavery and teaching them the skills they need for a sustainable life – there's no point rescuing them only to have them get captured again! Kevin notes that the US is still paying the price of the botched emancipation of 1865, so the liberation of slaves really needs to be done properly.

The estimated total cost to liberate all the slaves in the world is $US10.8 billion. Think about the trillions of dollars that have been spent on wars to supposedly liberate people, when a mere $US10.8 billion would liberate all slaves. That’s a fraction of the cost of the proposed National Broadband Network – which would you rather have, fast net access for cable TV services or a world without slavery?

Censorship of Child-Porn (and other things) vs Liberating Slaves

It is often claimed that child porn needs to be stopped to remove the economic incentives to molest children in other countries. To achieve this goal the Australian government wants to filter all net access to block child-porn, other prohibited porn, documentation about euthanasia, and the occasional dental practice (seriously, they just can't get their filters right).

Methods that are proven to prevent children being molested should be given a much higher priority than censoring the Internet in the hope of removing economic incentives for child abuse.

It seems reasonable to assume that a significant portion of child-slaves are molested (because we know that slave owners are really bad people and there’s nothing to stop them from molesting children). Let’s assume for the sake of discussion that 1/4 of child slaves are molested. So that would give an average cost of $US1600 to free one child from sexual slavery and three other children from physically abusive environments.

Currently the Australian government plans to spend $44,000,000 on Internet censorship with the supposed aim of protecting the children. The fact that the majority of child-porn is believed to be transferred via protocols other than HTTP has not deterred the government from pushing forward a HTTP-only filter. Nor has the fact that anyone could use a VPN, Tor, or other services to trivially bypass a HTTP filter.

Let's assume for the sake of discussion that $US400 equates to $500 Australian; the exchange rate is lower than that but it varies and it's best to be conservative. Therefore the $44,000,000 the government wants to spend on censorship could be used to liberate 88,000 child slaves. If my estimates are correct that would save 22,000 children from being molested. In the unlikely event that slavers happen to be nice people who don't do nasty things like molest children (which really isn't credible), freeing 88,000 children from slavery and all the physical abuse that it involves is still a really good thing!

If a plan to prevent child sexual abuse by liberating slaves fails to actually prevent any sexual abuse then at least you end up with some freed slaves (which is a really good thing). But if a plan to prevent child sexual abuse by censoring the Internet fails then all you end up with is slower Internet access and censorship of totally unrelated things that the government doesn’t like.

Too Stupid to be a Bishop

A Stupid Bishop breaks the Godwin Rule

The Sydney Morning Herald reports that Catholic Bishop Anthony Fisher has just claimed that "GODLESSNESS and secularism led to Nazism, Stalinism, mass murder and abortion" [1]. This violates the rule associated with Godwin's Law. We might not expect clerics to have enough general knowledge of society to know this rule, but it does seem reasonable to expect them to have enough empathy to understand why inappropriate Hitler analogies just offend people and don't advance their cause. And anyone in a position of leadership in a global organisation who is going to talk to the media should have enough intelligence to check historical references.

The Wikipedia article about "Positive Christianity" is worth reading; it includes references to Christian-based race-hate in Nazi Germany as well as modern references [2]. There is also an interesting Wikipedia page about the religious aspects of Nazism [3]. There seems to be room for a lot of debate about how religion fitted into the Nazi regime – but it seems quite clear that it was not an atheist regime.

The Wikipedia page about the Reichskonkordat (the agreement between Nazi Germany and the Catholic Church) is also worth reading [4].

Also I’m sure that the Chilean Dictator Augusto Pinochet wasn’t the only Catholic despot.

Community Services and Moral Authority

Cardinal Pell was quoted in the same SMH article as saying "we find no community services sponsored by the atheists". Of course if he investigated who contributes to religious community service organisations he would find plenty of atheists; I know I'm not the only atheist who donates to The Salvation Army [5] on occasion. I wonder how many religious people would be happy to donate to an explicitly atheist organisation. I suspect that the prevalence of religious charities is due to the fact that a religious charity can get money from both religious people and atheists, while a charity that advocated atheism in any way would be limited to atheist donors. If I was to establish a community service charity I would seriously consider adding some religious element to help with fund-raising – it's just a matter of doing what's necessary to achieve the main goal.

Even setting aside the Godwin's Law violation and the total lack of historical knowledge, Anthony would still have failed. We all know the position of the Catholic Church on the sexual abuse of children; the Catholic policies are implemented in the same way in every country and, as far as we can tell, always have been. I believe that makes them unqualified to offer moral advice of any kind.

Criticising the “Secular World”

Peter Craven has written an article for The Age criticising the “secular world” [6]. He makes the extraordinary claim “the molesting clergy are like the brutal policemen and negligent doctors and corrupt politicians: they come with the territory because pennies have two sides“. The difference of course is that police, doctors, and politicians tend to get punished for doing the wrong thing – even when they do things that are far less serious. But the “molesting clergy” seem to be protected by all levels of the church hierarchy.

Peter makes some claims about the “secular world” as if there is a Borg collective of atheists and claims that there is an “incomprehension of Christian values“. I believe that the attitudes of atheists and the secular justice system correspond quite well with what most Christians would regard as “Christian values” – the problem is that the actions of the church leaders tend not to match that.

It’s All About Money

I would like to know why Christians almost never change churches and never cease donating. Religious organisations are much like corporations: they seek new members and new revenue sources. If a significant number of Catholics were to pledge not to donate any money to their church for a year after every child sex abuse scandal then Catholic policies might change. Also if Catholics were to start moving to Christian denominations that do the right thing on moral issues then the Catholic church would either change or eventually become irrelevant. If you keep paying people who do bad things then you are supporting them!

I suggest that any church member who cares about the moral issues of the day should vote with their checkbook. If their church fails to do the right thing then inside the donation envelope they should put a note saying “due to the immoral actions of the church I will donate to other charities“. I am not aware of any church that would expel members for such a protest, but I know that some smaller parishes have cash-flow problems and would rapidly escalate the issue through the management chain if even a few members were to protest in such a manner.

Why Comments?

Russ Albery has described why he doesn’t support comments on his blog [1].

I respect his opinion and I’m not going to try and convince him to do otherwise. But I think it’s worth describing why I want comments on my blog and feel that they are worth having for many (possibly most) other blogs.

Types of Blog

The first thing to consider is the type of post on the blog. Some blogs are not well suited to comments. I have considered turning off comments on my documents blog [2] because it gets a small number of readers and is designed as reference material rather than something you might add to a public Planet feed or read every week – it has a small number of posts that get updated, so conversations in the comments are unlikely to happen. One thing that has made me keep comments open on my documents blog is the fact that I am using blog posts as the main reference pages for some of my projects and some people are using the comments facility for bug reports. I may make this the main bug reporting facility – I will delete the comments when I release a version of the software with the bugs fixed.

One particular corner case is a blog which has comments as a large part of its purpose. Some blogs have a regular "open thread" where anyone can comment about any topic; blogs which do such things have owners who act more like editors than writers. One example of this is the Making Light blog by Teresa and Patrick Nielsen Hayden [3] – who are both professional editors.

The next issue is the content of the post. If I was to create a separate blog for authoritative posts about SE Linux then there wouldn’t be much point in allowing comments, there are very few people who could correct me when I make a mistake and they would probably be just as happy to use email. When I write about issues where there is no provably correct answer (such as in this post) the input of random people on the net is more useful.

Another content issue is that of posts of a personal nature. Some people allow comments on most blog posts apart from when they announce some personal matter. I question the wisdom of blogging about any topic for which you would find comments intolerable, but if you are going to do so then turning off comments makes sense.

Finally there is the scale of the blog. If you don’t get enough readers to have a discussion in the comments then there is less benefit in having the facility turned on – the ratio of effort required to deal with spam to the benefit in comments isn’t good enough. In his FAQ about commenting [4] Russ claims that controlling spam “can take a tremendous amount of time or involve weird hoop-jumping required for commenters“. I have found the Block Spam by Math [5] WordPress plugin to be very effective in dealing with the spam, so for this blog it’s a clear benefit to allow comments. Since using that plugin my spam problem has decreased enough that I now allow comments on posts which are less than 1 year old – previously comments were closed after 90 days. The plugin is a little annoying but I changed the code to give an error message that describes the situation and prevents a comment from being lost so the readers don’t seem too unhappy.

The Purpose of Comments

Russ considers the purpose of comments to be “meaningfully addressed to the original post author or show intent to participate in a discussion“. That’s a reasonable opinion, but I believe that in most cases it’s best if comments are not addressed to the author of the post and are instead directed towards the general readers. I believe that participating in a discussion and helping random people who arrive as the result of a Google search are the main reasons for commenting. For my blog an average post will get viewed about 500 times a year and the popular posts get viewed more than 200 times per month, so when over the course of a year more than 1000 people read the comments on a post (which is probably common for one of my posts) then 99.9% of readers are not me and commentators might want to direct their comments accordingly. Of course a comment can be addressed at the blog author so the unknown audience can enjoy watching the discussion.

For some of my technical posts I don’t have time to respond to all comments. If I have developed a solution to a technical problem that is good enough I may not feel inclined to invest some extra work in developing an ideal solution. So when a reader suggests a better option I sometimes don’t test that out and therefore can’t respond to the comment. But the comment is still valuable to the 1000+ other people who read the comment section. So a commentator should not assume that I will always entirely read a comment on a technical matter.

Comment threads can end up being a little like mailing lists. I don’t think that general discussions really work well in comment threads and don’t aim for such things. But if a conversation starts then I think you might as well continue as long as it’s generally interesting.

Generally for most blogs I think that providing background information, supporting evidence, and occasionally evidence of errors is a major part of the purpose of blog comments. But entertainment is always welcome. I would be happy to see some poems in the comments section of technical posts, sometimes a Limerick or haiku could really help make a technical point.

Political blog posts can be a difficult area. Generally the people who feel inclined to write political blog posts or comment on them are not going to be convinced to entirely change course, but as there are many people who can’t seem to understand this fact a significant portion of the comments on political blog posts consist of different ways of saying “you’re wrong“. The solution to this is to moderate the comments aggressively, too many political blogs have comments sections that are all heat and no light. I’m happy for people to go off on tangents when commenting on my political posts or to suggest a compromise between my position and their preferred option. But my tolerance of comments that simply disagree is quite small. Generally I think that blogs which directly advocate a certain political position should have the comments moderated accordingly, people will read a site in the expectation of certain content and I believe that the comments should also meet that expectation to some degree. Comments on political posts can provide insights into different points of view and help discover compromise positions if moderated well.

How to provide Feedback

Russ advocates commenting to the blog author via email – it is now the only option he accepts. My observation is that the number of people who are prepared to comment via email (which generally involves giving away their identity) is vastly smaller than the number who use blog comment facilities, which means that you will miss some good comments. One of the most valuable commentators on my blog uses the name "Anonymous" and has never felt inclined to identify themself to me; I wouldn't want to miss the input of that person or of the other people who have useful things to say but don't want to identify themselves. I have previously written about how not all opinions are equal and anonymous comments are given a lower weight [6]. That post inspired at least one blogger to configure their blog to refuse anonymous comments; it was not my intent to inspire such reactions (although they are logical actions based on a different opinion of the facts I presented). I believe that someone who is anonymous can gain authority by repeatedly producing quality work.

Another option is for people to write their own blog posts referencing the post in question. I don’t believe that my core reader base desires short posts so I won’t write a blog post unless I have something significant to say. I expect that many other people believe that the majority of their blog comments would not meet the level of quality that their readers expect from their posts (posts are expected to be more detailed and better researched than comments). As an aside forcing people to comment via blog posts will tend to increase your Technorati rating. :-#

A final option is for people to use services such as Twitter to provide short comments on posts. While Twitter is well designed for publishing short notes the problem with this is that it’s a different medium. There are many people who like reading and discussing blog posts but who don’t like Twitter and thus using a different service excludes them from the conversation.

For my blog I prefer comments for short responses and blog posts for the longer ones. If you write a blog post that references one of my posts then please enter a comment to inform me and the readers of my blog. Email is not preferred but anyone who wants to send me some is welcome to do so.

If this post inspires you to change your blog comment policy then please let me know. I would like to know whether I inspire people to allow or deny blog comments.

New Portslave release after 5 Years

I've just uploaded Portslave version 2010.03.30 to Debian; it replaces version 2005.04.03.1. I considered waiting a few days to make it exactly five years between releases, but I wanted to get the bugs fixed.

I had a bug report suggesting that Portslave should be removed from Debian because of being 5 years without a major release. It has been running well 24*7 on one of my servers for the last 5 years and hasn’t really needed a change. There were enough bugs to keep me busy for a few hours fixing things though.

The irony is that I started using dates as version numbers back when there were several forks of Portslave with different version numbering schemes. I wanted to show that my fork had the newer version and a recent date stamp was a good indication of that. But then when Portslave didn’t need an update for a while the version number showed it and people got the wrong idea.

The new project home page for Portslave is on my document blog [1].

Server Costs vs Virtual Server Costs

The Claim

I have seen it claimed that renting a virtual server can be cheaper than paying for electricity on a server you own. So I’m going to analyse this with electricity costs from Melbourne, Australia and the costs of running virtual servers in the US and Europe as these are the options available to me.

The Costs

According to my last bill I'm paying 18.25 cents per kWh – that's a domestic rate for electricity use, and businesses pay different rates. For this post I'm interested in SOHO and hobbyist use, so business rates aren't relevant. I'll assume that a year has 365.25 days as I really doubt that people will change their server arrangements to save money on a leap year. A device that draws 1W of power left on for 365.25 days will use 365.25*24/1000 = 8.766kWh, which will cost 8.766*0.1825 = $1.599795. I'll round that off to $1.60 per Watt-year.
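
As a quick check of that arithmetic (just a sketch using bc):

# kWh used by a 1W load over a year, multiplied by the price per kWh
echo 'scale=6; 365.25 * 24 / 1000 * 0.1825' | bc
# prints 1.599795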

I've documented the power use of some systems that I own [1]. I'll use the idle power figures because most small servers spend so much time idling that the time spent doing useful work doesn't affect the average power use. I think it's safe to assume that someone who really wants to save money on a small server isn't going to buy a new system, so I'll look at the older and cheaper systems. The lowest power use on that list is a Cobalt Qube; a 450MHz AMD K6 is a really small server, but at 20W when idling it costs only $32 per annum to run. My Thinkpad T41p is a powerful little system: a 1.7GHz Pentium-M with 1.5G of RAM, a 100G IDE disk, and a Gig-E port should be quite useful as a server – which, now that the screen is broken, is a good use for it. That Thinkpad drew 23W at idle with the screen on last time I tested it, which means an annual cost of $36.80 – or a little less with the screen turned off. A 1.8GHz Celeron with 3 IDE disks drew 58W when idling (with the disks still spinning); let's assume for the sake of discussion that a well configured system of that era would draw 60W on average and cost $96 per annum.

So my cost for electricity would vary from as little as $36.80 to as much as $96 per year depending on the specs of the system I choose. That’s not considering the possibility of doing something crazy like ripping the IDE disk out of an old Thinkpad and using some spare USB flash devices for storage – I’ve been given enough USB flash devices to run a RAID array if I was really enthusiastic.

For virtual server hosting the cheapest option I could find was Xen Europe, which charges €5 per month for a virtual server with 128M of RAM, 10G of storage, and 1TB of data transfer [2] – that's $AU7.38. The next best was Quantact, which charges $US15 for a virtual server with 256M of RAM [3] – that's $AU16.41.

Really, if I was paying for my own use I might choose Linode [4] or Slicehost [5]; they both charge $US20 ($AU21.89) for their cheapest virtual server, which has 360M or 256M of RAM respectively. I've done a lot of things with Linode and Slicehost and had good experiences; Xen Europe got some good reviews last time I checked but I haven't used them.

The Conclusion

A Xen Europe virtual server at $88.56 per annum might be slightly cheaper than running my old Celeron system, but it would be more expensive than buying electricity for my old Thinkpad. If I needed more than 128M of RAM (which seems likely) then the next cheapest option is a 256M Xen Europe server at $14.76 per month, which is $177.12 per annum and makes my old computers look very appealing. If I needed more than a gig of RAM then my old Thinkpad would be a clear winner, and a local server would also win if I needed good disk IO capacity (something that always seems poor in virtual servers).

Virtual servers win when serious data transfer is needed. Even if you aren’t based in a country like Australia where data transfer quotas are small (see my previous post about why Internet access in Australia sucks [6]) you will probably find that any home Internet connection you can reasonably afford doesn’t allow the fast transfer of large quantities of data that you would desire from a server.

So I conclude that apart from strange and unusual corner cases it is cheaper in terms of ongoing expenses to run a small server in your own home than to rent a virtual server.

If you have to purchase a system to run as a server (let's say $200 for something cheap) and account for hardware depreciation (maybe another $200 every two years) then a virtual server might save you money. But this also seems like a corner case, as the vast majority of people who have the skills to run such servers also have plenty of old hardware – they replace their main desktop systems periodically and often receive gifts of old hardware.

One final factor worth considering is that if your time has a monetary value and you aren't going to learn anything useful by running your own local server, then using a managed virtual server such as those provided by Linode (who have a really good management console) will probably save enough time to be worth the expense.

Autism vs Asperger Syndrome

Diagnostic Changes for Autism Spectrum Disorders

Currently Asperger Syndrome (AS) is one of several conditions grouped into the category of Autism Spectrum Disorders (ASD).

The American Psychiatric Association plans to merge "Asperger's Disorder" into "Autism Spectrum Disorder" [1] in version 5 of their Diagnostic and Statistical Manual (DSM). Apparently a primary reason for the change is the difficulty of assigning people to the various categories (AS, Autism, and PDD-NOS) and some variation in diagnosis between regions.

Professor Simon Baron-Cohen (a leading researcher on Autism and Asperger Syndrome) wrote an insightful article about this for the New York Times [2]. He suggests that while genetic research into the causes of Autism Spectrum Disorders (ASDs) is in progress there should be no change. If it turns out that AS and Autism have the same genetic cause then that would be good evidence for combining them into a single diagnostic category; if they turn out to have different genetic causes then they need different categories, so he suggests that changes be delayed until this issue is resolved. Simon also raises the issue of the status of people who have already been diagnosed; this is one of the social issues relating to a change in diagnostic criteria.

Social Issues related to Diagnosis

Unlike the people affected by some other disorders listed in the DSM, many (possibly most) people with AS really care about such things. I think that a common reaction to being diagnosed with Aspergers is to make the study of ASDs a "Special Interest", which makes it impossible to ignore what the psychologists are doing in this regard.

The biggest problem with changing the diagnostic criteria in this regard is that AS has a good reputation. Some people even think that it’s generally a good thing and seem to imagine that every child who is diagnosed with it will end up working for Google! This means that parents will be less likely to reject a diagnosis and therefore will be more likely to try and create a good environment for their child and seek appropriate therapies (such as social skills training and occupational therapy). I expect that a child who is diagnosed as Autistic but who doesn’t obviously conform to the worst stereotypes will likely have their parents reject the diagnosis which will lead to a bad result for everyone concerned.

The contrary view is that people who are on the spectrum but insist that they aren't Autistic are prejudiced, and that they should embrace the Autism Spectrum label as a measure of solidarity [3]. While that's a reasonable point, it's not going to happen in the short term.

Also there is the issue of adult diagnosis of AS: there are lots of adults who could benefit from being diagnosed, and obstacles to such diagnosis (such as association with a label that is not well accepted, such as Autism) are not going to do anyone any good.

Is Asperger Syndrome really that similar to Autism?

Roy Richard Grinker (Professor of Anthropology) wrote a positive article for the New York Times about the diagnostic changes [4]. He seems to think that because it is sometimes difficult to distinguish Autism from AS they should be a single diagnostic category. Based on that logic you could say that no-one should be diagnosed with an ASD, because there is never a clear dividing line between the Neuro-Typical and those who are on the spectrum! Some people are clearly on the spectrum, some clearly aren't, and some are near the border.

Roy cites his daughter and Temple Grandin as examples of Autistic people who have a greater ability to relate to animals than someone who is Neuro-Typical (NT). I don't have any particular skills in terms of relating to animals. Animals have smaller brains than humans and thoughts that are less complex and more related to short-term issues; this makes them easier to predict in some situations. I do have significantly better skills at figuring out how to operate machines than most NTs, and this doesn't appear to be uncommon among Aspies. I've read some of the material that Temple Grandin has written and watched the video of her TED talk, and I get a strong impression that she isn't like me. Even the Aspies who are the least successful in terms of their career (IE quite unlike Temple Grandin) often seem to be like me; I can understand the way they think and recognise that the problems they face are similar to mine but merely more severe.

It seems to me that there are significant personality differences between people who have an affinity for animals and those who have an affinity for machines, maths, and engineering.

I wouldn’t be surprised if it was discovered that Autism and AS had different genetic causes, and this might mean that someone could have both sets of genes. It is obvious that the dividing line between Autism and AS is not that clear. It also seems that part of the diagnosis as implemented by psychologists may be based on the ability to act like an NT and succeed by objective criteria – IE earn a good salary in the case of adults. One thing that Roy does get right is that he notes that among people diagnosed with AS and Autism there are both “high” and “low” functioning individuals.

One thing that Roy gets wrong is the implication that Autistic people can become Aspies. An adult who is assessed without background information on their childhood may get a different diagnosis. If someone was reassessed as an adult with the full facts about their childhood available then (barring DSM changes) the same diagnosis should be returned.

Conclusion

It appears that this DSM change is going through regardless of the opinion of the people who are affected. While there is a logical basis for giving more weight to researchers than to the research subjects (who are bound to be more biased) it seems that there are some things you can’t properly understand unless you live them. When a good portion of the research subjects feel compelled to share their experiences with anyone who will listen it is disappointing that so few of the researchers appear to be listening.