Since writing my post about whether the National Broadband Network can ever break even [1] I’ve had a number of people try to convince me of its merit. Here is my summary and rebuttal of some of the arguments for the NBN:
The FUD
Claims are made that Australia may fall behind the rest of the world, children may be disadvantaged in their education, or many other bad things may happen unless faster net access is provided to everyone. That is FUD. If we are going to spend $43,000,000,000 then we should have some evidence that it will do some good.
One thing to note is that rural areas will not get anything near the 100Mb/s speeds that FTTH will deliver to people in the cities. So if faster net access is actually essential then it would probably make sense to start by delivering ADSL2+ speeds to rural areas – something that is not planned to be part of the NBN.
Some people claim that having slow net access (ADSL2+) is going to preclude unknown future uses of the Internet. However the NBN is only going to be 100Mb/s, so any such claim is essentially that “unknown things will happen in the future to make 24Mb/s too slow, but those unknown things won’t make 100Mb/s too slow”. I started using computer networks when a 1200/75 modem was considered fast. Over about the last 25 years typical net access speeds have increased from 1200b/s to about 12Mb/s – a factor of 10,000 improvement! Now the people who believe that we only need to multiply the current speed by a factor of 4 to address all future needs could be correct, but it seems unlikely and doesn’t seem like a good idea for a $43,000,000,000 bet. If they were talking about 1Gb/s net access then things would be different in this regard.
Technological Development
Some people compare the NBN to the Apollo Program and suggest that the scientific research involved in implementing a FTTH network might result in useful developments in other areas.
The Wikipedia page about Fiber to the premises by country indicates that Hong Kong had 1Gb/s available in 2006. It seems that a service which is rolled out 4 years later and 10 times slower than in Hong Kong is not going to involve a huge amount of research. Certainly nothing like the Apollo Program.
It would allow multiple HDTV channels to be viewed at the same time
According to the Australian Bureau of Statistics in 2001 the average Australian household had 2.6 people (3.0 in NT) [2]. The incidence of every member of a household wanting to watch a different live TV channel at the same time is probably quite low; it seems that young people nowadays spend as much time watching Youtube as they do watching TV. Based on the Wikipedia bit-rate page it seems that an ADSL2+ link could stream two HDTV channels.
I could use fiber speeds today
There are some people who claim that they need faster speeds for their data transfers right now. The problem is that the latency of data transfer is a bottleneck in transfer rates. In some quick tests between servers that have reasonably fast connections to the Internet I was only able to get one international transfer in excess of ADSL2+ speed (24Mb/s or 3MB/s).
I was able to transfer data between London and Germany at a speed of 11MB/s – which was possibly limited by the 100baseT Ethernet connection in the data center in Germany. Now if the people who pay for that German server were to pay more then they would get a higher speed. So anyone who downloads anything from my web site in its current configuration would get a significant performance boost by using an NBN connection – if they live in Europe! But if they live in Australia then they will probably get a fraction of that speed (a quick test indicates that my ADSL2+ connection gives me about 500KB/s from my web server). I need to have at least three transfers going at the same time to get more than 600KB/s when downloading files from other countries, and it’s rare to have a single download run at a speed higher than 100KB/s.
It would be handy if I could download at higher speeds from my ISP’s mirror (which seems to be the only way someone in Australia can even use full ADSL2+ speeds). But it’s certainly not worth the $5,000 per household installation cost of the NBN to get that.
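For anyone who wants to repeat these quick tests, here is a minimal sketch of the sort of measurement I mean – the URL is hypothetical, any large file on a fast server will do:

wget -O /dev/null http://mirror.example.com/test/100MB.bin
# wget throws the data away and reports the average transfer rate at the end
# running two or three of these in parallel shows whether a single stream or the link itself is the bottleneck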
I’m sure that there are some people who really do have usage patterns that could take advantage of fiber net access, one possibility would be downloading large files from a local source such as a major web site that uses a CDN. It seems likely to me that the majority of people who fit this category are using major porn services.
Fiber would be good for Video-Conferencing
To be better than ADSL for video-conferencing an NBN connection would need a faster upload speed. Internode is already selling NBN connections for the trial areas [3]. The cheapest fiber plans that they offer are at the 50/2 speed, that’s 50Mb/s download and 2Mb/s upload – in theory ADSL2+ should have a higher upload speed. In my tests the best result I’ve got from sending files over my ADSL2+ link to another country is about 110KB/s (880Kb/s). The fact that the theoretical speed of a fiber connection is better than the measured speed of an ADSL2+ connection in this regard doesn’t mean much; let’s not assume that a fiber connection will get its theoretical maximum speed.
Not that you necessarily need higher speeds for video-conferencing – Youtube is one of many services that uses a lot less bandwidth than the upload speed of an ADSL2+ connection. Also video-calls, which are supported on most 3G mobile phones, use even less bandwidth.
I want an NBN connection to run a server at home
The fastest connection for uploading that Internode offers to “home users” is the “Home High” plan at 100/8 speed, that is maybe a bit more than twice as fast for uploading as ADSL2+. They also offer a SOHO plan that supports 16Mb/s upload speed (at extra expense) and suggest that customers who want higher speeds contact their business sales department. But they include both sending and receiving data in the bandwidth quota for the fiber connections. Transmitting data at 16Mb/s isn’t that great for a server.
The cheapest virtual server plan on offer from Linode [4] includes 200GB of data transfer per month and has significantly higher transmission speeds. You could get a Linode virtual server plus an Internode ADSL2+ connection for about the same price as an Internode fiber connection to the home.
There are two down-sides to virtual servers: one is that they are limited in the amount of RAM that they have (I can easily afford to buy 8G of RAM for a home system but renting an 8G virtual server is going to be expensive) and the other is that the storage is limited in size and can be slow when shared. If you need to run a server with a few terabytes of data storage (which is cheap on commodity SATA disks but expensive on server-grade disks) and you don’t need to transfer much of it then a home server on the NBN might do well. Otherwise it’s probably not going to work well for server use.
The NBN will avoid people leaving their PC on to do downloads and save electricity
To save electricity you would have to have a significant incidence of situations where a download can complete fast enough over the NBN to allow the user to turn their PC off before going to bed but be slow enough over ADSL to require that it be left on overnight. That would probably only apply to downloads from a CDN or from a local ISP mirror. From the Internode mirror I can download a test file at a speed of 850KB/s (I guess this means that my ADSL connection is not delivering full speed – I suspect poor quality wiring in my home and would try to fix it if the current speed was too slow). In 5 minutes spent brushing my teeth I could download 250M of data; in the same time if I had a 100/4 connection on the NBN I might be able to download almost 3G of data. So in the unlikely event that I wanted to download a CD or DVD image and turn off my PC immediately before going to bed then the NBN would be a good thing.
But then of course I would want to burn the CD or DVD image to disc and that would take long enough that I would leave it on overnight…
My ADSL connection gives really low speeds or I’m out of range for ADSL
Some people who live only a short distance from an exchange are unable to get ADSL2+. Some people live a long way from exchanges and are outside ADSL range. The ideal solution to these problems is not to provide fiber access to the majority of the population, it is to provide ADSL to everyone who is near an exchange and maybe provide fiber access to some people who are a long way from exchanges.
I find it rather ironic that some people in the country are essentially saying “because net access in the country is so slow we need fiber in the cities”. The NBN is not going to give fiber to rural areas, satellite is one of the options that will be used.
Really High Speeds
I have been informed that to get the distances needed for the NBN they have to use Single Mode Fiber, which permits scaling up to higher speeds at some later time by changing the hardware at the end points. So we could end up with a Hong Kong speed network at some future time over the same fibers. This is a good thing.
But I don’t think that we need to get fiber installed right now just so that we can use 100Mb/s – we could wait until there is enough need and then get the faster transmission rates from the start. At the moment it’s just a waste of money.
Upgrade Requirements
Debian/Squeeze (the next release of Debian) will be released some time later this year. Many people are already upgrading test servers, and development systems and workstations that are used to develop code that will be deployed next year. Also there are some significant new features in Squeeze that compel some people to upgrade production systems now (such as a newer version of KVM and Ext4 support).
I’ve started working on an upgrade plan for SE Linux. The first thing you want when upgrading between releases is a way of booting a new kernel independently of the other parts of the upgrade – either running the new kernel with the old user-space or the old kernel with the new user-space. It’s not that uncommon for a new kernel to have a problem when under load, so it’s best to be able to back out of a kernel upgrade temporarily while trying to find the cause of the problem. For workstations and laptops it’s not uncommon for a kernel upgrade to not immediately work with some old hardware; this can usually be worked around without much effort, but it’s good to be able to keep systems running while waiting for a response to a support request.
Running a Testing/Unstable kernel with Lenny Policy
deb http://www.coker.com.au lenny selinux
In Lenny the version of selinux-policy-default is 2:0.0.20080702-6. In the above APT repository I have version 2:0.0.20080702-18 which is needed if you want to run a 2.6.32 kernel. The main problem with the older policy is that the devtmpfs filesystem that is used by the kernel for /dev in the early stages of booting [1] is not known and therefore unlabeled – so most access to /dev is denied and booting fails. So before upgrading to testing or unstable it’s a really good idea to install the selinux-policy-default package from my Lenny repository and then run “selinux-policy-upgrade” to apply the new changes (by default upgrading the selinux-policy-default package doesn’t change the policy that is running – we consider the running policy to be configuration files that are not changed unless the user requests it).
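For a Lenny system this amounts to something like the following (a sketch, assuming the APT source line above has already been added to /etc/apt/sources.list):

apt-get update
apt-get install selinux-policy-default    # pulls 2:0.0.20080702-18 from the repository above
selinux-policy-upgrade                    # load the updated policy, which knows about devtmpfs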
There are also some other kernel changes which require policy changes such as a change to the way that access controls are applied to programs that trigger module load requests.
Upgrading to the Testing/Unstable Policy
While some details of the policy are not yet finalised and there are some significant bugs remaining (in terms of usability not security) the policy in Unstable is usable. There is no need to rush an upgrade of the policy, so at this stage the policy in Unstable and Testing is more for testers than for serious production use.
But when you upgrade, one thing you need to keep in mind is that we don’t support upgrading the SE Linux policy between different major versions of Debian while in multi-user mode. The minimum requirement is that after the new policy package is installed you run the following commands and then reboot afterwards:
setenforce 0            # switch to permissive mode for the duration of the upgrade
selinux-policy-upgrade  # load the new policy
touch /.autorelabel     # force a full filesystem relabel on the next boot
If achieving your security goals requires running SE Linux in enforcing mode all the time then you need to do this in single-user mode.
The changes to names of domains and labeling of files that are entry-points for domains are significant enough that it’s not practical to try to prove that all intermediate states of partial labeling are safe and that there are suitable aliases for all domains. Given that you need to reboot to install a new kernel anyway, the reboot for upgrading the SE Linux policy shouldn’t be that much of an inconvenience. The relabel process on the first boot will take some time though.
Running a Lenny kernel with Testing/Unstable Policy
In the original design SE Linux didn’t check open as a separate operation, only read/write etc. The reason for this is that the goal of SE Linux was to control information flows. The open() system call doesn’t transfer any data so there was no need to restrict access to it as a separate operation (but if you couldn’t read or write a file then an attempt to open it would fail). Recent versions of the SE Linux policy have added support for controlling file open. The reason for this is to allow a program in domain A to open a file and then let a program in domain B inherit the file handle and continue using the file even if it is not normally permitted to open the file – this matches the Unix semantics where a privileged process can allow an unprivileged child to inherit file handles or use Unix domain sockets to pass file handles to another process with different privileges.
SELinux: WARNING: inside open_file_mask_to_av with unknown mode:c1b6
Unfortunately when support was added for this a bug was discovered in the kernel; this post to the SE Linux mailing list has the conclusion to a discussion about it [2]. The symptom of this problem is messages such as the above appearing in your kernel message log. I am not planning to build a kernel package for Lenny with a fix for this bug.
The command “dmesg -n 1” will prevent such messages from going to the system console – which is something you want to do if you plan to log in at the console as they can occur often.
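If you want that setting applied automatically, a minimal sketch is to run the command from /etc/rc.local (assuming a Debian system where rc.local is still executed at boot):

# in /etc/rc.local, before the final "exit 0"
dmesg -n 1    # only emergency messages go to the console; everything still goes to the logs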
The TEDxVolcano
The TED conference franchise has been extended to TEDxVolcano [1], a small conference that features people who are stranded by the Eyjafjallajökull volcano in Iceland. As usual TED is an inspiration to us all, so there is obvious potential for other conferences to be organised in similar situations – there’s no reason why a free software conference can’t be organised in Europe right now!
What You Need to run a Conference
If a conference will have limited attendance (EG due to a volcano preventing anyone from flying to the area) then filming everything is very important. I’ve seen adverts for digital cameras that support “Full HD” resolution (1920*1080) for as little as $AU400. $AU600 will get you a “digital camcorder” that does Full HD, which will offer some benefits for recording long movies (such as the ability to store the video on an external hard drive). If I was stuck in a foreign hotel with not much to do then I would be prepared to buy a digital camera or camcorder for the purpose of running such a conference (my current digital camera is 5.1MP and only has 3* optical zoom, it’s a nice camera but I could do with something better). A tripod can cost up to $100, but I recently bought myself a 15cm tall tripod for $10 – that would do at a pinch. Once you have high quality video you can easily upload it to something like Blip.TV. Of course you get a better result if you do some post-production work to merge images of the slides for the lecture into the video, but that is a lot of work and probably requires a camera that outputs uncompressed video for best results.
The next issue is getting a venue. Different hotels cater for different parts of the market, some cater to tourists, some to business travel, some to conferences. If you want a venue at short notice you may be able to get a good deal if you find a hotel that is adversely affected, for example I’m sure that there are some quite empty conference hotels in Europe right now – but the tourist hotels are probably reasonably busy (why not do some tourism if you are stuck). I expect that hotels really don’t want to have empty conference rooms and are prepared to offer good deals for bookings at short notice. Of course you would want to try to ensure that hotel rooms aren’t too expensive in that hotel as some delegates will want to stay in the hotel which hosts the conference.
The minimal staffing for a micro conference is probably two people: one for taking payment, directing people, etc, and the other to film the lectures and moderate panel discussions. Rumor has it that attending without paying is a problem at conferences. For conferences that are planned in advance corporations will try to send multiple employees on the one ticket and have them share a name-tag – one issue with this is that there is a fixed quantity of food supplied and if extra people appear then everyone who paid gets less, another is that people who pay really hate to see freeloaders. The best reference I’ve found for people not paying at conferences is Jon Oxer’s description of how Leslie Cachia of Letac Drafting Services brazenly stole a book from him [2].
Name-tags are needed for any meeting with more than about 15 people. I’m not sure how to get proper name-tags (ones that pin on to clothing and have printed names – maybe the bigger hotels can add this to the conference package). But a roll of sticky labels from an office supply store is pretty cheap.
Costs in Wellington
Along with a few other people I considered running a small security conference immediately before or after LCA 2010. That ended up not happening but I will consider doing it in future. When considering it, the general plan was to get a hotel to provide a meeting room for 10-30 people (we had no real idea of the demand).
When investigating the possibilities for running a conference in Wellington I discovered that the hotel fees for a conference room can either be based on paying a fixed fee for the room plus additional expenses for each item, or on paying a fixed rate per person. It seemed that there was the potential to save a small amount of money by paying the fixed fees and avoiding some payments for things like tea/coffee service. But the amount that could be saved would be small and it would incur extra effort in managing it – saving $5 per person is a good thing if you have 600 delegates, but if you have 30 then it’s probably a waste of time. So it seemed best to go for one of the packages: you tell the hotel what time you want the lunch and snack breaks and how you want the tables arranged and they just do everything. The cost for this seemed to be in the range of $nz35 to $nz55 per delegate per day. There is some flexibility in room arrangement, so a room that seats 12 people in the “board-room” layout (tables in a rectangle facing the center) would fit 25 in the “classroom” layout (tables all facing the front) or 50 in the “theater” layout (chairs facing the front with no tables). So the hotel could accommodate changes in the size at relatively short notice (whatever their notice period for buying the food).
The cost for a catered conference dinner seemed to be about $nz45 per diner. In many cases it would be possible to get a meal that is either cheaper, better, or both by going somewhere else, but that wastes time and effort. So that gave an overall conference cost of about $nz135 for a two day conference with a dinner at the end of the first day. Given that the cheapest budget rate from Wotif.com for a 3 star hotel in Wellington is currently $nz85 per night it seems that $nz135 for a two day conference including dinner is pretty cheap as the minimum accommodation cost would be $nz170. Also note that the hotels which I considered for hosting the conference had rates for their hotel rooms that were significantly greater than $nz85 per night.
The hotels all offer other services such as catered “cocktail parties”; these would be good things for any company that wants to sponsor the conference.
Different cities can have vastly different prices for hotels. But I expect that the way conference rooms are booked and managed is similar world-wide and that the ratio of conference costs to hotel booking fees will also be similar. Most of the hotels that cater to conferences seem to be owned by multi-national corporations.
It would probably make sense to charge delegates an extra $10 or $15 above the cost of running the conference to cover unexpected expenses. Of course it’s difficult to balance wanting to charge a low rate to attract more people with wanting to avoid the risk of a financial loss.
Conclusion
The hard part is getting speakers. If you can get speakers and panel participants who can fill the time slots and have interesting things to say then all the other parts of organising a micro conference should be relatively easy.
When the cost is less than $150 per delegate then a syndicate of a few people can easily agree to split the loss if the number of delegates turns out to be smaller than expected, a potential loss of $2000 shared among a few people shouldn’t be a huge problem. Also if the conference is booked at short notice (EG because of a volcano) then the hotel shouldn’t require any deposit for anything other than the food which is specially ordered (IE not the tea, coffee, etc) – that limits the potential loss to something well under $100 per delegate who doesn’t attend.
Anyone who has enough dedication to a topic to consider running a conference should be prepared to risk a small financial loss. But based on my past observations of the generosity of conference delegates I’m sure that if at the conference closing the organiser said “unfortunately this conference cost more than the money you paid me, could you please put something in this hat on the way out” then the response would be quite positive.
Note that I am strictly considering non-profit conferences. If you want to make money by running a conference then most things are different.
The Problem
I’ve just upgraded my Dell PowerEdge T105 [1] from Debian/Lenny to Debian/Squeeze. Unfortunately the result of the upgrade was that everything in an X display looked very green while the console display looked the way it usually did.
I asked for advice on the LUV mailing list [2] and got a lot of good suggestions; Daniel Pittman in particular offered a lot of great advice.
The first suggestion was to check the gamma levels. The program xgamma displays the relative levels of Red, Green, and Blue (the primary colors for monitors), where it is usually expected that all of them will have the value of 1.0. This turned out not to be the problem, but it’s worth noting for future instances of such problems. It’s also worth noting the potential use of this to correct problems with display hardware; I’ve had two Thinkpads turn red towards the end of their lives due to display hardware problems and I now realise I could have worked around the problem with xgamma.
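For reference, here is a minimal sketch of that sort of check and work-around (the 0.7 value is just an illustrative guess, not a measured correction):

xgamma                # prints the current values, e.g. "-> Red 1.000, Green 1.000, Blue 1.000"
xgamma -rgamma 0.7    # darken only the red channel, e.g. to compensate for a display developing a red tint
xgamma -gamma 1.0     # reset all three channels to the default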
He also suggested that it might be ICC, the command “xprop -root | grep -i icc” might display something if that was the case. I’m still not sure what ICC is about but I know it’s not set on my system.
The next suggestion was to use the VESA display driver to try to discover whether it was a bug in the ATI driver. It turned out that the VESA driver solved that problem. I was tempted to keep using it until I realised that it has a maximum resolution of 1280*1024, which isn’t suitable for a 1680*1050 resolution display.
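For anyone wanting to run the same test, the relevant part of xorg.conf looks something like this (a sketch – the Identifier is arbitrary and a BusID line may be needed on some systems):

Section "Device"
    Identifier "Card0"
    Driver "vesa"
EndSection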
After reviewing my Xorg configuration file Daniel noted that my frame buffer depth of 16 bits per pixel is regarded as unusual by today’s standards and probably isn’t tested well. As 24bpp is generally implemented with 32 bits for each pixel, it takes twice the frame-buffer storage (both in the X server and in some applications) as well as twice the memory bandwidth to send data around. So I generally use 16bpp for my systems to make them run a little faster.
(II) RADEON(0): Not using mode “1680×1050” (mode requires too much memory bandwidth)
I tried using a depth of 24bpp and then I saw messages such as the above in /var/log/Xorg.0.log. It seems that the display hardware in my ATI ES1000 (the on-motherboard video card in the Dell server) doesn’t have the memory bandwidth to support 1680*1050 at 24bpp. I tried using the gtf utility to generate new mode lines, but it seems that there is no 24bpp mode with a vertical refresh rate low enough to not exhaust memory bandwidth but still high enough for the monitor to get a signal lock.
The Solution
My current solution is to use 15bpp mode, which gives almost the same quality as 16bpp and uses the same small amount of memory bandwidth. It seems that 15bpp doesn’t trigger the display driver bug. Of course one down-side to this is that the default KDE4 desktop background in Debian seems perfectly optimised to make 15bpp modes look ugly – it has a range of shades of blue that look chunky.
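For reference, here is a sketch of the xorg.conf change involved, assuming a Screen section that refers to the on-board Radeon device (the identifiers are illustrative):

Section "Screen"
    Identifier "Screen0"
    Device "Card0"
    DefaultDepth 15
    SubSection "Display"
        Depth 15
        Modes "1680x1050"
    EndSubSection
EndSection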
What I really want to do is to get a better video card. Among other things I want to get a 1920*1080 resolution monitor in the near future, Dell is selling such monitors at very low prices and there are a bunch of affordable digital cameras that record video at that resolution. Even if I can get the ES1000 to work at 1920*1080 resolution it won’t support playing Full HD resolution video – I can barely play Youtube videos with it!
I’ve previously described my experience with the awful Computers and Parts Land (CPL) store [3] where they insisted that a PCIe*16 graphics card would work in my PCIe*8 system and then claimed to be doing me a favor by giving me a credit note for the full value (not a refund). This convinced me to not bother trying to buy such a card for the past year. But now it seems that I will be forced to buy one.
What I Want to Buy
I want a PCIe*8 video card that supports 1920*1080 resolution at 24bpp and has good enough performance with free Linux drivers to support Full HD video playback. Also PCIe*4 would do, and I’m prepared to compromise on support of Full HD video. Basically anything better than the ES1000 will do.
Does anyone know how I can buy such a card? I would prefer an ATI card but will take an NVidia if necessary.
Note that I have no plans to cut one of the PCIe sockets on my motherboard (it’s an expensive system and I’m not going to risk breaking it). I will consider cutting the excess pins off a video card as a last resort. But I would rather just buy a PCIe*8 video card.
Note that I am not going to pay cash in advance to a random person who reads my blog. Anyone who wants to sell me a second-hand card must either have a good reputation in the Linux community or ship the card to me on the condition that I pay after it passes the tests.
Update: The first version of this post said that I upgraded TO Lenny, not FROM it.
I’ve just upgraded my Thinkpad (which I use for most of my work) to Debian/testing with KDE4.
Improvements
KDE 3.5 (from Debian/Lenny) didn’t properly display the applets in a vertical task bar. I want a vertical task bar because my screen resolution is 1680*1050 and I find that a less rectangular screen workspace is best for my usage patterns.
In my previous post about my Thinkpad T61 I described how the sound controls weren’t working [1]. These problems were fixed as part of the upgrade, KDE just does the right thing. Now when I press the buttons to increase or decrease the volume the ALSA settings are changed and a small window is briefly displayed in the center of the screen to show the new volume.
Sounds are now made when I plug or unplug the power cable, this was configured in KDE 3.5 but just didn’t work.
Problems
If I have a maximised Konqueror window and I use the middle mouse button to open a link in a new window then the new window will also be maximised. Previously when I did that the new window was not maximised. What sometimes happens is that I want to open several links from a web page in different windows, so if I can open them in non-maximised windows then I can click the title-bar or the bottom status-bar of the parent window to get it in the foreground again. Probably an ideal solution to this use-case would be to configure the middle mouse button to open a new window in the background or minimised.
I can’t figure out how to implement accelerator keys for window controls. In particular I like to use ALT-F9 to minimise a window (CUA89 standard). The upgrade from KDE 3.5 to KDE 4 lost this and I can’t get it back.
I want to have an icon on my panel to launch a Konqueror session. I don’t want a large amount of space taken up by a launcher for several different Konqueror options, I just want a regular Konqueror for web browsing available at a single click. There didn’t seem to be an option for this. KDE 3.5 had an option in the “add widgets to toolbar” dialogue to add icons for applications. I have just discovered that in KDE 4 the only way to do this is to go through the menu structure and then click the secondary mouse button. Having two ways to do something is often a good thing, particularly when one of them is the way that was most obvious in the previous version!
It was annoying that the font choices for my Konsole session were lost on the KDE 4 upgrade, it’s not a complex setting. Also the option to resize a Konsole session to a common size (such as 80*25) seems to have been lost.
I had to spend at least 30 minutes configuring kmail to get it to display mail in much the same manner as it used to. You have to use the “Select View Appearance (Theme)” icon at the right of the “Search” box and select “Classic”, and then go to “Select Aggregation Mode” (immediately to the left) to select “Flat Date View”. I’m happy for KDE 4 to default to new exciting things when run for the first time, but when upgrading from KDE 3.5 it should try to act like KDE 3.5.
I decided to use Kopete for Jabber just to preempt the GNOME people adding Mono support to Pidgin. I had to install the libqca2-plugin-ossl and qca-tls packages to enable SSL connections; missing either of those gives you an incomprehensible error condition that even strace doesn’t clarify much. Given that it’s generally agreed that sending passwords unencrypted over the Internet is a bad idea and that it’s a configuration option in Jabber servers to reject non-SSL connections, it seems to me that the Kopete package should depend on the packages that are needed for SSL support. Failing that it would be good to have Kopete offer big visible warnings when you don’t have them.
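For anyone hitting the same incomprehensible error, the fix is just a package install (using the package names given above):

apt-get install libqca2-plugin-ossl qca-tls    # the TLS/SSL plugins Kopete needs for encrypted Jabber connections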
I use the KDE 2 theme and the right side of the title bar of each window is a strange dappled pattern. Not sure why and I have more important problems to fix.
Parts of KDE crash too often. I’ll start filing bug reports soon.
The management of the Desktop folder has changed. In previous versions of KDE the directory ~/Desktop had its contents displayed in iconic form on the root window. Now by default it doesn’t do that. It is possible to change it, but this is one of those things where the default in the case of an upgrade should be to act like previous versions. The way to enable the previous functionality is to go to the desktop settings (click the secondary mouse button on the background and select “Desktop Settings”), then under “Desktop Activity” change the “Type:” to the value “Folder View” and specify the directory below.
The facility to have different background colors or pictures for each of the virtual desktops seems to have been removed – either that or the KDE configuration system doesn’t have enough functionality to let me discover how to configure it.
When the panel that I have on the left of the screen crashes everything that was next to the panel gets dragged to the left, this includes extending the width of maximised windows. Then when the panel starts again (which if lucky happens automatically) it pushes things back and if icons had been moved left it just obscures them.
When using Konqueror to browse a directory full of pictures it doesn’t generate thumbnail icons. When I middle-click on an icon for a picture it is opened with Konqueror, not the image viewer that was used in KDE 3.5. The image viewer from KDE 3.5 had fewer options and therefore more screen space was used for the picture. Also the Konqueror window that is opened for this has a navigator panel at the left which I can’t permanently remove.
When I use Konqueror my common action is to perform a Google search and then use the middle button to open a search result in a new window. Most of my Google searches return pages that have more than one screen-full of data so shortly after opening a window with a search result I press PgDn to see the next page. That press of PgDn for some reason takes me back to the Google search. It seems that when a web page is opened in a new window the keyboard focus will be in the URL entry field, and pressing PgDn in that field takes you to the previous web page. This combination is really annoying for me.
Conclusion
Getting the sound working correctly is a great feature! Lots of little things are fancier and generally the upgrade is a benefit. The lack of thumbnails when displaying a folder of JPG files is really annoying though.
The time taken to configure things is also annoying. I support four relatives who are just users, so that probably means at least an hour of configuration work and training for each one – KDE 4 is going to cost me at least half a day because of this.
We Have to Make Our Servers Faster
Google have just announced that they have made site speed part of their ranking criteria for search results [1]. This means that we now need to put a lot of effort into making our servers run faster.
I’ve just been using the Page Speed Firefox Plugin [2] (which incidentally requires the Firebug Firefox Plugin [3]) to test my blog.
Image Size
One thing that Page Speed recommends is to specify the width and height of images in the img tag so the browser doesn’t have to change the layout of the window every time it loads a picture. The following script generates the HTML that I’m now using for my blog posts. I run “BASE=http://www.coker.com.au/blogpics/2010 jpeg.sh foo.jpg bar.jpg” and it generates HTML code that merely needs the data for the alt tag to be added. Note that this script relies on a scheme where there are files like foo-big.jpg that have maximum resolution and foo.jpg which has the small version. Anyone with some shell coding skills can change this of course, but I expect that some people will change the naming scheme that they use for new pictures.
#!/bin/bash
set -e
# for each JPEG named on the command line, print an img tag with explicit dimensions
while [ "$1" != "" ]; do
  # the third field of the identify output is the geometry, e.g. 800x600
  RES=$(identify "$1"|cut -f3 -d\ )
  WIDTH=$(echo $RES|cut -f1 -dx)px
  HEIGHT=$(echo $RES|cut -f2 -dx)px
  # the full resolution version of foo.jpg is expected to be named foo-big.jpg
  BIG=$(echo "$1" | sed -e s/.jpg/-big.jpg/)
  echo "<a href=\"$BASE/$BIG\"><img src=\"$BASE/$1\" width=\"$WIDTH\" height=\"$HEIGHT\" alt=\"\" /></a>"
  shift
done
Thanks to Brett Pemberton for the tip about using identify from imagemagick to discover the resolution.
Apache and Cache Expiry
Page Speed complained that my static URLs didn’t specify a cache expiry time. This didn’t affect things for my own system, as my Squid server forcibly caches some things without being told to, but it would be a problem for some others. I first ran the command “a2enmod expires ; a2enmod headers” to configure my web server to use the expires and headers Apache modules. Then I created a file named /etc/apache2/conf.d/expires with the following contents:
ExpiresActive On
ExpiresDefault "access plus 1 day"
ExpiresByType image/gif "access plus 1 month"
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType text/css "access plus 1 day"
# Set up caching on media files for 1 year (forever?)
<FilesMatch "\.(flv|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav)$">
ExpiresDefault "access plus 1 year"
Header append Cache-Control "public"
</FilesMatch>
# Set up caching on media files for 1 month
<FilesMatch "\.(gif|jpg|jpeg|png|swf)$">
ExpiresDefault "access plus 1 month"
Header append Cache-Control "public"
</FilesMatch>
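For completeness, the whole sequence on a Debian system looks something like this (a sketch, assuming the config file above is already in place):

a2enmod expires               # enable mod_expires
a2enmod headers               # enable mod_headers, needed for the Cache-Control lines
apache2ctl configtest         # check the new configuration for syntax errors
/etc/init.d/apache2 reload    # make Apache pick up the changes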
DNS Lookups
Page Speed complains about DNS names that are used for only one URL. One example of this was the Octofinder service [4], a service to find blogs based on tags, but I don’t seem to get any traffic from it so I just turned it off. In this case using a single URL from their web site was the only sensible option, but I had been considering removing the Octofinder link for a while anyway. As an aside I will be interested to see if there are comments from anyone who has found Octofinder to be useful.
I’ve also disabled the widget that used to display my score from Technorati.com, it wasn’t doing what it used to do, the facility of allowing someone to list my blog as a favorite didn’t seem to provide any benefit, and it was taking extra DNS lookups and data transfers. I might put something from Technorati on my blog again in future as they used to be useful.
Cookies
If you have static content (such as images) on a server that uses cookies then the cookie data is sent with every request. This requires transferring more data and breaks caching. So I modified the style-sheet for my theme to reference icons on a different web server; this will supposedly save about 4K of data transfer for a page load while also giving better caching.
The down-side of this is that I have my static content on a different virtual server, so now updating my WordPress theme will require updating two servers. This isn’t a problem for the theme (which doesn’t get updated often) but will be a problem if I do it with plugins.
Conclusion
The end result is that my blog now gets a rating of 95% for Page Speed when previously it got a rating of 82%. Now most of the top references that are flagged by Page Speed come from Google, although there is still work for me to do.
Also it seems that Australia is now generally unsuitable for hosting web sites for viewing in other countries. I will advise all my clients who do International business to consider hosting in the US or the EU.
Health Monitoring
Eric Topol gave an interesting TED talk about wireless medical monitoring devices [1]. The potential for helping diabetics and other people who need ongoing treatment is obvious as is the potential for helping pregnant women and other people who might suddenly need medical treatment at short notice.
One significant positive potential for this is integration with multiple services. For example Eric’s talk showed a graph of sleep levels on a mobile phone, the deep sleep (which is apparently the most restorative) was shown in sections that were significantly less than one hour in duration. I often receive SMS messages about network problems during the night, the vast majority of them aren’t that important and can be delayed without any problem. If my phone could determine that I was in deep sleep and delay sending me a NAGIOS report for up to 30 minutes then it would help me sleep while not making any significant impact on my work in repairing server problems – it’s not as if I would be immediately productive if woken from deep sleep anyway.
Status and Attention Monitoring
Eric Horvitz, Carl Kadie, Tim Paek, and David Hovel of Microsoft Research wrote an interesting paper titled “Models of Attention in Computing and Communication: From Principles to Applications” [2]. In that paper they describe various methods for tracking user attention and filtering messages so that the user won’t be needlessly distracted by unimportant messages when they are busy. The next logical step is to integrate that with a smart phone (maybe Android would be good for this) to screen calls, unknown callers could be automatically directed to voice-mail and known good callers could be given an automated prompt as to whether they think that their call is important enough to be worth the distraction.
It seems to me that combining health and status monitoring would be the sensible thing to do. If a bio-sensor array indicates that someone is more stressed than usual then decreasing their availability for phone calls would make sense. It would also be trivial to analyse calls and determine which callers are likely to cause stress and block their calls at inconvenient times.
What we Need
Of course there are lots of security implications in this. Having multiple networked devices tracking and collating information on health and all activity (including video-monitoring in the Microsoft model) has a lot of potential for malicious use. This is one of many reasons that we need to generally improve computer security.
We also need to have free software implementations of such things. We don’t want Microsoft to get a monopoly on status monitoring. Also it seems that smart Nike shoes that can work with the iPhone [3] are the leading mass-market implementation of health monitoring, everything related to health-care should be as open as possible. I’m happy for Nike to make their own special products, but we need to have them work with open systems and make the data available as widely as possible. There is the real potential for people to needlessly die if health data is not available in open formats! According to the Wikipedia page Nike has been shipping products using a proprietary 2.4GHz wireless network since 2006 [4], it would be a really good hardware project to devise a way of making compatible sensors and readers.
We also need some free software for monitoring the user status to avoid interrupting them.
Finally we need some software to integrate all the data. Canonical’s Desktop Experience team are doing some interesting work on managing desktop notifications that will be in Ubuntu Lucid [5], they appear to have some basic support for masking notifications based on priority. Please keep up the great work Canonical people, and please consider working on advanced status monitoring next!
Slavery Still Exists!
We all like to think of slavery as a problem from the 19th century, but it still exists and is a significant problem! Kevin Bales gave an interesting TED talk about how to combat modern slavery [1]. Currently there are an estimated 27,000,000 slaves in the world; that is a lot, but it’s a smaller proportion of the world population in slavery than at any time in history. Also the price of slaves is lower than ever before, so instead of being a capital asset a slave is a disposable item (this is really bad for the slaves).
The estimated average cost to liberate a slave is $US400. This involves rescuing them from slavery and teaching them the skills that they need to have a sustainable life – there’s no point rescuing them only to have them get captured again! Kevin notes that the US is still paying the price of the botched emancipation of 1865, so the liberation of slaves really needs to be done properly.
The estimated total cost to liberate all the slaves in the world is $US10.8 billion. Think about the trillions of dollars that have been spent on wars to supposedly liberate people, when a mere $US10.8 billion would liberate all slaves. That’s a fraction of the cost of the proposed National Broadband Network – which would you rather have, fast net access for cable TV services or a world without slavery?
Censorship of Child-Porn (and other things) vs Liberating Slaves
It is often claimed that child porn needs to be stopped to prevent there being economic incentives to molest children in other countries. To achieve this goal the Australian government wants to filter all net access to prevent access to child-porn, other prohibited porn, documentation about euthanasia, and the occasional dental practice (seriously, they just can’t get their filters right).
Methods that are proven to prevent children being molested should be given a much higher priority than censoring the Internet in the hope of removing economic incentives for child abuse.
It seems reasonable to assume that a significant portion of child-slaves are molested (because we know that slave owners are really bad people and there’s nothing to stop them from molesting children). Let’s assume for the sake of discussion that 1/4 of child slaves are molested. So that would give an average cost of $US1600 to free one child from sexual slavery and three other children from physically abusive environments.
Currently the Australian government plans to spend $44,000,000 on Internet censorship with the supposed aim of protecting the children. The fact that the majority of child-porn is believed to be transferred via protocols other than HTTP has not deterred the government from pushing forward a HTTP-only filter for censorship. Also the fact that anyone could use a VPN, tor, or other services to trivially bypass a HTTP filter has not deterred them.
Let’s assume for the sake of discussion that $US400 equates to $500 Australian; the exchange rate is lower than that but it varies and it’s best to be conservative. Therefore the $44,000,000 the government wants to spend on censorship could be used to liberate 88,000 child slaves. If my estimates are correct that would save 22,000 children from being molested. In the unlikely event that slavers happen to be nice people who don’t do nasty things like molest children (which really isn’t credible) then freeing 88,000 children from slavery and all the physical abuse that it involves is still a really good thing!
If a plan to prevent child sexual abuse by liberating slaves fails to actually prevent any sexual abuse then at least you end up with some freed slaves (which is a really good thing). But if a plan to prevent child sexual abuse by censoring the Internet fails then all you end up with is slower Internet access and censorship of totally unrelated things that the government doesn’t like.
A Stupid Bishop breaks the Godwin Rule
The Sydney Morning Herald reports that Catholic Bishop Anthony Fisher has just claimed that “GODLESSNESS and secularism led to Nazism, Stalinism, mass murder and abortion” [1]. This is a violation of the rule part of Godwin’s Law. We might not expect clerics to have enough general knowledge of society to know this rule, but it does seem reasonable to expect them to have enough empathy to understand why inappropriate Hitler analogies will just offend people and don’t advance their cause. But anyone in a position of leadership in a global organisation who is going to talk to the media should have enough intelligence to check historical references.
The Wikipedia article about “Positive Christianity” is worth reading; it includes references to Christian-based race-hate in Nazi Germany as well as modern references [2]. There is also an interesting Wikipedia page about the Religious aspects of Nazism [3]; there seems to be room for a lot of debate on the matter of how religion fit into the Nazi regime – but it seems quite clear that it was not an atheist regime.
The Wikipedia page about the Reichskonkordat (the agreement between Nazi Germany and the Catholic Church) is also worth reading [4].
Also I’m sure that the Chilean Dictator Augusto Pinochet wasn’t the only Catholic despot.
Community Services and Moral Authority
Cardinal Pell was quoted in the same SMH article as saying “we find no community services sponsored by the atheists”; of course if he was to investigate who is contributing to the religious community service organisations he would find plenty of atheists. I know I’m not the only atheist who donates to The Salvation Army [5] on occasion. I wonder how many religious people would be happy to donate to an explicitly atheist organisation. I suspect that the prevalence of religious charities is due to the fact that a religious charity can get money from both religious people and atheists, while a charity that advocated atheism in any way would be limited to atheist donors. If I was to establish a community service charity I would seriously consider adding some religious element to help with fund raising – it’s just a matter of doing what’s necessary to achieve the main goal.
Even without the violation of Godwin’s law and the total lack of any knowledge of history, Anthony would still have failed. We all know the position of the Catholic Church on the sexual abuse of children. The Catholic policies are implemented in the same way in every country and, as far as we can tell, always have been. I believe that makes them unqualified to offer moral advice of any kind.
Criticising the “Secular World”
Peter Craven has written an article for The Age criticising the “secular world” [6]. He makes the extraordinary claim “the molesting clergy are like the brutal policemen and negligent doctors and corrupt politicians: they come with the territory because pennies have two sides“. The difference of course is that police, doctors, and politicians tend to get punished for doing the wrong thing – even when they do things that are far less serious. But the “molesting clergy” seem to be protected by all levels of the church hierarchy.
Peter makes some claims about the “secular world” as if there is a Borg collective of atheists and claims that there is an “incomprehension of Christian values“. I believe that the attitudes of atheists and the secular justice system correspond quite well with what most Christians would regard as “Christian values” – the problem is that the actions of the church leaders tend not to match that.
It’s All About Money
I would like to know why Christians almost never change church and never cease donating. Religious organisations are much like corporations, they seek new members and new revenue sources. If a significant number of Catholics were to pledge to not donate any money to their church for a year after every child sex abuse scandal then Catholic policies might change. Also if Catholics were to start changing to Christian denominations that do the right thing in regard to moral issues then the Catholic church would either change or eventually become irrelevant. If you keep paying people who do bad things then you are supporting them!
I suggest that any church member who cares about the moral issues of the day should vote with their checkbook. If their church fails to do the right thing then inside the donation envelope they should put a note saying “due to the immoral actions of the church I will donate to other charities“. I am not aware of any church that would expel members for such a protest, but I know that some smaller parishes have cash-flow problems and would rapidly escalate the issue through the management chain if even a few members were to protest in such a manner.
Russ Albery has described why he doesn’t support comments on his blog [1].
I respect his opinion and I’m not going to try and convince him to do otherwise. But I think it’s worth describing why I want comments on my blog and feel that they are worth having for many (possibly most) other blogs.
Types of Blog
The first thing to consider is the type of post on the blog. Some blogs are not well suited to comments. I have considered turning off comments on my documents blog [2] because it gets a small number of readers and is designed as reference material rather than something you might add to a public Planet feed or read every week as it has a small number of posts that are updated. So conversations in the blog comments are unlikely to happen. One thing that has made me keep comments open on my documents blog is the fact that I am using blog posts as the main reference pages for some of my projects and some people are using the comments facility for bug reports. I may make this the main bug reporting facility – I will delete the comments when I release a version of the software with the bugs fixed.
One particular corner case is a blog which has comments as a large part of its purpose. Some blogs have a regular “open thread” where anyone can comment about any topic; blogs which do such things have the owners act more like editors than writers. One example of this is the Making Light blog by Teresa and Patrick Nielsen Hayden [3] – who are both professional editors.
The next issue is the content of the post. If I was to create a separate blog for authoritative posts about SE Linux then there wouldn’t be much point in allowing comments, there are very few people who could correct me when I make a mistake and they would probably be just as happy to use email. When I write about issues where there is no provably correct answer (such as in this post) the input of random people on the net is more useful.
Another content issue is that of posts of a personal nature. Some people allow comments on most blog posts apart from when they announce some personal matter. I question the wisdom of blogging about any topic for which you would find comments intolerable, but if you are going to do so then turning off comments makes sense.
Finally there is the scale of the blog. If you don’t get enough readers to have a discussion in the comments then there is less benefit in having the facility turned on – the ratio of effort required to deal with spam to the benefit in comments isn’t good enough. In his FAQ about commenting [4] Russ claims that controlling spam “can take a tremendous amount of time or involve weird hoop-jumping required for commenters“. I have found the Block Spam by Math [5] WordPress plugin to be very effective in dealing with the spam, so for this blog it’s a clear benefit to allow comments. Since using that plugin my spam problem has decreased enough that I now allow comments on posts which are less than 1 year old – previously comments were closed after 90 days. The plugin is a little annoying but I changed the code to give an error message that describes the situation and prevents a comment from being lost so the readers don’t seem too unhappy.
The Purpose of Comments
Russ considers the purpose of comments to be “meaningfully addressed to the original post author or show intent to participate in a discussion“. That’s a reasonable opinion, but I believe that in most cases it’s best if comments are not addressed to the author of the post and are instead directed towards the general readers. I believe that participating in a discussion and helping random people who arrive as the result of a Google search are the main reasons for commenting. For my blog an average post will get viewed about 500 times a year and the popular posts get viewed more than 200 times per month, so when over the course of a year more than 1000 people read the comments on a post (which is probably common for one of my posts) then 99.9% of readers are not me and commentators might want to direct their comments accordingly. Of course a comment can be addressed at the blog author so the unknown audience can enjoy watching the discussion.
For some of my technical posts I don’t have time to respond to all comments. If I have developed a solution to a technical problem that is good enough I may not feel inclined to invest some extra work in developing an ideal solution. So when a reader suggests a better option I sometimes don’t test that out and therefore can’t respond to the comment. But the comment is still valuable to the 1000+ other people who read the comment section. So a commentator should not assume that I will always entirely read a comment on a technical matter.
Comment threads can end up being a little like mailing lists. I don’t think that general discussions really work well in comment threads and don’t aim for such things. But if a conversation starts then I think you might as well continue as long as it’s generally interesting.
Generally for most blogs I think that providing background information, supporting evidence, and occasionally evidence of errors is a major part of the purpose of blog comments. But entertainment is always welcome. I would be happy to see some poems in the comments section of technical posts, sometimes a Limerick or haiku could really help make a technical point.
Political blog posts can be a difficult area. Generally the people who feel inclined to write political blog posts or comment on them are not going to be convinced to entirely change course, but as there are many people who can’t seem to understand this fact a significant portion of the comments on political blog posts consist of different ways of saying “you’re wrong“. The solution to this is to moderate the comments aggressively, too many political blogs have comments sections that are all heat and no light. I’m happy for people to go off on tangents when commenting on my political posts or to suggest a compromise between my position and their preferred option. But my tolerance of comments that simply disagree is quite small. Generally I think that blogs which directly advocate a certain political position should have the comments moderated accordingly, people will read a site in the expectation of certain content and I believe that the comments should also meet that expectation to some degree. Comments on political posts can provide insights into different points of view and help discover compromise positions if moderated well.
How to provide Feedback
Russ advocates commenting to the blog author via email – it is now the only option he accepts. My observation is that the number of people who are prepared to comment via email (which generally involves giving away their identity) is vastly smaller than the number who use blog comment facilities. This means that you will miss some good comments. One of the most valuable commentators on my blog uses the name “Anonymous” and has not felt inclined to ever identify themself to me; I wouldn’t want to miss the input of that person and some of the other people who have useful things to say but who don’t want to identify themselves. I have previously written about how not all opinions are equal and anonymous comments are given a lower weight [6]. That post inspired at least one blogger to configure their blog to refuse anonymous comments; it was not my intent to inspire such reactions (although they are logical actions based on a different opinion of the facts I presented). I believe that someone who is anonymous can gain authority by repeatedly producing quality work.
Another option is for people to write their own blog posts referencing the post in question. I don’t believe that my core reader base desires short posts so I won’t write a blog post unless I have something significant to say. I expect that many other people believe that the majority of their blog comments would not meet the level of quality that their readers expect from their posts (posts are expected to be more detailed and better researched than comments). As an aside forcing people to comment via blog posts will tend to increase your Technorati rating. :-#
A final option is for people to use services such as Twitter to provide short comments on posts. While Twitter is well designed for publishing short notes the problem with this is that it’s a different medium. There are many people who like reading and discussing blog posts but who don’t like Twitter and thus using a different service excludes them from the conversation.
For my blog I prefer comments for short responses and blog posts for the longer ones. If you write a blog post that references one of my posts then please enter a comment to inform me and the readers of my blog. Email is not preferred but anyone who wants to send me some is welcome to do so.
If this post inspires you to change your blog comment policy then please let me know. I would like to know whether I inspire people to allow or deny blog comments.