Noise Canceling Headphones and People Talking

The Problem

I was asked for advice on buying headphones to protect students who have medical conditions that make them sensitive to noise. Such headphones would have to allow them to hear human voices.

Due to the significant differences in hearing issues (including physical damage and sensory issues), it seems unlikely that getting identical headphones for all students will give an ideal result. The person who asked me the question didn’t explain what type of students are being taught. If it’s an adult education class then getting everyone the ideal headset wouldn’t be particularly difficult. If however it’s the special needs class in a high school then students would probably want the shiniest headphones rather than the ones that best match their hearing issues.

Also some combinations of hearing problems and ambient noise can’t be addressed by such headsets. A friend who developed Noise Induced Hearing Loss from shooting tells me that he really can’t stand brass instruments. But the high frequencies from such instruments tend not to be filtered well by noise canceling headphones, so any student who has such a problem would probably need hearing aids that filter out high frequencies – I believe that such hearing aids are available but don’t have any particular knowledge about them.

Test Results

I did a quick test on my Bose QC-15 noise canceling headphones [1] which cost me $320US including tax and my cheap Bauhn headphones from Aldi [2] which cost me $69AU (and which apparently could later be purchased on special for $35AU to clear stock).

I found that when not playing music they seemed to perform about equally well in terms of allowing me to hear people speaking, although I admit that just having a conversation with the nearest people wasn’t the most scientific test. When I was playing music I found that the Bose headset made it significantly more difficult to hear people speak than the Bauhn headset. This is an advantage for the Bose for its intended use, and I expect that students who need a headset for medical reasons won’t want to listen to music while studying, so it’s never a disadvantage.

In both cases, if the headphones are used for just canceling unwanted noise the speaker shouldn’t need to raise their voice significantly to be heard. In some situations the noise canceling headphones make it easier for someone with good hearing to hear what people are saying, for example a conversation in a car or plane could probably be held at a lower volume if all people involved were wearing suitable noise canceling headphones. If however the students have damaged hearing then I can’t make any prediction as to whether the teacher could speak at a lower volume or whether they would be required to use a higher volume if the students wore such headphones.

The Brookstone on-ear headphones that I tested [3] seem particularly noteworthy in this regard due to the way they canceled the melody of the store background music and just left the singing. If someone wants to buy headphones for people with physical damage to their ears then the Brookstone product is really worth investigating. If however the target market happens to be people on the Autism Spectrum then they may hate anything that presses on their ears (as I do) in which case the Brookstone product can’t be considered. The Brookstone price of $150US (presumably $160 including tax) was also the best price I saw when shopping in the US – but I presume that I could have found something with a similar quality and price to Bauhn in the US if I looked hard enough.

Conclusion

The big advantage of the Bose for this use is that it blocks a wider range of frequencies than some other noise canceling headsets. They all work really well on regular low frequency noise such as car engine noise (whether you are a car passenger or a pedestrian), but for stopping certain higher frequencies such as those from air conditioning systems the Bose wins hands down. This may depend on what noise is to be blocked; if a class was held in the same room every time and noise canceling headsets were purchased specifically for that class then it would probably make sense to ensure that the acoustic capabilities of the headsets match the unwanted background noise and the hearing issue that each student has.

Here’s an Amazon link: Bose® QuietComfort® 15 Acoustic Noise Cancelling® Headphones

I’ve been reading about Sensory Processing Disorder. I’m sure that some children are doing poorly in the default school system because they either have an undiagnosed case of SPD or don’t have enough symptoms to get a diagnosis. I think it would make a good experiment to try noise canceling headphones on some of the difficult children. I wouldn’t expect a high success rate – but if it worked in as little as 5% of cases and did no harm to the children who didn’t benefit then it would be worth doing.

systemd – a Replacement for init etc

The systemd project is an interesting concept for replacing init and related code [1]. There have been a few attempts to replace the old init system: Upstart is getting some market share in Linux distributions and Solaris has made some interesting changes too.

But systemd is more radical and offers more benefits. While it’s nice to be able to start multiple daemons at the same time with dependency tracking, and doing so improves boot times on some systems, that alone doesn’t lead to optimal boot times or necessarily correct behavior.

Systemd is designed around a similar concept to the wait option in inetd where the service manager (formerly inetd and now the init that comes with systemd) binds to the TCP, UDP, and Unix sockets and then starts daemons when needed. It apparently can start the daemons as needed which means you don’t have a daemon running for months without serving a single request. It also implements some functionality similar to automount which means you can start a daemon before a filesystem that it might need has been fscked.

This means that a large part of the boot process could be performed in reverse. The current process is to run fsck on all filesystems, mount them, run back-end server processes such as database servers, and then run servers that need back-end services (EG a web server using a database server). The systemd way would be for process 1 to listen on port 80 and then start the web server when a connection is established to port 80, start the database server when a connection is made to the Unix domain socket, and then mount the filesystem when the database server tries to access its files.
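As a sketch of how this socket activation looks in practice: a systemd socket unit holds the listening socket and starts the matching service only when a client connects. The unit and daemon names below are hypothetical, chosen just for illustration:

```
# example.socket -- hypothetical socket unit; process 1 (systemd)
# binds port 80 itself and defers starting the daemon
[Socket]
ListenStream=80

[Install]
WantedBy=sockets.target

# example.service -- the daemon inherits the already-bound socket
# via file descriptor passing, so it never binds the port itself
[Service]
ExecStart=/usr/sbin/exampled
```

The web server sketched here would only consume resources once the first HTTP connection arrives, which is the inetd-style behavior described above.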

Now it wouldn’t be a good idea to start all services on demand. Fsck can take hours on some filesystems and is never quick at the best of times. Starting a major daemon such as a database server can also take some time. So a daemon that is known to be necessary for normal functionality and which takes some time to start could be started before a request comes in. Fsck is not only slow but usually has little scope for parallelisation (EG there’s no point running two instances of fsck when you only have one hard disk), so hints as to which filesystem should be checked first would need to be used.

Systemd will require more SE Linux integration than any current init system. There is ongoing debate about whether init should load the SE Linux policy, Debian has init loading the policy while Fedora and Ubuntu have the initramfs do it. Systemd will have to assign the correct SE Linux context to Unix domain socket files and listening sockets for all the daemons that support it (which means that the policy will have to be changed to allow all domains to talk to init). It will also have to manage dbus communication in an appropriate way which includes SE Linux access controls on messages. These features mean that the amount of SE Linux specific code in systemd will dwarf that in sysvinit or Upstart – which among other things means that it really wouldn’t make sense to have an initramfs load the policy.

They have a qemu image prepared to demonstrate what systemd can do. I was disappointed that they prepared the image with SE Linux disabled. All I had to do to get it working correctly was to run the command “chcon -t init_exec_t /usr/local/sbin/systemd” and then configure GRUB to not use “selinux=0” on the kernel command line.

Another idea is to have systemd start up background processes for GUI systems such as KDE and GNOME. Faster startup for KDE and GNOME is a good thing, but I really hope that no-one wants to have process 1 manage this! Having one copy of systemd run as root with PID 1 to start daemons and another copy of the same executable run as non-root with a PID other than 1 to start user background processes is the current design which makes a lot of sense. But I expect that some misguided person will try to save some memory by combining two significantly different uses for process management.

Flash, Apple, and Linux

Steve Jobs has published an interesting article about Flash [1]. He criticises Flash for being proprietary, which seems a little hypocritical coming from Apple (which is Microsoft’s only real rival for the title of most proprietary computer company) but is in fact correct. Steve advocates HTML5, which is a better technical solution to a lot of the things that Flash does. He claims that Apple users aren’t missing out on much, but I think that sites such as Physics Games [2] demonstrate the benefits of Flash.

I think that Apple’s attack on Flash is generally a good thing. HTML5 web sites will work everywhere, which will be a good incentive for web designers to fix their sites. I also think that we want to deprecate Flash, but as it’s unfortunately popular it’s useful to have tools such as GNASH to use Flash based web sites with free software. Microsoft has belatedly tried to compete with Flash, but its Silverlight system and the free but patent encumbered Linux equivalent Moonlight have very little content to play and will probably disappear soon. As an aside, the relentless determination of GNOME people to force the MONO project (including Moonlight) on its users convinced me to remove GNOME from all systems that I run.

OS News has a good analysis of the MPEG-LA patents [3] which are designed to prevent anyone making any commercial use of H.264 – which includes putting such videos on sites that contain Google advertising! These patent terms are so horrible that they want to control video streams that were ever encoded with them, so you can’t even transcode a H.264 stream to an open format without potentially having the scum at MPEG-LA going after you. This is worth noting when examining Apple’s actions: they support MPEG patents and therefore seem happy to do anything that reduces the freedom of their customers. Apple’s 1984 commercial has been proven to be a lie; it’s Apple that wants to control our freedom.

Charles Stross makes some good points about the issues related to Apple and Flash [4]. He believes that it’s all part of an Apple push to cloud computing, and that Apple wants to own all our data at the back-end while providing a relatively reliable front-end (IE without all the anti-virus nonsense that is needed on the MS-Windows platform). Cloud computing is a good thing and I can’t wait for the Linux support for it to improve. I support a number of relatives who run Linux, and it would be a lot easier for me if they could have the primary storage for everything be on the cloud, so that I can do central backups of user data and they can use their own data while visiting each other. I think that a network filesystem that is similar in concept to offline-IMAP would be a really good thing. I know that there are some filesystems such as AFS and CODA that are designed for wide area network use with client-side caching, but as far as I am aware they aren’t designed for the type of operation that offline/caching IMAP supports.

Matt Brubeck has given a good status report of the work porting Firefox to Android [5]. He notes that the next version of Fennec (mobile Firefox) will have Electrolysis – the one-process-per-tab feature that was first implemented in Google Chrome [6]. I think that the development of Fennec and the one-process-per-tab feature are both great developments. Matt also says “One of my personal goals is to make Firefox compatible with more mobile sites, and to give web developers the tools and information they need to make their sites work great in mobile Firefox. I’ll write much more about this in future articles”. That sounds great; I look forward to the results of his coding and to reading his blog posts about it!

Lexmark Supposedly Supports Linux

I wanted to get a Lexmark Prestige Pro805 printer to work under Linux, due to bad drivers from Lexmark and no driver support in Debian/Unstable I’ve given up and advised the people who purchased it to return it for a refund. I recommend that Lexmark not be considered when purchasing a printer for use with Linux.

The box advertises the URL http://www.lexmark.com.au/prestige for downloading Linux drivers. The driver file is named lexmark-inkjet-09-driver-1.5-1.i386_ts.deb.sh.tar.gz, which leads anyone to expect a tar.gz archive of a shell archive of a Debian package. But that’s not what it is at all. In Lexmark-land deb is not the file name extension for a Debian package, just a random bit of text to identify a file that is somewhat related to Debian; given that the “Linux driver for Ubuntu/Debian Package Manager based distros” doesn’t use the string ubu in its name, a typical Linux user would reasonably believe that deb means a Debian package. Similarly the file named lexmark-inkjet-09-driver-1.5-1.i386_ts.rpm.sh.tar.gz and described as “Linux driver for RedHat Package Manager based distros” is not actually an RPM package or inherently for RPM based distros, it’s just a shar archive that is built and tested for some unspecified version of some Red Hat distribution (RHEL? Fedora? SUSE?).

Now when I execute lexmark-inkjet-09-driver-1.5-1.i386_ts.deb.sh on an AMD64 version of Linux it opens an X11 window, prompts for the root password, and then fails because an i386 Debian package that it somehow built couldn’t be installed. When I ran the shar archive with the options “--noexec --keep” and examined the files it contained, I found a few AMD64 executables – so obviously the software they used to create the installer has some support for AMD64, they just decided not to use it. It seems that the only way to buy an i386 system nowadays is to buy an embedded system or a netbook; all desktops, laptops, and servers run the AMD64 architecture. As most people do a Linux install that matches the full capabilities of their system (IE running AMD64 software on an AMD64 CPU), most systems sold in the last few years can’t be used with a new Lexmark printer without an unreasonable amount of work. Sure it is possible to set up a chroot environment or KVM virtual machine for running printer drivers, but I don’t really want to do that – and a significant portion of the potential customers don’t have the skill needed to do so.

While technically their claims about having Linux driver support are correct (they support some distributions of Linux on i386), the majority of new systems won’t work with it unless someone with good skills spends some time and effort on it. Probably the majority of Linux desktop and server systems in use today use AMD64 and are run by people who don’t know how to set up a chroot, so for most real installations it’s not supported. Even for i386 systems installation is unlikely to be trouble-free; when they support RPM based distributions (without identifying which of the many wildly different RPM systems they tested on) and Debian (without mentioning a version number), the incidence of people running a distribution that is supported is going to be quite low.

Lexmark uses the Linux logo to claim compatibility

Based on this experience I am not inclined to trust Lexmark in future, and I will not trust any claims of Linux support that they may make. The above picture of the Lexmark box shows Tux (the Linux logo); it doesn’t mean support out of the box as you would hope, but instead means support for old systems with some effort.

Is the NBN a Good Idea

Since writing my post about whether the National Broadband Network can ever break even [1] I’ve had a number of people try to convince me of its merit. Here is my summary and rebuttal of some of the arguments for the NBN:

The FUD

Claims are made that Australia may fall behind the rest of the world, children may be disadvantaged in their education, or many other bad things may happen unless faster net access is provided to everyone. That is FUD. If we are going to spend $43,000,000,000 then we should have some evidence that it will do some good.

One thing to note is that rural areas will not get anything near the 100Mb/s speeds that FTTH will deliver to people in the cities. So if faster net access is actually essential then it would probably make sense to start by delivering ADSL2+ speeds to rural areas – something that is not planned to be part of the NBN.

Some people claim that having slow net access (ADSL2+) is going to preclude unknown future uses of the Internet. However the NBN is only going to be 100Mb/s, so any such claim is essentially that “unknown things will happen in the future to make 24Mb/s too slow, but those unknown things won’t make 100Mb/s too slow”. I started using computer networks when a 1200/75 modem was considered fast. Over about the last 25 years typical net access speeds have increased from 1200b/s to about 12Mb/s, that’s a factor of 10,000 speed improvement! Now the people who believe that we only need to multiply the current speed by a factor of 4 to address all future needs could be correct, but it seems unlikely and doesn’t seem like a good idea for a $43,000,000,000 bet. If they were talking about 1Gb/s net access then things would be different in this regard.
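The growth factor quoted above is trivial arithmetic to check (the figures are the rough ones from the text, not measurements):

```shell
old=1200        # b/s, typical modem speed about 25 years earlier
new=12000000    # b/s, roughly 12Mb/s typical ADSL2+ throughput
echo $((new / old))   # growth factor over the period
```

Scaling current speeds by 10,000 rather than 4 is the scenario the 100Mb/s plan cannot cover.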

Technological Development

Some people compare the NBN to the Apollo Program and suggest that the scientific research involved in implementing a FTTH network might result in useful developments in other areas.

The Wikipedia page about Fiber to the premises by country indicates that Hong Kong had 1Gb/s available in 2006. It seems that a service which is rolled out 4 years later and 10 times slower than in Hong Kong is not going to involve a huge amount of research. Certainly nothing like the Apollo Program.

It would allow multiple HDTV channels to be viewed at the same time

According to the Australian Bureau of Statistics, in 2001 the average Australian household had 2.6 people (3.0 in NT) [2]. The incidence of every member of a household wanting to watch a different live TV channel at the same time is probably quite low; it seems that young people nowadays spend as much time watching Youtube as they do watching TV. Based on the Wikipedia bit-rate page it seems that an ADSL2+ link could stream two HDTV channels.
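A rough check of that two-channel claim, assuming about 10Mb/s per HDTV stream (an assumed mid-range figure consistent with the Wikipedia bit-rate page, not a measurement):

```shell
adsl=24    # Mb/s, theoretical ADSL2+ downstream
hdtv=10    # Mb/s per HDTV stream, an assumed figure
echo $((adsl / hdtv))   # concurrent HDTV streams that fit
```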

I could use fiber speeds today

There are some people who claim that they need faster speeds for their data transfers right now. The problem is that the latency of data transfer is a bottleneck in transfer rates. In some quick tests between servers that have reasonably fast connections to the Internet I was only able to get one international transfer in excess of ADSL2+ speed (24Mb/s or 3MB/s).
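The reason latency limits single-stream transfers is the TCP window: a connection can never move more than one window of data per round trip, so throughput is bounded by window size divided by round-trip time. A sketch with assumed figures (a 64KB unscaled window and a 300ms intercontinental round trip; real connections with window scaling do better):

```shell
window=65536   # bytes, an assumed unscaled TCP receive window
rtt_ms=300     # ms, an assumed Australia-to-Europe round trip
echo $((window * 1000 / rtt_ms / 1024))   # throughput ceiling in KB/s
```

Under those assumptions the ceiling is around 213KB/s regardless of the link speed, which is why a faster last mile doesn't help long-distance single downloads much.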

I was able to transfer data between London and Germany at a speed of 11MB/s – which was possibly limited by the 100baseT Ethernet connection in the data center in Germany. Now if the people who pay for that German server were to pay more then they would get a higher speed. So anyone who downloads anything from my web site in its current configuration would get a significant performance boost by using an NBN connection – if they live in Europe! But if they live in Australia then they will probably get a fraction of that speed (a quick test indicates that my ADSL2+ connection gives me about 500KB/s from my web server). I need to have at least three transfers going at the same time to get more than 600KB/s when downloading files from other countries, and it’s rare to have a single download run at a speed higher than 100KB/s.

It would be handy if I could download at higher speeds from my ISP’s mirror (which seems to be the only way someone in Australia can even use full ADSL2+ speeds). But it’s certainly not worth the $5,000 per household installation cost of the NBN to get that.

I’m sure that there are some people who really do have usage patterns that could take advantage of fiber net access, one possibility would be downloading large files from a local source such as a major web site that uses a CDN. It seems likely to me that the majority of people who fit this category are using major porn services.

Fiber would be good for Video-Conferencing

To be better than ADSL for video-conferencing an NBN connection would need a faster upload speed. Internode is already selling NBN connections for the trial areas [3]. The cheapest fiber plans that they offer are at the 50/2 speed, that’s 50Mb/s download and 2Mb/s upload – in theory ADSL2+ should have a higher upload speed. In my tests the best result I’ve got from sending files over my ADSL2+ link to another country is about 110KB/s (880Kb/s). The fact that the theoretical speed of a fiber connection is better than the measured speed of an ADSL2+ connection in this regard doesn’t mean much; let’s not assume that a fiber connection will get its theoretical maximum speed.
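For comparison, the 2Mb/s upload on that 50/2 plan works out to a best case of 250KB/s against the roughly 110KB/s measured over ADSL2+ (treating 1Mb as 1,000Kb for simplicity, and comparing a theoretical figure to a measured one):

```shell
fiber_up=2000           # Kb/s, the 50/2 plan's upload speed
echo $((fiber_up / 8))  # best-case fiber upload in KB/s
measured=110            # KB/s, the measured ADSL2+ upload from the text
echo $((measured * 8))  # the same measurement expressed in Kb/s
```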

Not that you necessarily need higher speeds for video-conferencing; Youtube is one of many services that uses a lot less bandwidth than the upload speed of an ADSL2+ connection. Also video-calls, which are supported on most 3G mobile phones, use even less bandwidth again.

I want an NBN connection to run a server at home

The fastest connection for uploading that Internode offers to “home users” is the “Home High” plan at 100/8 speed, that is maybe a bit more than twice as fast for uploading as ADSL2+. They also offer a SOHO plan that supports 16Mb/s upload speed (at extra expense) and suggest that customers who want higher speeds contact their business sales department. But they include both sending and receiving data in the bandwidth quota for the fiber connections. Transmitting data at 16Mb/s isn’t that great for a server.

The cheapest virtual server plan on offer from Linode [4] includes 200GB of data transfer per month and has significantly higher transmission speeds. You could get a Linode virtual server plus an Internode ADSL2+ connection for about the same price as an Internode fiber connection to the home.

There are two down-sides to virtual servers, one is that they are limited in the amount of RAM that they have (I can easily afford to buy 8G of RAM for a home system but renting an 8G virtual server is going to be expensive) and the other is that the storage is limited. Shared storage on virtual servers can be slow and is limited in size. If you need to run a server with a few terabytes of data storage (which is cheap on commodity SATA disks but expensive on server-grade disks) and you don’t need to transfer much of it then a home server on the NBN might do well. Otherwise it’s probably not going to work well for server use.

The NBN will avoid people leaving their PC on to do downloads and save electricity

To save electricity you would have to have a significant incidence of situations where a download can complete fast enough over the NBN to allow the user to turn their PC off before going to bed but be slow enough over ADSL to require that it be left on overnight. That would probably only apply to downloads from a CDN or from a local ISP mirror. From the Internode mirror I can download a test file at a speed of 850KB/s (I guess this means that my ADSL connection is not delivering full speed – I suspect poor quality wiring in my home and would try to fix it if the current speed was too slow). In 5 minutes spent brushing my teeth I could download 250M of data; in the same time, if I had a 100/4 connection on the NBN, I might be able to download almost 3G of data. So in the unlikely event that I wanted to download a CD or DVD image and turn off my PC immediately before going to bed then the NBN would be a good thing.
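Those figures work out roughly as follows (the NBN side uses the theoretical line rate and ignores protocol overhead, so real numbers would be somewhat lower):

```shell
secs=300                          # five minutes
adsl=850                          # KB/s, measured from the ISP mirror
nbn=$((100000000 / 8 / 1000))     # 100Mb/s expressed in KB/s
echo $((adsl * secs / 1000)) MB   # downloaded over ADSL in that time
echo $((nbn * secs / 1000)) MB    # at the full theoretical NBN rate
```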

But then of course I would want to burn the CD or DVD image to disc and that would take long enough that I would leave it on overnight…

My ADSL connection gives really low speeds or I’m out of range for ADSL

Some people who live only a short distance from an exchange are unable to get ADSL2+. Some people live a long way from exchanges and are outside ADSL range. The ideal solution to these problems is not to provide fiber access to the majority of the population; it is to provide ADSL to everyone who is near an exchange and maybe provide fiber access to some people who are a long way from exchanges.

I find it rather ironic that some people in the country are essentially saying “because net access in the country is so slow we need fiber in the cities”. The NBN is not going to give fiber to rural areas, satellite is one of the options that will be used.

Really High Speeds

I have been informed that to get the distances needed for the NBN they have to use Single Mode Fiber, this permits scaling up to higher speeds at some later time by changing the hardware at the end points. So we could end up with a Hong Kong speed network at some future time with the same fibers. This is a good thing.

But I don’t think that we need to get fiber installed right now so that we can use 100Mb/s; we could wait until there is enough need and then get the faster transmission rates from the start. At the moment it’s just a waste of money.

Upgrading a SE Linux system to Debian/Testing (Squeeze)

Upgrade Requirements

Debian/Squeeze (the next release of Debian) will be released some time later this year. Many people are already upgrading test servers, and development systems and workstations that are used to develop code that will be deployed next year. Also there are some significant new features in Squeeze that compel some people to upgrade production systems now (such as a newer version of KVM and Ext4 support).

I’ve started working on an upgrade plan for SE Linux. The first thing you want when upgrading between releases is a way of booting a new kernel independently of the other parts of the upgrade: either running the new kernel with the old user-space or the old kernel with the new user-space. It’s not that uncommon for a new kernel to have a problem when under load, so it’s best to be able to back out of a kernel upgrade temporarily while trying to find the cause of the problem. For workstations and laptops it’s not uncommon for a kernel upgrade to not immediately work with some old hardware; this can usually be worked around without much effort, but it’s good to be able to keep systems running while waiting for a response to a support request.

Running a Testing/Unstable kernel with Lenny Policy

deb http://www.coker.com.au lenny selinux

In Lenny the version of selinux-policy-default is 2:0.0.20080702-6. In the above APT repository I have version 2:0.0.20080702-18 which is needed if you want to run a 2.6.32 kernel. The main problem with the older policy is that the devtmpfs filesystem that is used by the kernel for /dev in the early stages of booting [1] is not known and therefore unlabeled – so most access to /dev is denied and booting fails. So before upgrading to testing or unstable it’s a really good idea to install the selinux-policy-default package from my Lenny repository and then run “selinux-policy-upgrade” to apply the new changes (by default upgrading the selinux-policy-default package doesn’t change the policy that is running – we consider the running policy to be configuration files that are not changed unless the user requests it).

There are also some other kernel changes which require policy changes such as a change to the way that access controls are applied to programs that trigger module load requests.

Upgrading to the Testing/Unstable Policy

While some details of the policy are not yet finalised and there are some significant bugs remaining (in terms of usability not security) the policy in Unstable is usable. There is no need to rush an upgrade of the policy, so at this stage the policy in Unstable and Testing is more for testers than for serious production use.

But when you upgrade, one thing you need to keep in mind is that we don’t support upgrading the SE Linux policy between different major versions of Debian while in multi-user mode. The minimum requirement is that after the new policy package is installed you run the following commands and then reboot:

setenforce 0
selinux-policy-upgrade
touch /.autorelabel

If achieving your security goals requires running SE Linux in enforcing mode all the time then you need to do this in single-user mode.

The changes to names of domains and labeling of files that are entry-points for domains are significant enough that it’s not practical to try to prove that all intermediate states of partial labeling are safe and that there are suitable aliases for all domains. Given that you need to reboot to install a new kernel anyway, the reboot for upgrading the SE Linux policy shouldn’t be that much of an inconvenience. The relabel process on the first boot will take some time though.

Running a Lenny kernel with Testing/Unstable Policy

In the original design SE Linux didn’t check open as a separate operation, only read/write etc. The reason for this is that the goal for SE Linux was to control information flows. The open() system call doesn’t transfer any data so there was no need to restrict access to it as a separate operation (but if you couldn’t read or write a file then an attempt to open it would fail). Recent versions of the SE Linux policy have added support for controlling file open, the reason for this is to allow a program in domain A to open a file and then let a program in domain B inherit the file handle and continue using the file even if it is not normally permitted to open the file – this matches the Unix semantics where a privileged process can allow an unprivileged child to inherit file handles or use Unix domain sockets to pass file handles to another process with different privileges.
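In policy terms the new check is just an extra permission on the file classes. A hypothetical pair of rules illustrating the split (the domain and type names here are made up for illustration, not taken from the Debian policy):

```
# domain_a_t may open the file itself
allow domain_a_t data_file_t:file { open read write };
# domain_b_t may only read/write via a descriptor inherited
# or passed from domain_a_t -- note the absence of open
allow domain_b_t data_file_t:file { read write };
```

This is what lets SE Linux model the Unix pattern of a privileged parent opening a file and handing the descriptor to a less privileged child.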

SELinux: WARNING: inside open_file_mask_to_av with unknown mode:c1b6

Unfortunately when support was added for this a bug was discovered in the kernel, this post to the SE Linux mailing list has the conclusion to a discussion about it [2]. The symptom of this problem is messages such as the above appearing in your kernel message log. I am not planning to build a kernel package for Lenny with a fix for this bug.

The command “dmesg -n 1” will prevent such messages from going to the system console – which is something you want to do if you plan to login at the console as they can occur often.

Creating a Micro Conference

The TEDxVolcano

The TED conference franchise has been extended to TEDxVolcano [1], a small conference that features people who are stranded by the Eyjafjallajökull volcano in Iceland. As usual TED is an inspiration to us all, so there is obvious potential for other conferences to be organised in similar situations – there’s no reason why a free software conference can’t be organised in Europe right now!

What You Need to run a Conference

If a conference will have limited attendance (EG due to a volcano preventing anyone from flying to the area) then filming everything is very important. I’ve seen adverts for digital cameras that support “Full HD” resolution (1920*1080) for as little as $AU400. $AU600 will get you a “digital camcorder” that does Full HD, which will offer some benefits for recording long movies (such as the ability to store the video on an external hard drive). If I was stuck in a foreign hotel with not much to do then I would be prepared to buy a digital camera or camcorder for the purpose of running such a conference (my current digital camera is 5.1MP and only has 3* optical zoom; it’s a nice camera but I could do with something better). A tripod can cost up to $100, but I recently bought myself a 15cm tall tripod for $10 – that would do at a pinch. Once you have high quality video you can easily upload it to something like Blip.TV. Of course you get a better result if you do some post-production work to merge images of the slides for the lecture into the video, but that is a lot of work and probably requires a camera that outputs uncompressed video for best results.

The next issue is getting a venue. Different hotels cater for different parts of the market: some cater to tourists, some to business travel, some to conferences. If you want a venue at short notice you may be able to get a good deal if you find a hotel that is adversely affected – for example I’m sure that there are some quite empty conference hotels in Europe right now, while the tourist hotels are probably reasonably busy (why not do some tourism if you are stuck?). I expect that hotels really don’t want to have empty conference rooms and are prepared to offer good deals for bookings at short notice. Of course you would want to ensure that rooms at that hotel aren’t too expensive, as some delegates will want to stay in the hotel which hosts the conference.

The minimal staffing for a micro conference is probably two people: one to take payment, direct people, etc, and the other to film the lectures and moderate panel discussions. Rumor has it that attending without paying is a problem at conferences; for conferences that are planned in advance corporations will try to send multiple employees on the one ticket and have them share a name-tag. One issue with this is that there is a fixed quantity of food supplied, so if extra people appear then everyone who paid gets less; another is that people who pay really hate to see freeloaders. The best reference I’ve found for people not paying at conferences is Jon Oxer’s description of how Leslie Cachia of Letac Drafting Services brazenly stole a book from him [2].

Name-tags are needed for any meeting with more than about 15 people. I’m not sure how to get proper name-tags (ones that pin on to clothing and have printed names) – maybe the bigger hotels can add this to the conference package. But a roll of sticky labels from an office supply store is pretty cheap.

Costs in Wellington

Along with a few other people I considered running a small security conference immediately before or after LCA 2010. That ended up not happening, but I will consider doing it in future. The general plan was to get a hotel to provide a meeting room for 10-30 people (we had no real idea of the demand).

When investigating the possibilities for running a conference in Wellington I discovered that the hotel fees for a conference room can either be a fixed fee for the room plus additional expenses for each item, or a fixed rate per person. It seemed that there was the potential to save a small amount of money by paying the fixed fees and avoiding some payments for things like tea/coffee service. But the amount that could be saved would be small and it would take extra effort to manage – saving $5 per person is a good thing if you have 600 delegates, but if you have 30 then it’s probably a waste of time. So it seemed best to go for one of the packages: you tell the hotel what time you want the lunch and snack breaks and how you want the tables arranged and they just do everything. The cost for this seemed to be in the range of $nz35 to $nz55 per delegate per day. There is some flexibility in room arrangement, so a room that seats 12 people in the “board-room” layout (tables in a rectangle facing the center) would fit 25 in the “classroom” layout (tables all facing the front) or 50 in the “theater” layout (chairs facing the front with no tables). So the hotel could accommodate changes in the size at relatively short notice (whatever their notice period for buying the food is).

The cost for a catered conference dinner seemed to be about $nz45 per diner. In many cases it would be possible to get a meal that is cheaper, better, or both by going somewhere else, but that wastes time and effort. So that gave an overall conference cost of about $nz135 per delegate for a two day conference with a dinner at the end of the first day. Given that the cheapest budget rate from Wotif.com for a 3 star hotel in Wellington is currently $nz85 per night, the minimum accommodation cost would be $nz170, which makes $nz135 for a two day conference including dinner seem pretty cheap. Also note that the hotels which I considered for hosting the conference had rates for their hotel rooms that were significantly greater than $nz85 per night.
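The arithmetic behind those figures can be sketched in shell, taking the midpoint of the quoted $nz35-$nz55 range as an assumed day rate:

```shell
#!/bin/sh
# Assumed per-delegate figures from the Wellington quotes.
DAY_RATE=45    # midpoint of the $nz35-$nz55 per day range
DINNER=45      # catered conference dinner
NIGHTLY=85     # cheapest 3 star hotel rate per night
DAYS=2

CONF_COST=$((DAY_RATE * DAYS + DINNER))
ACCOM=$((NIGHTLY * DAYS))

echo "conference cost: \$nz$CONF_COST"
echo "minimum accommodation: \$nz$ACCOM"
```

So the conference itself comes in below the cost of the delegate’s hotel room for the same period.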

The hotels all offer other services such as catered “cocktail parties”; these would be good things for any company that wants to sponsor the conference.

Different cities can have vastly different prices for hotels. But I expect that the way conference rooms are booked and managed is similar world-wide, and that the ratio of conference costs to hotel booking fees is also similar. Most of the hotels that cater to conferences seem to be owned by multi-national corporations.

It would probably make sense to charge delegates an extra $10 or $15 above the cost of running the conference to cover unexpected expenses. Of course it’s difficult to balance wanting to charge a low rate to attract more people with wanting to avoid the risk of a financial loss.

Conclusion

The hard part is getting speakers. If you can get speakers and panel participants who can fill the time slots and have interesting things to say then all the other parts of organising a micro conference should be relatively easy.

When the cost is less than $150 per delegate then a syndicate of a few people can easily agree to split the loss if the number of delegates turns out to be smaller than expected, a potential loss of $2000 shared among a few people shouldn’t be a huge problem. Also if the conference is booked at short notice (EG because of a volcano) then the hotel shouldn’t require any deposit for anything other than the food which is specially ordered (IE not the tea, coffee, etc) – that limits the potential loss to something well under $100 per delegate who doesn’t attend.

Anyone who has enough dedication to a topic to consider running a conference should be prepared to risk a small financial loss. But based on my past observations of the generosity of conference delegates I’m sure that if at the conference closing the organiser said “unfortunately this conference cost more than the money you paid me, could you please put something in this hat on the way out” then the response would be quite positive.

Note that I am strictly considering non-profit conferences. If you want to make money by running a conference then most things are different.


ATI ES1000 Video on Debian/Squeeze

The Problem

I’ve just upgraded my Dell PowerEdge T105 [1] from Debian/Lenny to Debian/Squeeze. Unfortunately the result of the upgrade was that everything in an X display looked very green while the console display looked the way it usually did.

I asked for advice on the LUV mailing list [2] and got a lot of good advice, much of it from Daniel Pittman.

The first suggestion was to check the gamma levels: the program xgamma displays the relative levels of Red, Green, and Blue (the primary colors for monitors), where all of them are usually expected to have the value 1.0. This turned out not to be the problem, but it’s worth noting for future instances of such problems. It’s also worth noting the potential use of this to correct problems with display hardware – I’ve had two Thinkpads turn red towards the end of their lives due to display hardware problems, and I now realise I could have worked around the problem with xgamma.

He also suggested that it might be ICC: the command “xprop -root | grep -i icc” would display something if that was the case. I’m still not sure what ICC is about, but I know it’s not set on my system.

The next suggestion was to use the VESA display driver to discover whether it was a bug in the ATI driver. The VESA driver did solve the problem, and I was tempted to continue using it until I realised that the VESA driver has a maximum resolution of 1280*1024, which isn’t suitable for a 1680*1050 resolution display.

After reviewing my Xorg configuration file Daniel noted that my frame buffer depth of 16 bits per pixel is regarded as unusual by today’s standards and probably isn’t tested well. As 24bpp is generally implemented with 32 bits for each pixel, it takes twice the frame-buffer storage (both in the X server and in some applications) as well as twice the memory bandwidth to send data around, so I generally use 16bpp on my systems to make them run a little faster.

(II) RADEON(0): Not using mode "1680x1050" (mode requires too much memory bandwidth)

I tried using a depth of 24bpp and then saw messages such as the above in /var/log/Xorg.0.log. It seems that the display hardware in my ATI ES1000 (the on-motherboard video card in the Dell server) doesn’t have the memory bandwidth to support 1680*1050 at 24bpp. I tried using the gtf utility to generate new mode lines, but it seems that there is no 24bpp mode with a vertical refresh rate low enough to not exhaust memory bandwidth but high enough for the monitor to get a signal lock.
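A rough calculation shows why the bandwidth runs out – just scanning out 1680*1050 at a 60Hz refresh rate (ignoring blanking intervals) needs about twice as much memory bandwidth at 32 bits per pixel as at 16:

```shell
#!/bin/sh
# Approximate scanout bandwidth in bytes per second at 60Hz refresh.
PIXELS=$((1680 * 1050))
echo "24bpp (stored as 32 bits/pixel): $((PIXELS * 4 * 60)) bytes/sec"
echo "15 or 16bpp (16 bits/pixel):     $((PIXELS * 2 * 60)) bytes/sec"
```

That is roughly 423MB/s versus 212MB/s for refresh alone, and drawing operations compete for the same memory bus, which fits the “too much memory bandwidth” message.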

The Solution

My current solution is to use 15bpp mode, which gives almost the same quality as 16bpp and uses the same small amount of memory bandwidth; it seems that 15bpp doesn’t trigger the display driver bug. Of course one down-side to this is that the default KDE4 desktop background in Debian seems perfectly optimised to make 15bpp modes look ugly – it has a range of shades of blue that look chunky.
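For reference, selecting 15bpp is done in the Screen section of xorg.conf; a minimal sketch, where the Identifier and Device values are placeholders that need to match the rest of the file and “1680x1050” assumes a matching mode is available:

```
Section "Screen"
        Identifier      "Default Screen"
        Device          "ES1000"
        DefaultDepth    15
        SubSection "Display"
                Depth   15
                Modes   "1680x1050"
        EndSubSection
EndSection
```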

What I really want to do is to get a better video card. Among other things I want to get a 1920*1080 resolution monitor in the near future; Dell is selling such monitors at very low prices and there are a bunch of affordable digital cameras that record video at that resolution. Even if I can get the ES1000 to work at 1920*1080 resolution it won’t support playing Full HD resolution video – I can barely play Youtube videos with it!

I’ve previously described my experience with the awful Computers and Parts Land (CPL) store [3] where they insisted that a PCIe*16 graphics card would work in my PCIe*8 system and then claimed to be doing me a favor by giving me a credit note for the full value (not a refund). This convinced me to not bother trying to buy such a card for the past year. But now it seems that I will be forced to buy one.

What I Want to Buy

I want a PCIe*8 video card that supports 1920*1080 resolution at 24bpp and has good enough performance with free Linux drivers to support Full HD video playback. Also PCIe*4 would do, and I’m prepared to compromise on support of Full HD video. Basically anything better than the ES1000 will do.

Does anyone know how I can buy such a card? I would prefer an ATI card but will take an NVidia if necessary.

Note that I have no plans to cut one of the PCIe sockets on my motherboard (it’s an expensive system and I’m not going to risk breaking it). I will consider cutting the excess pins off a video card as a last resort. But I would rather just buy a PCIe*8 video card.

Note that I am not going to pay cash in advance to a random person who reads my blog. Anyone who wants to sell me a second-hand card must either have a good reputation in the Linux community or ship the card to me on the condition that I pay after it passes the tests.

Update: The first version of this post said that I upgraded TO Lenny, not FROM it.


Debian/Testing and KDE4

I’ve just upgraded my Thinkpad (which I use for most of my work) to Debian/testing with KDE4.

Improvements

KDE 3.5 (from Debian/Lenny) didn’t properly display the applets in a vertical task bar. I want a vertical task bar because my screen resolution is 1680*1050 and I find that a less rectangular screen workspace is best for my usage patterns.

In my previous post about my Thinkpad T61 I described how the sound controls weren’t working [1]. These problems were fixed as part of the upgrade, KDE just does the right thing. Now when I press the buttons to increase or decrease the volume the ALSA settings are changed and a small window is briefly displayed in the center of the screen to show the new volume.

Sounds are now made when I plug or unplug the power cable, this was configured in KDE 3.5 but just didn’t work.

Problems

If I have a maximised Konqueror window and I use the middle mouse button to open a link in a new window then the new window will also be maximised; previously the new window was not maximised. What sometimes happens is that I want to open several links from a web page in different windows, and if they open in non-maximised windows then I can click the title-bar or the bottom status-bar of the parent window to get it in the foreground again. Probably the ideal solution for this use-case would be to configure the middle mouse button to open new windows in the background or minimised.

I can’t figure out how to implement accelerator keys for window controls. In particular I like to use ALT-F9 to minimise a window (the CUA89 standard). The upgrade from KDE 3.5 to KDE 4 lost this and I can’t get it back.

I want to have an icon on my panel to launch a Konqueror session. I don’t want a large amount of space taken up by a launcher for several different Konqueror options, I just want a regular Konqueror for web browsing available at a single click. There didn’t seem to be an option for this. KDE 3.5 has an option in the “add widgets to toolbar” dialogue to add icons for applications. I have just discovered that in KDE 4 the only way to do this is to go through the menu structure and then click the secondary mouse button. Having two ways to do something is often a good thing, particularly when one of them is the way that was most obvious in the previous version!

It was annoying that the font choices for my Konsole session were lost in the KDE 4 upgrade – it’s not a complex setting. Also the option to resize a Konsole session to a common size (such as 80*25) seems to have been lost.

I had to spend at least 30 minutes configuring kmail to get it to display mail in much the same manner as it used to. You have to use the “Select View Appearance (Theme)” icon at the right of the “Search” box and select “Classic”, then go to “Select Aggregation Mode” (immediately to the left) and select “Flat Date View”. I’m happy for KDE 4 to default to new exciting things when run the first time, but when upgrading from KDE 3.5 it should try to act like KDE 3.5.

I decided to use Kopete for Jabber, just to preempt the GNOME people adding Mono support to Pidgin. I had to install the libqca2-plugin-ossl and qca-tls packages to enable SSL connections; missing either of them gives you an incomprehensible error condition that even strace doesn’t clarify much. Given that it’s generally agreed that sending passwords unencrypted over the Internet is a bad idea, and that it’s a configuration option in Jabber servers to reject non-SSL connections, it seems to me that the Kopete package should depend on the packages that are needed for SSL support. Failing that it would be good to have Kopete display big visible warnings when you don’t have them.

I use the KDE 2 theme, and the right side of the title bar of each window has a strange dappled pattern. I’m not sure why, and I have more important problems to fix.

Parts of KDE crash too often. I’ll start filing bug reports soon.

The management of the Desktop folder has changed. In previous versions of KDE the directory ~/Desktop had its contents displayed in iconic form on the root window; now by default it doesn’t do that. It is possible to change it, but this is one of those things where the default in the case of an upgrade should be to act like previous versions. To enable the previous functionality go to the desktop settings (click the secondary mouse button on the background and select “Desktop Settings”), then under “Desktop Activity” change the “Type:” to the value “Folder View” and specify the directory below.

The facility to have different background colors or pictures for each of the virtual desktops seems to have been removed – either that or the KDE configuration system doesn’t have enough functionality to let me discover how to configure it.

When the panel that I have on the left of the screen crashes, everything that was next to the panel gets dragged to the left, including extending the width of maximised windows. Then when the panel starts again (which, if you are lucky, happens automatically) it pushes things back, and if icons had been moved left it just obscures them.

When using Konqueror to browse a directory full of pictures it doesn’t generate thumbnail icons. When I middle-click on an icon for a picture it is opened with Konqueror, not the image viewer that was used in KDE 3.5. The image viewer from KDE 3.5 had fewer options and therefore more screen space was used for the picture. Also the Konqueror window that is opened for this has a navigator panel at the left which I can’t permanently remove.

When I use Konqueror my common action is to perform a Google search and then use the middle button to open a search result in a new window. Most of my Google searches return pages that have more than one screen-full of data, so shortly after opening a window with a search result I press PgDn to see the next page. That press of PgDn for some reason takes me back to the Google search. It seems that when a web page is opened in a new window the keyboard focus is in the URL entry field, and pressing PgDn in that field takes you to the previous web page. This combination is really annoying for me.

Conclusion

Getting the sound working correctly is a great feature! Lots of little things are fancier and generally the upgrade is a benefit. The lack of thumbnails when displaying a folder of JPG files is really annoying though.

The time taken to configure things is also annoying. I support four relatives who are just users, so that probably means at least an hour of configuration work and training for each one – KDE 4 is going to cost me at least half a day because of this.


Web Server Performance

We Have to Make Our Servers Faster

Google have just announced that they have made site speed part of their ranking criteria for search results [1]. This means that we now need to put a lot of effort into making our servers run faster.

I’ve just been using the Page Speed Firefox Plugin [2] (which incidentally requires the Firebug Firefox Plugin [3]) to test my blog.

Image Size

One thing that Page Speed recommends is to specify the width and height of images in the img tag so the browser doesn’t have to change the layout of the window every time it loads a picture. The following script generates the HTML that I’m now using for my blog posts. I run “BASE=http://www.coker.com.au/blogpics/2010 jpeg.sh foo.jpg bar.jpg” and it generates HTML code that merely needs the data for the alt tag to be added. Note that this script relies on a naming scheme where files like foo-big.jpg have the maximum resolution and foo.jpg has the small version. Anyone with some shell coding skills can change this of course, but I expect that some people will want to change the naming scheme that they use for new pictures.

#!/bin/bash
set -e
while [ "$1" != "" ]; do
  # identify -format prints just the image dimensions, EG "1680x1050"
  RES=$(identify -format '%wx%h' "$1")
  WIDTH=$(echo "$RES" | cut -f1 -dx)
  HEIGHT=$(echo "$RES" | cut -f2 -dx)
  # escape the dot and anchor at the end so only the extension matches
  BIG=$(echo "$1" | sed -e 's/\.jpg$/-big.jpg/')
  echo "<a href=\"$BASE/$BIG\"><img src=\"$BASE/$1\" width=\"$WIDTH\" height=\"$HEIGHT\" alt=\"\" /></a>"
  shift
done

Thanks to Brett Pemberton for the tip about using identify from imagemagick to discover the resolution.

Apache and Cache Expiry

Page Speed complained that my static URLs didn’t specify a cache expiry time. This didn’t affect things for my own system, as my Squid server forcibly caches some things without being told to, but it would be a problem for some others. I first ran the command “a2enmod expires ; a2enmod headers” to enable the expires and headers Apache modules. Then I created a file named /etc/apache2/conf.d/expires with the following contents:

ExpiresActive On
ExpiresDefault "access plus 1 day"
ExpiresByType image/gif "access plus 1 month"
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType text/css "access plus 1 day"
# Set up caching on media files for 1 year (forever?)
<FilesMatch "\.(flv|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav)$">
ExpiresDefault "access plus 1 year"
Header append Cache-Control "public"
</FilesMatch>
# Set up caching on media files for 1 month
<FilesMatch "\.(gif|jpg|jpeg|png|swf)$">
ExpiresDefault "access plus 1 month"
Header append Cache-Control "public"
</FilesMatch>

DNS Lookups

Page Speed complains about DNS names that are used for only one URL. One example of this was the Octofinder service [4], a service to find blogs based on tags; I don’t seem to get any traffic from it so I just turned it off. Having a single URL from their web site was the only sensible option in this case, but I had been considering removing the Octofinder link for a while anyway. As an aside, I will be interested to see if there are comments from anyone who has found Octofinder to be useful.

I’ve also disabled the widget that used to display my score from Technorati.com. It wasn’t doing what it used to do, the facility of allowing someone to list my blog as a favorite didn’t seem to provide any benefit, and it was costing extra DNS lookups and data transfers. I might put something from Technorati on my blog again in future as they used to be useful.

Cookies

If you have static content (such as images) on a server that uses cookies then the cookie data is sent with every request; this requires transferring more data and breaks caching. So I modified the style-sheet for my theme to reference icons on a different web server, which should save about 4K of data transfer per page load while also giving better caching.
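The style-sheet change amounts to replacing relative image URLs with absolute URLs pointing at a host that never sets cookies; a sketch, where the host name and theme path are hypothetical:

```
/* before: icons come from the blog host, so the WordPress
   cookies are sent with every request for them */
.widget { background-image: url(/wp-content/themes/mytheme/img/icon.png); }

/* after: a cookie-free static host, so requests are smaller
   and the responses cache cleanly */
.widget { background-image: url(http://static.example.com/mytheme/img/icon.png); }
```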

The down-side of this is that I have my static content on a different virtual server, so updating my WordPress theme will now require updating two servers. This isn’t a problem for the theme (which doesn’t get updated often) but will be a problem if I do it with plugins.

Conclusion

The end result is that my blog now gets a rating of 95% from Page Speed when previously it got a rating of 82%. Now most of the remaining resources flagged by Page Speed come from Google itself, although there is still work for me to do.

Also it seems that Australia is now generally unsuitable for hosting web sites for viewing in other countries. I will advise all my clients who do international business to consider hosting in the US or the EU.