systemd – a Replacement for init etc

The systemd project is an interesting concept for replacing init and related code [1]. There have been a few attempts to replace the old init system; Upstart is gaining market share among Linux distributions and Solaris has made some interesting changes too.

But systemd is more radical and offers more benefits. While it’s nice to be able to start multiple daemons in parallel according to their dependencies, and doing so improves boot times on some systems, that alone doesn’t lead to optimal boot times or necessarily correct behavior.

Systemd is designed around a concept similar to the wait option in inetd: the service manager (formerly inetd, now the init that comes with systemd) binds to the TCP, UDP, and Unix sockets and then starts daemons when needed. It can apparently start daemons on demand, which means you don’t have a daemon running for months without serving a single request. It also implements some functionality similar to automount, which means a daemon can be started before a filesystem that it might need has been fscked.

This means that a large part of the boot process could be performed in reverse. The current process is to run fsck on all filesystems, mount them, run back-end server processes such as database servers, and then run servers that need back-end services (EG a web server using a database server). The systemd way would be for process 1 to listen on port 80 and then start the web server when a connection is established to port 80, start the database server when a connection is made to the Unix domain socket, and mount the filesystem when the database server tries to access its files.
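To make that concrete, here is a minimal sketch of how socket activation is expressed in systemd unit files; the “mywebserver” name and path are hypothetical and the daemon has to support being handed a listening socket by systemd.

cat > /etc/systemd/system/mywebserver.socket << 'EOF'
[Socket]
ListenStream=80
EOF

cat > /etc/systemd/system/mywebserver.service << 'EOF'
[Service]
ExecStart=/usr/sbin/mywebserver
EOF

systemctl daemon-reload
systemctl start mywebserver.socket
# PID 1 now owns port 80 and only starts the web server when the first connection arrives.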

Now it wouldn’t be a good idea to start all services on demand. Fsck can take hours on some filesystems and is never quick at the best of times. Starting a major daemon such as a database server can also take some time. So a daemon that is known to be necessary for normal operation and which takes some time to start could be started before a request comes in. As fsck is not only slow but usually has little scope for parallelisation (EG there’s no point running two instances of fsck when you only have one hard disk), hints as to which filesystem should be checked first would also be needed.

Systemd will require more SE Linux integration than any current init system. There is ongoing debate about whether init should load the SE Linux policy, Debian has init loading the policy while Fedora and Ubuntu have the initramfs do it. Systemd will have to assign the correct SE Linux context to Unix domain socket files and listening sockets for all the daemons that support it (which means that the policy will have to be changed to allow all domains to talk to init). It will also have to manage dbus communication in an appropriate way which includes SE Linux access controls on messages. These features mean that the amount of SE Linux specific code in systemd will dwarf that in sysvinit or Upstart – which among other things means that it really wouldn’t make sense to have an initramfs load the policy.

They have a qemu image prepared to demonstrate what systemd can do. I was disappointed that they prepared the image with SE Linux disabled. All I had to do to get it working correctly was to run the command “chcon -t init_exec_t /usr/local/sbin/systemd” and then configure GRUB to not use “selinux=0” on the kernel command line.
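For reference, the fix amounts to the following (the chcon command is the one quoted above):

chcon -t init_exec_t /usr/local/sbin/systemd   # label systemd as an init entry-point so it runs in the init_t domain at boot
# then edit the GRUB configuration so that "selinux=0" is no longer on the kernel command line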

Another idea is to have systemd start up background processes for GUI environments such as KDE and GNOME. Faster startup for KDE and GNOME is a good thing, but I really hope that no-one wants to have process 1 manage this! The current design, which makes a lot of sense, is to have one copy of systemd run as root with PID 1 to start daemons and another copy of the same executable run as non-root with a PID other than 1 to start user background processes. But I expect that some misguided person will try to save some memory by combining two significantly different uses of process management.

Flash, Apple, and Linux

Steve Jobs has published an interesting article about Flash [1]. He criticises Flash for being proprietary; this seems a little hypocritical coming from Apple (Microsoft’s only real competitor for the title of most proprietary computer company) but is in fact correct. Steve advocates HTML5, which is a better technical solution to a lot of the things that Flash does. He claims that Apple users aren’t missing out on much, but I think that sites such as Physics Games [2] demonstrate the benefits of Flash.

I think that Apple’s attack on Flash is generally a good thing. HTML5 web sites will work everywhere, which will be a good incentive for web designers to fix their sites. I also think that we want to deprecate Flash, but as it’s unfortunately popular it’s useful to have tools such as Gnash for using Flash based web sites with free software. Microsoft has belatedly tried to compete with Flash, but its Silverlight system and the free but patent encumbered Linux equivalent Moonlight have very little content to play and will probably disappear soon. As an aside, the relentless determination of the GNOME people to force the Mono project (including Moonlight) on its users convinced me to remove GNOME from all systems that I run.

OS News has a good analysis of the MPEG-LA patents [3], which are designed to prevent anyone making any commercial use of H.264 – and that includes putting such videos on sites that contain Google advertising! These patent terms are so horrible that they claim control over any video stream that was ever encoded with the patented methods, so you can’t even transcode an H.264 stream to an open format without potentially having the scum at MPEG-LA going after you. This is worth noting when examining Apple’s actions: they support the MPEG patents and therefore seem happy to do anything that reduces the freedom of their customers. Apple’s 1984 commercial has been proven to be a lie; it’s Apple that wants to control our freedom.

Charles Stross makes some good points about the issues related to Apple and Flash [4]. He believes that it’s all part of an Apple push to cloud computing and that Apple wants to own all our data at the back-end while providing a relatively reliable front-end (IE without all the anti-virus nonsense that is needed on the MS-Windows platform). Cloud computing is a good thing and I can’t wait for the Linux support for it to improve. I support a number of relatives who run Linux, and it would be a lot easier for me if the primary storage for everything was in the cloud so that I could do central backups of user data and they could use their own data while visiting each other. I think that a network filesystem that is similar in concept to offline IMAP would be a really good thing. I know that there are some filesystems such as AFS and Coda that are designed for wide area network use with client-side caching, but as far as I am aware they aren’t designed for the type of operation that offline/caching IMAP supports.

Matt Brubeck has given a good status report on the work porting Firefox to Android [5]. He notes that the next version of Fennec (mobile Firefox) will have Electrolysis – the Firefox one-process-per-tab feature that was first implemented in Google Chrome [6]. I think that the development of Fennec and the one-process-per-tab feature are both great developments. Matt also says “One of my personal goals is to make Firefox compatible with more mobile sites, and to give web developers the tools and information they need to make their sites work great in mobile Firefox. I’ll write much more about this in future articles”. That sounds great; I look forward to the results of his coding and to reading his blog posts about it!

Lexmark Supposedly Supports Linux

I wanted to get a Lexmark Prestige Pro805 printer working under Linux, but due to bad drivers from Lexmark and no driver support in Debian/Unstable I’ve given up and advised the people who purchased it to return it for a refund. I recommend that Lexmark not be considered when purchasing a printer for use with Linux.

The box advertises the URL http://www.lexmark.com.au/prestige for downloading Linux drivers. The driver file is named lexmark-inkjet-09-driver-1.5-1.i386_ts.deb.sh.tar.gz, which leads anyone to expect a tar.gz archive of a shell archive of a Debian package. But that’s not what it is at all. In Lexmark-land deb is not the file name extension for a Debian package, just a random bit of text to identify a file that is somewhat related to Debian. The fact that the “Linux driver for Ubuntu/Debian Package Manager based distros” doesn’t use the string ubu in its name would further lead a typical Linux user to believe that deb means a Debian package. Similarly the file named lexmark-inkjet-09-driver-1.5-1.i386_ts.rpm.sh.tar.gz and described as “Linux driver for RedHat Package Manager based distros” is not actually an RPM package or inherently for RPM based distros; it’s just a shar archive that is built and tested for some unspecified version of some Red Hat distribution (RHEL? Fedora? SUSE?).

When I execute lexmark-inkjet-09-driver-1.5-1.i386_ts.deb.sh on an AMD64 version of Linux it opens an X11 window, prompts for the root password, and then fails because an i386 Debian package that it somehow built couldn’t be installed. When I ran the shar archive with the options “--noexec --keep” and examined the files, it contained a few AMD64 executables – so obviously the software they used to create the installer has some support for AMD64, they just decided not to use it. The only way to buy an i386 system nowadays is to buy an embedded system or a NetBook; all desktops, laptops, and servers run the AMD64 architecture. As most people do a Linux install that matches the full capabilities of their system (IE running AMD64 software on an AMD64 CPU), that means most systems sold in the last few years can’t be used with a new Lexmark printer without an unreasonable amount of work. Sure it is possible to set up a chroot environment or KVM virtual machine for running printer drivers, but I don’t really want to do that – and a significant portion of the potential customers don’t have the skill needed to do so.
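For anyone who wants to check this themselves, something like the following unpacks the installer without running it (the directory the shar extracts into may vary):

tar xzf lexmark-inkjet-09-driver-1.5-1.i386_ts.deb.sh.tar.gz
sh lexmark-inkjet-09-driver-1.5-1.i386_ts.deb.sh --noexec --keep    # unpack the shar without running the installer
find . -type f -exec file {} + | grep x86-64                        # shows the AMD64 executables inside the "i386" installer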

While technically their claims about having Linux driver support are correct (they support some distributions of Linux on i386), the majority of new systems won’t work with it unless someone with good skills spends some time and effort on it. Probably the majority of Linux desktop and server systems in use today run AMD64 and are run by people who don’t know how to set up a chroot, so for most real installations it’s not supported. Even for i386 systems installation is unlikely to be trouble-free: when they support RPM based distributions (without identifying which of the many wildly different RPM systems they tested on) and Debian (without mentioning a version number), the incidence of people running a distribution that is actually supported is going to be quite low.

Lexmark uses the Linux logo to claim compatibility

Based on this experience I am not inclined to trust any future claims of Linux support that Lexmark may make. The above picture of the Lexmark box shows Tux (the Linux logo), but it doesn’t mean support out of the box as you would hope – it means support for old systems with some effort.

Marshmallow Challenge for Linux Programmers

Tom Wujec gave an interesting TED talk about training people in team-work and engineering through building the tallest possible structures from 20 pieces of spaghetti, 1 yard of string, and 1 yard of sticky-tape with a time limit of 18 minutes [1]. The project is completed by groups of four people – which is probably about the maximum number of hands that you could have on such a small structure at one time. They have a web site MarshmallowChallenge.com/ which gives some suggestions for conducting a challenge.

One interesting point made in the talk is that kindergarten students tend to do better than most adults.

I think it would be good to have such challenges at events such as Linux conferences. The type of people who attend such conferences tend to enjoy such challenges, and it may lead to some lessons in team-work that can help the software development process. Also we can discover whether Linux programmers are better than the typical kindergarten students. ;)

Link Within

Good Things about LinkWithin

For the last 10 weeks I’ve been using the LinkWithin.com service to show links to other blog posts at the end of each post (the links are only shown to visitors of my blog not in the RSS feed, so people who read my posts through RSS syndication will miss this). The service shows excerpts of pictures from my blog at the bottom of each post to entice readers into reading other posts.

When you click on a LinkWithin icon on my blog you visit a LinkWithin page that redirects you back to my blog, so people who use it show up as new visitors referred by LinkWithin. So far this month LinkWithin is the fourth highest referrer to my blog, below Google, Reddit, and WebWombat.com.au – so it is clearly doing some good in enticing people to read other posts that they might not otherwise read!

Bad Things about LinkWithin

The first problem with LinkWithin is that the WordPress plugin was written by people who don’t know much about WordPress. Unlike every other plugin I use, it doesn’t store its configuration options in the database but instead has them hard-coded in the PHP code! When you download the plugin from their web site it creates a custom zip with the PHP code generated just for you! I have some friends using the same web server as me for running blogs, and I have to tell them “you can install any plugin apart from LinkWithin – it works for no-one but me”.

The next problem is that LinkWithin advertises that it will display related posts, but it seems to be doing a poor job of that on my blog – although admittedly only a minority of my blog posts have pictures, which limits what it has to work with. This has however inspired me to use more pictures in my posts.

Another problem is that it produces web pages that are not valid XHTML; the following patch fixes this.

--- /tmp/linkwithin.php        2010-02-22 02:09:47.000000000 +0000
+++ ./wp-content/plugins/linkwithin/linkwithin.php        2010-02-22 02:17:29.000000000 +0000
@@ -15,7 +15,7 @@
        global $post, $wp_query, $linkwithin_code_start, $linkwithin_code_end;

        $permalink = get_permalink($post->ID);
-        $content .= '<div class="linkwithin_hook" id="'.$permalink.'"></div>';
+        $content .= '<div class="linkwithin_hook" id="'.str_replace('/','-',$permalink).'"></div>';
        $content = linkwithin_add_code($content);
    }
    return $content;
@@ -26,13 +26,13 @@
        global $post, $wp_query, $linkwithin_code_start, $linkwithin_code_end;

        if ($wp_query->current_post + 1 == $wp_query->post_count) {
-            $embed_code = '<script>
+            $embed_code = '<script type="text/javascript">
<!-- //LinkWithinCodeStart
var linkwithin_site_id = 151382;
var linkwithin_div_class = "linkwithin_hook";
//LinkWithinCodeEnd -->
</script>
-<script src="http://www.linkwithin.com/widget.js"></script>
+<script src="http://www.linkwithin.com/widget.js" type="text/javascript"></script>
<a href="http://www.linkwithin.com/"><img src="http://www.linkwithin.com/pixel.png" alt="Related Posts with Thumbnails" style="border: 0" /></a>';
            $content .= $embed_code;
        }

Finally, LinkWithin shows up badly in the Firebug analysis (see my previous post about using Firebug to speed up my blog [1]) – see the picture below for details. As an aside, given that Google recommends Firebug, it is rather ironic that Google AdSense related URLs account for the majority of the Firebug issues that are not caused by LinkWithin.

demonstration of how Linkwithin lowers my page speed

What Next?

I’ve sent email to the LinkWithin people about all these issues other than the Firebug reports; given that they haven’t responded to some suggestions for over 10 weeks it hardly seems worth the effort of informing them of other issues.

I’m thinking of trying OutBrain.com again. About 18 months ago I tried OutBrain but never got it working due to technical issues, and then forgot about it. It has some similar features and may work better – at least it has tech support people who respond to queries!

CPL Still Sucks

I previously described my experience with Computers and Parts Land (CPL) [1] who gave me a product that didn’t do what I wanted (because they thought that they knew better than me) and then gave me attitude when I returned it.

As it’s almost a year since that incident I had to buy something else from them to use the credit note, having decided that it’s not worth making a Trade Practices Act (TPA) [2] issue out of it. Thanks for the suggestion though, Tim.

Firstly I went to their new store that is advertised on their web site. The people at the new store refused to honor the credit note (they probably hoped that I would just let it go and give them an extra $60 in profit). Claiming that I need to take a credit note back to the store it came from is bogus.

Anyway I went back to the original store and bought a new 1.5TB SATA hard drive which seems to work well enough. The service was really slow; I was the only customer in the store and there were several employees not doing much, but it still took them ages to print a receipt and give me the hard drive.

It will be the last CPL purchase I ever make. In the unlikely situation that there is ever any new gear that is only sold by CPL then I will wait a year and buy it on ebay.

It would be good if someone started working as a TPA complainant. They would take donations from dissatisfied customers who want to pay for an investigation of a shonky company (I’d pay $10 for an investigation of CPL). When enough donations were collected they would buy some stuff, make a video of every interaction with the employees, and then launch a private legal action under the TPA if the company does anything wrong. The TPA complainant would get to keep everything they buy, any donation money left over after proving the case, and any money offered by the company to settle law suits. While it’s not a good use of my time and money to go after CPL directly, it would be good to give some money to someone who would then see them dealt with properly.

Links April 2010

Sam Harris gave an interesting TED talk about whether there are scientific answers to moral questions [1]. One of his insightful points was that when dealing with facts certain opinions should be excluded – it would be good if journalists who report on issues of science could understand this. Another insight was that religious people most strongly agree with him regarding the question of whether there are factual answers to moral questions – but they think that God just handed the answers to their ancestors rather than making it an issue that requires consideration. He cites the issue of gay marriage as being a distraction from moral issues such as genocide and poverty. He asks “how have we convinced ourselves that every culture has a point of view worth considering?”. He asks how the ignorance of the Taliban on the topic of physics is any less obvious than on the topic of human well-being.

Dan Gilbert gave an insightful TED talk titled “Why Are We Happy?” [2]. One interesting fact he cites is that people who become paraplegic are no less happy in the long term than people who win the lottery. He points out that a shopping mall full of Zen monks is not going to be particularly profitable and uses this fact to explain the promotion of natural happiness over synthetic happiness in our society.

Dan Barber gave an amusing and informative TED talk “How I Fell in Love with a Fish” [3]. He speaks about ecological fish farming and how the fish are more tasty as well as the farm being good for the environment. The farm in question is in the south-west of Spain, hopefully there will be more similar farms in other parts of the world soon.

Gary Lauder gave an interesting brief TED talk about road signs [4]. His main point was to advocate a road sign saying “take turns”, but there are already signs in the US at freeway on-ramps saying that 1 or 2 cars may enter every time the light turns green – which is a similar concept. The innovative thing he did was to estimate the amount of time and petrol wasted by stop signs, add that over a year based on the average income and then estimate that an annuity covering that ongoing expense would cost more than $2,000,000. This makes two stop signs at an intersection have an expense of $1,000,000 each. He suggests that rather than installing stop signs it would be cheaper to buy the adjacent land, chop down all trees, and then sell it again.

Alan Siegel gave an insightful TED talk about simplifying legal documents [5]. He gives an example of an IRS document which was analysed with a Heat Map to show which parts confused the readers – the IRS adopted a new document that his group designed which made it easier for taxpayers. He advocates legislation to make legal documents easier to understand for customers of financial services.

Tim Berners-Lee gave an interesting TED talk about Open Data; he illustrated it with some fantastic videos showing how mashups have been used with government data [6] and how the OpenStreetMap project developed over time.

Martin F. Krafft gave an interesting Debconf talk about Tool Adoption Behavior in the Debian project [7]. One thing that I found particularly interesting was his description of the Delphi Method that he used to assemble a panel of experts and gather a consensus of opinion. The post-processing on this talk was very good, in some sections Martin’s presentation notes are shown on screen with the video of him in the corner. As an aside, I think we really do need camera-phones.

The Big Money has an interesting article comparing the Mafia “Bust Out” with the practices of US banks [8].

Mark Roth gave an exciting TED talk about using Hydrogen Sulphide to trigger suspended animation [9]. They are now doing human trials for suspending people who have serious injuries to reduce tissue damage during the process of surgery.

Pawan Sinha gave an interesting TED talk about how brains learn to see [10]. He started by talking about curing blindness in people who have been blind since birth. He then ended by showing some research into the correlation between visual processing and Autism: an Autistic child had significantly different visual patterns when playing Pong than an NT child did.

Adora Svitak gave an insightful TED talk about what adults can learn from kids [11]. She made some particularly interesting points about the education system requiring that adults respect children more and expect them to do better than their parents – which is essential for all progress in society.

The NY Times has an interesting article on animal homosexuality [12]. In terms of research it focusses on lesbian relationships between albatrosses. But a large part of the article is devoted to the politics of scientific research into animal sexuality.

BrowserShots.org shows you what your web site looks like in different web browsers [13].

Cory Doctorow wrote an insightful article titled “Can You Survive a Benevolent Dictatorship” about the Apple DRM [14]. He describes the way that Apple’s Digital Restrictions Management (DRM) doesn’t stop copyright violation but does reduce competition in the computer industry. He is not going to sell his work on the Apple store (for the iPad or iPhone etc) and suggests that customers should choose a more open platform. It’s unfortunate that he didn’t suggest a better platform.

Is the NBN a Good Idea

Since writing my post about whether the National Broadband Network can ever break even [1] I’ve had a number of people try to convince me of its merit. Here is my summary and rebuttal of some of the arguments for the NBN:

The FUD

Claims are made that Australia may fall behind the rest of the world, children may be disadvantaged in their education, or many other bad things may happen unless faster net access is provided to everyone. That is FUD. If we are going to spend $43,000,000,000 then we should have some evidence that it will do some good.

One thing to note is that rural areas will not get anything near the 100Mb/s speeds that FTTH will deliver to people in the cities. So if faster net access is actually essential then it would probably make sense to start by delivering ADSL2+ speeds to rural areas – something that is not planned to be part of the NBN.

Some people claim that having slow net access (ADSL2+) is going to preclude unknown future uses of the Internet. However the NBN is only going to be 100Mb/s, so any such claim is essentially that “unknown things will happen in the future to make 24Mb/s too slow, but those unknown things won’t make 100Mb/s too slow”. I started using computer networks when a 1200/75 modem was considered fast. Over about the last 25 years typical net access speeds have increased from 1200b/s to about 12Mb/s – that’s a factor of 10,000 improvement! Now the people who believe that we only need to multiply the current speed by a factor of 4 to address all future needs could be correct, but it seems unlikely and doesn’t seem like a good basis for a $43,000,000,000 bet. If they were talking about 1Gb/s net access then things would be different in this regard.

Technological Development

Some people compare the NBN to the Apollo Program and suggest that the scientific research involved in implementing a FTTH network might result in useful developments in other areas.

The Wikipedia page about Fiber to the premises by country indicates that Hong Kong had 1Gb/s available in 2006. It seems that a service which is rolled out 4 years later and 10 times slower than in Hong Kong is not going to involve a huge amount of research. Certainly nothing like the Apollo Program.

It would allow multiple HDTV channels to be viewed at the same time

According to the Australian Bureau of Statistics, in 2001 the average Australian household had 2.6 people (3.0 in the NT) [2]. The incidence of every member of a household wanting to watch a different live TV channel at the same time is probably quite low; it seems that young people nowadays spend as much time watching YouTube as they do watching TV. Based on the Wikipedia bit-rate page it seems that an ADSL2+ link could stream two HDTV channels.

I could use fiber speeds today

There are some people who claim that they need faster speeds for their data transfers right now. The problem is that latency is often the bottleneck for transfer rates over long distances. In some quick tests between servers that have reasonably fast connections to the Internet I was only able to get one international transfer in excess of ADSL2+ speed (24Mb/s or 3MB/s).

I was able to transfer data between London and Germany at a speed of 11MB/s – which was possibly limited by the 100baseT Ethernet connection in the data center in Germany. Now if the people who pay for that German server were to pay more then they would get a higher speed. So anyone who downloads anything from my web site in its current configuration would get a significant performance boost by using an NBN connection – if they live in Europe! But if they live in Australia then they will probably get a fraction of that speed (a quick test indicates that my ADSL2+ connection gives me about 500KB/s from my web server). I need to have at least three transfers going at the same time to get more than 600KB/s when downloading files from other countries, and it’s rare to have a single download run at a speed higher than 100KB/s.
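If you want to run the same sort of quick test, a single curl command will report the average download speed (the URL here is just a placeholder, not my real site):

curl -s -o /dev/null -w '%{speed_download} bytes/sec\n' http://www.example.com/largefile.iso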

It would be handy if I could download at higher speeds from my ISP’s mirror (which seems to be the only way someone in Australia can even use full ADSL2+ speeds). But it’s certainly not worth the $5,000 per household installation cost of the NBN to get that.

I’m sure that there are some people who really do have usage patterns that could take advantage of fiber net access, one possibility would be downloading large files from a local source such as a major web site that uses a CDN. It seems likely to me that the majority of people who fit this category are using major porn services.

Fiber would be good for Video-Conferencing

To be better than ADSL for video-conferencing an NBN connection would need a faster upload speed. Internode is already selling NBN connections for the trial areas [3]. The cheapest fiber plans that they offer are at the 50/2 speed, that’s 50Mb/s download and 2Mb/s upload – in theory ADSL2+ should have a higher upload speed. In my tests the best result I’ve got from sending files over my ADSL2+ link to another country is about 110KB/s (880Kb/s). The fact that the theoretical speed of a fiber connection is better than the measured speed of an ADSL2+ connection in this regard doesn’t mean much; let’s not assume that a fiber connection will reach its theoretical maximum speed.

Not that you necessarily need higher speeds for video-conferencing: YouTube is one of many services that uses a lot less bandwidth than the upload speed of an ADSL2+ connection, and the video-calls supported on most 3G mobile phones use even less bandwidth again.

I want an NBN connection to run a server at home

The fastest connection for uploading that Internode offers to “home users” is the “Home High” plan at 100/8 speed, which is maybe a bit more than twice as fast for uploading as ADSL2+. They also offer a SOHO plan that supports a 16Mb/s upload speed (at extra expense) and suggest that customers who want higher speeds contact their business sales department. But they include both sending and receiving data in the bandwidth quota for the fiber connections, and transmitting data at 16Mb/s isn’t that great for a server.

The cheapest virtual server plan on offer from Linode [4] includes 200GB of data transfer per month and has significantly higher transmission speeds. You could get a Linode virtual server plus an Internode ADSL2+ connection for about the same price as an Internode fiber connection to the home.

There are two down-sides to virtual servers: one is that they are limited in the amount of RAM that they have (I can easily afford to buy 8G of RAM for a home system but renting an 8G virtual server is going to be expensive) and the other is that the storage is limited. Shared storage on virtual servers can be slow and is limited in size. If you need to run a server with a few terabytes of data storage (which is cheap on commodity SATA disks but expensive on server-grade disks) and you don’t need to transfer much of it, then a home server on the NBN might do well. Otherwise it’s probably not going to work well for server use.

The NBN will avoid people leaving their PC on to do downloads and save electricity

To save electricity there would have to be a significant incidence of situations where a download can complete fast enough over the NBN to allow the user to turn their PC off before going to bed, but would be slow enough over ADSL to require that the PC be left on overnight. That would probably only apply to downloads from a CDN or from a local ISP mirror. From the Internode mirror I can download a test file at a speed of 850KB/s (I guess this means that my ADSL connection is not delivering full speed – I suspect poor quality wiring in my home and would try to fix it if the current speed was too slow). In the 5 minutes spent brushing my teeth I could download 250M of data; in the same time with a 100/4 connection on the NBN I might be able to download almost 3G of data. So in the unlikely event that I wanted to download a CD or DVD image and turn off my PC immediately before going to bed, the NBN would be a good thing.

But then of course I would want to burn the CD or DVD image to disc and that would take long enough that I would leave it on overnight…

My ADSL connection gives really low speeds or I’m out of range for ADSL

Some people who live only a short distance from an exchange are unable to get ADSL2+. Some people live a long way from exchanges and are outside ADSL range. The ideal solution to these problems is not to provide fiber access to the majority of the population; it is to provide ADSL to everyone who is near an exchange and maybe provide fiber access to some people who are a long way from exchanges.

I find it rather ironic that some people in the country are essentially saying “because net access in the country is so slow we need fiber in the cities”. The NBN is not going to give fiber to rural areas, satellite is one of the options that will be used.

Really High Speeds

I have been informed that to cover the distances needed for the NBN they have to use single mode fiber, which permits scaling up to higher speeds at some later time by changing the hardware at the end points. So we could end up with a Hong Kong speed network at some future time over the same fibers. This is a good thing.

But I don’t think that we need to get fiber installed right now just so that we can use 100Mb/s; we could wait until there is enough need and then get the faster transmission rates from the start. At the moment it’s just a waste of money.

Upgrading a SE Linux system to Debian/Testing (Squeeze)

Upgrade Requirements

Debian/Squeeze (the next release of Debian) will be released some time later this year. Many people are already upgrading test servers, and development systems and workstations that are used to develop code that will be deployed next year. Also there are some significant new features in Squeeze that compel some people to upgrade production systems now (such as a newer version of KVM and Ext4 support).

I’ve started working on an upgrade plan for SE Linux. The first thing you want when upgrading between releases is a way of booting a new kernel independently of the other parts of the upgrade: either running the new kernel with the old user-space or the new user-space with the old kernel. It’s not that uncommon for a new kernel to have a problem when under load, so it’s best to be able to back out of a kernel upgrade temporarily while trying to find the cause of the problem. For workstations and laptops it’s not uncommon for a kernel upgrade to not immediately work with some old hardware; this can usually be worked around without much effort, but it’s good to be able to keep systems running while waiting for a response to a support request.

Running a Testing/Unstable kernel with Lenny Policy

deb http://www.coker.com.au lenny selinux

In Lenny the version of selinux-policy-default is 2:0.0.20080702-6. In the above APT repository I have version 2:0.0.20080702-18, which is needed if you want to run a 2.6.32 kernel. The main problem with the older policy is that the devtmpfs filesystem that the kernel uses for /dev in the early stages of booting [1] is not known to it and therefore unlabeled – so most access to /dev is denied and booting fails. So before upgrading to testing or unstable it’s a really good idea to install the selinux-policy-default package from my Lenny repository and then run “selinux-policy-upgrade” to apply the new changes (by default upgrading the selinux-policy-default package doesn’t change the policy that is running – we consider the running policy to be configuration that is not changed unless the user requests it).
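So the sequence on a Lenny system is roughly the following, run as root, with the APT line being the one quoted above:

echo 'deb http://www.coker.com.au lenny selinux' >> /etc/apt/sources.list
apt-get update
apt-get install selinux-policy-default    # gets version 2:0.0.20080702-18 or later from the above repository
selinux-policy-upgrade                    # apply the new policy to the running system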

There are also some other kernel changes which require policy changes such as a change to the way that access controls are applied to programs that trigger module load requests.

Upgrading to the Testing/Unstable Policy

While some details of the policy are not yet finalised and there are some significant bugs remaining (in terms of usability not security) the policy in Unstable is usable. There is no need to rush an upgrade of the policy, so at this stage the policy in Unstable and Testing is more for testers than for serious production use.

But when you upgrade, one thing you need to keep in mind is that we don’t support upgrading the SE Linux policy between different major versions of Debian while in multi-user mode. The minimum requirement is that after the new policy package is installed you run the following commands and then reboot:

setenforce 0
selinux-policy-upgrade
touch /.autorelabel

If achieving your security goals requires running SE Linux in enforcing mode all the time then you need to do this in single-user mode.

The changes to the names of domains and the labeling of files that are entry-points for domains are significant enough that it’s not practical to try to prove that all intermediate states of partial labeling are safe and that there are suitable aliases for all domains. Given that you need to reboot to install a new kernel anyway, the reboot for upgrading the SE Linux policy shouldn’t be much of an inconvenience. The relabel process on the first boot will take some time though.

Running a Lenny kernel with Testing/Unstable Policy

In the original design SE Linux didn’t check open as a separate operation, only read/write etc. The reason for this is that the goal of SE Linux was to control information flows, and the open() system call doesn’t transfer any data, so there was no need to restrict it as a separate operation (but if you couldn’t read or write a file then an attempt to open it would fail). Recent versions of the SE Linux policy have added support for controlling file open. The reason for this is to allow a program in domain A to open a file and then let a program in domain B inherit the file handle and continue using the file even if domain B is not normally permitted to open the file – this matches the Unix semantics where a privileged process can allow an unprivileged child to inherit file handles, or use Unix domain sockets to pass file handles to another process with different privileges.

SELinux: WARNING: inside open_file_mask_to_av with unknown mode:c1b6

Unfortunately when support was added for this a bug was discovered in the kernel; this post to the SE Linux mailing list has the conclusion of a discussion about it [2]. The symptom of this problem is messages such as the above appearing in your kernel message log. I am not planning to build a kernel package for Lenny with a fix for this bug.

The command “dmesg -n 1” will prevent such messages from going to the system console – which is something you want to do if you plan to login at the console as they can occur often.

Creating a Micro Conference

The TEDxVolcano

The TED conference franchise has been extended to TEDxVolcano [1], a small conference that features people who are stranded by the Eyjafjallajökull volcano in Iceland. As usual TED is an inspiration to us all, so there is obvious potential for other conferences to be organised in similar situations – there’s no reason why a free software conference can’t be organised in Europe right now!

What You Need to run a Conference

If a conference will have limited attendance (EG due to a volcano preventing anyone from flying to the area) then filming everything is very important. I’ve seen adverts for digital cameras that support “Full HD” resolution (1920*1080) for as little as $AU400, and $AU600 will get you a “digital camcorder” that does Full HD, which offers some benefits for recording long movies (such as the ability to store the video on an external hard drive). If I was stuck in a foreign hotel with not much to do then I would be prepared to buy a digital camera or camcorder for the purpose of running such a conference (my current digital camera is 5.1MP and only has 3* optical zoom; it’s a nice camera but I could do with something better). A tripod can cost up to $100, but I recently bought myself a 15cm tall tripod for $10 – that would do at a pinch. Once you have high quality video you can easily upload it to something like Blip.TV. Of course you get a better result if you do some post-production work to merge images of the slides into the video, but that is a lot of work and probably requires a camera that outputs uncompressed video for best results.

The next issue is getting a venue. Different hotels cater for different parts of the market: some cater to tourists, some to business travel, some to conferences. If you want a venue at short notice you may be able to get a good deal if you find a hotel that is adversely affected – for example I’m sure that there are some quite empty conference hotels in Europe right now, but the tourist hotels are probably reasonably busy (why not do some tourism if you are stuck). I expect that hotels really don’t want to have empty conference rooms and are prepared to offer good deals for bookings at short notice. Of course you would want to try to ensure that hotel rooms aren’t too expensive in that hotel, as some delegates will want to stay in the hotel which hosts the conference.

The minimal staffing for a micro conference is probably two people: one to take payment, direct people, etc, and the other to film the lectures and moderate panel discussions. Rumor has it that attending without paying is a problem at conferences; for conferences that are planned in advance corporations will try to send multiple employees on the one ticket and have them share a name-tag. One issue with this is that there is a fixed quantity of food supplied, so if extra people appear then everyone who paid gets less; another is that people who pay really hate to see freeloaders. The best reference I’ve found for people not paying at conferences is Jon Oxer’s description of how Leslie Cachia of Letac Drafting Services brazenly stole a book from him [2].

Name-tags are needed for any meeting with more than about 15 people. I’m not sure how to get proper name-tags (ones that pin on to clothing and have printed names – maybe the bigger hotels can add this to the conference package). But a roll of sticky labels from an office supply store is pretty cheap.

Costs in Wellington

Along with a few other people I considered running a small security conference immediately before or after LCA 2010. That ended up not happening but I will consider doing it in future. At the time the general plan was to get a hotel to provide a meeting room for 10-30 people (we had no real idea of the demand).

When investigating the possibilities for running a conference in Wellington I discovered that the hotel fees for a conference room can either be a fixed fee for the room plus additional expenses for each item, or a fixed rate per person. It seemed that there was the potential to save a small amount of money by paying the fixed fees and avoiding some payments for things like the tea/coffee service. But the amount that could be saved would be small and it would incur extra effort in managing it – saving $5 per person is a good thing if you have 600 delegates, but if you have 30 then it’s probably a waste of time. So it seemed best to go for one of the packages: you tell the hotel what time you want the lunch and snack breaks and how you want the tables arranged and they just do everything. The cost for this seemed to be in the range of $nz35 to $nz55 per delegate per day. There is some flexibility in room arrangement, so a room that seats 12 people in the “board-room” layout (tables in a rectangle facing the center) would fit 25 in the “classroom” layout (tables all facing the front) or 50 in the “theater” layout (chairs facing the front with no tables). So the hotel could accommodate changes in the size at relatively short notice (whatever their notice period for buying the food).

The cost for a catered conference dinner seemed to be about $nz45 per diner. In many cases it would be possible to get a meal that is either cheaper, better, or both by going somewhere else, but that wastes time and effort. So that gave an overall conference cost of about $nz135 for a two day conference with a dinner at the end of the first day. Given that the cheapest budget rate from Wotif.com for a 3 star hotel in Wellington is currently $nz85 per night it seems that $nz135 for a two day conference including dinner is pretty cheap as the minimum accommodation cost would be $nz170. Also note that the hotels which I considered for hosting the conference had rates for their hotel rooms that were significantly greater than $nz85 per night.

The hotels all offer other services such as catered “cocktail parties”, these would be good things for any company that wants to sponsor the conference.

Different cities can have vastly different prices for hotels. But I expect that the way conference rooms are booked and managed is similar world-wide and the ratio of conference costs to hotel booking fees to also be similar. Most of the hotels that cater to conferences seem to be owned by multi-national corporations.

It would probably make sense to charge delegates an extra $10 or $15 above the cost of running the conference to cover unexpected expenses. Of course it’s difficult to balance wanting to charge a low rate to attract more people with wanting to avoid the risk of a financial loss.

Conclusion

The hard part is getting speakers. If you can get speakers and panel participants who can fill the time slots and have interesting things to say then all the other parts of organising a micro conference should be relatively easy.

When the cost is less than $150 per delegate a syndicate of a few people can easily agree to split the loss if the number of delegates turns out to be smaller than expected; a potential loss of $2000 shared among a few people shouldn’t be a huge problem. Also if the conference is booked at short notice (EG because of a volcano) then the hotel shouldn’t require any deposit for anything other than the food which is specially ordered (IE not the tea, coffee, etc) – that limits the potential loss to something well under $100 per delegate who doesn’t attend.

Anyone who has enough dedication to a topic to consider running a conference should be prepared to risk a small financial loss. But based on my past observations of the generosity of conference delegates I’m sure that if at the conference closing the organiser said “unfortunately this conference cost more than the money you paid me, could you please put something in this hat on the way out” then the response would be quite positive.

Note that I am strictly considering non-profit conferences. If you want to make money by running a conference then most things are different.