A Netbook for Aircraft Navigation

There is apparently some MS-Windows software for navigating light aircraft in Australia. It takes input from a GPS device and knows the rules for certain types of common tasks (such as which direction to use when approaching an airport). My first question when I heard of this was “so if the Windows laptop crashes, does your plane crash?”. But I’ve been assured that paper maps will always be available.

The requirement is for a touch-screen device because a regular laptop in the open position won’t leave enough room for the control stick. So the question is, what is the best touch-screen Windows laptop? It must be relatively rugged (spinning media for storage is unacceptable due to the risk of damage in turbulence), it should be relatively cheap (less than $1000), and apparently a somewhat low screen resolution is acceptable.

The pilot who asked me for advice on this matter is currently thinking of the ASUS Eee T91, which runs Windows XP Home, has 16G of solid-state storage, and has a 1024*600 screen. I am concerned about the reliability of that system as the rotatable screen design seems inherently weak.

The Smartbook concept sounds appealing; I don’t expect that you would want to wait for a typical OS to boot while flying a plane. But those devices mostly use ARM CPUs and thus can’t run MS-Windows. One particularly interesting device is the Always Innovating Touchbook [1] which has a detachable keyboard – handy for non-flying use. Unfortunately it seems that Always Innovating aren’t doing production at the moment; they say “The current Touch Book production is in stand-by and will resume in the summer when we will release our newest and craziest innovation”. Summer is almost over in the northern hemisphere, so I guess that means there won’t be anything from them for another nine months.

A device such as an iPad would also be a good option for looking at static documents. The pilot is considering using an MS-Windows PC to generate images and then viewing them on such a device. But he’s not really enthusiastic about it.

Are there any good and cheap touch-screen devices that run MS-Windows? Are there any particularly noteworthy PDF reader devices which would be better than an iPad for viewing maps while flying a plane? Is it possible to run an MS-Windows application that uses a GPS under Wine on a netbook?
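
On the Wine question: I haven’t tried it with navigation software, but Wine maps Windows COM ports to Unix serial devices via symlinks in its dosdevices directory. So if the GPS appears as a serial device then something like the following (the /dev/ttyUSB0 path is just an example, a different GPS may show up as a different device) may be all that’s needed to let the application see it as COM1:

ln -s /dev/ttyUSB0 ~/.wine/dosdevices/com1

Of course whether the rest of the application works correctly under Wine is something that would need a lot of testing on the ground before anyone relies on it in the air.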

Telling People How to Vote

Yesterday I handed out how to vote (HTV) cards for the Australian Greens. The experience was very different to the one I had when I handed out cards for the Greens in the Victorian state election in 2006 [1]. The Labor party (ALP) hadn’t spread any gross lies about the Greens and there were no representatives from the insane parties (Family First and Citizens Electoral Council/Commission (CEC)). So we didn’t have any arguments among the people handing out the HTV cards.

The atmosphere among the volunteers that were present was a good match for some ideals of a sporting contest. Everyone wanted their own team to win but acted in a sporting manner. When no voters were around we had some friendly conversations.

One thing that was interesting to note was the significant number of families where the parents in their 40s deliberately snubbed me while their children in the 18-22 age range took the Greens cards. It seemed that for families with adult children there were two likely voting patterns: one was the children voting Green and the parents not liking it, and the other was the entire family voting informal (when someone refuses all offers of HTV cards it’s a safe bet that they will cast an informal vote). In Australia submitting a vote card is mandatory but making it legible and formal is optional.

The last report I heard suggested that about 5% of the total votes were informal. This seems to be strong evidence that civics lessons are needed in high school. Also there were a disturbing number of people who stated that they didn’t know which party to vote for when they were collecting the HTV cards. An HTV card has one or two sentences about the party and there are almost no requirements for truth in such statements. Anyone who votes according to such brief summaries of the parties is quite unlikely to end up casting a vote that gives the result that they desire.

The result of the election is a significant swing to the Greens, more senators, and the first Green MP! For the lower house it seems that Labor will have great difficulty in forming government even in a coalition with the Greens and some independents. It seems unlikely that the Liberal party could ever make a deal with the Greens (the Liberal position on almost every significant issue contradicts Green policy), but there is a chance of a Liberal coalition with independent MPs. In any case it seems that the Greens will have the balance of power in the senate, so the excesses of the Howard government can’t be repeated.

If you like the nail-biting drama of watching several columns of figures slowly changing over the course of several days then you would love watching the analysis of this election! Whatever coalition government is created is not likely to be stable and we can probably expect another election in a year or two.

It’s Election Time Again

Linux People and Voting

Chris Samuel (a member of LUV who’s known for his work on high performance computers and the “vacation” program) has described why he’s voting for the Greens [1]. His main reasons are the Greens’ strong support of human rights and of science-based policy.

Paul Dwerryhouse (a member of the Australian Linux community who’s currently travelling around the world and who has made contributions to a range of Linux projects including SE Linux) has described his thoughts about the “Filter Conroy” campaign [2]. He gives a list of some of the high profile awful candidates who could possibly win a seat and therefore deserve a lower position in the preferences than Conroy.

SAGE-AU and Voting for the Internet

There has been some discussion by members of the System Administrators Guild of Australia (SAGE-AU) [3] about issues related to the election. As you would expect there was no consensus on which party was best. But there was general agreement that the Greens are the only significant party to strongly support the NBN (National Broadband Network – fiber to the home in cities and fast wireless in rural areas) and to also strongly oppose censoring the Internet. SAGE-AU has an official position opposing Internet filtering, and while the organisation hasn’t taken a position on the NBN it seems that the majority of members are in favor of it (I am in a small minority that doesn’t like the NBN). So it seems that the political desires of SAGE-AU members (and probably most people who care about the Internet in Australia) are best represented by the Greens.

Note that SAGE-AU has no official policy on which party best represents its members; the above paragraph is based on discussions I’ve had on mailing lists and in private mail with a number of SAGE-AU members. Also note that not all the SAGE-AU members who agree that the Greens advocate their positions on Internet issues plan to vote for them.

The Green support for the NBN is based on the importance of the Internet to all aspects of modern life; the social justice benefit of providing decent net access for everyone (particularly people in rural areas) is very important to the Greens. I still oppose the NBN and believe that it would be better to just provide better ADSL in all suburbs, better net access (through whichever technology works best) in rural areas, and fiber to the central business areas. But the NBN isn’t really that important to me; human rights and science-based policy are much more important and are the reasons why I’ve been supporting and voting for the Greens.

No Wasted Votes

One thing to note is that the Australian electoral system is designed to avoid wasted votes. There are two ways of considering a vote to be wasted in Australia. One is if you live in an electorate where both the upper and lower house elections have an almost certain result, such that no expected swing can change the outcome – I doubt that this is possible for any region in Australia given the way the upper house elections work, although a large portion of the lower house seats have a result that is almost certain.

The other way of having a wasted vote is to vote for someone who doesn’t actually represent you. Lots of people mindlessly vote for a party that seems to represent them: they identify with unions and vote Labor every time, they regard themselves as “conservative” and vote Liberal every time, or they live in a rural area and vote National every time. The Labor and Liberal parties don’t differ much in policies and members in safe seats typically don’t do anything for the people who elected them. If you generally support the policies of one of the major parties then it can be a good tactic to give your first preference to a minor party. For example if you tend towards Labor then vote Greens first and preference Labor over Liberal. The result will be that your vote still counts towards Labor in the lower house, while sending a message to Labor that prevents them from becoming complacent.

Before Australian elections there is always some propaganda going around about wasted votes; this is usually part of a deliberate campaign to try and prevent people from voting for smaller parties. Because the news has many mentions of wasted votes in US elections (which are watched closely in Australia) it seems that some Australians don’t realise that there are significant and fundamental differences between the political systems in Australia and the US.

Volunteering

Last time I checked the Greens were still accepting volunteers to hand out “how to vote” cards, so if you want to do more for the Greens than just vote for them then this is one way to do it. If you want an uncensored Internet with freedom of speech and a lot of investment in infrastructure (as well as good support for all human rights) then you really want to help the Greens win more seats at the election on Saturday.

The Gift of Fear

I have just read The Gift of Fear and Other Survival Signals that Protect Us From Violence by Gavin de Becker.

Like many self-help books it has a concept that can be described in a paragraph and explained in a few pages. The rest of the book shares anecdotes that help the reader understand the concept, but which are also interesting for people who get it from the first chapter. When I read the book I considered the majority of the content to be interesting stuff added to pad it out to book size because the concept seemed easy enough to get from the start, but from reading some of the reviews I get the impression that 375 pages of supporting material aren’t enough to convince some people – maybe this is something that you will either understand from the first few chapters or never understand at all.

Gavin’s writing is captivating; he has written a book about real violent crime in a style that is more readable than many detective novels. From the moment I finished the first chapter I spent all my spare time reading it.

I was a little disappointed at the lack of detailed statistics, but when someone who has done all the statistical analysis chooses to provide the results in the form of anecdotes rather than statistics I’m prepared to tolerate that – especially when the anecdotes are so interesting. I spent quite a bit of time reading the Wikipedia pages relating to some of the people and incidents that are mentioned in this book.

The basic concepts of his book are to cease worrying about silly things like airline terrorists (passengers won’t surrender now so that’s not going to work again) and to instead take note of any real fear. For example if you are doing the same things you usually do but suddenly feel afraid then you should carefully consider what you might have subconsciously noticed that makes you feel afraid and what you can do about it. This isn’t going to change my behavior much as I have mostly been doing what the book recommends for a long time.

I think that everyone should read this book.

Ethernet Interface Naming

As far as I recall the standard for naming Linux Ethernet devices has always been ethX where X is a number starting at 0. Until fairly recently the interface names were based on the order that device drivers were loaded or the order in which the PCI bus was scanned. This meant that after hardware changes (replacing network cards or changing the BIOS settings related to the PCI bus) it was often necessary to replace Ethernet cables or change the Linux network configuration to match the renumbering. It was also possible for a hardware failure to cause an Ethernet card to fail to be recognised on boot and thus change the numbers of all the others!

In recent times udev has managed the interface naming. In Debian the file /etc/udev/rules.d/70-persistent-net.rules can be edited to change the names of interfaces, so regardless of the scanning order, as long as an interface retains its MAC address it will get the correct name – or at least the name it initially had. One of the downsides to the way this operates is that if you remove an old Ethernet card and replace it with a new one then you might find that eth1 is your first interface and there is no eth0 on the system – this is annoying for many humans but computers work quite well with that type of configuration.

I’ve just renamed the interfaces on one of my routers by editing the /etc/udev/rules.d/70-persistent-net.rules file and rebooting (really we should have a utility like /sbin/ip with the ability to change this on a running system).

I have decided to name the Ethernet port on the motherboard mb0. The PCI slots are named A, B, and C with A being the bottom one, and when there are two ports on a PCI card the one closest to the left side of the system (when viewed from the front – the right side when viewed from the rear) is port 0 on that card. So I have interfaces pcia0, pcia1, pcib0, pcib1, and pcic0. Now when I see a kernel message about the link going down on one of my ports I won’t have to wonder which port has the interface name eth4.
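
For anyone who wants to do something similar, the entries in 70-persistent-net.rules look something like the following (the MAC addresses here are made up); all I changed was the NAME= field at the end of each line that udev had already written:

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:aa:bb:01", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="mb0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:aa:bb:02", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="pcia0"

The KERNEL=="eth*" match refers to the name the kernel initially assigns, so it still matches even though the final name doesn’t start with eth.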

I did idly consider naming the Ethernet devices after their service, in which case I could have given names such as adsl and voip (appending a digit is not required). Also, as the permitted names are reasonably long, I could have used names such as mb0-adsl, although a hyphen character might cause problems with some of the various utilities and boot scripts – I haven’t tested which characters other than letters and digits work. I may use interface names such as adsl for systems that run at client sites; if a client phoned me to report Internet problems and the console showed messages like “adsl NIC Link is Down” then my process of diagnosing the problem would become a lot easier!

Does anyone else have any good ideas for how to rename interfaces to make things easier to manage?

I have filed Debian bug report #592607 against ppp requesting that it support renaming interfaces. I have also filed Debian bug report #592608 against my Portslave package requesting that it provide such support – although it may be impossible for me to fix the bug against Portslave without fixing pppd first (I haven’t looked at the pppd code in question for a while). Thanks to Rusty for suggesting this feature during the Q/A part of my talk about Portslave at the Debian mini-conf at LCA 2002 [1].

Cyborgs solving Protein Folding problems

Ars Technica has an interesting article about protein folding problems being solved by a combination of brute-force software and human pattern recognition in the form of a computer game [1]. Here is a link to the primary source, which also mentions that players can design their own proteins which could potentially cure some diseases [2].

This is very similar to Garry Kasparov’s observations about ideal methods for playing chess given computers and humans [3].

It is also similar to Jane McGonigal’s ideas about using online gaming to make a better world [4] which inspired my post about choosing a free software mission [5].

What other serious problems can we solve via computer games and screen-savers?

The Lord of the Fries

Today I bought a box of fries from The Lord of the Fries [1]. I bought it from their new stand at Flinders St station because I was going past and saw no queue. In the past I had considered buying from their store on Elizabeth St but the queues were too long.

The fries were nice – probably among the best fries that I’ve had from local fish and chip shops, and way better than any other fries that you can find in the center of Melbourne. The range of sauces is quite good if you like that sort of thing (I just like vinegar on mine). However it should be noted that the quantity of chips that you would get for the same price at a local fish and chip shop is usually a lot greater.

Overall I was a bit disappointed. Sure, it’s nice to have someone hand-cut fresh potatoes and actually care about making a quality product. But when compared to the other options for relatively fast food in the CBD it didn’t seem that great to me. I’m never going to join a queue of more than 20 people to buy them! But I probably will buy from them on occasion if they don’t have big queues.

It seems to me that the best thing that they have done is to create a strong commitment to food quality and document it on their web site. I hope that this will inspire other fast-food companies to do the same thing and result in an overall increase in the food quality.

On a related note Jamie Oliver has an IDEO project running with the aim of getting kids into fresh food [2].

Why Clusters Usually Don’t Work

It’s widely believed that to solve reliability problems you can just install a cluster. It’s quite obvious that if instead of having one system of a particular type you have multiple systems of that type, and a cluster configured such that broken systems aren’t used, then reliability will increase. Also in the case of routine maintenance a cluster configuration can allow one system to be maintained in a serious way (e.g. being rebooted for a kernel or BIOS upgrade) without interrupting service (apart from a very brief interruption that may be needed for resource failover). But there are some significant obstacles in the path of getting a good cluster going.

Buying Suitable Hardware

If you only have a single server that is doing something important and you have some budget for doing things properly then you really must do everything possible to keep it going. You need RAID storage with hot-swap disks, hot-swap redundant PSUs, and redundant Ethernet cables bonded together. But if you have redundant servers then the requirement for making each individual server reliable is slightly reduced.
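
As an aside, for the bonding part, on a Debian system with the ifenslave package a minimal active-backup bond can be configured in /etc/network/interfaces with something like the following sketch – the addresses are made up and the exact option names vary a little between ifenslave versions:

auto bond0
iface bond0 inet static
    address 192.168.0.2
    netmask 255.255.255.0
    gateway 192.168.0.1
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100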

Hardware is getting cheaper all the time. A Dell R300 1RU server configured with redundant hot-plug PSUs, two 250G hot-plug SATA disks in a RAID-1 array, 2G of RAM, and a dual-core Xeon Pro E3113 3.0GHz CPU apparently costs just under $2,800AU (when using Google Chrome I couldn’t add some necessary jumper cables to the list so I couldn’t determine the exact price). So a cluster of two of them would cost about $5,600 just for the servers. But a Dell R200 1RU server with no redundant PSUs, a single 250G SATA disk, 2G of RAM, and a Core 2 Duo E7400 2.8GHz CPU costs only $1,048.99AU. So if a low end server is required then you could buy two R200 servers with no built-in redundancy for less than the price of a single server that has hardware RAID and redundant PSUs. The two models have different sets of CPU options and probably other differences in the technical specs, but for many applications both will probably provide more than adequate performance.

Using a server that doesn’t even have RAID is a bad idea; a minimal RAID configuration is a software RAID-1 array, which only requires an extra disk per server. That takes the price of a Dell R200 to $1,203. So it seems that two low-end 1RU servers from Dell that have minimal redundancy features will be cheaper than a single 1RU server that has the full set of features. If you want to serve static content then that’s all you need, and a cluster can save you money on hardware! Of course we can debate whether any cluster node should be missing redundant hot-plug PSUs and disks, but that’s not an issue I want to address in this post.
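
For reference, creating the software RAID-1 is not much work. Assuming a matching spare partition on each disk, something like the following creates the array (the device names are just examples, and for the root filesystem it’s easier to let the Debian installer set this up):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1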

Also, serving static content is the simplest kind of cluster. If you have a cluster for running a database server then you will need a dual-attached RAID array, which makes things start to get expensive (or software for replicating the data over the network, which is difficult to configure and may be expensive). So while a trivial cluster may not cost any extra money, a real-world cluster deployment is likely to add significant expense.

My observation is that most people who implement clusters tend to have problems getting budget for decent hardware. When you have redundancy via the cluster you can tolerate slightly less expected uptime from the individual servers. While we can debate whether a cluster member should have redundant PSUs and other expensive features, it does seem that using a cheap desktop system as a cluster node is a bad idea. Unfortunately some managers think that a cluster solves the reliability problem and that you can therefore just use recycled desktop systems as cluster nodes; this doesn’t give a good result.

Even if it is agreed that server-class hardware (with features such as ECC RAM) will be used for all nodes, you will still have problems if someone decides to use different hardware specs for each of the cluster nodes.

Testing a Cluster

Testing a non-clustered server, or some servers behind a load-balancing device at the front-end, isn’t that difficult in concept. Sure you have lots of use cases and exception conditions to test, but they are all mostly straight-through tests. With a cluster you need to test node failover at unexpected times. When a node is regarded as having an inconsistent state (which can mean that one service it runs could not be cleanly shut down when it was due to be migrated) it will need to be rebooted, which is known as a STONITH (Shoot The Other Node In The Head). A STONITH event usually involves something like IPMI to cut the power or a command such as “reboot -nf”; this loses cached data and can cause serious problems for any application which doesn’t call fsync() as often as it should. It seems likely that the vast majority of sysadmins run programs which don’t call fsync() often enough, but the probability of losing data is low and the probability of losing data in a way that you will notice (i.e. it doesn’t get automatically regenerated) is even lower. The low probability of data loss due to race conditions, combined with the fact that a server with a UPS and redundant PSUs doesn’t unexpectedly halt very often, means that such problems don’t get found easily. But when clusters have problems and start calling STONITH the probability starts increasing.
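
To give an idea of how abrupt this is, a STONITH agent that uses IPMI ends up doing the equivalent of running something like the following against the management controller of the node being fenced (the hostname and credentials here are made up) – from the node’s point of view it’s the same as having the power cord pulled:

ipmitool -I lanplus -H node2-bmc -U admin -P secret chassis power cycle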

Getting cluster software to work in a correct manner isn’t easy. I filed Debian bug #430958 about dpkg (the Debian package manager) not calling fsync() and thus having the potential to leave systems in an inconsistent or unusable state if a STONITH happened at the wrong time. I was inspired to look for this problem after finding the same problem with RPM on a SUSE system. The result of applying a patch to call fsync() on every file was bug report #578635 about the performance of doing so; the eventual solution was to call sync() after each package is installed. Next time I do any cluster work on Debian I will have to test whether the sync() code works as desired.

Getting software to work in a cluster requires fixing not only bugs in system software such as dpkg, but also bugs in third-party applications and in-house code. Please someone write a comment claiming that their favorite OS has no such bugs and that the commercial and in-house software they use is also bug-free – I could do with a cheap laugh.

For the most expensive cluster I have ever installed (worth about 4,000,000 UK pounds – back when the pound was worth something) I was not allowed to power-cycle the servers. Apparently the servers were too valuable to be rebooted in that way, so if they did happen to have any defective hardware or buggy software that would do something undesirable after a power problem it would become apparent in production rather than being a basic warranty or patching issue before the system went live.

I have heard many people argue that if you install a reasonably common OS on a server from a reputable company and run reasonably common server software then the combination would have been tested before and therefore almost no testing is required. I think that some testing is always required (and I always seem to find some bugs when I do such tests), but I seem to be in a minority on this issue as less testing saves money – unless of course something breaks. It seems that the need for testing systems before going live is much greater for clusters, but most managers don’t allocate budget and other resources for this.

Finally there is the issue of testing issues related to custom code and the user experience. What is the correct thing to do with an interactive application when one of the cluster nodes goes down and how would you implement it at the back-end?

Running a Cluster

Systems don’t just sit there without changing; you have new versions of the OS and applications and requirements for configuration changes. This means that the people who run the cluster should ideally have some specialised cluster skills. If you hire sysadmins without regard to cluster skills then you will probably end up not hiring anyone who has any prior experience with the cluster configuration that you use. Learning to run a cluster is not like learning to run yet another typical Unix daemon; it requires some differences in the way things are done. All changes have to be strictly made to all nodes in the cluster – having a cluster fail over to a node that wasn’t upgraded and can’t understand the new data is not fun at all!

My observation is that when a team of sysadmins with no prior cluster experience is hired to run a cluster, there are usually “learning experiences” for everyone. It’s probably best to assume that every member of the team will break the cluster and cause down-time on at least one occasion! This can be alleviated by only having one or two people ever work on the cluster and having everyone else delegate cluster work to them. Of course if something goes wrong when the cluster experts aren’t available then the result is even more downtime than might otherwise be expected.

Hiring sysadmins who have prior experience running a cluster with the software that you use is going to be very difficult. It seems that any organisation that is planning a cluster deployment should plan a training program for sysadmins. Have a set of test machines suitable for running a cluster and have every new hire install the cluster software and get it all working correctly. It’s expensive to buy extra systems for such testing, but it’s much more expensive to have people who lack necessary skills try and run your most important servers!

The trend in recent years has been towards sysadmins not being system programmers. This may be a good thing in other areas but it seems that in the case of clustering it is very useful to have a degree of low level knowledge of the system that you can only gain by having some experience doing system coding in C.

It’s also a good idea to have a test network which has machines in an almost identical configuration to the production servers. Being able to deploy patches to test machines before applying them in production is a really good thing.

Conclusion

Running a cluster is something that you should either do properly or not at all. If you do it badly then the result can easily be less uptime than a single well-run system.

I am not suggesting that people avoid running clusters. You can take this post as a list of suggestions for what to avoid doing if you want a successful cluster deployment.

WordPress Maintainability

For a while I’ve been maintaining my own WordPress packages. I use quite a few plugins that aren’t included in Debian; some of them have unclear licenses so they can’t go in Debian, while the rest would have to go in Volatile at best because they update regularly and often have little or no information in the changelog to describe the reason for the update – so we have to assume there is a potential security issue and update reasonably quickly. As I’m maintaining plugin packages it seems most reasonable to keep maintaining my own packages of WordPress itself, which I started doing some time ago when the version in Debian became outdated.

Now WordPress isn’t a convenient package to maintain. It is designed for a user to upload to their web space via FTP or whatever; it’s not designed to be managed by a packaging system with the option of rolling back upgrades that don’t work, tracking dependencies, etc. One example of this is the fact that it comes with a couple of plugins included in the package, of which Akismet is widely used. The Akismet plugin is periodically updated asynchronously from the updates to the WordPress package, with the apparent expectation that you can just FTP the files. Of course I have to build a new WordPress package whenever Akismet is changed.

Now there is a new default theme for WordPress called TwentyTen [1]. This theme ships with WordPress and again is updated asynchronously. Just over a week ago my blog started prompting me for an update to the theme even though I hadn’t consciously installed it – I have to update because I don’t know whether one of the other users on the same system has chosen it, and because having a message about an update being required is annoying.

The Themes update page has no option for visiting the web site for the theme and only offered to send it to my server via FTP or SFTP; of course I’m not going to give WordPress access to change its own PHP files (and thus allow a trojan to be installed). So I had to do some Google searching to find the download page for TwentyTen – which happens to not be in the first few results from a Google search (even though those pages look like they should have a link to it and thus waste the time of anyone who just wants to download it).

After downloading the theme I had to build a new WordPress package containing it – I could have split it out into a separate package and had the WordPress package depend on it, but I’ve got enough little WordPress packages already. It doesn’t seem worthwhile to put too much effort into my private repository of WordPress packages that possibly aren’t used by anyone other than me.

Plugins aren’t as bad, the list of plugins gives you a link to the main web page for each plugin which allows you to download it.

I wonder what portion of the WordPress user-base install via FTP to a server that they don’t understand and what portion of them use servers that are maintained properly with a packaging system; my guess is that with the possible exception of WordPress.com most bloggers are running on packaged code. It seems to me that optimising for Debian and CentOS is the smart thing to do for anyone who is developing a web service nowadays. That includes having files managed by the packaging system, an option to downgrade (as well as upgrade) the database format (which changes with almost every release), and an option for upgrading the database from the command-line (so it can be done once for dozens or hundreds of users).

deb http://www.coker.com.au lenny wordpress

I have a repository of WordPress packages that anyone can use with the above APT sources.list line. There is no reason why they shouldn’t work with Testing or Unstable (the packaging process mostly involves copying PHP files to the correct locations) but I only test them on Lenny.
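
To use it, add that line to /etc/apt/sources.list (or a file under /etc/apt/sources.list.d/) and run something like the following – I’m assuming here that the main package keeps the usual Debian name of wordpress:

apt-get update
apt-get install wordpress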

Pre-Meeting Lightning Talks

This evening I arrived at the LUV [1] meeting half an hour before it started. I was one of about a dozen people sitting in the room waiting; some of us had laptops and were reading email, but others just sat quietly – the venue is sometimes open as much as an hour before the event starts, and in bad weather some people arrive early because it’s more comfortable than anywhere else that they might hang out.

So I went to the front and suggested that instead of just doing nothing we get some short talks about random Linux things to fill the time. This seems to be a good opportunity for people to practice their public speaking skills, share things that interest them with a small and friendly audience, and keep everyone else entertained.

With some prompting a few members of the audience got up and spoke about Linux things that they were doing or had recently read about. They were all interesting and I learned a few things. I considered giving a talk myself (my plan B was to just speak for 15 minutes about random Linux stuff I’m doing) but decided that it would be best if I just encouraged other people to give talks.

I have suggested to the committee that we plan to do this in future and maybe have a mention of it on the web site to encourage people who are interested in such things (either speaking or listening) to attend early enough.

I think that this concept has been demonstrated to work and should also work well in most other user group meetings of a suitable size. At LUV we typically have about 60 people attend the main meeting and maybe a dozen arrive really early, so people who would be nervous about speaking to an audience of 60 may feel more comfortable. For a significantly larger group (where maybe 300 people attend the main meeting and 60 arrive early) the dynamic would be quite different: instead of having more nervous people give talks, you might find that a membership of 300 includes a significant number of people who have enough confidence to give an impromptu short lecture to an audience of 60.

As an aside the Connected Community Hackerspace [2] is having a meeting tonight to decide what to do about an office in a central Melbourne area. One of the many things that a Hackerspace can be used for is a meeting venue for lightning talks etc.