Athlon Memory Problems

I had an old Compaq Athlon 1GHz system that seemed to be broken. It would display random things on the screen from the BIOS and fail to boot; it looked like a motherboard problem. Fortunately, before I gave it away (I give away all my broken machines to members of my local LUG who want spare parts) I remembered that the machine uses main memory for video. I removed and reinstalled one of the DIMMs and it then worked perfectly; presumably it had been making poor contact.

The next task was to put it back in service. I had a bunch of spare RAM and decided to upgrade it to 768M (it previously had 384M). With the new RAM it would fail to boot; sometimes it would give a kernel message “soft lockup detected“, and sometimes it would just hang. The hangs typically occurred when running the initramfs (the first non-kernel code). I tested this with a Fedora install (what the machine used to run when it had 384M), a Debian install (the aim was to use a hard drive with Debian installed for the new purpose of the machine), and with CentOS and Debian installation CDs. The CentOS and Debian installation CDs got a little further but still crashed. Memtest86+, however, reported all the RAM as good.

I then tried reducing the RAM to 256M, which worked perfectly. It seemed that increasing the RAM above 384M caused the problem.

The next thing I tried was the mem= kernel option. As a first experiment I tried mem=400m which worked perfectly. I then tried mem=759m which allowed the boot process to proceed a little further but it still crashed. The system supposedly has 760M available (8M used for video). Further testing revealed that mem=750m seemed to work well. Being cautious (and not short of RAM) I have configured the machine to use mem=740m, and it is now working well.
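For anyone wanting to try the same workaround, the mem= option goes on the kernel line of the boot loader configuration. A sketch for GRUB legacy (the kernel version and root device here are examples only, not from my system):

```shell
# /boot/grub/menu.lst entry with the mem= workaround appended
title   Debian GNU/Linux
root    (hd0,0)
kernel  /vmlinuz-2.6.18-5-686 root=/dev/hda1 ro mem=740m
initrd  /initrd.img-2.6.18-5-686
```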

Oracle Unbreakable Linux

Matt Bottrell writes about the Oracle Linux offerings presented at LCA 2008 [1]

The one thing that Oracle does which I really object to is the “unbreakable” part of their advertising. They have pictures of penguins in armour and the only reasonable assumption is that their system is more secure in some way. As far as I am aware they offer no security features other than those which are available in Red Hat Enterprise Linux, CentOS, and Fedora. The unbreakable claims were also made before Oracle even had their own Linux distribution, which gave them even less reason for the claims.

If someone is going to be given credit for making Linux unbreakable then the contributors list for the SE Linux project [2] is one possible starting point. Another possibility is that credit could be given to Red Hat for introducing so many security features to the mainstream Linux users before any other distribution.

In terms of improving the security of databases it’s probably best to give credit to Kaigai Kohei and the PostgreSQL team for Security Enhanced PostgreSQL [3]. I believe that NEC also deserves some credit for sponsoring Kaigai’s work; I am not sure whether NEC directly sponsored his recent work on SE-PostgreSQL, but they certainly sponsored his past work (and are credited on the NSA web site for this).

Oracle’s Linux distribution is based on CentOS and/or Red Hat Enterprise Linux (RHEL). The situation with RHEL is that the source is freely available to everyone but binaries are only available to people who pay for support. CentOS is a free recompile of RHEL and a good choice of a distribution if you want a server with long-term support and don’t want to pay Red Hat (I run many servers on CentOS).

While Matt gets most things right in his post, there is one statement that I believe to be wrong. He writes “One of the craziest statements I heard during the talk was that Oracle will only support their products running under a VM if it’s within Oracle VM“. My knowledge of Xen causes me to have great concerns about reliability. My conversations with MySQL people about how intensive database servers are and how they can reveal bugs in the OS and hardware are backed up by my own experience in benchmarking systems. Therefore I think it’s quite reasonable to decline to support software running under someone else’s Xen build, in the same way that you might refuse to support software running under a different kernel version (for exactly the same reasons).

Matt however goes on to make some very reasonable requests of Oracle. The demand for native packages of Oracle is significant; I can’t imagine official Debian package support appearing in the near future, but RPM support for RHEL etc would make things easier for everyone (including Oracle).

A better installation process for Oracle would also be a good thing. My observation is that most Oracle installations are not used for intensive work and use database features that are a sub-set of what MySQL offers. I’ve seen a few Oracle installations which have no more than three tables! The installation and management of Oracle is a significant cost factor. For example I used to work for a company that employed a full-time Oracle DBA for a database with only a few tables and very small amounts of access (he spent most of his time watching videos of fights and car crashes that he downloaded from the net). Adding one extra salary for a database is a significant expense (although the huge Oracle license fees may make it seem insignificant).

Laptop vs Book Weight

Matt Bottrell wrote an interesting and informative post about laptops for school kids [1]. His conclusion is that based on technical features the OLPC machine is best suited for primary school children and one of the ASUS EeePC, the Intel Classmate, and the Everex Cloudbook would be best suited for high-school students.

The Asus EeePC [2] is a good option: it runs a variant of Debian and the Debian Eeepc Team are active in getting full Debian support for it [3].

The Intel Classmate [4] has a choice of Windows XP, Mandriva, and Metasys Classmate. The web page says that it’s designed “for primary students (ages 5-14)“, so I think that Matt made a mistake in listing this as a possibility for high-schools, of course when running Mandriva it could have software installed for any age group but the hardware design may be better suited to younger children.

The Everex Cloudbook [5] runs the GOS Rocket [6] OS which seems to be an Ubuntu variant with an Enlightenment based GUI and a configuration aimed at using Google services (blogger, gmail, etc). Configuring Ubuntu to suit your needs is easy enough (it’s based on Debian). Note that Matt did not mention where one might purchase a Cloudbook in Australia and I don’t recall seeing one on any of my many window-shopping expeditions to Australian consumer electronics stores, while the EeePC is widely available (except when sold out). But I’m sure that if the government wanted to place an order for a couple of million units then Everex would ramp up production quickly enough.

Matt made one statement that I strongly disagree with, he wrote “A traditional notebook is far too heavy for high-school kids to lug around“.

To test this theory I searched for some high-school text books and a set of scales. A year 11 Maths A text book from ~1988 weighed 600g and the pair of year 12 Maths A and Maths B texts weighed 1.6Kg. When I was at high-school the day was divided into seven “periods”; some classes took two periods, so four different classes requiring text books (or other books) was typical. Carrying 3Kg of books to school would not be uncommon for year 12 students. The Lenovo T series (advertised as “premier performance” and the model I personally prefer) is listed as having a starting weight of 2.1Kg (which presumably doesn’t include the power pack). My Thinkpad T series (from about 2004) weighs about 2.4Kg according to my kitchen scales and has a battery weighing just over 400g.

My practice for a long time was to own a spare power pack for my Thinkpad so that I could leave it at work (saving 400g when travelling to and from work). I have also had the practice of buying a spare battery when I buy a Thinkpad (you need a spare battery for a long trip). So if I had really wanted to save weight I could have left a battery at work and reduced my travel weight by another 400g (with the cost being that I couldn’t use it when on a train or bus).

A spare power pack is not overly expensive. In the usual case students would only need a battery when at school (it’s a little known fact that Thinkpads work perfectly without a battery plugged in). So if a student had a power pack at home as well as one at school and if they left their battery at school and they owned one of the latest Thinkpad T series (listed with a starting weight of 2.1Kg) then their travel weight might be about 1.7Kg. If the majority of school texts could be stored on their laptop then the result of using a Thinkpad T series would be a significant weight reduction! If the students were using a Thinkpad X series (more expensive so maybe not a good option) then the list weight is 1.57Kg and the travel weight might be as low as 1.3Kg (at a rough estimate).

The EeePC offers significant benefits for school use, it is light, cheap (children tend to break or lose things more frequently than adults so you should budget for buying two of anything that they use), and having no hard drive (flash storage) it should cope well with being dropped. The screen on the EeePC is unreasonably small but Asus could release a new model with a bigger screen (they may do this in the future anyway or a government contract could encourage them to do it sooner).

I agree that the EeePC or the Everex Cloudbook is probably the best option for high-school students, but I can’t agree with any claim about a traditional laptop being too heavy; the only reason for excluding a traditional laptop is that those new ultra-lights are better.

Another reason that might be cited for not using laptops is the cost. While prices of $1000 or more for a traditional laptop are rather expensive, the $500 for an EeePC is not that expensive – and the government could surely negotiate a better deal, I would be surprised if they couldn’t get the price down to $350 by some bargaining and by removing the middle-man. A careful child could use the same laptop for the entire duration of high-school and their parents would incur less expense than they currently would spend on text books.

As for the current lack of electronic text books: currently when the education department selects a book it’s a license to print money for the author and publisher. All the education department has to do is declare that they will do a deal with the first company to release their books under a creative commons license. The idea would be that an author (or publishing company) would get paid a fixed sum of money for a CC release of a text book which would then be available for use by anyone anywhere in the world. World-wide free distribution would be no loss to the author (each country tends to have unique books anyway) but would be a good act of charity from our government to developing countries.

Once books were available under a creative commons license (without the “no modifications” clause) they could be freely improved by anyone. Improving text books for younger students could be a good school project.

Update:

Thanks to Steve Walsh for pointing out that the Classmate can run Linux. It’s a pity that he didn’t link to my post so that his readers could see what he was referring to. I take it as a good sign of the quality of my posts that such small errors get pointed out.

Hot Plug and How to Defeat It

Finally I found the URL of a device I’ve been hearing rumours about. The HotPlug is a device to allow you to move a computer without turning it off [1]. It is described as being created for “Government/Forensic customers” but is also being advertised for moving servers without powering them down.

The primary way that it works is by slightly unplugging the power plug and connecting wires to the active and neutral terminals, then when mains power is no longer connected it supplies power from a UPS. When mains power is re-connected the UPS is cut off.

Australian 240 volt 10 amp mains power plug

Modern electrical safety standards in most countries require that exposed pins of a power plug (other than the earth) be shielded to prevent metal objects or the fingers of young children from touching live conductors. The image above shows a recent Australian power plug which has the active and neutral pins protected with plastic such that if the plug is slightly removed there will be no access to live conductors. I have photographed it resting on a keyboard so that people who aren’t familiar with Australian plugs can see the approximate scale.

I’m not sure exactly when the new safer plugs were introduced; a mobile phone I bought just over three years ago has the old-style plug (no shielding) while most things that I have bought since then have the new style. In any case I expect that a good number of PCs being used by Australian companies have the old style, as some machines with the older plugs won’t yet have reached their three year tax write-down period.

For a device which has a plug with such shielding they sell kits for disassembling the power lead or taking the power point from the wall. I spoke to an electrician who assured me that he could attach to the wires within a power cord with a 100% success rate and without any special tools (saving the $149 of equipment that the HotPlug people offer). Any of these things would need to be implemented by a qualified electrician to be legal, and any electrician who has been doing the job for a while probably has a lot of experience working from before the recent safety concerns about “working live“.

The part of the web site which concerns moving servers seems a little weak. It seems to be based on the idea that someone might have servers which don’t have redundant PSUs (IE really cheap machines – maybe re-purposed desktop machines) which have to be moved without any down-time and for which spending $500US on a device to cut the power (plus extra money to pay an electrician to use it) is considered a good investment. The only customers I can imagine for such a device are criminals and cops.

I also wonder whether you could get the same result with a simple switch that cuts from one power source to another. I find that it’s not uncommon for brief power fluctuations to cause the lights to flicker but for most desktop machines to not reboot. So obviously the capacitors in the PSU and on the motherboard can keep things running for a small amount of time without mains power. That should be enough for the power to be switched across to another source. It probably wouldn’t be as reliable but a “non-government” organisation which desires the use of such devices probably doesn’t want any evidence that they ever purchased one…

Now given that such devices are out there, the question is how to work around them. One thing that they advertise is “mouse jigglers” to prevent screen-lock programs from activating. So an obvious first step is to not allow jiggling to prevent the screen-saver. Forcing people to re-authenticate periodically during their work is not going to impact productivity much (of course the down-side is that it offers more opportunities for shoulder-surfing authentication methods).
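One way to implement this (a sketch only; it assumes X display :0 and the xscreensaver daemon, and would need adjusting for other screen-lock programs) is to force a lock on a timer rather than relying on idle detection, for example from the user’s crontab:

```shell
# crontab entry: lock the screen every 30 minutes no matter how much
# the mouse is being "jiggled" (display and interval are examples)
*/30 * * * *  DISPLAY=:0 xscreensaver-command -lock >/dev/null 2>&1
```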

Once a machine is taken the next step is to delay or prevent an attacker from reading the data. If an attacker has the resources of a major government behind them then they could read the bus of the machine to extract data and maybe isolate the CPU and send memory read commands to system memory to extract all data (including the keys for decrypting the hard drive). The only possible defence against that would be to have multiple machines exchanging encrypted heart-beat packets and configured to immediately shut themselves down if all other machines stop sending packets to them. But if defending against an attacker with more modest resources the shutdown period could be a lot longer (maybe a week without a successful login).
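As a sketch of the heart-beat idea (all names and numbers here are my own examples; a real deployment would use authenticated and encrypted heart-beats rather than the plain ping used for brevity):

```shell
#!/bin/sh
# Count consecutive failed probes of a peer machine; when the threshold
# is reached the caller shuts the machine down so the encryption keys
# leave RAM. Peer address and threshold are illustrative only.
LIMIT=5

# probe_peer PEER FAILS: prints the updated failure count and returns
# non-zero once the shutdown threshold has been reached
probe_peer() {
    peer="$1"; fails="$2"
    if ping -c 1 -W 2 "$peer" >/dev/null 2>&1; then
        fails=0
    else
        fails=$((fails + 1))
    fi
    echo "$fails"
    [ "$fails" -lt "$LIMIT" ]
}

# the machine would run something like this (not executed here):
#   FAILS=0
#   while sleep 60; do
#       FAILS=$(probe_peer 192.0.2.10 "$FAILS") || shutdown -h now
#   done
```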

Obviously an attacker who gets physical ownership of a running machine will try and crack it. This is where all the OS security features we know can be used to delay them long enough to allow an automated shut-down that will remove the encryption keys from memory.

The Bill – Computers for Police

I was watching the British police drama show The Bill [1] and I was impressed by their use of computers.

They were analysing the evidence of a homicide and one of the tasks was to assemble a time-line of the related events. They had a projector connected to a computer which displayed the data and used what was apparently an infra-red pen (presumably similar in technology to Rusty’s infra-red pen for Wii-Mote Pong [2]) to write on the wall and the computer then performed OCR to convert it to printed text. When new evidence was discovered about the time of events they used drag and drop to move the events around to the correct time slots.

I’ve seen many horribly faked renditions of computer stuff on TV, but this seemed quite realistic. It was all technically possible, in many cases what was displayed would be quite inconvenient to fake if you didn’t have software to do it, and little touches like closing the session when the case was resolved are things that you wouldn’t bother with if faking it. The down-side to my analysis is that I had a couple of glasses of red wine so I might be more gullible than usual. ;)

Some years ago I swore off watching The Bill after a horribly stupid episode about computer crime (teenage haxor sells copies of cracked software to computer store owner for resale to the public, then computer store owner murders him for disrespect and price-gouging). I refrained from watching for almost a decade after that. But it seems that they have redeemed themselves to some degree.

I wonder whether the software they were showing is really used by law enforcement, or whether it’s designed for corporations to use in tele-conferences etc.

In the same episode they showed the recordings of two security cameras on projectors side by side for comparison. They could fast-forward them independently, but they couldn’t zoom in or do other silly and impossible things.

An Obstacle for Women in the IT Industry

It is becoming increasingly apparent that this post is not going to do any good, so I have deleted the content.

Sorry to the people who were offended.

I won’t be writing about such topics again.

Political Blog Posts

Currently in the US the main political parties are deciding who will contest the next presidential election. Naturally this gets some commentary from all sides.

Planet Debian has syndicated two blog posts commenting on these issues, it’s interesting to compare them:

First John Goerzen writes a post about an issue he (and almost everyone in the Debian community) considers important – copyright law [1]. He quotes Randall Munroe, the author of www.xkcd.com (described by the author as “A webcomic of romance, sarcasm, math, and language“), which is wildly popular in the free software community (I believe that there is a fan-club which has meetings). Randall’s commentary is quite interesting and I recommend reading it [2]. The most noteworthy fact is that Barack Obama has sought advice from Lawrence Lessig – who has done more for the free software community than any other lawyer.

John’s post doesn’t contain much original content, but citing a lengthy post from a highly regarded source and quoting a particularly relevant paragraph make it very noteworthy.

Next Jaldhar Vyas writes about “the war against Islamic terrorism” [3] and makes the claim “we are finally gaining the upper hand in Iraq“. To consider that claim it’s best to read some expert commentary. William S. Lind [4] is a widely respected conservative who is an expert on military strategy, his comments about stabilising Iraq are always interesting [5]. It’s sufficient to note that William has been predicting failure for the US occupation of Iraq right from the start and that events over the last five years have proven him correct.

William S. Lind’s thoughts on Democracy are interesting too [8], I don’t agree with him in this regard (I generally disagree with him on most things other than military matters) but it’s an interesting thought.

Jaldhar also claims that “Democrats on the other hand have a closer association with the media who are behind many of the sillier IP laws“; this seems to be a reference to the “liberal media” conspiracy theories. There seems no clear party association with bad IP legislation (both the Democrats and Republicans do silly things in this regard), and debunking the liberal media claims can be done simply by watching some TV (choose a couple of random news shows on different channels). Fox is not the only right-wing news outlet.

Jaldhar also has a conspiracy theory about the Supreme Court and the “need to ensure that the court does not veer leftward again as it will inevitably do during a Democratic administration“, not that the Democratic party is particularly “left” by whatever definition you might apply to the word. I prefer the www.politicalcompass.org analysis which shows that all the serious contenders in the Democratic primary are Authoritarian Right [6]. So the result of the next US presidential election will determine how authoritarian and right-wing the government will be, the fact that it will be authoritarian and right-wing is beyond doubt. That said it would be good if the authoritarian right-wing government that we (*) get at least has some decent policies in regard to IP law (which is the hope for Barack Obama).

John Nichols has an interesting analysis of Ann Coulter’s support of Hillary Clinton over John McCain [7]. It would be interesting to see Jaldhar’s comments on this issue.

(*) I use the term “we” when talking about US politics to acknowledge the fact that the Australian government will take orders from Washington. Some pundits predict that the Greens will be the second major party in Australian politics in a couple of elections time, so maybe in 8 years time we will have an Australian government that represents Australians instead of Americans. But for the entire duration in office of the next US president (be it 4 years or 8) they will be able to get the Australian government to do almost anything that they desire.

Linux Resource Controls

Using the “ulimit” controls over process resource use it is possible to limit the RAM used by processes and to limit the number of processes per UID. The problem is that such limits are often only good for containing accidents, not for dealing with malicious acts.

For a multi-user machine each user needs to be allowed at least two processes to be able to do anything (IE the shell and a command that they execute). A more practical limit is five processes for a single shell session (one or two background jobs, a foreground job where one process pipes data to another, and the shell). But even five processes is rather small (a single Unix pipe can have more than that). A shell server probably needs a limit of 20 processes per user if each user will have the possibility of running multiple logins. For running the occasional memory intensive process such as GCC the per-process memory limit needs to be at least 20M; if the user was to compile big C++ programs then 100M may be needed (I’ve seen a G++ process use more than 90M of memory when compiling a KDE source file). This means that a single user who can launch 20 processes which can each use 20M of memory could use 400M of memory; if they have each process write to its pages in a random order then 400M of RAM would be essentially occupied by that user.
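To illustrate, the numbers above can be set with the shell’s ulimit builtin (this is just a demonstration in a throw-away shell; for real enforcement they would go in /etc/security/limits.conf or a profile script):

```shell
# Apply the example limits in a child shell so the login session is
# unaffected: 20 processes per UID and 20M of address space per
# process (ulimit -v takes KB, so 20M = 20480), then print them back.
bash -c 'ulimit -u 20; ulimit -v 20480; ulimit -u; ulimit -v'
# prints:
# 20
# 20480
```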

If a shell server had 512M of RAM (which until quite recently was considered a lot of memory – the first multi-user Linux machine I ran on the net had 4M of RAM) then 400M of that could be consumed by a single hostile user. Leaving 100M for the real users might make the machine unusable. Note that the “hostile user” category also encompasses someone who gets fooled by the “here’s a cool program you should run” trick (which is common in universities).

I put my first SE Linux Play Machine [1] on the net in the middle of 2002 and immediately faced problems with DOS attacks. I think that the machine had 128M of RAM and because the concept was new (and SE Linux itself was new and mysterious) many people wanted to login. Having 20 shell users logged in at one time was not uncommon, so a limit of 50 processes for users was minimal. Given that GCC was a necessary part of the service (users wanted to compile their own programs to test various aspects of SE Linux) the memory limit per process had to be high. The point of the Play Machine was to demonstrate that “root” was restricted by SE Linux such that even if all Unix access control methods failed then SE Linux would still control access (with the caveat that a kernel bug still makes you lose). So as all users logged into the same account (root) the process limit had to be adequate to handle all their needs, 50 processes was effectively the bare minimum. 50 processes with 5M of memory each is more than enough to cause a machine with 128M of RAM to swap to death.

One thing to note is that root-owned system processes count towards the per-UID limit on user processes, as SE Linux does not have any resource usage controls (the aim of the SE Linux project is access control, not protection against covert channels [2]). This makes it a little harder to restrict things: the number of processes run by daemons such as Postfix varies a little over time, so the limits have to be a little higher to compensate. While Postfix itself runs with no limits, the processes that it creates count towards the global limit when determining whether user processes can call fork().

So it was essentially impossible to implement any resource limits on my Play Machine that would prevent a DOS. I changed the MOTD (message of the day – displayed at login time) to inform people that a DOS attack is the wrong thing to do. I implemented some resource limits but didn’t seriously expect them to help much (the machine was DOSed daily).

Recently I had a user of my Play Machine accidentally DOS it and ask whether I should install any resource limits. After considering the issue I realised that I can actually do so in a useful manner nowadays. My latest Play Machine is a Xen DomU to which I have now assigned 300M of RAM. I have configured the limit for root processes to be 45; as the system and my login comprise about 30 processes, that leaves 15 for unprivileged (user_r) logins. Of recent times my Play Machine hasn’t been getting a lot of interest, having two people logged in at the same time is unusual, so 15 processes should be plenty. Each process is limited to 20M of memory, so overflowing the 300M of RAM should take a moderate amount of effort.
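For reference, limits like these can be made persistent via pam_limits; a hypothetical /etc/security/limits.conf fragment matching the numbers above (the syntax is standard, but this is not a copy of my actual config):

```shell
# /etc/security/limits.conf (requires pam_limits in the PAM stack);
# since everyone logs in as root, the limits are set for root itself
root  hard  nproc  45      # 45 processes in total
root  hard  as     20480   # 20M of address space per process (in KB)
```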

Until recently I intentionally did not use swap space on that machine, to save on noise when there’s a DOS attack (on the assumption that the DOS attack would succeed regardless of the amount of swap). Now that I have put resource limits in place I have installed 400M of swap space. A hostile user can easily prevent other unprivileged users from logging in by keeping enough long-running processes active – but they could achieve the same goal by having a program kill users’ shells as soon as they login (which a few people did in the early days). But it should not be trivial for them to prevent me from logging in via a simple memory or process DOS attack.
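For completeness, a sketch of how the 400M of swap might be added with a swap file (the path is my example; a swap partition works the same way with a device name, and the final activation step needs root):

```shell
# Create and format a 400M swap file; creating it needs no special
# privileges, only the final swapon does.
PATH="$PATH:/sbin:/usr/sbin"   # mkswap often lives outside a user's PATH
SWAPFILE=/tmp/swapfile.demo    # example path; somewhere permanent in practice
dd if=/dev/zero of="$SWAPFILE" bs=1M count=400 2>/dev/null
chmod 600 "$SWAPFILE"
mkswap "$SWAPFILE"
# then, as root:
#   swapon /tmp/swapfile.demo
#   swapon -s                  # confirm the new swap is active
```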

Update: It was email discussion with Cam McKenzie that prompted this blog post.

The Failure of my Security Blogging Contest

On the 20th of January (8 days before the start of linux.conf.au) I advertised a contest to write blog posts related to computer security for the conference Planet [1].

The aim of the contest was to encourage (by money prizes) people who had no prior experience in computer security to get involved by writing blog posts. The rules permitted security professionals to enter but only for an honourable mention, the money was reserved for people without prior experience.

The money that I initially advertised was a fraction of what was reserved for prizes, the idea being that if the contest went well then the prize pool could be easily increased but that if it didn’t go well then there would only be one small prize for someone to win by default. At the time I considered a single entry winning by default to be the worst case scenario.

The eventual result was that there was only one entry, from Martin Krafft on the topic of keysigning [2]. Martin has prior experience in the computer security field, which excludes him from a money prize, but he gets the only honourable mention. From a quick conversation with him it seems that his aim in entering the contest was to get his ideas about weaknesses in the keysigning process spread more widely, so this seems like a fairly ideal result for him. I agree with Martin that there are significant issues related to the keysigning process, but my ideas about them are a little different (I’ll blog about it later). His point about people merely checking that the picture matches on the ID and not verifying what the ID means is significant; the fact is that the vast majority of people are not capable of recognising ID from other countries. Other than requiring passports (which differ little between countries) I can’t think of a good solution to this problem.

Congratulations Martin! It is a good post and a worthy entry.

Now as to why the contest failed. I spoke to some people at the end of the conference about this. One delegate (who I know has the skills needed to produce a winning entry) said that I advertised it too soon before the conference and didn’t give delegates time to write entries. While I can’t dispute his own reasons for not entering I find it difficult to believe that more than a small proportion of delegates had that motivation. The LCA Planet had some lengthy posts by other delegates, and the guy who won second prize in the hack-fest spent something like 20 hours coding on his entry during the conference time (I suspect that my contest had the potential for a better ratio of work to prize money). Also the 8 days before the conference started was a good time to write entries for the contest.

One suggestion was that I propose that the conference organisers run such a contest next year. The problem with this is that it’s difficult to say “I tried this and failed, could you please try it too”. If nothing else I would need some significant reasons to believe that the contest has the potential to be more successful before attempting it on a larger scale. If the contest had been backed by the LCA organisers then it might have been more successful, but that possibility seems unlikely (and there is scope for an event to be more successful than mine while still being a failure). The reason that I consider it unlikely that official support would make it more successful is that I first advertised the event on my blog (syndicated to the conference Planet). Everyone who has a blog and attends the conference can be expected to have read about it. I then advertised it on the conference mailing list which I believe had as subscribers a large portion of the people who have enough spare time to create a blog specifically for the purpose of entering such a contest.

A blogging contest related to a conference but which had a wider scope (IE not limited to one field but instead covering anything related to the conference) might be successful. If someone wants to run such a contest next year then it’s probably worth doing.

Of course I have not given up on the plan of getting more people involved in computer security, expect to see some blog posts from me in the near future with other approaches to this. Suggestions would be appreciated.

Party for U18s at LCA 2009

This year at Linux.Conf.Au there was a student party sponsored by Google. The party was held in a bar and lots of free drinks were provided. This was fine for the university students, but for school kids it was obviously lacking.

Some people point out that it’s “quite legal” to run a party that excludes children; the point however is whether it’s desirable to exclude them. A state where laws dictate all aspects of your life, such that obeying the law is the only consideration, is fortunately restricted to science fiction.

Another common fallacy is when people point out that we should be grateful for Google’s sponsorship. As far as I am aware Google doesn’t insist on any particular conditions for sponsoring a party. If the conference organisers were to request a party at a more child-friendly venue (for example a restaurant – which if licensed could serve alcohol to adults who desire it) then I doubt that Google would refuse. Being grateful for Google’s sponsorship is entirely unrelated to the issue of whether their sponsorship money was spent in the best possible manner.

My interest in this topic started at LCA 2007 when I heard complaints from young delegates who were excluded from the Google party. This year the Google party (a different event from the “Google student party”) allowed everyone to attend and issued coloured wrist-bands to indicate whether the person had shown suitable ID to be served alcohol. The Google party was inviting for all and I believe that it was a significant improvement over last year (more attention was paid to serving food than alcohol). I have suggested that at future events some tables be reserved for people who aren’t drinking. As a general rule people enjoy being around people who have consumed a similar type and quantity of mind-altering substances (something I learned from my friends in Amsterdam).

There is of course demand for serious drinking, and it seems impossible to satisfy people who want to do serious drinking at the same party as people who won’t or can’t drink at all.

If there is not going to be an official party that is suitable for U18s then I’ll arrange it and pay for it myself. The consensus of opinion seems to be that fewer than six U18s are not worth catering for (one of the common objections to my suggestions is that there may be only four or five U18s). I can pay for a party for that many people which (in terms of food and non-alcoholic drinks) compares well with whatever Google might offer for the drinkers.

The rough idea is that U18s will have free food and non-alcoholic drinks. The venue will be some suitable restaurant (is there a Pizza Hut or similar near the next LCA?). The party will be open to parents and delegates who are >18 and don’t plan to drink (but I’m sorry I can’t afford to shout you). Opportunities to learn about cool Linux stuff will abound (I expect that a number of knowledgable Linux people who can teach some interesting things will be interested in attending).

If I’m paying then children who aren’t delegates will also be welcome to attend, but their parents would have to pay for their food/drink. But this is merely a matter of budget, if it was to be an official event or there were other sponsors then this might not be the case.

What I would like right now is expressions of interest from young people who plan to attend the conference and from parents (who plan to attend the conference or whose children will attend). If it looks like there will be a suitably large number of people interested in this then the conference organisers may decide to make it an official event.

Also comments from adults who would prefer an alcohol free event (whether it be due to medical reasons, religion, or choice) would be of interest. It’s all about the number of people who will attend.