Mobile SSH Client

There has been a lot of fuss recently about the release of the iPhone [1] in Australia, but I have not been impressed.

I read an interesting post Why I don’t want an iPhone [2] which summarises some of the issues with it not being an open platform (and not having SSH client support). Given all the fuss I had been thinking of writing my own post about this, but TK covered most of the issues that matter to me. One other thing I should mention is that I want a more fully powered PC with me. So even if I had a Green Phone (which doesn’t seem to be on general sale) [3] or an OpenMoko [4] I would still want at least a PDA running Familiar and preferably a laptop – I often carry both. A Nokia N8x0 series Internet Tablet [4] would satisfy my PDA needs (and also remove the need to carry an MP3/MP4 player and audio recorder).

When doing serious travelling I carry a laptop, a PDA, and an MP3 player, so all areas of my digital needs are covered better than an iPhone could reasonably manage. Finally, mobile phones tend to not work, or not work well ($1 per minute calls is part of my definition of “not well”), in other countries. While I haven’t been doing a lot of travelling recently I still try to avoid buying things that won’t work in other countries.

I had planned to just mention TK’s post in a links post. But then a client offered to buy me an iPhone. He wants me to be able to carry an SSH client with me most places that I go so that whenever his systems break I can log in. Now apart from the lack of SSH client support an iPhone seems ideal. :-#

The cheapest Optus iPhone plan seems to be $19 per month for calls and data (which includes 100M of data) plus $21 per month over 24 months for the iPhone itself, giving a cost of $40 per month for 100M of data transfer (and a nice phone). There is a plan for a $19 per month iPhone, but that has a $19 per month un-capped phone plan and doesn’t sound like a good way of saving $2 per month. The “Three” phone company offers USB 3G modems for $5 per month (on a 24 month contract) and their cheapest plan is $15 per month, which gives you 1G of data per month and $0.10/M for additional data transfer. So it’s $20 per month for 1G (which requires a laptop) vs $40 per month for 100M.

Three also has a range of phone plans that allow 3G data access over Bluetooth to a PC; it seems that a Nokia N8x0 tablet can be used with that, which gives a result of two devices the size of mobile phones. But that costs $20 per month (on top of a regular Three bill) for a plan that offers 500M of data, and it still requires two devices while not giving the full PC benefits.

In the past I’ve done a lot of support work with a Nokia Communicator, so I’ve found that anything less than a regular keyboard really slows things down. While an EeePC keyboard is not nearly as good as a full-sized keyboard it is significantly better than a touch-screen keyboard on a PDA (IE the Nokia N8x0 or the OpenMoko).

At the moment I’m looking at the option of carrying an EeePC with a USB Internet access device. That will cost $20 per month for net access. The cost of the EeePC is around $300 for a low-end model or about $650 for a 901 series that can run Xen (as noted in my previous post I’m considering the possibilities for having a mobile Xen simulation of a production network [5]). The savings of $20 per month over 24 months will entirely cover the cost of a low-end EeePC (SSH terminal, web browsing, and local storage of documentation) and cover most of the cost of a high-end EeePC. Another possibility to consider is using an old Toshiba Satellite I have hanging around (which I used to use as a mobile SE Linux demonstration machine) for a few months while the price on the EeePC 901 drops (as soon as the 70x series is entirely sold out and the 1000 series is available I expect that the 901 will get a lot cheaper).

Logic and Pants

I just read an interesting post about proposed new laws in the US prohibiting exposing underpants [1]. This is not a new thing, it is part of a debate that has been taking place in many countries since the “hip hop” trend of saggy pants began.

The first thing that occurs to me is to wonder what the difference really is between underpants and bathers. It seems to me that bathers are simply underpants that don’t turn transparent when they get wet (and which are made of materials that don’t degrade easily when exposed to sea water, UV light, and chlorinated water from swimming pools). So it seems that unless there is some clear legal difference between bathers and underpants such laws will not be effective. Could an underwear company produce products that are essentially the same as its regular products but which say “swimming attire” on the label to allow its customers to escape silly laws? In fact why not label all underwear as “swimming attire” just in case?

Would the prudes who object to a glimpse of underwear want police to go checking the labels of underwear to determine if it is permitted to be seen? The fascist trend in first-world countries is already quite bad; I don’t think we want to add underpants inspection to the list of police powers. Also it should be noted that a small proportion of police officers are corrupt, and the idea of corrupt cops inspecting underpants is really not appealing…

It would be possible to define any clothes worn under other clothes as “underwear”, but this has problems too. For example when I was younger I used to often wear jeans over my bathers when on the way to/from a beach (often there were no adequate facilities for changing clothes near a beach). If I was to wear jeans over my bathers while walking to a beach could I get booked for showing a small section of my bathers over the top of my jeans – and then legally entirely display my bathers while swimming? Of course there are legal nude beaches in many localities, but blurring the distinction between a regular beach and a nude beach by permitting activity that would be “indecent exposure” on all beaches seems likely to have results that would not make the prudes happy.

The next logical implication of laws against exposing underpants is that they encourage wearing smaller underpants. My experience is that it is impossible to wear boxer-shorts without them being exposed above the top of my jeans. Should I be essentially prohibited from wearing boxer shorts because of the risk that if my shirt is not tucked in then someone might catch a glimpse of my underwear?

Now if “underwear” was defined to be “anything worn beneath the outer layer of clothes” then what about the situation of having multiple layers of clothes? For example, when an athlete wears a track-suit over shorts, are those shorts “underwear”? If so do they cease being “underwear” once the track-suit is removed? Is there a race condition [2] where an athlete can wear shorts on the track, a track-suit on the bench, but they have to remove the track-suit as fast as possible because they are committing indecent exposure while removing the track-suit?

If underwear is defined as being the innermost layer of clothing, then what of “free-balling” (the practice of a man wearing a track-suit with no underpants) and the Scottish tradition that “nothing is worn under the kilt”? Can a track-suit or kilt be defined as underwear? If so, how would it be enforced? Would police look up the kilts of all men to ensure that the kilt is not the underwear?

As for “plumber’s crack” the only solution seems to be to compel plumbers to wear overalls. Of course then plumbers would increase their rates to cover the expense and inconvenience involved in a forced change of attire. I think that most people would prefer to hire a cheap plumber who shows some “crack” than an expensive plumber.

New SE Linux Policy for Lenny

I have just uploaded new SE Linux policy packages for Debian/Unstable which will go into Lenny (provided that the FTP masters approve the new packages in time).

The big change is that there are no longer separate packages for strict and targeted policies. There is now a package named selinux-policy-default which has the features of both strict and targeted. When you install it you get the features of targeted. If you want the strict features then you need to run the following commands as root:

semanage login -m -s user_u __default__
semanage login -m -s root root

Then you can log out and log in again to get the main benefit of the strict policy (users being constrained). IE you can convert from targeted to strict without a reboot! The above only changes the access for user login sessions (and cron jobs). To fully convert to the strict policy you need to remove the unconfined module with the command “semodule -r unconfined”; currently that results in a system that doesn’t boot – I’m working on this and will have it fixed before Lenny. Also it’s possible to have some users unconfined and some restricted in the way that the strict policy always did.
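
To verify the result you can list the login mappings with “semanage login -l”. Below is a sketch of what the output should look like after the above commands (the exact columns vary between versions):

semanage login -l

Login Name                SELinux User
__default__               user_u
root                      root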

When running in the full strict configuration you need to run the command “newrole -r sysadm_r” immediately after logging in as root. When you log in you default to staff_r, which doesn’t give you the access needed to perform routine sys-admin tasks.
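
For example, a root session under the full strict configuration might look like the following (the exact contexts will vary with the policy version):

id -Z
root:staff_r:staff_t
newrole -r sysadm_r
id -Z
root:sysadm_r:sysadm_t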

Due to the change in the function of the policy packages (in terms of not having a strict package) it made sense to revise the naming (Fedora 9 has a package named selinux-policy-targeted which also provides the strict configuration – I don’t want to do that and don’t have as much legacy as Fedora). This is why I decided to not have package names that include the word “policy” twice. Of course all policy packages get new names, but the ones that matter needed new names anyway.

Another new feature is the package selinux-policy-mls, as the name suggests this implements Multi Level Security [1]. I don’t expect that the MLS policy will boot in enforcing mode in a regular configuration at this time (you could probably hack it to boot in permissive mode and switch to enforcing mode just before it starts networking). I uploaded it in this state so that people can start testing it (there is a lot of testing that you can do in permissive mode) and so that it can get added to the package list in time for Lenny. I expect that I’ll have it booting shortly (it should not be much more difficult than getting the strict configuration booting).
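
For anyone who wants to try that hack, a minimal sketch (enforcing=0 and /selinux/enforce are the standard kernel parameter and selinuxfs path, but I haven’t tested this boot sequence with the MLS policy):

# append to the kernel command line so that the system boots in
# permissive mode:
#   enforcing=0
# then switch to enforcing mode from an init script that runs just
# before networking is started:
echo 1 > /selinux/enforce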

In terms of the use of MLS, I don’t expect that anyone will want to pay the money needed for LSPP [2] certification. NB the Wikipedia page about LSPP really needs some work.

I believe that the main benefit of having MLS in Debian is for students. I periodically get requests from students for advice on how to get a job related to military computer security. Probably the best advice I can offer is to visit the careers section of a government agency that works on computer security issues; for US readers the NSA careers page is here [3]. The second best advice I can offer is to work on MLS support in your favourite free OS. Not only will you learn about technology that is used in military systems but you will also learn a lot about how your OS works, as MLS breaks things. ;)

Finally I’d like to thank Manoj for all his work. For a while I didn’t have time to do much work on SE Linux and he did a lot of good work. Recently he seems to have been busy on other things and I’ve had a little more time so I’m taking over some of it.

Is a GPG pass-phrase Useful?

Does a GPG pass-phrase provide a real benefit to the majority of users?

It seems that the following categories of attack could result in stealing the secret-key data:

  1. User-space compromise of account (EG exploiting a bug in a web browser or IRC client).
  2. System compromise (EG compromising a local account and exploiting a kernel vulnerability to get root access).
  3. Theft of the computer system while powered down when the system was configured to not use swap or to encrypt the swap space with a random key at boot time.
  4. Theft of a computer system while running or that did not have encrypted swap.
  5. Theft of unencrypted backup media.

Category 1 will permit an attacker to monitor user processes and intercept one that asks for a GPG pass-phrase as well as to copy the secret key. Category 2 will do the same but for all users on the system.

Category 3 will give the potential for stealing the private key (if it’s not encrypted) but no direct potential for getting the pass-phrase.

Category 4 has the potential for copying a pass-phrase from memory or swap. I am inclined to trust Werner Koch (and anyone else who submitted code to the GPG project) to have written code to correctly lock memory and scrub pass-phrase data and decrypted private key data from memory after use. But I really doubt the ability of most people who write code to interface with GPG to do the same. So every time that a GUI program prompts for a GPG pass-phrase I think that there is the potential for it to be stored in swap or to remain indefinitely in RAM. Therefore stealing a machine that does not have its swap-space encrypted with a random key (which is the most practical way of encrypting swap) or stealing a running machine (as mentioned in a previous post [1]) can potentially grant a hostile party access to the pass-phrase.
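
As an aside, encrypting swap with a random key is easy to set up on Debian with cryptsetup; a minimal sketch, assuming that /dev/sda2 is the swap partition:

# /etc/crypttab - swap is set up with a new random key at each boot
cswap /dev/sda2 /dev/urandom swap

# /etc/fstab - use the resulting encrypted device for swap
/dev/mapper/cswap none swap sw 0 0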

So it seems to me that out of all the possible ways of getting access to a GPG private key, the only ones where a pass-phrase is really going to do some good are categories 3 and 5. While it’s good to protect against those situations, it seems to me that the greatest risk to a GPG key is from category 1, with category 2 following close behind.

I previously wrote about the slow progress towards using SE Linux and GPG code changes to make it more difficult to steal the secret key [2] – something that I’ve been occasionally working on over the last 6 years.

Now it seems to me that the same benefits can and should be made available to people who don’t use SE Linux. If a system directory such as /var/spool/gpg was mode 1770 then gpg could be setgid to group “gpg” so that it could create and access secret keys for users under /var/spool/gpg while the users in question could not directly access them. Then the sys-admin would be responsible for backing up GPG keys. Of course it would probably be ideal to have an option as to whether a new secret key would be created in the system spool or in the user home directory, and migrating the key from the user home directory to the system spool would be supported (but not migrating it back).
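
A minimal sketch of how such a spool might be set up (this is my proposal, not an existing GPG feature, so the paths and group name are arbitrary):

groupadd gpg
mkdir /var/spool/gpg
chgrp gpg /var/spool/gpg
chmod 1770 /var/spool/gpg
# make the gpg binary setgid so that it can access the spool while
# ordinary user processes can not
chgrp gpg /usr/bin/gpg
chmod g+s /usr/bin/gpg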

This would mean that an attacker who compromised a local user account (maybe through a vulnerability in a web browser or MUA) would not be able to get the GPG secret key. They could probably get the pass-phrase by ptracing the MUA (or some other GUI process that calls GPG), but without the secret key itself that would not do as much good – of course once they had the pass-phrase and local access they could use the machine to sign and decrypt data, which would still be a bad thing. But it would not have the same scope as stealing the secret key and the pass-phrase.

I look forward to reading comments on this post.

Label vs UUID vs Device

Someone asked on a mailing list about the issues related to whether to use a label, UUID, or device name for /etc/fstab.

The first thing to consider is where the names come from. The UUID is assigned automatically by mkfs or mkswap, so you have to discover it after the filesystem or swap space has been made (or note it during the mkfs/mkswap process). For the ext2/3 filesystems the command “tune2fs -l DEVICE” will display the UUID and label (strangely mke2fs uses the term “label” while the output of tune2fs uses the term “volume name“). For a swap space I don’t know of any tool that can extract the UUID and name. On Debian (Etch and Unstable) the file command does not display the UUID for swap spaces or ext2/3 filesystems and does not display the label for ext2/3 filesystems. After I complete this blog post I will file a bug report.
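
For example (the device name, UUID, and label below are made up):

tune2fs -l /dev/sda1 | grep -E 'volume name|UUID'
Filesystem volume name:   root
Filesystem UUID:          b8f8a1f4-7a45-4dc8-97e4-0f2a30b3ae9c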

Once this bug is fixed (hopefully in Lenny, or in a version of Unstable before then) the file command will make it easy to determine the label and UUID of a filesystem or swap space. Until then the inconvenience of determining the UUID and label will be a reason for not using them in /etc/fstab (keep in mind that sys-admin work sometimes needs to be done at 3AM).

One problem with mounting by UUID or label is that it doesn’t work well with snapshots and block device backups. If you have a live filesystem on /dev/sdc and an image from a backup on /dev/sdd then there is a lot of potential for excitement when mounting by UUID or label. Snapshots can be made by a volume manager (such as LVM), a SAN, or an iSCSI server.

Another problem is that if a file-based backup is made (IE tar or cpio) then you lose the UUID and label. tune2fs allows setting the UUID, but that seems like a potential recipe for disaster. So this means that if mounting by UUID then you would potentially need to change /etc/fstab after doing a full filesystem restore from a file-based backup; this is not impossible but might not be what you desire. Setting the label is not difficult, but it may be inconvenient.
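
For reference, both can be set with tune2fs (the device name is an example):

tune2fs -L root /dev/sda1     # set the label
tune2fs -U random /dev/sda1   # generate a new random UUID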

When using old-style IDE disks the device names were of the form /dev/hda for the first disk on the first controller (cable) and /dev/hdd for the second disk on the second controller. This was quite unambiguous, adding an extra disk was never going to change the naming.

With SCSI disks the naming issue has always been more complex, and which device gets the name /dev/sda was determined by the order in which the SCSI HAs were discovered. So if a SCSI HA which had no disks attached suddenly had a disk installed then the naming of all the other disks would change on the next boot! To make things more exciting Fedora 9 is using the same naming scheme for IDE devices as for SCSI devices, I expect that other distributions will follow soon and then even with IDE disks permanent names will not be available.

In this situation the use of UUIDs or labels is required for the use of partitions. However a common trend is towards using LVM for all storage; in this case LVM manages labels and UUIDs internally (with some excitement if you do a block device backup of an LVM PV). So LV names such as /dev/vg0/root become persistent and there is no need for mounting via UUID or label.
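
To illustrate the options, here are example /etc/fstab entries (the UUID and label values are made up):

/dev/vg0/root                              /      ext3  defaults  0  1
LABEL=boot                                 /boot  ext3  defaults  0  2
UUID=b8f8a1f4-7a45-4dc8-97e4-0f2a30b3ae9c  /home  ext3  defaults  0  2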

The most difficult problem then becomes the situation where a FC SAN has the ability to create snapshots and make them visible to the same machine. UUID or label based mounting won’t work unless you can change them when creating the snapshot (which is not impossible but is rather difficult when you use a Windows GUI to create snapshots on a FC SAN for use by Linux systems). I have had some interesting challenges with this in the past when using a FC based SAN with Linux blade servers, and I never devised a good solution.

When using iSCSI I expect that it would be possible to force an association between SCSI disk naming and names on the server, but I’ve never had time to test it out.
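
One approach that might work (I haven’t tested it) is a udev rule keyed on the serial number that udev extracts from the SCSI inquiry data, something like the following (the serial number and link name are hypothetical):

# /etc/udev/rules.d/60-iscsi.rules
SUBSYSTEM=="block", ENV{ID_SERIAL}=="360000000000000001", SYMLINK+="disk/by-san/mail-spool"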

Update: I have submitted Debian bug #489865 with a suggested change to the magic database.

The /etc/magic entries for displaying the UUID and label on swap spaces and ext2/3 filesystems are included in that bug report.


New ZCAV Development

I have just been running some ZCAV tests on some new supposedly 1TB disks (10^12 bytes is about 931*2^30, so such a disk is about 931G according to almost everyone in the computer industry who doesn’t work for a hard disk vendor).
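
As a quick sanity-check of that conversion (shell integer arithmetic, the second line is the output):

echo $(( 10**12 / 2**30 ))
931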

I’ve added a new graph to my ZCAV results page [1] with the results.

One interesting thing that I discovered is that the faster disks can deliver contiguous data at a speed of more than 110MB/s, previously the best I’d seen from a single disk was about 90MB/s. When I first wrote ZCAV the best disks I had to test with all had a maximum speed of about 10MB/s so KB/s was a reasonable unit. Now I plan to change the units to MB/s to make it easier to read the graphs. Of course it’s not that difficult to munge the data before graphing it, but I think that it will give a better result for most users if I just change the units.

The next interesting thing I discovered is that GNUplot by default switches to exponential notation at the value of 1,000,000 (or 1e+06). I’m sure that I could override that, but such large numbers would still be difficult for users to read. So I guess it’s time to change the units to GB.

I idly considered using the hard drive manufacturers’ definition of GB so that a 1TB disk would actually display as having 1000GB (the Wikipedia page for Gibibyte has the different definitions [2]). But of course having decimal and binary prefixes on the X and Y axes of a graph would be a horror. Also the block and chunk sizes used have to be multiples of a reasonably large power of two (at least 2^14) to get reasonable performance from the OS.

The next implication of this is that it’s a bad idea to have a default block size that is not a power of two. The previous block sizes were 100M and 200M (for 1.0x and 1.9x branches respectively). Expressing these as 0.0976G and 0.1953G respectively would not be user-friendly. So I’m currently planning on 0.25G as the block size for both branches.

While changing the format it makes sense to change as many things as possible at once to reduce the number of incompatible file formats that are out there. The next thing I’m considering is the precision. In the past the speed in K/s was an integer. Obviously an integer for the speed in M/s is not going to work well for some of the slower devices that are still in use (EG a 4x CD-ROM drive maxes out at 600KB/s). Of course the accuracy of this is determined by the accuracy of the system clock. The gettimeofday() system call returns the time in micro-seconds, but I expect that most systems don’t approach micro-second accuracy, and it’s not worth reporting with a precision that is greater than the accuracy. Then there’s no point in making the precision of the speed any greater than the precision of the time.

Things were easier with the Bonnie++ program when I just reduced the precision as needed to fit in an 80 column display. ;)

Finally I ran my tests on my new Dell T105 system. While I didn’t get time to do as many tests as I desired before putting the machine in production, I did get to do a quick test of two disks running at full speed. Previously when testing desktop systems I had not found one that could extract full performance from two disks of the same age as the machine simultaneously. While the Dell T105 is a server-class system, it is a rather low-end server and I had anticipated that it would lack performance in this regard. I was pleased to note that I could run both 1TB disks at full speed at the same time. I didn’t get a chance to test three or four disks though (maybe during scheduled down-time in the future).

Advertising a Scam

Below is a strange Google advert that appeared on my blog. It appeared when I did a search on my blog, and it also appears on my post about perpetual motion. It seems quite strange that they are advertising their product as a scam. It’s accurate, but I can’t imagine it helping sales.

[Image: Google advert for SCAM]

Awful Computers for Kids

I have just observed demonstration units of the V-Smile system [1]. They have “educational games” aimed at ages 3-5, 4-7, and some similar ranges. The first thing I noticed was that the children who were able to correctly play the games were a lot older than the designated ages. For example 10yo children were playing the Scooby-Doo addition game (supposedly teaching children to add single-digit numbers) and apparently finding the non-addition part of the game challenging (I tried it myself and found catching flying hamburgers while dodging birds to be challenging enough that it was difficult to find the numbers). For children who were in the suggested age-range (and a suitable age for learning the basic lessons contained in the games) the only ones who actually managed to achieve the goals were the ones who were heavily directed by their father. So my observation is that the games will either be used by children who are too old for the basic lessons or be entirely directed by parents (I didn’t observe any mother giving the amount of assistance necessary for a 5yo to complete the games, but assume that it happens sometimes).

I doubt that there are many children who have the coordination needed for a platform game who have not yet learned to recognise printed letters (as supposedly taught in the Winnie the Pooh game). The Thomas the Tank Engine spelling game had a UI that was strange to say the least (using a joystick not to indicate which direction to go but instead to move a cursor between possible tracks) and I doubt that it does any good at teaching letter recognition. There was also a game that involved using a stylus for tracing the outline of a letter, as I had great difficulty in doing this (due to the poor interface and the low resolution of the touch-pad) it seems very unlikely that a young child who is just learning to write letters would gain anything from it. Strangely there was a game that involved using the touch-pad to indicate matching colors. Recognising matching colors is even easier than recognising letters and I don’t think that a child who can’t recognise the colors would be able to manage the touch-pad.

The V-Smile system seems to primarily consist of a console designed for connection to a TV but also has hand-held units that take the same cartridges. The same company produces “laptops” which sell for $50 and have a very low resolution screen and only the most basic functionality (and presumably other useless games).

Sometimes the old-fashioned methods are best. It seems that crayons are among the best tools for teaching letter recognition and writing.

But if there is a desire to use a computer for teaching, then a regular PC or laptop should do. Letter recognition can be taught by reading the text menus needed to launch games. The variety of computer poker games can be used for recognising matching colors and numbers as can the Mahjong series of games. Counting can be taught through the patience games, and the GIMP can be used for teaching computer graphics and general control of the mouse and the GUI. NB I’m not advocating that all education be done on a computer, merely noting the fact that it can be done better with free software on an open platform than on the proprietary systems which are supposedly designed for education.

Finally, with a PC children can take it apart! I believe that an important part of learning comes from disassembling and re-building toys. While it’s obvious that a PC is not going to compare with a Lego set, I think it’s good for children (and adults) to know that a computer is not a magic box, it’s a machine that they can understand (to a limited extent) and which is comprised of a number of parts that they could also understand if they wanted to learn the details. Gever Tulley advocates such disassembly of household items in his TED talk “5 dangerous things you should let your kids do” [2]. Gever runs The Tinkering School [3] which teaches young children how to make and break things.

On the cost issue, I just checked some auction sites and noticed that I can get reasonably new second-hand laptops for less than $300. A laptop for $250 running Linux should not be much more expensive than a proprietary kids’ laptop that starts at $50 once you include the price of all the extra games. For an older laptop (P3) the price is as low as $100 on an auction with an hour to go. Then of course for really cheap laptops you would buy from a company that is getting new machines for its staff. It’s not uncommon for companies to sell old laptops to employees for $50 each. At a recent LUG meeting I gave away a Thinkpad with a 233MHz Pentium-MMX CPU, 96M of RAM, and an 800*600 color display – by most objective criteria such a machine is much more capable than one of those kids’ computers (either V-Smile or a competitor).

Of course the OLPC [4] is the ideal solution to such problems. It’s a pity that they are not generally available. I have previously written about the planned design for future OLPC machines [5] which makes it a desirable machine for my own personal use.

New Dell Server

My Dell PowerEdge T105 server (as referenced in my previous post [1]) is now working. It has new memory (why replace just the broken DIMM when you can replace both?) and a new BIOS (Dell released an “Urgent” update yesterday that fixes a problem with memory timing and Opteron CPUs). The BIOS update can be installed from a DOS executable (traditionally done from a floppy disk) or an i386 Linux executable. As I didn’t have a floppy drive in my new server I had to use Linux (not that I object to using Linux, but I’d rather have had the technician do it all for me). I used rescue mode from a Fedora 9 CD that I had handy, mounted a USB stick on which I had stored the BIOS update, and then ran it.
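
The process was roughly the following (the device and file names are examples, not the exact ones I used):

mkdir /mnt/usb
mount /dev/sdb1 /mnt/usb
cd /mnt/usb
chmod +x ./T105-BIOS.BIN
./T105-BIOS.BIN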

The Dell service was quite good, on-site service and the problem was fixed approximately 27 hours after I called them. Replacing a couple of DIMMs is hardly a test of skill for the repair-man (unlike the time in Amsterdam when a Dell repair-man swapped a motherboard in a server with only 20 minutes of down-time). So I haven’t seen evidence of them doing anything really great, but getting someone on-site close to 24 hours after the report is quite decent, especially considering that I paid for the cheapest support that they offer.

When I got it working I was a little surprised by the memory speed, I had hoped that a new 2GHz Opteron would perform similarly to an Intel E2160 and better than an old Pentium-D (see the results here [2]). Also the memtest86+ run took ages on the step of writing random numbers (I don’t recall ever seeing that step on previous runs, let alone having a system spend half an hour doing it). It seems that the CPU (Opteron 1212) doesn’t perform well for random number generation.

In terms of actual operation all I’ve done so far is to install Debian. The process of installing Debian packages was quite fast (even with a RAID-1 reconstruction occurring at the same time) and the boot time is also very quick.

The hard drive “rails” seemed a little flimsy. The way they attach to the drive is that they have screws that end in pins, so you screw them into plastic and the pins just sit in the holes in the drive where screws normally attach. I think that it would make more sense to have them not screw onto the plastic and instead screw onto the disk. Then if the plastic part that connects the two sides was to break it would still be usable. In fact they could just make the “rails” be separate rails as most other manufacturers do.

One thing that surprised me was the lack of PS/2 keyboard and mouse ports. I had expected that such ports would last longer than serial ports and floppy drives. However my Dell has a power connector for a floppy drive and has a built-in serial port (with some BIOS support for management via a serial port – I have not investigated this because I always plan to use a keyboard and monitor). Of course I expect that most other machines will start shipping without PS/2 ports now and I will have to dispose of my stockpile of PS/2 keyboards and mouses. I generally like to keep a few on hand so that I can give friends and relatives a chance to try a selection and discover which type suits them the best. But I probably don’t need a dozen of them for that purpose.

While a comment on my previous post noted that the floppy drive bay can be used for another disk, it seems that a disk is not going to fit in there easily. It looks like I might be able to install a disk there from the front if I unscrew the face-plate – but that’s more effort than I’m prepared to exert for testing the system (for production I will only have two disks).

In terms of noise, the Dell seems considerably better than a NEC machine which was designed for desktop use. Of course it’s difficult to be certain as part of the noise is from hard disks and one of the disks I’ve installed in the Dell is a WD “Green” disk and the other may have newer technology to minimise noise. Also the mounting brackets for disks in a server may be better at damping vibrations than screwing a disk to the chassis of a desktop machine. Finally the NEC machine does seem to make more noise now than it used to, so maybe it would be best to compare after a few months use to allow for minor wear on the moving parts.

I was initially going to run Debian/Etch on the machine. But as Debian didn’t recognise the built-in Ethernet card and the Xen kernel crashed when doing intensive disk IO I was forced to use CentOS. CentOS 5.1 didn’t start my DomU’s for some reason (which I never diagnosed) but CentOS 5.2 worked perfectly.

Finally I was shocked when I realised that the Dell has no sound hardware! When the CentOS post-install program said that it couldn’t find a sound device I thought that meant it didn’t support the hardware (the sort of thing that sometimes happens when you get a new machine). But it actually has no sound support! It seems really strange that Dell would design a desk-side server (which is quiet) and not include sound support. If nothing else, using something like randomsound to take input from the microphone line as a source of entropy would be useful on servers.

While the seven USB ports initially seemed like a lot, being forced to use them for keyboard, mouse, and sound (if I end up using it on a desktop) means that there would only be four left.

Shared Context and Blogging

One interesting aspect of the TED conference [1] is the fact that they only run one stream. There is one lecture hall with one presentation and everyone sees the same thing. This is considerably different to what seems to be the standard practice for Linux conferences (as implemented by LCA, OLS, and Linux Kongress) where there are three or more lecture halls with talks in progress at any time. At a Linux conference you might meet someone for lunch and start a conversation by asking “did you attend the lecture on X“; as there are more than two lecture halls the answer is most likely to be “no“, which then means that you have to describe the talk in question before talking about what you really want to discuss (such as how a point made in the lecture might impact the work of the people you are talking to). In the not uncommon situation where there is an interesting implication of combining the work described in two lectures it might be necessary to summarise both lectures before describing the implication of combining the work.

Now there are very good reasons for running multiple lecture rooms at Linux conferences. The range of topics is quite large and probably very few delegates will be interested in the majority of the talks. Usually the conference organisers attempt to schedule things to minimise the incidence of people missing talks that interest them; one common way of doing so is to have conference “streams”. Of course when you have, for example, a “networking” stream, a “security” stream, and a “virtualisation” stream then you will have problems when people are interested in the intersection of some of those areas (virtual servers do change things when you are working on network security).

There seem some obvious comparisons between Planet installations (as aggregates of RSS feeds) and conferences (as aggregates of lectures). On Planet Debian [2] there has traditionally been a strong shared context with many blog posts referring to the same topics – where one person’s post has inspired others to write about similar topics. After some discussion (on blogs and by email) it was determined that there would be no policy for Planet Debian and that anyone who doesn’t want to read some of the content should filter the feed. Of course this means that the number of people who read (or at least skim) the entire feed will drop and therefore we lose the shared context.

Planet Linux Australia [3] currently has a discussion in progress about what types of content to aggregate. Michael Davies has just blogged a survey about what types of content to include [4]. I think it’s unfortunate that he decided to name the post after one blogger whose feed is aggregated on that Planet, as that will encourage votes on the specific posts written by that person rather than on the general issue. But I think it’s much better to tailor a Planet to the interests of the people who read it than to include everything and encourage readers to read a sub-set.

When similar issues were in discussion about Planet Debian I wrote about my ideas on the topic [5]. In summary I think that the Gentoo idea of having two Planet installations (one for the content which is most relevant and one for everything that is written by members) is a really good one. It’s also a good thing to have a semi-formal document about the type of content that is expected – this would be useful both for using a limited feed for people who go significantly off-topic and as a guideline for people who want to write posts that will be appreciated by the majority of the readers. Planet Ubuntu has a guideline, but it was not very formal last time I checked.

Finally in regard to short posts, they generally don’t interest me much. If I want to get a list of hot URLs then I could go to any social media site to find some. I write a list post at most once a month, and I generally don’t include a URL in the list unless I have a comment to make about it. I always try to describe each page that I link to in enough detail that if the reader can’t view it then they at least have some idea of what it is about (no “this is cool” or “this sucks” links).