There is a lot of interest in making organisations “green” nowadays, and one issue is how to make the IT industry green. People talk about buying “offsets” for CO2 production, but some of the offset schemes appear to be fraudulent. The best thing to do is to minimise the use of dirty power as much as possible.
The first thing to do is to pay for “green power” (if available) and, if possible, install solar PV systems on building roofs. While the roof space of a modern server room would only supply a small amount of the electricity needed (maybe less than is needed to power the cooling), every little bit helps. The roof space of an office building can supply a significant portion of the electricity needs: two years ago Google started work on installing solar PV panels on the roof of the “Googleplex” [1] with the aim of supplying 30% of the building’s power needs.
For desktop machines a significant amount of power can be saved if they are turned off overnight. For typical office work the desktop machines should be idle most of the time, so if a machine is turned off outside business hours it will use something close to 45/168 of the power that it might otherwise use. Of course this requires that the OS support hibernation (which isn’t supported well enough in Linux for me to want to use it) or that applications can be easily stopped and restarted so that the system can be booted every morning. One particular corner case is that instant-messaging systems need to be server based, with an architecture that supports storing messages on the server (as Jabber does [2]) rather than requiring that users stay connected (as IRC does). There are a variety of programs to proxy the IRC protocol, and using screen on a server to maintain a persistent IRC presence is popular among technical users (for a while I used that at a client site so that I could hibernate the PowerMac I had on my desk when I left the office).
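For reference, here is a minimal sketch of the screen approach, assuming irssi as the IRC client (the session name is arbitrary):
screen -S irc irssi   # on the server: create a session named "irc" running irssi
screen -dr irc        # later, from any ssh login: detach the session elsewhere and reattach it here
Once detached, the desktop can be hibernated or powered off without losing the IRC presence.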
It seems that most recent machines have BIOS support for booting at a pre-set time. This would allow the sys-admin to configure the desktop machines to boot at 8:00AM on every day that the office is open, so most employees would arrive at work to find their computer already booted up and waiting for them. We have to keep in mind that when comparing the minimum wage (about $13 per hour in Australia) with typical electricity costs ($0.14 per kWh – which means that a desktop computer might use about $0.14 of electricity per day) there is no chance of saving money if employee time is wasted. While companies are prepared to lose some money in the process of going green, they want to minimise that loss as much as possible.
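Where fiddling with the BIOS on every machine is awkward, the RTC wake alarm can also be set from Linux; here is a minimal sketch assuming the hardware supports RTC wake alarms (rtcwake is part of util-linux):
# arm the RTC alarm for 8:00 tomorrow morning, then power the machine off
rtcwake -m off -t $(date -d 'tomorrow 08:00' +%s)
Something like this could be run from a shutdown script so that every clean shutdown arms the next morning’s boot.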
The LessWatts.org project, which is dedicated to saving energy on Linux systems, reports that Gigabit Ethernet uses about 2W more power than 100baseT on the same adapter [3]. It seems likely that similar savings can be achieved on other operating systems and with other network hardware. So running at 100baseT speed would not only save about 2W at the desktop end, it would also save about 2W at the switch in the server room and maybe 1W in cooling as well. If you have a 1RU switch with 24 Gig-E ports then that could save 48W if the entire switch ran at 100baseT speed; compared to a modern 1RU server, which might draw a minimum of 200W, that isn’t very significant.
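A quick sketch of forcing a desktop NIC down to 100baseT with ethtool (eth0 is an assumption, and whether the 2W saving applies depends on the adapter):
ethtool eth0                                          # check the current link speed
ethtool -s eth0 speed 100 duplex full autoneg off     # force 100baseT full duplex
The setting is lost on reboot unless it is added to the distribution’s network configuration.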
The choice of server is going to be quite critical to power use. It seems that all vendors are producing machines that consume less power (if only so that they can get more servers installed without adding more air-conditioning), so some effort in assessing power use before purchase could produce some good savings. When it comes time to decommission old servers it is a good idea to measure the power use and decommission the most power hungry ones first whenever convenient. I am not running any P4 systems 24*7 but have a bunch of P3 systems running as servers; this saves me about 40W per machine.
It’s usually the case that the idle power is a significant portion of the maximum power use. In the small amount of testing I’ve done I’ve never found a case where idle power was less than 50% of the maximum power – of course if I spun down a large number of disks when idling this might not be the case. So if you can use one virtual server that’s mostly busy instead of a number of mostly idle servers then you can save a significant amount of power. Before I started using Xen I had quite a number of test and development machines and often left some running idle for weeks (if I was interrupted in the middle of a debugging session it might take some time to get back to it). Now if one of my Xen DomUs doesn’t get used for a few weeks it uses little electricity that wouldn’t otherwise be used. It is also possible to suspend Xen DomUs to disk when they are not being used, but I haven’t tried going that far.
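For anyone who wants to try it, here is a minimal sketch of suspending a DomU to disk and bringing it back, assuming the xm toolstack and a DomU named “test1” (the name and path are placeholders):
xm save test1 /var/lib/xen/save/test1.chk    # suspend the DomU and write its memory image to disk
xm restore /var/lib/xen/save/test1.chk       # later, restore it exactly where it left off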
Xen has a reputation for preventing the use of power saving features in hardware. For a workstation this may be a problem, but for a server that is actually getting used most of the time it should not be an issue. KVM development is apparently making good progress, and KVM does not suffer from any such problems. Of course the down-side to KVM is that it requires an AMD64 (or Intel clone) system with hardware virtualisation, and such systems often aren’t the most energy efficient. A P3 system running Xen will use significantly less power than a Pentium-D running KVM – server consolidation on a P3 server really saves power!
I am unsure of the energy benefits of thin-client computing. I suspect that thin clients can save some energy as the clients take ~30W instead of ~100W so even if a server for a dozen users takes 400W there will still be a net benefit. One of my clients does a lot of thin-client work so I’ll have to measure the electricity use of their systems.
Disks take a significant amount of power. For a desktop system they can be spun down at times (an office machine can be configured so that the disks spin down during a lunch break). This can save 7W per disk – the exact amount depends on the type of disk and the efficiency of the PSU (see the Compaq SFF P3 results and the HP/Compaq Celeron 2.4GHz on my computer power use page [4]). Network booting of diskless workstations could save 7W for the disk (and also reduce the noise, which makes the users happy) but would drive the need for Gigabit Ethernet, which then wastes 4W per machine (2W at each end of the Ethernet cable).
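A sketch of configuring spin-down with hdparm, assuming the disk is /dev/sda (the -S timeout encoding is hdparm’s: values from 1 to 240 are multiples of 5 seconds):
hdparm -S 120 /dev/sda    # spin the disk down after 10 minutes of inactivity (120 * 5 seconds)
hdparm -y /dev/sda        # or force it into standby immediately, e.g. from a lunch-break cron job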
Recently I’ve been reading about the NetApp devices [5]. By all accounts the advanced features of the NetApp devices (which includes their algorithms for the use of NVRAM as write-back cache and the filesystem journaling which allows most writes to be full stripes of the RAID) allow them to deliver performance that is significantly greater than a basic RAID array with a typical filesystem. It seems to me that there is the possibility of using a small number of disks in a NetApp device to replace a larger number of disks that are directly connected to hosts. Therefore use of NetApp devices could save electricity.
Tele-commuting has the potential to save significant amounts of energy in employee travel. A good instant-messaging system such as Jabber could assist tele-commuters (it seems that a Jabber server is required for saving energy in a modern corporate environment).
Have I missed any ways that sys-admins can be involved in saving energy use in a corporation?
Update: Albert pointed out that SSD (Solid State Disks) can save some power. They also reduce the noise of the machine both by removing one moving part and by reducing heat (and therefore operation of the cooling fan). They are smaller than hard disks, but are large enough for an OS to boot from (some companies deliberately only use a small portion of the hard drives in desktop machines to save space on backup tapes). It’s strange that I forgot to mention this as I’m about to buy a laptop with SSD.
I’ve been considering the possibility of using Xen on an ASUS EeePC as a mobile test platform for an Internet service. While the real service uses some heavy hardware it seems that a small laptop could simulate it when running with a small data set (only a few dozen accounts) and everything tuned for small amounts of RAM (small buffers for database servers etc).
According to the Wikipedia page about the EeePC [1], the 70x and 900 versions of the EeePC use a Celeron-M CPU. According to Wikipedia that is based on the Pentium-M (which lacks PAE support and therefore can’t run Xen).
The Fedora Tutorial about the EeePC has a copy of the /proc/cpuinfo data from an EeePC [2] which shows that the model in question (which is not specified) lacks PAE. Are there any 70x or 90x variants that have PAE? Intel sometimes dramatically varies the features within a range of CPUs…
The 901 version and the 1000 series use an Intel “Atom” CPU. According to discussion on the Gentoo Forums some Atom CPUs have the “lm” flag (64bit) but no “vmx” flag for virtualisation [3], which means that they can run Xen paravirtualised but not KVM or hardware virtualisation for Xen. The Atom also has PAE. This is more than adequate.
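Checking this on a running system is easy; a quick sketch (the flags of interest are pae, lm, and vmx or svm):
grep '^flags' /proc/cpuinfo | head -1 | tr ' ' '\n' | grep -x -E 'pae|lm|vmx|svm'   # print whichever of these flags the CPU advertises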
According to the Wikipedia page the Atom comes in both 32bit and 64bit variants [4]. Hopefully the 901 version and the 1000 series EeePC will have the 64bit version.
The 90x versions have support for up to 4G of RAM but the 1000 series is only listed as supporting 2G; hopefully that will be 4G or more (although I wouldn’t be surprised if Intel had a chipset supporting only 4G of address space and PCI reservations limiting the machine to 3G). But even 3G will be enough for a mobile test/development platform, which should make it easier to debug some problems remotely.
The 901 is available in Australia for just under $700. It’s a little more expensive than previous EeePC variants ($500 is a magic number below which things can be purchased with significantly less consideration), but it still might be something that one of my clients will pay for.
The prime aim is to be a mobile sys-admin platform that can be carried anywhere; running a Xen simulation of the target network is an added bonus.
Any suggestions for other laptops that should be considered will be welcome. It needs to be light (1.14Kg for a 901 EeePC is more than I desire), small (a reduced display size is not a problem), and not overly expensive ($700 is more than desired).
Update: JB HiFi is selling the 1000H model [5]. The 1000H has an 80G hard disk and weighs 1.45Kg. The extra 210g and slightly larger size are a down-side, as is the extra ~$50 in price.
A comment was made that OpenVZ could be used. If that avoids the need for PAE then a 702 series would do the job (with some USB flash devices as extras). The 702 is a mere 920g.
Update: This ZDNET review shows that the 901 can only handle 2G of RAM and has an Atom CPU that is only 32bit [6].
There has been a lot of fuss recently about the release of the iPhone [1] in Australia. But I have not been impressed.
I read an interesting post Why I don’t want an iPhone [2] which summarises some of the issues of it not being an open platform (and not having SSH client support). Given all the fuss about iPhones (which have just arrived in Australia) I had been thinking of writing my own post about this, but TK covered most of the issues that matter to me. One other thing I have to mention is the fact that I want a more fully powered PC with me. So even if I had a Green Phone (which doesn’t seem to be on general sale) [3] or OpenMoko [4] I would still want at least a PDA running Familiar and preferably a laptop – I often carry both. A Nokia N8x0 series Internet Tablet [4] would satisfy my PDA needs (and also remove the need to carry an MP3/MP4 player and audio recorder).
When doing serious travelling I carry a laptop, a PDA, and an MP3 player, so all areas of my digital needs are covered better than an iPhone could reasonably manage. Finally, mobile phones tend to not work, or not work well ($1 per minute calls is part of my definition of “not well”), in other countries. While I haven’t been doing a lot of traveling recently I still try to avoid buying things that won’t work in other countries.
I had planned to just mention TK’s post in a links post. But then a client offered to buy me an iPhone. He wants me to be able to carry an ssh client with me most places that I go so that whenever his systems break I can login. Now apart from the lack of ssh client support an iPhone seems ideal. :-#
The cheapest Optus iPhone plan seems to be $19 per month for calls and data (which includes 100M of data) plus $21 per month over 24 months for the iPhone, giving a cost of $40 per month for 100M of data transfer (and a nice phone). There is a plan for a $19 per month iPhone, but that comes with a $19 per month un-capped phone plan and doesn’t sound like a good way of saving $2 per month. The “Three” phone company offers USB 3G modems for $5 per month (on a 24 month contract) and their cheapest plan is $15 per month, which gives you 1GB of data per month and $0.10/M for additional data transfer. So it’s $20 per month for 1G (which requires a laptop) vs $40 per month for 100M.
Three also has a range of phone plans that allow 3G data access over bluetooth to a PC, and it seems that a Nokia N8x0 tablet can be used with that, giving a result of two devices the size of mobile phones. But that costs $20 per month (on top of a regular Three bill) for a plan that offers 500M of data, and it still requires two devices while not giving the full benefits of a PC.
In the past I’ve done a lot of support work with a Nokia Communicator, so I’ve found that anything less than a regular keyboard really slows things down. While an EeePC keyboard is not nearly as good as a full sized keyboard it is significantly better than a touch-screen keyboard on a PDA (IE the Nokia N8x0 or the OpenMoko).
At the moment I’m looking at the option of carrying an EeePC with a USB Internet access device. That will cost $20 per month for net access. The cost of the EeePC is around $300 for a low-end model or about $650 for a 901 series that can run Xen (as noted in my previous post I’m considering the possibilities for having a mobile Xen simulation of a production network [5]). The savings of $20 per month over 24 months will entirely cover the cost of a low-end EeePC (ssh terminal, web browsing, and local storage of documentation) and cover most of the cost of a high-end EeePC. Another possibility to consider is using an old Toshiba Satellite I have hanging around (which I used to use as a mobile SE Linux demonstration machine) for a few months while the price on the EeePC 901 drops (as soon as the 70x series is entirely sold out and the 1000 series is available I expect that the 901 will get a lot cheaper).
I just read an interesting post about proposed new laws in the US prohibiting exposing underpants [1]. This is not a new thing and is part of a debate that has been taking place in many countries since saggy “hip hop” pants became a trend.
The first thing that occurs to me is to wonder what the difference really is between underpants and bathers. It seems to me that bathers are simply underpants that don’t turn transparent when they get wet (and which are made of materials that don’t degrade easily when exposed to sea water, UV light, and chlorinated water from swimming pools). So it seems that unless there is some clear legal difference between bathers and underpants such laws will not be effective. Could an underwear company produce products that are essentially the same as its regular products but which say “swimming attire” on the label to allow its customers to escape silly laws? In fact why not label all underwear as “swimming attire” just in case?
Would the prudes who object to a glimpse of underwear want police to go checking the labels of underwear to determine whether it is permitted to be seen? The fascist trend in first-world countries is already quite bad; I don’t think we want to add underpants inspection to the list of police powers. Also it should be noted that a small portion of police officers are corrupt, and the idea of corrupt cops inspecting underpants is really not appealing…
It would be possible to define any clothes worn under other clothes as “underwear”, but this has problems too. For example, when I was younger I used to often wear jeans over my bathers on the way to or from a beach (often there were no adequate facilities for changing clothes near a beach). If I wore jeans over my bathers while walking to a beach could I get booked for showing a small section of my bathers over the top of my jeans – and then legally display my bathers entirely while swimming? Of course there are legal nude beaches in many localities, but blurring the distinction between a regular beach and a nude beach by permitting activity that would be “indecent exposure” on all beaches seems likely to have results that would not make the prudes happy.
The next logical implication of laws against exposing underpants is that they encourage wearing smaller underpants. My experience is that it is impossible to wear boxer-shorts without them being exposed above the top of my jeans. Should I be essentially prohibited from wearing boxer shorts because of the risk that if my shirt is not tucked in then someone might catch a glimpse of my underwear?
Now if “underwear” was defined to be “anything worn beneath the outer layer of clothes” then what about the situation of having multiple layers of clothes? For example, when an athlete wears a track-suit over shorts, are those shorts “underwear”? If so do they cease being “underwear” once the track-suit is removed? Is there a race condition [2] where an athlete can wear shorts on the track and a track-suit on the bench, but has to remove the track-suit as fast as possible because they are committing indecent exposure while removing it?
If underwear is defined as being the innermost layer of clothing, then what of the practice of “free-balling” (a man wearing a track-suit with no underpants) and the Scottish tradition of “nothing is worn under the kilt”? Can a track-suit or kilt be defined as underwear? If so, how would that be enforced – would police look up the kilts of all men to ensure that the kilt is not the underwear?
As for “plumber’s crack” the only solution seems to be to compel plumbers to wear overalls. Of course then plumbers would increase their rates to cover the expense and inconvenience involved in a forced change of attire. I think that most people would prefer to hire a cheap plumber who shows some “crack” than an expensive plumber.
I have just uploaded new SE Linux policy packages for Debian/Unstable which will go into Lenny (provided that the FTP masters approve the new packages in time).
The big change is that there are no longer separate packages for strict and targeted policies. There is now a package named selinux-policy-default which has the features of both strict and targeted. When you install it you get the features of targeted. If you want the strict features then you need to run the following commands as root:
semanage login -m -s user_u __default__   # map logins without an explicit entry to the constrained user_u identity
semanage login -m -s root root            # map the root login to the root identity
Then you can logout and login and you get the main benefit of the strict policy (users being constrained). IE you can convert from targeted to strict without a reboot! The above only changes the access for user login sessions (and cron jobs). To fully convert to the strict policy you need to remove the unconfined module with the command “semodule -r unconfined“; currently that results in a system that doesn’t boot – I’m working on this and will have it fixed before Lenny. It’s also possible to have some users unconfined and some restricted in the way that the strict policy always did.
When running in the full strict configuration you need to run the command “newrole -r sysadm_r” immediately after logging in as root. When you login you default to staff_r which doesn’t give you the access needed to perform routine sys-admin tasks.
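For reference, a small sketch of checking where a login session ended up (these are standard SE Linux tools, nothing specific to the new packages):
id -Z                   # show the SE Linux context of the current shell
semanage login -l       # list the login to SE Linux user mappings set above
newrole -r sysadm_r     # switch from staff_r to sysadm_r for sys-admin work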
Due to the change in the function of the policy packages (in terms of not having a strict package) it made sense to revise the naming (Fedora 9 has a package named selinux-policy-targeted which also provides the strict configuration – I don’t want to do that and don’t have as much legacy as Fedora). This is why I decided to not have package names that include the word “policy” twice. Of course all policy packages get new names, but the ones that matter needed new names anyway.
Another new feature is the package selinux-policy-mls; as the name suggests this implements Multi Level Security [1]. I don’t expect that the MLS policy will boot in enforcing mode in a regular configuration at this time (you could probably hack it to boot in permissive mode and switch to enforcing mode just before it starts networking). I uploaded it in this state so that people can start testing it (there is a lot of testing that you can do in permissive mode) and so that it can get added to the package list in time for Lenny. I expect that I’ll have it booting shortly (it should not be much more difficult than getting the strict configuration booting).
In terms of the use of MLS, I don’t expect that anyone will want to pay the money needed for LSPP [2] certification. NB the Wikipedia page about LSPP really needs some work.
I believe that the main benefit for having MLS in Debian is for the use of students. I periodically get requests from students for advice on how to get a job related to military computer security. Probably the best advice I can offer is to visit the career section of an agency from your government that works on computer security issues, for US readers the NSA careers page is here [3]. The second best advice I can offer is to work on MLS support in your favourite free OS. Not only will you learn about technology that is used in military systems but you will also learn a lot about how your OS works as MLS breaks things. ;)
Finally I’d like to thank Manoj for all his work. For a while I didn’t have time to do much work on SE Linux and he did a lot of good work. Recently he seems to have been busy on other things and I’ve had a little more time so I’m taking over some of it.
Does a GPG pass-phrase provide a real benefit to the majority of users?
It seems that there will be the following categories of attack which result in stealing the secret-key data:
1. User-space compromise of an account (EG exploiting a bug in a web browser or IRC client).
2. System compromise (EG compromising a local account and exploiting a kernel vulnerability to get root access).
3. Theft of the computer system while powered down, where the system was configured to not use swap or to encrypt the swap space with a random key at boot time.
4. Theft of a computer system while running, or of one that did not have encrypted swap.
5. Theft of unencrypted backup media.
Category 1 will permit an attacker to monitor user processes and intercept one that asks for a GPG pass-phrase as well as to copy the secret key. Category 2 will do the same but for all users on the system.
Category 3 will give the potential for stealing the private key (if it’s not encrypted) but no direct potential for getting the pass-phrase.
Category 4 has the potential for copying a pass-phrase from memory or swap. I am inclined to trust Werner Koch (and anyone else who submitted code to the GPG project) to have written code to correctly lock memory and scrub pass-phrase data and decrypted private key data from memory after use. But I really doubt the ability of most people who write code to interface with GPG to do the same. So every time that a GUI program prompts for a GPG pass-phrase I think that there is the potential for it to be stored in swap or to remain indefinitely in RAM. Therefore stealing a machine that does not have its swap-space encrypted with a random key (which is the most practical way of encrypting swap) or stealing a running machine (as mentioned in a previous post [1]) can potentially grant a hostile party access to the pass-phrase.
So it seems to me that out of all the possible ways of getting access to a GPG private key, the only ones where a pass-phrase is going to really do some good are categories 3 and 5. While it’s good to protect against those situations, it seems to me that the greatest risk to a GPG key is from category 1, with category 2 following close behind.
I previously wrote about the slow progress towards using SE Linux and GPG code changes to make it more difficult to steal the secret key [2] – something that I’ve been occasionally working on over the last 6 years.
Now it seems to me that the same benefits can and should be made available to people who don’t use SE Linux. If a system directory such as /var/spool/gpg was mode 1770 then gpg could be setgid to group “gpg” so that it could create and access secret keys for users under /var/spool/gpg while the users in question could not directly access them. Then the sys-admin would be responsible for backing up GPG keys. Of course it would probably be ideal to have an option as to whether a new secret key would be created in the system spool or in the user home directory, and migrating the key from the user home directory to the system spool would be supported (but not migrating it back).
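To make the proposal concrete, here is a hypothetical sketch of what the permission layout might look like (none of this is supported by current GPG, and the setgid step would only be useful with matching GPG code changes):
groupadd gpg                   # group that owns the system key spool (hypothetical)
mkdir -p /var/spool/gpg
chown root:gpg /var/spool/gpg
chmod 1770 /var/spool/gpg      # root and group gpg have access, other users have none
chgrp gpg /usr/bin/gpg
chmod g+s /usr/bin/gpg         # setgid gpg, so a modified gpg could manage per-user keys under the spool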
This would mean that an attacker who compromised a local user account (maybe through a vulnerability in a web browser or MUA) would not be able to get the GPG secret key. They could probably get the pass-phrase by ptracing the MUA (or some other GUI process that calls GPG), but without the secret key itself that would not do as much good – of course once they had the pass-phrase and local access they could use the machine to sign and decrypt data, which would still be a bad thing. But it would not have the same scope as stealing the secret key and the pass-phrase.
I look forward to reading comments on this post.
Someone asked on a mailing list about the issues related to whether to use a label, UUID, or device name for /etc/fstab.
The first thing to consider is where the names come from. The UUID is assigned automatically by mkfs or mkswap, so you have to discover it after the filesystem or swap space has been made (or note it during the mkfs/mkswap process). For the ext2/3 filesystems the command “tune2fs -l DEVICE” will display the UUID and label (strangely mke2fs uses the term “label” while the output of tune2fs uses the term “volume name“). For a swap space I don’t know of any tool that can extract the UUID and name. On Debian (Etch and Unstable) the file command does not display the UUID for swap spaces or ext2/3 filesystems and does not display the label for ext2/3 filesystems. After I complete this blog post I will file a bug report.
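For ext2/3, a quick sketch of pulling out both values, assuming the filesystem is on /dev/sda1:
tune2fs -l /dev/sda1 | grep -E -i 'volume name|UUID'   # the label is reported as "Filesystem volume name"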
If you are using a version of Debian earlier than Lenny (or a version of Unstable without the fix for this bug) then you will not be able to easily determine the label and UUID of a filesystem or swap space. The inconvenience of determining the UUID and label is a reason for not using them in /etc/fstab (keep in mind that sys-admin work sometimes needs to be done at 3AM).
One problem with mounting by UUID or label is that it doesn’t work well with snapshots and block device backups. If you have a live filesystem on /dev/sdc and an image from a backup on /dev/sdd then there is a lot of potential for excitement when mounting by UUID or label. Snapshots can be made by a volume manager (such as LVM), a SAN, or an iSCSI server.
Another problem is that if a file-based backup is made (IE tar or cpio) then you lose the UUID and label. tune2fs allows setting the UUID, but that seems like a potential recipe for disaster. So if you mount by UUID then you may need to change /etc/fstab after doing a full filesystem restore from a file-based backup; this is not impossible but might not be what you desire. Setting the label is not difficult, but it may be inconvenient.
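For comparison, here are the three styles of /etc/fstab entry for the same root filesystem (the label and UUID are made up, and only one of the lines would actually be used):
/dev/sda1                                  /  ext3  defaults,errors=remount-ro  0  1
LABEL=root                                 /  ext3  defaults,errors=remount-ro  0  1
UUID=b3e7a8a2-7f2e-4f0b-9c3d-1a2b3c4d5e6f  /  ext3  defaults,errors=remount-ro  0  1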
When using old-style IDE disks the device names were of the form /dev/hda for the first disk on the first controller (cable) and /dev/hdd for the second disk on the second controller. This was quite unambiguous, adding an extra disk was never going to change the naming.
With SCSI disks the naming issue has always been more complex, and which device gets the name /dev/sda was determined by the order in which the SCSI HAs were discovered. So if a SCSI HA which had no disks attached suddenly had a disk installed then the naming of all the other disks would change on the next boot! To make things more exciting Fedora 9 is using the same naming scheme for IDE devices as for SCSI devices; I expect that other distributions will follow soon, and then even with IDE disks permanent names will not be available.
In this situation the use of UUIDs or labels is required when mounting partitions. However a common trend is towards using LVM for all storage, in which case LVM manages labels and UUIDs internally (with some excitement if you do a block device backup of an LVM PV). LV names such as /dev/vg0/root then become persistent and there is no need for mounting via UUID or label.
The most difficult problem then becomes the situation where a FC SAN has the ability to create snapshots and make them visible to the same machine. UUID or label based mounting won’t work unless you can change them when creating the snapshot (which is not impossible but is rather difficult when you use a Windows GUI to create snapshots on a FC SAN for use by Linux systems). I have had some interesting challenges with this in the past when using a FC based SAN with Linux blade servers, and I never devised a good solution.
When using iSCSI I expect that it would be possible to force an association between SCSI disk naming and names on the server, but I’ve never had time to test it out.
Update: I have submitted Debian bug #489865 with a suggested change to the magic database.
Below are /etc/magic entries for displaying the UUID and label on swap spaces and ext2/3 filesystems:
I have just been running some ZCAV tests on some new supposedly 1TB disks (10^12 bytes is about 931*2^30, so it’s about 931G according to almost everyone in the computer industry who doesn’t work for a hard disk vendor).
I’ve added a new graph to my ZCAV results page [1] with the results.
One interesting thing that I discovered is that the faster disks can deliver contiguous data at a speed of more than 110MB/s; previously the best I’d seen from a single disk was about 90MB/s. When I first wrote ZCAV the best disks I had to test with all had a maximum speed of about 10MB/s, so KB/s was a reasonable unit. Now I plan to change the units to MB/s to make it easier to read the graphs. Of course it’s not that difficult to munge the data before graphing it, but I think that it will give a better result for most users if I just change the units.
The next interesting thing I discovered is that GNUplot defaults to using exponential notation for values of 1,000,000 (or 1e+06) and above. I’m sure that I could override that, but such large numbers would still be difficult for users to read. So I guess it’s time to change the units to GB.
I idly considered using the hard drive manufacturer’s definition of GB so that a 1TB disk would actually display as having 1000GB (the Wikipedia page for Gibibyte has the different definitions [2]). But of course having decimal and binary prefixes used in the X and Y axis of a graph would be a horror. Also the block and chunk sizes used have to be multiples of a reasonably large power of two (at least 2^14) to get reasonable performance from the OS.
The next implication of this is that it’s a bad idea to have a default block size that is not a power of two. The previous block sizes were 100M and 200M (for 1.0x and 1.9x branches respectively). Expressing these as 0.0976G and 0.1953G respectively would not be user-friendly. So I’m currently planning on 0.25G as the block size for both branches.
While changing the format it makes sense to change as many things as possible at once, to reduce the number of incompatible file formats that are out there. The next thing I’m considering is the precision. In the past the speed in K/s was an integer. Obviously an integer for the speed in M/s is not going to work well for some of the slower devices that are still in use (EG a 4* CD-ROM drive maxes out at 600KB/s). Of course the accuracy of this is determined by the accuracy of the system clock. The gettimeofday() system call returns the time in micro-seconds, but I expect that most systems don’t approach micro-second accuracy, and it’s not worth reporting with a precision that is greater than the accuracy. Then there’s no point in making the precision of the speed any greater than the precision of the time.
Things were easier with the Bonnie++ program when I just reduced the precision as needed to fit in an 80 column display. ;)
Finally I ran my tests on my new Dell T105 system. While I didn’t get time to do as many tests as I desired before putting the machine in production, I did get to do a quick test of two disks running at full speed. Previously when testing desktop systems I had not found a machine that could extract full performance from two disks (of the same age as the machine) simultaneously. While the Dell T105 is a server-class system it is a rather low-end server, and I had anticipated that it would lack performance in this regard. I was pleased to note that I could run both 1TB disks at full speed at the same time. I didn’t get a chance to test three or four disks though (maybe during scheduled down-time in the future).
Below is a strange Google advert that appeared on my blog. It appeared when I did a search on my blog, and it also appears on my post about perpetual motion. It seems quite strange that they are advertising their product as a scam. It’s accurate, but I can’t imagine it helping sales.

I have just observed demonstration units of the V-Smile system [1]. They have “educational games” aimed at ages 3-5, 4-7, and some similar ranges. The first thing I noticed was that the children who were able to correctly play the games were a lot older than the designated ages. For example 10yo children were playing the Scooby-Doo addition game (supposedly teaching children to add single-digit numbers) and apparently finding the non-addition part of the game challenging (I tried it myself and found catching flying hamburgers while dodging birds to be challenging enough that it was difficult to find the numbers). For children who were in the suggested age-range (and a suitable age for learning the basic lessons contained in the games) the only ones who actually managed to achieve the goals were the ones who were heavily directed by their father. So my observation is that the games will either be used by children who are too old for the basic lessons or be entirely directed by parents (I didn’t observe any mother giving the amount of assistance necessary for a 5yo to complete the games, but assume that it happens sometimes).
I doubt that there are many children who have the coordination needed for a platform game who have not yet learned to recognise printed letters (as supposedly taught in the Winnie the Pooh game). The Thomas the Tank Engine spelling game had a UI that was strange to say the least (using a joystick not to indicate which direction to go but instead to move a cursor between possible tracks) and I doubt that it does any good at teaching letter recognition. There was also a game that involved using a stylus for tracing the outline of a letter, as I had great difficulty in doing this (due to the poor interface and the low resolution of the touch-pad) it seems very unlikely that a young child who is just learning to write letters would gain anything from it. Strangely there was a game that involved using the touch-pad to indicate matching colors. Recognising matching colors is even easier than recognising letters and I don’t think that a child who can’t recognise the colors would be able to manage the touch-pad.
The V-Smile system seems to primarily consist of a console designed for connection to a TV but also has hand-held units that take the same cartridges. The same company produces “laptops” which sell for $50 and have a very low resolution screen and only the most basic functionality (and presumably other useless games).
Sometimes the old-fashioned methods are best. It seems that crayons are among the best tools for teaching letter recognition and writing.
But if there is a desire to use a computer for teaching, then a regular PC or laptop should do. Letter recognition can be taught by reading the text menus needed to launch games. The variety of computer poker games can be used for recognising matching colors and numbers as can the Mahjong series of games. Counting can be taught through the patience games, and the GIMP can be used for teaching computer graphics and general control of the mouse and the GUI. NB I’m not advocating that all education be done on a computer, merely noting the fact that it can be done better with free software on an open platform than on the proprietary systems which are supposedly designed for education.
Finally, with a PC children can take it apart! I believe that an important part of learning comes from disassembling and re-building toys. While it’s obvious that a PC is not going to compare with a Lego set, I think it’s good for children (and adults) to know that a computer is not a magic box; it’s a machine that they can understand (to a limited extent) and which is comprised of a number of parts that they could also understand if they wanted to learn the details. Gever Tulley advocates such disassembly of household items in his TED talk “5 dangerous things you should let your kids do” [2]. Gever runs The Tinkering School [3] which teaches young children how to make and break things.
Finally I just checked some auction sites and noticed that I can get reasonably new second-hand laptops for less than $300. A laptop for $250 running Linux should not be much more expensive than a proprietary laptop that starts at $50 once you include the price of all the extra games. For an older laptop (P3) the price is as low as $100 on an auction with an hour to go. Then of course for really cheap laptops you would buy from a company that is getting new machines for their staff. It’s not uncommon for companies to sell old laptops to employees for $50 each. At a recent LUG meeting I gave away a Thinkpad with a 233MHz Pentium-MMX CPU, 96M of RAM, and a 800*600 color display – by most objective criteria such a machine would be much more capable than one of those kids computers (either V-Smile or a competitor).
Of course the OLPC [4] is the ideal solution to such problems. It’s a pity that they are not generally available. I have previously written about the planned design for future OLPC machines [5] which makes it a desirable machine for my own personal use.