Christmas Toys for Children

It’s almost Christmas and time to buy scientific toys for children. Here are some random ideas; of course it’s probably too late for the online stores to deliver – but there’s always next year.

The MikroKopter is a kit helicopter based on four propellers (two rotating in each direction) and is capable of semi-autonomous flight while carrying a decent payload (such as a digital camera) [1]. The parts cost over €700 and the skill involved in assembling it is significant, but it would be an excellent present for anyone who is 14+. Of course the type of helicopter described in my previous post [2] is a much more affordable present and can be used by a young child.

The scitoys.com site has a good range of scientific toys for sale [3], which are packaged as kits to be assembled.

The United Nuclear site has a huge range of chemical supplies and related things [4] that can be used for making some great science experiments and toys. They also have really detailed information about the potential hazards of the things that they sell (not that I was planning to buy Cesium or Uranium anyway). If buying exotic alloys elsewhere it’s probably a good idea to check the United Nuclear site for the hazard information (not all sites provide as much information as they should).

Professor Bunsen is an Australian site selling similar products to SciToys [5]. The range is slightly different, and if you are in Australia then it’s safer to buy locally – there is a significant amount of stuff on the United Nuclear site that would be unlikely to pass Australian customs.

The Spirograph is a good way to introduce mathematical concepts to children [6]; it would probably be a good present for children whose parents are not interested in maths and science.

Lego [7] is always good, but there are also other similar products to consider. Meccano [8] is good for older children, but the ranges for younger children have some deficiencies (the 2+ stuff requires the finger strength of an adult to assemble all parts).

The Italian company Quercetti has some great products [9]. Unfortunately their web site is only in Italian and they have no links to sites in other countries/languages (they do have the names of some distributors who may have web sites). Their products include a gear set (suitable for teaching children as young as 1yo) and a model car with an engine that has pistons moving inside a clear plastic case, a two-speed gearbox, and a fully functional differential (designed for 2yo+).

For more hands-on tasks that require supervision (not something that can go under a tree) one good option for ages 3+ is to disassemble a variety of computers and computer parts. CD-ROM drives are good because you can connect a 9V battery to the head assembly motor or the tray eject motor to make it work. Hard disks have insanely strong magnets inside them – don’t give two such magnets to a child.

A multi-meter is a great educational toy that can be used for many great experiments (as well as for practical tasks such as determining which battery of a set is flat) [10]. The parts from a hard disk can be used to demonstrate how a generator works (the mechanism to move the heads provides you with strong magnets and a coil that fits them). Note that an analogue meter is needed for such experiments as the amount of electricity generated is small and AC – a digital meter will average it out to zero (at least for the more affordable meters that I have used). It’s probably best to own both a digital meter and an analogue one; the minimum age for owning such a meter is probably about 10, and the minimum age for using one with supervision is about 3.

An oscilloscope is a great educational toy [11]; unfortunately they are quite expensive (eBay seems to have nothing below about $450). They can be used for all sorts of fun tasks such as measuring the speed of sound. The Wikipedia page notes that you can get PC based Oscilloscopes (PCO) which are cheaper. I wonder if they have Linux support for such things…

The OLPC is a great computer for kids in developing countries [12]. They are now available in Australia [13].

For most children in first-world countries a second-hand laptop of a more traditional design is probably a better option. There are a significant number of old laptops gathering dust which can easily have Linux installed with a variety of educational software. Buying an OLPC on the “give one get one” deal costs $400US plus tax and shipping, while a second-hand laptop can be purchased for significantly less than that. While giving an OLPC to some random needy child is a good thing, the person who gives a laptop locally is probably going to provide support for it, so there are some benefits to giving a regular laptop.

Conferences often have bags of random junk to give out to delegates, and trade shows always have lots of little toys with company logos on them. Such things are usually of little use – but children like them. Also the trinkets that computer companies give away are often educational. If you have a cupboard filled with such things then unloading them on some children is a good idea – of course you have to make sure that anything you give to young children can’t be swallowed and has no sharp points.

Please Turn off Your Spam Protection

Hi, I’d like to send an email from a small domain that you’ve never heard of or from a big ISP that’s known for being slack about spam (*), but I can’t send the mail to you because of your anti-spam measures. I think that this is unfair, it’s discrimination, and you are cutting off your nose to spite your face in rejecting my mail.
So please reconfigure your mail server now and accept more spam in your inbox, my message is important enough to justify the extra time you will spend manually deleting mail and the risk of accidentally deleting legitimate mail while deleting heaps of spam (I am in fact more important than you).
By not accepting my mail you are being an asshole. I only receive a dozen spam messages a day and I don’t mind it. Without even knowing you or having bothered to do a web search to see how well your email address is known, I’m sure that you don’t receive any more spam than me and therefore you too can turn off most anti-spam measures and manually sort through the spam.
You don’t really have a problem with spam, you are just paranoid, I’m sure that you installed your anti-spam measures before receiving any spam and then never bothered to check the hit rates.
My sys-admin knows that one of the DNSBLs has an old entry from when his server was broken, but he won’t request that it be removed – so you can change your server because my sys-admin doesn’t want to click on a URL that you sent him.
The RFC-ignorant.org service is used and run by ignorant people – I know this without even reading their web site to discover how it works.

The above is a summary of a number of complaints that I have received about my anti-spam measures. I’ve paraphrased them so that they make sense; I have not actually had someone directly say “I’m more important than you so you should just accept more spam”, but the implication is quite clear.

Now there are some legitimate reasons for requesting that anti-spam measures be reduced. In the distant past almost everyone had working reverse DNS entries which matched the forward entries and it was common to reject mail from systems that didn’t have valid DNS. Nowadays there are many big ISPs that delegate IP address space without permitting reverse DNS entries, and there are companies that have one department in charge of IP addresses (who don’t have a clue about reverse DNS) and another department running mail servers (who are clueful). So the environment has changed to make reverse DNS checks a non-viable anti-spam measure. Requesting that people remove such checks is reasonable.
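
For readers who haven’t run a mail server: the check being discussed amounts to forward-confirmed reverse DNS – resolve the connecting IP address to a name and then check that the name resolves back to the same address. A minimal sketch in Python (the example address is arbitrary, and this is an illustration of the check rather than anything taken from my mail server configuration):

```python
import socket

def has_fcrdns(ip):
    """Return True if ip has a PTR record whose name resolves back to ip
    (forward-confirmed reverse DNS), False otherwise."""
    try:
        name = socket.gethostbyaddr(ip)[0]        # reverse (PTR) lookup
        addrs = socket.getaddrinfo(name, None)    # forward lookup of that name
    except OSError:
        return False                              # no PTR record, or the name doesn't resolve
    return ip in {info[4][0] for info in addrs}

# A server applying this check would reject or defer mail from hosts where it fails.
print(has_fcrdns("8.8.8.8"))                      # arbitrary example address
```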

Anti-spam measures that attack innocent third parties are bad. Sending “warnings” about viruses has made no sense for many years as all modern viruses fake the sender address; an employee of one company once admitted to me that they were sending out anti-virus “warning” messages as a way of sending unsolicited advertising to random people (I reported them as a spam source). Some time ago on a routine upgrade of ClamAV I accidentally copied in a default configuration file that made it send such warnings – I was grateful when someone informed me of my mistake so that I could fix it. Challenge-response is another technology that causes harm to others. I think it makes a lot of sense for mailing lists (every modern list server will confirm subscription requests), even though it does result in sending unwanted mail to innocent third parties (every time a new virus becomes prevalent I receive a number of confirmation messages from list servers). It’s not something that I will use on a regular email account, although I am prepared to do paid work implementing CR for other people.

Requesting that manually implemented blocks be removed is quite reasonable. Occasionally I find that mail from one of my servers is blocked because a previous owner of the IP address space did bad things. In such a situation it is quite reasonable to provide an assurance that the new owner takes abuse issues seriously and to request that the block be removed.

Requesting that I make any change to my system without making a minimal effort to get your broken mail server fixed is totally unreasonable. If the system administrator is not prepared to click on a URL to get their system removed from a black-list or if the user is unwilling to report the problem to the sysadmin then I will probably be unwilling to make any change to my system. The only exceptions to this rule are for clients, colleagues, and people who use mail services that are large and unresponsive to users (i.e. the users don’t directly pay). I recently made a white-list entry for a large European ISP that is used by a Debian Developer for their work; the ISP is known to be unresponsive to requests and mail related to Debian work is important to me.

One thing I am planning to do is to document my anti-spam measures and then allow people the opportunity of suggesting new anti-spam measures that I haven’t tried yet if they want me to turn off any of my current protections.

(*) I’m not aware of any big ISP that takes strong measures against spamming customers and customers whose computers are trojaned. I am aware of one having done it in the past, but I suspect that they may have ceased doing so after their brush with bankruptcy. I suspect that many ISPs simply rate-limit their customers’ connections to the outbound mail relays and hope that they don’t get enough infected customers at any time to get themselves listed as a spam source.

Toy Helicopter

[Photo: toy helicopter in front of SE Linux mug]
I have just bought myself a toy helicopter. I had been tempted to buy one for a while and when I saw them on sale for $30 I couldn’t resist.

My helicopter is model FJ-702 from Flyor; it is controlled by infra-red and is designed for indoor use only. It seems that the trick to flying one is to control the rate of ascent and descent. If the helicopter rises too fast then it may bounce off the ceiling which results in it swaying uncontrollably and crash-landing. If it is allowed to descend too fast then it becomes impossible to slow the rate of descent; I suspect that this is the settling with power [2] problem that is documented in Wikipedia. The helicopter is very fragile – I broke one of the skids and part of the tail assembly before I learned how to control it properly. Probably the main thing to look for when buying a model helicopter is a solid design – some time after buying (and breaking) my helicopter I visited the shop which sold it and heard the owner advising other customers to buy the $45 model which is apparently more solid.

It seems that an ideal design would be a frame made of spring-steel (not to make it springy but to avoid it breaking when it hits something). I recommend flying in a room with a carpeted floor, as bouncing off a solid surface such as a wood floor will break a helicopter.

Controlling a helicopter is really difficult. The models that I have tried and seen demonstrated all have serious problems with unwanted rotation. My helicopter and the others I have seen use coaxial rotors to counteract rotation without needing a tail rotor. According to the Wikipedia page a lot of energy is used by a tail rotor [1]; as there has obviously been difficulty in designing the helicopter with adequate power (given the light and weak frame and the short battery life) it seems that they avoided the tail rotor design to save energy. It’s a pity that the designers didn’t skip the flashing LEDs etc. instead.

One strange thing is that one pair of blades can have their angle changed (which appears to be similar to the “semirigid” design shown on the Wikipedia page). I’m not sure how increasing the angle of one blade while simultaneously decreasing the angle of its pair will do any good. I expect that this has something to do with the fact that the helicopter will rotate at different rates when under different amounts of vertical thrust. This incidentally makes it almost impossible to manoeuvre the craft. It has a tail rotor on a vertical axis to control forward and reverse movements, but the extreme difficulty in keeping it facing in one direction makes this almost useless.

I wonder what the minimum size is at which a gyro-stabiliser becomes practical. But as Wikipedia doesn’t document the existence of an autopilot for full-size helicopters, the chance of getting one for a toy is small.

In summary, while I have had $30 of fun, I think that a more solid helicopter would be a better investment.

More about Australian Internet Censorship

As the Australian government wants to show how well they understand technology, they have started a blog about the “Digital Economy” [1]. So far they have hundreds of comments, most of which just tell them that their censorship ideas are wrong.

In what may be related news, Barack Obama has announced details of some of his plans [2]. He will spend money on improving net access (something that the Australian government could learn from) and on improving schools (which will probably be about as effective as putting lipstick on a pig). I really hope that we don’t have someone in his administration deciding that improving schools requires censoring the Internet for the entire population (people tend to turn their brains off when it’s time to think about the children). He is also allocating money to road building (a stupid idea when cars are becoming increasingly expensive and world fuel supplies are running out – he should be building train and tram lines). But improving the energy efficiency of federal buildings is a really good idea, as it will lead the development of technology that everyone can use to increase efficiency. He also wants to “modernise” the health-care system by moving to electronic medical records – this seems unlikely, but I guess that all spending on IT is somehow good for those of us who are involved in the computer industry. One of his advisors has realised that there are economic benefits to really fixing the health-care system, so there is some hope that it will get fixed [3].

The FLOSS Manuals project has released a document about circumventing censorship systems [4], I expect that many people will be using them before the government even gets their Chinese-style filter installed (if they ever do).

New version of Bonnie++ and Violin Memory

I have just released version 1.03e of my Bonnie++ benchmark [1]. The only change is support for direct IO in Bonnie++ (via the -D command-line parameter). The patch for this was written by Dave Murch of Violin Memory [2]. Violin specialise in 2RU storage servers based on DRAM and/or Flash storage. One of their products is designed to handle a sustained load of 100,000 write IOPS (in 4K blocks) and 200,000 read IOPS for its 10 year life (but it’s not clear whether you could do 100,000 writes AND 200,000 reads in a second). The only pricing information that they have online is a claim that flash costs less than $50 per gig. While that would be quite affordable for dozens of gigs and not really expensive for hundreds of gigs, they are discussing a device with 4TB capacity, so it sounds rather expensive – but of course it would be a lot cheaper than using hard disks if you need that combination of capacity and performance.
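
For anyone unfamiliar with the term, direct IO on Linux means opening a file with the O_DIRECT flag so that reads and writes bypass the page cache and go to the device – which is what you want when benchmarking storage hardware rather than the kernel’s caching (I assume that is what the -D option uses underneath). A rough sketch of the idea, with a made-up file name; note that O_DIRECT requires aligned, block-sized buffers and isn’t supported on tmpfs:

```python
import mmap
import os

PATH = "/var/tmp/directio-demo"   # made-up path; must be on a disk-backed filesystem
BLOCK = 4096                      # O_DIRECT needs block-aligned, block-sized transfers

# An anonymous mmap gives a page-aligned buffer, which satisfies the alignment rules.
buf = mmap.mmap(-1, BLOCK)
buf.write(b"\0" * BLOCK)

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o600)
try:
    os.write(fd, buf)             # this write bypasses the page cache
finally:
    os.close(fd)
    os.unlink(PATH)
```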

I wonder how much benefit you would get from using a Violin device to manage the journals for 100 servers in a data center. It seems that 1000 writes per second is near the upper end of the capacity of a 2RU server for many common work-loads; this is of course just a rough estimation based on observations of some servers that I run. If the main storage was on a SAN then using data journaling and putting the journals on a Violin device seems likely to improve latency (data is committed faster and the application can report success to the client sooner) while also reducing the load on the SAN disks (which are really expensive).
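
As a rough sanity check on the idea, using the vendor figures quoted above and my estimate of per-server journal write rates:

```python
violin_write_iops = 100000      # rated sustained 4K write IOPS from the figures above
writes_per_server = 1000        # rough per-server journal write estimate from above
block_kb = 4

print(violin_write_iops // writes_per_server)        # 100 servers' journals per device
print(violin_write_iops * block_kb // 1024, "MB/s")  # ~390 MB/s of 4K writes at that rate
```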

Now given that their price point is less than $50 per gig, it seems that a virtual hosting provider could provide really fast storage to their customers for a quite affordable price. $5 per month per gig for flash storage in a virtual hosting environment would be an attractive option for many people. Currently if you have a small service that you want hosted then a virtual server is the best way to do it, and as most providers offer little information on the disk IO capacity of their services it seems quite unlikely that anyone has taken any serious steps to prevent high load from one customer from degrading the performance of the rest. With flash storage you not only get a much higher number of writes per second, but one customer writing data won’t seriously impact read speed for other customers (with hard drives one process that does a lot of writes can cripple the performance of processes that do reads).

The experimental versions of Bonnie++ have better support for testing some of these usage scenarios. One new feature is measuring the worst-case latency of all operations in each section of the test run. I will soon release Bonnie++ version 1.99 which includes direct IO support, it should show some significant benefits for all usage cases involving Violin devices, ZFS (when configured with multiple types of storage hardware), NetApp Filers, and other advanced storage options.

For a while I have been dithering about the exact feature list of Bonnie++ 2.x. After some pressure from a contributor to the OpenSolaris project I have decided to freeze the feature list at the current 1.94 level plus direct IO support. This doesn’t mean that I will stop adding new features in the 2.0x branch, but I will avoid doing anything that can change the results. So in future, benchmark results made with Bonnie++ version 1.94 can be directly compared to results made with version 2.0 and above. There is one minor issue: new versions of GCC have in the past made differences to some of the benchmark results (the per-character IO test was the main one) – but that’s not my problem. As far as I am concerned Bonnie++ benchmarks everything from the compiler to the mass storage device in terms of disk IO performance. If you compare two systems with different kernels, different versions of GCC, or other differences then it’s up to you to make appropriate notes of what was changed.

This means that the OpenSolaris people can now cease using the 1.0x branch of Bonnie++, and other distributions can do the same if they wish. I have just uploaded version 1.03e to Debian and will request that it goes in Lenny – I believe that it is way too late to put 1.9x in Lenny. But once Lenny is released I will upload version 2.00 to Debian/Unstable and that will be the only version supported in Debian after that time.

Gmail and Anti-Spam

I have just received an email with a question about SE Linux that was re-sent due to the first attempt being blocked by my anti-spam measures. I use the rfc-ignorant.org DNSBL services to stop some of the spam that is sent to me.

The purpose of rfc-ignorant.org is to list systems that are run by people who don’t know how to set up mail servers correctly. But the majority of mail that is blocked when using them comes from large servers owned by companies that almost certainly employ people who know the RFCs (or could hire such people for a trivial fraction of their budget). So it seems more about deliberately violating the standards than ignorance.
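
rfc-ignorant.org is queried by domain rather than by IP address (a right-hand-side blacklist). A minimal sketch of such a lookup – the zone name and the convention of returning a 127.0.0.x address for listed domains are my understanding of how the service works, so check their documentation before relying on this:

```python
import socket

def rhsbl_listed(domain, zone="postmaster.rfc-ignorant.org"):
    """Return the DNSBL answer if domain is listed in the zone, or None if not.
    Listed domains conventionally resolve to an address in 127.0.0.0/8."""
    try:
        return socket.gethostbyname("%s.%s" % (domain, zone))
    except socket.gaierror:
        return None               # NXDOMAIN means the domain is not listed

print(rhsbl_listed("gmail.com"))  # per the next paragraph, this is currently listed
```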

The person who sent me the email in question said “hopefully, Google knows how to make their MTA compliant with RFC 2142”; such hope is misplaced, as a search for gmail.com in the rfc-ignorant.org database shows that it is listed for not having a valid postmaster address [1]. A quick test revealed that two of the Gmail SMTP servers support the postmaster account (or at least they don’t give an error response to the RCPT TO command that is referenced in the complaint). However Gmail administrators have not responded to the auto-removal requests, which suggests that postmaster@gmail.com is a /dev/null address.
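
The “quick test” mentioned above can be reproduced with a few lines of smtplib. The MX host name below is an assumption (look up the current MX records for gmail.com rather than trusting it), and a 250 response to RCPT TO only shows that the address is accepted – not that anyone reads the mail:

```python
import smtplib

MX = "gmail-smtp-in.l.google.com"   # assumed gmail.com MX host; verify with a DNS MX lookup

s = smtplib.SMTP(MX, 25, timeout=30)
s.helo("example.org")               # replace with your own host name
s.mail("")                          # null envelope sender, as used for bounces
code, msg = s.rcpt("postmaster@gmail.com")
s.quit()

print(code, msg)                    # 250 means the postmaster address is accepted
```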

However that is not a reason to avoid using Gmail. Some time ago Gmail took over the role of “mail server of last resort” from Hotmail. If you have trouble sending email to someone then using a free Gmail account seems to be the standard second option. Because so many people use Gmail and such a quantity of important mail is sent through that service (in my case mail from clients and prospective clients) it is not feasible to block Gmail. I have whitelisted Gmail for the rfc-ignorant.org tests and if Gmail starts failing other tests then I will consider additional white-lists for them.

Gmail essentially has a monopoly of a segment of the market (that of free webmail systems). They don’t have 100%, but they have enough market share that it’s possible to ignore their competitors (in my experience). When configuring mail servers for clients I make sure that whatever anti-spam measures they request don’t block Gmail. As a rule of thumb, when running a corporate mail server you have to set up anti-spam measures to not block the main ISPs in the country (this means not blocking Optus or Telstra BigPond for Australian companies) and not block Gmail. Not blocking Yahoo (for “Yahoo Groups”) is also a good thing, but I have had a client specifically request that I block Yahoo Groups in the past – so obviously there is a range of opinions about the value of Yahoo.

Someone contacted Biella regarding an email that they couldn’t send to me [2]. I have sent an email to Biella’s Gmail account from my Gmail account – that should avoid all possibility of blocking. If the person who contacted Biella also has a Gmail account then they can use that to send me email to my Gmail account (in the event that my own mail server rejects it – I have not whitelisted Gmail for all my anti-spam measures and it is quite possible for SpamAssassin to block mail from Gmail).

It turns out that the person in question used an account on Verizon’s server; according to rfc-ignorant.org, Verizon have an unusually broken mail server [3].

If your ISP is Optus, BigPond, Verizon, or something similarly broken and you want to send mail to people in other countries (where your ISP is just another annoyance on the net and not a significant entity that gets special treatment) then I suggest that you consider using Gmail. If nothing else then your Gmail account will still work even after your sub-standard ISP “teaches you a lesson” [4].

Physical vs Virtual Servers

In a comment on my post about Slicehost, Linode, and scaling up servers [1] it was suggested that there is no real difference between a physical server and a virtual server slice that takes up all the resources of the machine.

The commentator notes that it’s easier to manage a virtual machine. When you have a physical machine running in an ISP server room there are many things that need to be monitored, including the temperature at various points inside the case and the operation of various parts (fans and hard disks being two obvious ones). When you run the physical server you have to keep such software running (you maintain the base OS). If the ISP owns the server (which is what you need if the server is in another country) then the ISP staff are the main people to review the output. Having to maintain software that provides data for other people is a standard part of a sys-admin’s job, but when that data determines whether the server will die it is easier if one person manages it all. If you have a Xen DomU that uses all the resources of the machine (well all but the small portion used by the Dom0 and the hypervisor) then a failing hard disk could simply be replaced by the ISP staff, who would notify you of the expected duration of the RAID rebuild (which would degrade performance). For more serious failures the data could be migrated to another machine, and in the case of predicted failures (such as unexpected temperature increases or the failure of a cooling fan) it is possible to migrate a running Xen DomU to another server. If the server migration is handled well then this can be a significant benefit of virtualisation for an ISP customer. Also Xen apparently supports having the RAM for a DomU balloon out to a larger size than was used on boot; I haven’t tested this feature and don’t know how well it works. If it supports ballooning to something larger than the physical size in the original server then it would be possible to migrate a running instance to a machine with more RAM to upgrade it.

The question is whether it’s worth the cost. Applications which need exactly the resources of one physical server seem pretty rare to me. Applications which need resources that are considerably smaller than a single modern server are very common, and applications which have to be distributed among multiple servers are not that common (although many of us hope that our projects will become so successful ;). So the question of whether it’s worth the cost is often really whether the overhead of virtualisation will make a single large machine image take more resources than a single server can provide (moving from a single server to multiple servers costs a lot of developer time, and moving to a larger single server rapidly increases the price). There is also an issue of latency: all IO operations can be expected to take slightly longer, so even if the CPU is at 10% load and there is a lot of free RAM some client operations will still take longer. But I hope that it wouldn’t be enough to compete with the latency of the Internet – even a hard drive seek is faster than the round trip times I expect for IP packets from most customer machines.

VMware has published an interesting benchmark of VMware vs Xen vs native hardware [2]. It appears to have been written in February 2007, and while its intent is to show VMware as being better than Xen, in most cases it seems to show them both as being good enough. The tests involved virtualising 32bit Windows systems; this doesn’t seem an unreasonable test as many ISPs are offering 32bit virtual machines because 32bit code tends to use less RAM. One unfortunate thing is that they make no explanation of why “Integer Math” might run at just over 80% of native performance on VMware and just under 60% of native performance on Xen. The other test results seem to show that for a virtualised Windows OS either VMware or Xen will deliver enough performance (apart from the ones where VMware claims that Xen provides only a tiny fraction of native performance – that’s a misconfiguration that is best ignored). Here is an analysis of the VMware benchmark and the XenSource response (which has disappeared from the net) [3].

The Cambridge Xen people have results showing a single Xen DomU delivering more than 90% native performance on a variety of well known benchmarks [4].

As it seems that in every case we can expect more than 90% of native performance from a single DomU, and as the case of needing more than 90% of native performance is rare, there is no real difference that we should care about when running servers – the ease of management outweighs the small performance benefit of using native hardware.

Now it appears that Slicehost [5] caters to people who desire this type of management. Their virtual server plans have RAM going in all powers of two from 256M to 8G, and then they have 15.5G – which seems to imply that they are using physical servers with 16G of RAM and that 15.5G is all that is left after the Xen hypervisor and the Dom0 have taken some. One possible disadvantage of this is that if you want all the CPU power of a server but not so much RAM (or the other way around) then the Slicehost 15.5G plan might involve more hardware being assigned to you than you really need. But given the economies of scale involved in purchasing and managing the large number of servers that Slicehost is running it might cost them more to run a machine with 8G of RAM as a special order than to buy their standard 16G machine.

Other virtual hosting companies such as Gandi and Linode clearly describe that they don’t support a single instance taking all the resources of the machine (1/4 and 1/5 of a machine respectively are the maximums). I wonder if they are limiting the size of virtual machines to avoid the possibility of needing to shuffle virtual machines when migrating a running virtual machine.

One significant benefit of having a physical machine over renting a collection of DomUs is the ability to run virtual machines as you desire. I prefer to have a set of DomUs on the same physical server so that if one DomU is running slowly then I have the option to optimise other DomUs to free up some capacity. I can change the amounts of RAM and the number of virtual CPUs allocated to each DomU as needed. I am not aware of anyone giving me the option to rent all the capacity of a single server in the form of managed DomUs and then assign the amounts of RAM, disk, and CPU capacity to them as I wish. If Slicehost offered such a deal then one of my clients would probably rent a Slicehost server for this purpose as soon as their current contract runs out.
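
To make the kind of control I mean concrete, here is a sketch of resizing a DomU from the Dom0, assuming libvirt is installed there (the domain name is made up, and the same thing can be done with the xm command line tools):

```python
import libvirt

conn = libvirt.open("xen:///")           # connect to the local Xen hypervisor
dom = conn.lookupByName("client-web")    # made-up DomU name

dom.setMemory(1024 * 1024)               # balloon the DomU to 1GB; the argument is in KiB
dom.setVcpus(2)                          # change the number of virtual CPUs

conn.close()
```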

It seems that there is a lot of potential to provide significant new features for virtual hosting. I expect that someone will start offering these things in the near future. I will advise my clients to try and avoid signing any long-term contracts (where long means one year in the context of hosting) so that they keep their options open for future offers.

Leaving Optus

Today I phoned Optus to disconnect my Internet service. Some time ago I got an Internode [1] SOHO connection. This gave me a much faster upload speed (typically 100KB/s) compared with the Optus maximum of 25KB/s. Also Internode has better value for large data transfers (where “large” in Australia means 25GB per month) and I get a static IP address. I also get unfiltered Internet access; Optus blocks outbound connections to port 25, which forced me to ssh to another server to test my clients’ mail servers.
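
For anyone who wants to check whether their own ISP filters outbound port 25, the simplest test is to try opening a TCP connection to a mail server that you are allowed to test (the host name below is a placeholder):

```python
import socket

MX = "mail.example.com"   # placeholder; use a mail server you control or may test

try:
    s = socket.create_connection((MX, 25), timeout=10)
    print("port 25 reachable, banner:", s.recv(200))   # expect a "220 ..." SMTP greeting
    s.close()
except OSError as e:
    print("outbound port 25 appears to be blocked or filtered:", e)
```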

But the real reason for leaving Optus is based on events two years ago. When I first signed up with Optus four years ago my contract said “unlimited uploads“. What they really meant was “upload as much as you want but if you transfer more than 8KB/s for any period of time you get disconnected“. They claimed that running a default configuration of BitTorrent was a DOS (Denial of Service) attack (the only part of their terms of service that even remotely permitted them to disconnect me). So I was quite unhappy when they cut me off for this.

What really offended me was the second time they cut my connection. I had been running BitTorrent on Friday and Saturday, and they cut my connection off on Wednesday. Once it was determined that the issue was uploads we had a bit of a debate about when my BitTorrent session was terminated: it was my clear memory of using killall to end BitTorrent during a commercial break of a TV show on the Saturday night vs the Optus idiot claiming they had a record of me doing big uploads on the Sunday. But I let the help desk person think that they had won that debate in order to focus on the big issue: why large uploads on a Saturday (or a Sunday) should result in a loss of service on Wednesday (three or four days later). They said “it was to teach you a lesson”! The lesson I learned is that it is best to avoid doing business with Optus. I didn’t immediately cancel my contract because if you have both phone and Internet service through Optus they do offer a reasonable deal (there are a variety of discounts that are offered if you have multiple services through them).

When discussing this matter in the past it had been suggested to me that I try appealing to the Telecommunications Industry Ombudsman etc. However I didn’t do this because I was in fact breaking the Optus acceptable usage policy for most of the time that I was a customer. When I signed up their AUP prohibited me from running a server, and from memory I think it had a specific example of a shell server as something that should not be done; it now prohibits running any automated application that uses the Internet when a human is not present (which presumably includes servers). I’m pretty sure that my SE Linux Play Machine [2] met the criteria.

While I’m reviewing the Optus service I need to mention their mail server. Here is the summary of the Optus anti-spam measures in protecting my email address etbe@optushome.com.au in September (other months were much the same):
131 emails have been sent to your Inbox.
52 of these emails were identified as spam and moved to the Spam Folder.
39% of your email has been identified as spam.

The email address in question only received legitimate mail from Optus. This meant that I received between two and four valid messages a month; the rest were all spam. So of the 79 messages delivered to me, at least 75 were spam, and Optus blocked less than half the spam. But to be fair, given that the Optus mail servers are listed on some of the DNSBLs it seems reasonable for them to be lax in anti-spam measures. I wonder whether it would be reasonable for an ISP of Optus scale to run the SpamAssassin milter on mail received by their outbound relays to reject the most gross spam from customer machines.

But Optus are good at some things. The download speed was always very good (I could receive data at 1MB/s if the remote server could send that fast). Also their procedures for account cancellation are quite good. The guy who took my call offered to transfer me to the complaints department when I mentioned how I was “taught a lesson”, and he also offered me a significant discount if I was to continue using the service. In retrospect I should have had that conversation six months ago and had some cheap service from Optus before getting rid of them. Getting the account terminated happened in a couple of hours. It was so quick that I hadn’t got around to transferring my Play Machine to my Internode account before it happened, so I had a few hours of down-time.

Links November 2008

Netatia has an interesting series of articles about running a computer for two people [1]. It is a bit of a kludge: they have a single X server that covers both displays and then use Xephyr to divide it into two virtual screens. The positive aspect of this is that it should allow a single wide monitor to be used by two sessions; as displays are getting wider regardless of the wishes of manufacturers and consumers [2] this should be useful. It’s a pity that no-one has solved the problem of having multiple video cards, sound cards, and input devices to allow a single desktop system to be used for 6 or more people. It seems that the problems that need to be solved are only the support for multiple video cards, mouse-wheel support, and sound support.

Paul Ewald gave an interesting TED talk about changing the conditions for diseases so that they evolve to be benign [3]. The first example is cholera, which if spread by water will benefit from being as toxic as possible (to cause the greatest amount of diarrhoea – killing the host not being a problem), but if spread by human contact benefits from leaving its host well enough to walk around and meet people. This and the other examples he cites seem like strong reasons for universal health-care provided by the government. If clean water is provided to all the poor people then cholera will evolve to be less harmful, and if a rich person (such as myself) is unlucky enough to catch it then the results won’t be so bad. He also notes that less harmful bacteria will often result in the victim not seeking anti-biotics and therefore less pressure for the disease to evolve resistance to anti-biotics. Therefore the people who really need them (the elderly, the very young, and people who are already sick) will find them to be more effective.

Paul Stamets gave a great TED talk about fungus [4]. One of his discoveries was that fungi can be used for breaking down petro-chemicals (they can eat oil). It would be interesting to see this tested on a large scale with one of the oil spills or with the polluted land around an oil refinery. Also he has patented a method for using fungus to kill wood-eating ants (such as the ones that briefly infested his home).

Robert Full gave an interesting TED talk on robot feet [5]. I found the bit about leg spikes particularly interesting (I had always wondered why insects have spikey legs).

Alan Kay gave a very interesting presentation on using computers to teach young children about science [6]. An OLPC is referenced. It makes me want to buy an OLPC for everyone I know who has young children. The start of the talk is a little slow.

Dan Barber gave a very interesting TED talk about organic and humane production of foie gras in Extremadura [7]. Apparently it tastes a lot better too.

Incidentally I don’t list all the TED talks I watch, only the better ones. Less than half the TED talks that I see announced seem interesting enough to download, and of those less than half are good enough that I will recommend them. The ones that I don’t recommend don’t suck in any way, it’s just that I can’t write a paragraph about every talk. Of recent times my video watching has been divided about equally between “The Bill” and TED talks.

Here’s an interesting article about Sarah Palin and “anti-elitism”: The prospects of a Palin administration are far more frightening, in fact, than those of a Palin Institute for Pediatric Neurosurgery. Ask yourself: how has “elitism” become a bad word in American politics? There is simply no other walk of life in which extraordinary talent and rigorous training are denigrated. We want elite pilots to fly our planes, elite troops to undertake our most critical missions, elite athletes to represent us in competition and elite scientists to devote the most productive years of their lives to curing our diseases. And yet, when it comes time to vest people with even greater responsibilities, we consider it a virtue to shun any and all standards of excellence. When it comes to choosing the people whose thoughts and actions will decide the fates of millions, then we suddenly want someone just like us, someone fit to have a beer with, someone down-to-earth – in fact, almost anyone, provided that he or she doesn’t seem too intelligent or well educated. [8]

Sarah will be representing the Republican party in 2012, and the desire for leaders of average intelligence (or less) will still be around then. It will be interesting to see how many votes she gets and amusing to see her interviewed.

The proceedings of the “Old Bailey” – London’s Central Criminal Court – have been published [9]. It’s interesting to read some of the historical information about the legal system at the time. It made me appreciate how civilised the UK (and other countries that I have visited) are now.

Bruce Schneier writes about the future of ephemeral communication [10]. He concludes with the point “until we have a Presidential election where both candidates have a complete history on social networking sites from before they were teenagers we aren’t fully an information age society”. Of course as he notes the rules are written by the older people; currently I don’t think that any candidate for high office (cabinet minister or above) anywhere in the world can have a good history on the Internet. During the course of a decade or more on the net it’s impossible not to write something that can be used against you, and no reasonable person could avoid changing their views on some issues in such a time period. That’s enough to lose an election with the way things currently work.

Slicehost vs Linode

Six months ago I investigated the options for Xen virtual servers [1]. I ended up receiving an offer of free hosting and not needing that, but the research was useful. There is a good range of options for Xen servers with different amounts of CPU power, RAM, bandwidth, and disk space. There are a couple of things that seem to be missing, options to upgrade from virtual servers to physical servers, and information on dedicated disk and database performance – but I’ll explain that later after some history.

About a week ago a client needed a Xen virtual server in a hurry: their main server (a Xen system that I run on hardware that they rent) was getting a bit overloaded and needed to have one of the largest DomUs moved off. I ended up recommending Linode [2] based on my research and comments I received. The Linode server is working quite well and the client is happy; one nice feature of Linode is the choice of server rooms that they offer. I was able to choose a room in the same region as the other servers that the client owns and thus get ping times that are sometimes less than 2ms!

Due to a missing feature in a program that I’m maintaining for the client a large number of MySQL queries are being made. Due to a problem I’m having with MySQL I can’t create a slave database server, so all the queries go over the VPN and use a large amount of data. This, combined with the other traffic that should be going over that link, means that about 600G per month is being used; fortunately that is rather cheap. Linode staff handled this very well: after the server had exceeded its quota by 120G they asked my client to confirm that the traffic was legitimate and then suggested an upgrade to a plan that could handle the traffic (which went smoothly). Now I have another week to add the feature in question before I hit the quota again.

Shortly after getting the new virtual server running at full capacity David Welton wrote a detailed review of Linode and Slicehost for the issues that matter to his use [3]. His conclusion seems strongly in favor of Linode.

But now I am looking at getting a Slicehost [4] virtual server for the same client (for a different project) because Slicehost is owned by Rackspace [5]; if the new project is successful it will need a set of powerful servers, and Rackspace seems like a reasonable company to host them.

The problem with Rackspace is that they (and every other ISP I’ve researched so far) seem to offer little in regard to customers who need serious disk IO. I am planning some servers that will have a write bottleneck on a MySQL database (or maybe multiple shards), so serious disk IO capacity is needed. At a minimum I would like to be able to get disk storage by the tray (12-14 disks) with the controllers having RAID-6 support. Rackspace only offers RAID-5 (according to the “livechat” person), and we didn’t get as far as discussing how to add more trays.
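
For anyone wondering why the RAID level of a full tray matters, the capacity/redundancy trade-off is easy to work out (the disk size below is just an example):

```python
disks_per_tray = 14     # top end of the 12-14 disk range mentioned above
disk_tb = 1.0           # example disk size in TB

raid5_usable = (disks_per_tray - 1) * disk_tb   # one parity disk, survives 1 failure
raid6_usable = (disks_per_tray - 2) * disk_tb   # two parity disks, survives 2 failures

print(raid5_usable, raid6_usable)               # 13.0 vs 12.0 TB usable per tray
```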

What would be ideal is if there was an ISP that had both virtual servers and physical servers (so I could start with a virtual server and move to a physical server when things are working well), and also serious storage options. They would offer internal disks, external RAID arrays, and NetApp Filers [6] (or some equivalent device). It would be really nice if I could just instruct the ISP to add another NetApp Filer to my back-end network and have it done for me (I’m certainly not going to visit the US to install new hardware). It’s been over a day since I submitted a sales request to NetApp asking whether they partner with any ISPs and I haven’t received a response.

OpenSolaris with ZFS also sounds good for disk IO performance (ZFS has similar features to a NetApp Filer). Unfortunately the support for OpenSolaris among ISPs is not that great (while everyone offers Linux and Windows), and I haven’t used any recent version of Solaris. So using OpenSolaris would require finding someone with the skills to manage it who can work for my client – as opposed to a NetApp device that would be just like any other NFS server, SAN, or iSCSI server. But I’m not ruling OpenSolaris out; if someone knows of a good ISP that hosts OpenSolaris machines and supports adding dozens of disks and decent amounts of NVRAM for ZFS then I would be interested to investigate it. Joyent has some interesting OpenSolaris virtual server plans [7]; they are a little pricey but offer large amounts of data transfer. They don’t provide any information on disk IO capacity (other than saying that they use ZFS for good performance). I’ve just downloaded Nexenta (Debian with the OpenSolaris kernel) [8] and will test it out over the next few days.

One of the reasons I’m tending towards Rackspace at the moment (with Slicehost as the entry point) is that they seem cooperative to customer requests. My discussions with them (on a web based “livechat” and on the phone) have indicated that they may be able to do something special for me.