Three Monopolists

Three

This afternoon I tried to get my old Three mobile phones unlocked for the purpose of getting cheap net access as described in my previous post [1]. I wanted to use Dodo 3G Internet (via the Optus network) for my parents, which would cost them $139 per year, and I wanted to use my old Three phone tethered to their PC as the 3G modem (cheaper than buying a new 3G modem). I took 3 Three phones into the Three store to be unlocked; I actually have 4 old Three phones (my wife and I are each on our third Three phone) but I seem to have misplaced one. It turned out that the two newer phones (LG U890) can’t be unlocked as they are permanently locked to the Three network. The older LG U8110 can be unlocked, but doing this took 30 minutes of the Three employee speaking to other Three employees on the phone, and I will now have to wait 4 days to receive an SMS with the unlock code.

So Three’s anti-competitive behavior of making it unreasonably difficult to get a phone unlocked, and of selling phones that (supposedly) can never be unlocked, cost them 30 minutes of store employee time while other potential customers were queuing up, as well as 30 minutes of employee time in their call center. If the call-center employee was based in Australia then, as the minimum wage is $14.31 per hour [2], the two half-hours of work cost them at least $14.31 in wages; as a rule of thumb it’s generally regarded that the cost of employing people is twice the salary (including the costs of maintaining office/shop space, paying managers, doing paperwork, etc). So it probably cost Three about $29 to unlock one of my phones and tell me that the others can’t be unlocked, and when I find phone 4 it will cost them another $29. As $29 is my typical monthly bill this has got to make an impact on the profitability of Three. If they were smart they would have sent me an SMS when I got a new phone telling me whether the old phone could be unlocked and, if so, giving me the code to do it. For phones that can be unlocked I doubt that would make anyone unlock their phone who wouldn’t do so anyway, and for phones that can’t be unlocked they could encourage the owner to give the phone to someone who wants a phone for pre-paid use (thus locking in a new customer).

It probably won’t be worth the effort of cracking an LG U890 to save my parents $10 per annum. As I couldn’t get the LG U8110 to talk to my laptop, I guess that forces my parents to eventually use Three for 3G net access. But Three could have just matched the Dodo price and got the same result without having me spend half an hour in their store.

Update: I just enquired about ending my Three contract for 3G net access ($15 per month for 1G of data) in favor of the yearly prepaid option of $149 per annum for 12G. The prepaid option would save me $31 per annum and allow me to use more than 1G in the busy months. But it seems that I signed a two year contract for that one and I have 6 months to go. Over those 6 months Three will make about $15 in extra revenue from me while annoying me in the process, which probably isn’t a good deal for them. As my 3G modem is locked to the Three network, even if I didn’t have a contract I would still be unable to use a different provider.

Optus

My mother phoned Optus about her Internet connection and discovered that she had supposedly renewed her Optus cable Internet contract in September last year. Presumably someone from Optus phoned my parents and asked what seemed like a routine “do you want to keep using the Internet?” question but was really a “do you agree to a 2 year contract with a $250 penalty clause for exiting early?” question. This isn’t the first time that Optus has scammed my parents (previously they charged them rental for a phone that they never supplied); I guess that they have a practice of pulling such stunts on pensioners. I guess I’ll have to call the TIO, which will end up costing Optus more than the $250 penalty clause.

The irony here is that as Dodo uses the Optus network I would have chosen Optus for my parents, but now that they are behaving like scum I will willingly pay the extra $10 per annum to use Three (who, while annoying, aren’t actually hostile).

Google

Finally, while Google is admirably living up to their “don’t be evil” motto in regards to China [3], their conduct regarding Google Talk leaves a lot to be desired. Two employees of a company I work for use Google Talk for their instant messaging; it has a Windows client but also allows general access via the Jabber protocol. So these two guys wanted to talk to me via Jabber, but Google would just send me email saying “X has invited you to sign up for Google Talk so you can talk to each other for free over your computers” – I received 5 such messages from a colleague who was particularly persistent. It seems impossible for the Google Talk server to send a chat request to my personal Jabber server (which works well with a variety of other Jabber servers).
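For anyone who wants to test this sort of federation issue, XMPP server-to-server connections are advertised via DNS SRV records, so you can check what a domain publishes with dig (here example.org stands in for your own Jabber domain):

  dig +short _xmpp-server._tcp.gmail.com SRV
  dig +short _xmpp-server._tcp.example.org SRV

If both domains publish such records and port 5269 is reachable then server-to-server chat requests should work, which makes Google’s refusal to send them all the more annoying.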

So I have now started using my Gmail address to talk via the Jabber protocol to other Gmail users. This means that I have a TCP connection to the Google servers open most of the time and Google can boast of having one more active Gmail user. But it doesn’t seem to really provide them a benefit. I am going to keep using my main email address as my primary Jabber ID and only use my Gmail address for talking to Google Talk users – and only when paid to do so.

But as a result of this I recommend that everyone avoid Google Talk as much as possible. Use open Jabber servers such as the ones run by Jabber.org.

It seems to me that none of these companies are really gaining anything from trying to lock customers in. They would be better off spending their efforts on being friendly to people and making them want to be repeat users/customers.

Taking my Thinkpad Apart and Cooling Problems

I’ve been having some cooling problems with my Thinkpad recently. It’s an old model T41p which is outside the service period, so IBM/Lenovo won’t help me (at least not unless I give them more money). If I run it for a few minutes at maximum CPU load when the ambient temperature is about 20C then it gets to 90C, and apparently 93C is the temperature at which it turns itself off, so obviously I need to do something to keep it cool. On the really hot days of summer my air-conditioners can’t keep any part of my house below 30C, so on such days I can’t do any compiles on my Thinkpad or watch videos.

My Thinkpad seems to idle at a temperature that is about 35C higher than the ambient temperature. At this rate the system could get close to its maximum temperature on a 45C day just by idling! Not that I plan to have a warm Thinkpad on my lap if I ever happen to be outside on such a hot day.
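For reference, this is how I read the temperature and fan data via the thinkpad_acpi driver (these proc paths work on my 2.6.30 kernel, they may differ on other kernels):

  cat /proc/acpi/ibm/thermal   # the first value on the temperatures line is the CPU sensor
  cat /proc/acpi/ibm/fan       # current fan level and speed in RPM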

I suspect that a large part of the problem is the dust that has accumulated inside the machine. I asked about this on the LUV mailing list and Andrew Chalmers suggested The Chaos Manor review of taking a Thinkpad T41p apart [1]. The Chaos Manor guy wanted to replace his CPU with a faster one so he had to get access to all the same bits.

I followed the instructions until I got to the stage of prying the heatsink off the video chip. I figured that I would never be able to re-attach it as well as it was originally attached, so I would just get different cooling problems if I went any further. Taking it apart to that degree was a moderate amount of work; getting the keyboard bezel off was the most difficult part, and taking the palm-rest off required removing bits of plastic that were stuck in place to cover screw holes – which will probably fall off in a week or two.

Probably everyone who owns a T41p that they regularly use has a similar problem to mine, as all such machines have been out of support long enough to have accumulated a lot of dust. So I recommend that other T41p owners not disassemble their machine as much as I did, but instead go for my plan B, which is to blow compressed air through the CPU cooling system. Doing this merely requires removing the keyboard. One tip that I have heard is that you should hold the fan in place when blowing compressed air, as the pressure of the air may spin the fan fast enough to generate enough electricity to damage the motherboard, or may damage its bearings. But you might want to wait until I’ve done this to my own Thinkpad before you blow compressed air through yours, in case it destroys the machine.

The other option is to try to use software to control the temperature. Patricia Fraser suggested controlling the fan speed [2]. I did some experiments and found that increasing the fan speed dramatically slowed the increase in temperature. A 10 minute build is almost certain to bring the temperature to dangerous levels in the default configuration, but it seems that if I set the fan to maximum speed before starting the build then I can mitigate this problem. Most programs that I work on will compile in significantly less than 10 minutes.
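Here is a sketch of how to do that with the thinkpad_acpi driver. Note that fan control has to be explicitly enabled with a module parameter, and that overriding the embedded controller carries some risk if you forget to set it back to auto:

  modprobe thinkpad_acpi fan_control=1   # allow writes to the fan file
  echo level 7 > /proc/acpi/ibm/fan      # maximum regulated speed
  make                                   # run the hot task
  echo level auto > /proc/acpi/ibm/fan   # return control to the firmware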

Another possibility that occurred to me is limiting the speed of the system. It seems that ACPI has support for reducing the CPU frequency when the temperature rises too much, but Matthew Garrett has pointed out that this effectively increases the amount of energy used (and heat dissipated) for any given quantity of work [3]. So what I want to do is to cause the CPU to idle periodically when it gets too hot. I’ve been idly considering writing a program that uses SIGSTOP and SIGCONT to control the operation of programs such as make, or writing a program that creates a new pty (like script does) and pauses the output whenever the CPU gets too hot.
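As a first cut the SIGSTOP/SIGCONT idea doesn’t even need a new program, a shell script that polls the thermal data can do it. The following is an untested sketch: the sensor path and thresholds are assumptions that would need adjusting, and as it only stops the make process itself the currently running compiler will finish its file before things start to cool down.

  #!/bin/sh
  # usage: make & throttle.sh $!
  PID=$1
  HOT=85      # suspend the build above this temperature (Celsius)
  COOL=75     # resume once we are back below this
  cputemp() { awk '{print $2}' /proc/acpi/ibm/thermal ; }
  while kill -0 "$PID" 2>/dev/null ; do
    if [ "$(cputemp)" -ge "$HOT" ] ; then
      kill -STOP "$PID"
      while [ "$(cputemp)" -gt "$COOL" ] ; do sleep 5 ; done
      kill -CONT "$PID"
    fi
    sleep 5
  done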

Of course the easy option would be to figure out how to set the threshold temperature at which the CPU speed is limited. This is made slightly more complicated by my choice of kernel 2.6.30 for Ext4 support: the new kernel doesn’t work with my acpid and I’m being forced into an upgrade to Debian/Testing.
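The relevant data is at least visible under /proc (or /sys on newer kernels), though whether the trip points can usefully be changed seems to vary from system to system – and the zone name THM0 is a guess, it may be TZ00 or similar on other machines:

  cat /proc/acpi/thermal_zone/THM0/temperature
  cat /proc/acpi/thermal_zone/THM0/trip_points
  cat /sys/class/thermal/thermal_zone0/trip_point_0_temp   # newer sysfs interface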

Does anyone have any ideas?

Why Internet Access in Australia Sucks

In a comment on my post about (relatively) Cheap Net Access in Australia [1], sin from Romania said “Somebody needs to whack the aussie ISP in the head with a cluebat. The prices that you pay are insane”.

In Eastern Europe you have optic fibers from Germany and other western European countries that carry vast amounts of data, and as the demand for capacity increases it’s not THAT difficult to lay more fibers. You also have competition between the different companies that lay fiber. To get data to Australia you must lay cables under the sea, which is expensive and can’t be done quickly. Therefore all international data transfers are priced to recover the cost of laying the cables. I don’t think that we have any real competition in the market for international connectivity from Australia either.

Now the links between Europe and the US aren’t cheap either, but I believe that there are economies of scale (as well as shorter distances) that make them significantly cheaper than the links to Australia.

Also a good portion of the traffic that you generate as a customer of a European ISP will stay within Europe, as there are heaps of good sites in Europe. The number of people living in Europe who speak English as their first language is more than twice that of Australia. The number of Europeans who communicate in English almost as fluently as native speakers (such as about half the population of the Netherlands) is also quite significant. I expect that the amount of English-language material on the net that is published from the EU is more than three times greater than the quantity published from Australia. People who speak languages that have a more limited geographic spread (IE anything other than English, Spanish, French, and Portuguese) will have a higher portion of local traffic, which is therefore cheaper for their ISP. So based on the relative population sizes we should expect a higher portion of Australian Internet traffic to be expensive international transfers than is the case for Europeans.

Then of course there is the issue of server costs. Running servers in Australia is horribly expensive; while user access to the net is merely annoyingly expensive, the costs of hosting servers are significant – and the hosting offers usually have slower hardware and slower transfers (particularly to the important US and EU markets). My blog is hosted in Germany because the company that was offering me free hosting in Australia encouraged me to host it elsewhere due to the price. Also hosting in Germany gives me slightly lower ping times to the US and significantly lower ping times to Europe. As about half the readers of my blog are based in the US, a significant portion are based in the EU, and only a small portion are in Australia, the overall experience for readers of my blog is improved by having it hosted outside Australia. It would be better to have it hosted in the US (where most of my readers are located) but I was offered free hosting in the EU.

It would be nice if there was a cheap and easy way of getting a mirror of my blog running in Australia with Geo-DNS, so that people using Australian IP addresses would get a local server. Putting the static images on an Australian server would be trivial; setting up Geo-DNS would be painful and would probably cause reliability issues later on, but it isn’t insurmountable (I have root on both DNS servers). The Debian blog gives some basic information on how to set up GeoDNS [2]. Then I would need to set up a MySQL slave for the WordPress data and modify WordPress to send its writes to the master server – which is probably impossible for me unless someone else has already written a WordPress plugin for this, as I’m really not good at PHP programming. Another possibility would be one of the WordPress cache plugins that maintain static files to avoid needless database lookups.
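For what it’s worth, the DNS side could be done with stock BIND views rather than a dedicated GeoDNS package – the sketch below serves a different zone file to Australian clients. The netblocks are placeholders (real ones would come from APNIC allocation data) and example.com stands in for my domain:

  // named.conf fragment – an untested sketch
  acl "au" { 203.0.0.0/8; 210.0.0.0/7; };   // placeholder APNIC-style ranges
  view "australia" {
      match-clients { "au"; };
      zone "example.com" { type master; file "example.com.zone.au"; };
  };
  view "world" {
      match-clients { any; };
      zone "example.com" { type master; file "example.com.zone"; };
  };

The zone file for the Australian view would point the A records at the local mirror, and the one for the other view at the German server.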

Until/unless I do such things, every Australian reader of my web site (and of the sites of my friends who make similar hosting choices) will slightly tilt the balance of Internet transfers in favor of expensive data from foreign servers instead of cheap content from local servers.

Sometimes it just sucks to live on an island.

Cheap Net Access in Australia

The cheapest ADSL or cable net access in Australia seems to be about $30 per month. I’ve been using 3G net access from the “Three” phone company for 18 months now and it’s been working well [1]. I recently bought a new 3G modem because the old one broke, so it has cost me $250 in modems plus $15 per month for the connection, which compares well to $100 (or more) for an ADSL or cable installation plus $30 per month.

My Three net access gives me 1G of data per month. I have just noticed that they have pre-paid net access that gives 12G of data that must be used within one year for $149 [2] – that is $12.42 per month, or 83% of the price of my current plan, plus it means that any bandwidth quota that isn’t used in one month can be used the next month (so you can save up for upgrading to a newer distribution of Linux).

Dodo has pre-paid mobile net access on the Optus network for $139 which gives 15G of data that must be used within one year [3]. That’s equivalent to $11.58 per month or $9.27 per gig.

A member of my local LUG mentioned that Exetel has a 3G plan which is good value if you don’t use much data transfer – but which has per-megabyte charges for excess data transfer. I couldn’t recommend it for my parents as I never know when they will do something that transfers a lot of data; I can just imagine them saying “loading web pages was really slow for a week and then I got a big bill”.

Ross Barkman’s GPRS/UMTS page gives some critical information on using a 3G phone with a tether [4]. Using that information I discovered that I need to use AT+CGDCONT=1,"IP","3netaccess" in my chatscript to get ppp going with my old LG U890 mobile phone (with “3netaccess” being the important word).
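For reference, here is roughly what the relevant part of my chatscript looks like. The file name and the *99# dial string follow the usual conventions documented on Ross Barkman’s page, so treat this as a sketch rather than a verbatim copy of my configuration:

  # /etc/chatscripts/three
  ABORT BUSY ABORT 'NO CARRIER' ABORT ERROR
  '' ATZ
  OK AT+CGDCONT=1,"IP","3netaccess"
  OK ATD*99#
  CONNECT ''

The matching pppd peers file then just needs a line like connect "/usr/sbin/chat -v -f /etc/chatscripts/three".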

I plan to give my old mobile phone to my parents and let them use prepaid 3G net access to reduce their net access bill by more than 1/3 while also giving them more data transfer quota for times when they need to transfer a lot (EG when my sister visits them). At this stage I’m not sure whether I will get them to use Three or Dodo. One advantage of Three is that I’ve used them a lot and know exactly how to get it all working. Another is that my old mobile phone is locked to Three – they agreed to unlock it on demand after the contract ended (which happened over a year ago) but it will be a hassle to get it done, and saving that hassle may be worth the $10 per year extra cost. Also I have used my 3G modem at my parents’ house on a few occasions and know that the reception is quite good, while the reception for Dodo (Optus) 3G is unknown.

One extra benefit of doing this is that my parents will have some freedom to move their PC. If they decide that the computer room is too hot in summer and want to move their PC to below their air-conditioner, they will be able to do so without needing a long Ethernet cable to connect the PC to the cable modem.

For my personal 3G net access (which I require for fixing servers on occasion) I am stuck with Three. When I bought a new 3G modem I decided to save about $20 by getting a device that’s locked to Three. 12G per year is more than enough for sshing to servers and checking email, and if I had paid extra for the unlocked modem it would probably have died before the savings on net access made up for the higher purchase price.

Update:
Crazy John’s has a good deal: $129 for 7.5G which expires in a year [5]. I won’t use that for my parents though, as the probability of them going over 7.5G is too high to make it worth the risk for a $10 saving.

Links January 2010

Magnus Larsson gave an interesting TED talk about using bacteria to transform dunes into architecture [1]. The concept of making a wall across Africa to stop sand dunes from overtaking farm land is obviously a good one, and the idea of using bacteria to convert sand into sandstone to do it cheaply is also good. But making that into houses seems a little risky – I wouldn’t want to live under shifting sand with only bacteria-generated sandstone to protect me.

Cory Doctorow gave an interesting speech titled “How to Destroy the Book”; here is the transcript [2]. He talks about how much he loves books and describes his opposition to the DRM people who want to destroy book culture.

Sendmail has a DKIM Wizard for generating ADSP (Author Domain Signing Practices) records [3]. If I had known that ADSP records were so easy to implement I would have used them a year ago!
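They really are trivial – an ADSP record is just a TXT record at the _adsp._domainkey label of the domain, for example (with example.com as a placeholder):

  _adsp._domainkey.example.com. IN TXT "dkim=all"

The valid practices are “unknown”, “all” (all mail from the domain is signed), and “discardable” (unsigned mail claiming to be from the domain may be discarded).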

Loretta Napoleoni gave an insightful TED talk about the economics of terrorism [4]. Apparently the US dollar used to be THE currency for international crime; when the PATRIOT act was passed its anti-money-laundering provisions encouraged many shady people to invest in Euros instead and thus led to the devaluation of the US currency. It’s also interesting to note that terrorist organisations are driven by economics – if only we could prevent them from making money…

Ryan Lobo gave an interesting TED talk about his photographic work [5]. The effectiveness of the all-women peace-keeping force is noteworthy, as is the part about the Liberian war criminal who has become an evangelical Christian and now tours Liberia begging forgiveness from his victims (and from their relatives in the case of the people he murdered). Should someone like that be permitted to remain free if his victims forgive him?

Charles Stross has an appealing vision for how Apple and Google can destroy the current mobile telephony market [6]. I can’t wait for the mobile phone market to be entirely replaced by mobile VOIP devices!

James Geary gave an interesting TED talk about metaphors [7]. The benefits of metaphors in poetry are well known (particularly in lyrics), but the impact of metaphors in influencing stock market predictions surprised me.

Shaffi Mather gave an interesting TED talk about his company that makes money from fighting corruption [8]. Instead of paying a bribe you can pay his company to force the official(s) in question to do the right thing; apparently the cost of doing so tends to be less than 10% of the cost of the bribe if you know what you are doing. His previous company, an ambulance service that charges what the patient can afford, is also interesting.

John Robb wrote an interesting article about lottery winners and griefers [9]. He suggests that publishing the names, addresses, etc of rich people will be a new trend in griefing. One thing I’ve been wondering about is the value of the HR database at a typical corporation. A single database typically contains the home addresses, phone numbers, and salaries of all the employees. It would be very easy to do an SQL dump, store it on a USB flash device, and carry it out of the office. Then it could be sold to the highest bidder. They could probably make a market in the private data of rich people in the same way that there is currently a market for credit card data – maybe this has already been done but is kept quiet to stop others from implementing the same idea.

Michael Smith wrote an interesting article for the Washington Times about home schooling and socialisation [10]. It seems that people who were home schooled as children tend to be more academically successful and more involved in civic life, as well as being happier and having more career success.

Richard Seager wrote an interesting article for American Scientist about ocean currents and heat transfer from the tropics [11]. It seems that even if the ocean currents shut down, the UK and other parts of northern Europe won’t be getting a mini ice-age.

Ian Lance Taylor (best known for the “gold” linker) has written a good summary of the situation in regard to climate change and what must be done about it [12].

The Wrath of the Killdozer is an article about how one angry man converted a bulldozer into a tank [13]. It wasn’t a big bulldozer (every mining company has bigger ones) and he didn’t have any serious weapons (only rifles). Imagine what terrorists could do if they started with a mining vehicle and serious weapons…

Simon Singh has written about being sued for libel by the British Chiropractic Association [14]. The BCA didn’t like his article criticising chiropractors for claiming to be able to treat many conditions unrelated to the spine. Remember, chiropractors are not doctors – all they can do is alleviate some back problems. See a GP if you have any medical condition that doesn’t involve a sore back or neck, and avoid uppity chiropractors who claim to be able to cure all ills.

Nicholas D. Kristof wrote an interesting article for the New York Times about how happy the people of Costa Rica are [15]. He claims that the Costa Rican government’s decision in 1949 to dissolve its armed forces and invest the money in education is the root cause of the happy population. Maybe if the US government scaled back military spending the US population could be as happy as the Costa Ricans. While there are good arguments for having some sort of military, there are no good arguments for spending more money on the military than the rest of the world combined (as the US does).

My Ideal Netbook

I have direct knowledge (through observation or first-hand reports) of the following use cases for Netbooks:

  1. System administrator’s emergency workstation – something light to carry when you might get an SMS about a problem.
  2. A really small laptop for a serious technical user, which can be used for programming and other serious tasks. Only someone who is really dedicated to the hobby of computing would choose a system with a tiny screen as their main computer so that they can take it everywhere.
  3. A computer for a child. Children are less demanding about some aspects of their computer experience, no-one wants to buy a really expensive toy for a young child, and children find it difficult to carry full size laptops.

1 and 2 are the options that interest me. But it would be good if the extensive children’s toy market could drive economies of scale and reduce the expenses of my hobby and profession. I have found my EeePC 701 to provide adequate CPU performance (a 630MHz Celeron-M) for light compiling tasks and I have used it for some Debian development work. The built-in SSD is very fast, and because of it OpenOffice load times compare well to my Thinkpad T41p.

The main problems I have with my EeePC for sysadmin and coding tasks are the small keyboard (which can’t be fixed – the overall size of the machine needs to remain small), the small screen (which could be expanded without changing the case size), the low resolution screen (which has been fixed in newer netbooks), and the fact that my 3G dongle sticks out and is likely to get broken. More recent Netbooks address these issues – at the cost of being heavier and larger.

There are also some tasks which are generally not performed on a Netbook but that could be if it was properly designed. Here are some examples of things that I think should be done on a Netbook:

  1. Basic image editing and blogging – something that a lot of people do on smart-phones nowadays. It can be done more effectively and with less effort on a general purpose computer – I can’t imagine the GIMP or Inkscape running on a smart-phone.
  2. Reading electronic books. The number of people who want a Netbook but don’t want to read electronic books would be quite small as would the number of people who only want to read electronic books and never want to use a general purpose computer while travelling. No-one really wants to carry both a Netbook and an ebook reader at the same time.
  3. Watching movies.
  4. Everything that you might want to do on a terminal in an Internet cafe – those machines are always 0wned, just say no for security reasons.
  5. Games. The Nintendo DS has two ARM CPUs running at 66MHz and 33MHz and a combined screen resolution of 256*384*18bpp [1] and the Sony PSP has a 333MHz MIPS R4000 CPU and a screen resolution of 480*272*24bpp [2]. The original EeePC had more CPU power and more screen resolution than the DS and the PSP so it IS suitable for games – even though it won’t run the latest 3D games. There are lots of great games like Wesnoth that don’t require much video performance.
  6. Educational Software. Portable educational devices include the awful V.smile system for young children [3] and the educational software for the DS (I’ve seen a demonstration of a training program in rapid completion of basic maths problems for elderly people and presume it’s not the only educational software on that platform).

Image editing requires a high resolution color screen. Effective blogging requires a platform with a resolution comparable to that of the typical web user – according to Hitslink, 1024*768 is the most popular resolution of web browsing systems at 28.34%, and the most popular resolution below 1024*768 is 800*600 at 3.28% [4]. So even the most casual blogger will have an incentive to get a screen of higher resolution than all but the latest Netbooks offer. The recently released EeePC 1201 has a screen resolution of 1366*768, which should be barely adequate for those tasks.

Reading electronic books requires a reasonable resolution. Based on my experience with the 1400*1050 display in my Thinkpad it seems that a resolution of 1366*768 would be barely adequate for reading an academic paper that has two columns in a small font. But as the original Kindle had a resolution of 600*800 and the latest Kindle has a resolution of 824*1200 [5] it seems that perhaps epaper displays are good enough to allow reading text at a lower resolution. A display that draws little or no power when idling (as epaper does) is simply required for an ebook reader. The Pixel-Qi hybrid displays are claimed to offer the best features of TFT and epaper displays [6] but they haven’t been released yet. I think it’s reasonable to assume that someone will achieve what Pixel-Qi is attempting and that such a display will become the standard for Netbooks.

Watching movies and playing games (even games like Wesnoth) requires better video performance than epaper can deliver, so we just have to hope that Pixel-Qi releases something soon.

Watching movies and reading ebooks are both things that are best done without a keyboard in the way. The Always Innovating “Touch Book” [7] seems like a good solution to this problem: it’s a tablet PC that can be connected to a keyboard base if/when you desire. It should also be good for web browsing and reading email while on the move; I find that my EeePC is unreasonably heavy and awkward for typing email while walking.

Intel CPUs are not particularly energy efficient. As there are ARM CPUs with clock speeds as high as 2GHz and with as many as four cores, it seems that the ARM architecture can provide as much CPU power as is required. Debian currently supports two versions of the ARM architecture; if another one became commonly used it wouldn’t be that difficult to run Debian build servers for it.

Given a screen resolution equal to the latest Kindle, CPU power greater than the early Netbooks, and the ability to run a free software OS the range of educational and gaming software should be adequate.

So it seems that the ideal netbook would have a detachable keyboard base and a touch-screen in the computer part. It would have a Pixel-Qi display (or equivalent) with a resolution of 1400*1050 or better. It would have USB, Gig-E, Wifi, and Bluetooth connectivity and the ability to have an internally mounted USB dongle (as the Always Innovating Touch Book does). I think that this is not overly difficult to achieve – it is basically an Always Innovating system with a better display.

Update: Another criterion is the ability to start operating quickly when requested. Even mobile phones are often limited in their utility by the time that is taken to activate them (I can’t get my mobile phone to take a photo in much less than 7 seconds after removing it from my pocket). The Always Innovating system is apparently always in suspend-to-RAM mode when it’s not being used so that it can start quickly. That, combined with fast application load times and a good menu system, could allow turning the system on and launching an application in less than 2 seconds.

If I was buying a Netbook right now the only thing that would stop me from buying an Always Innovating device is the shipping delay. But as my EeePC is working quite well I’m not going to buy another system unless I am going to get significant benefits – such as a high resolution Pixel-Qi display.

The Always Innovating Smartbook/Netbook

Always Innovating have an interesting netbook that can be detached from its keyboard [1]. It provides features which are a close match for the tablet PC with optional keyboard that I advocated in my post about the Lenovo U1 [2]. Such devices are deemed to be in a new category of computer called the Smartbook, which is regarded as a cross between a Netbook and a smart-phone [3].

The AI system is always idling, so there is no boot-up required – like a mobile phone it will respond immediately to input. It has no fans, which is a good improvement over the EeePC – my EeePC 701 is annoyingly loud at times. It is designed to replace Netbooks, not desktops; the screen resolution of 1024*600 is reasonable by Netbook standards but really poor by desktop standards, and it also lacks a VGA port.

The company has a stated policy of being friendly to free software, so hopefully a community of developers will form around it. Of course this partly depends on how they develop their new systems. If they make new systems vastly incompatible with older systems then it will fracture the community and make things difficult for everyone. There have been problems in this regard with ARM in the past, as the instruction set has changed over time.

One interesting thing about the Always Innovating “Touch Book” is that you can order the keyboard and extra battery part separately from the main computer/display unit. This means that if you break one part you can replace it without replacing the entire system (handy if you break the keyboard, which is the cheaper part). It’s interesting to note that their web site offers to sell me as many as 558 complete systems, as many as 896 tablets, or as many as 992 keyboards. So according to the web site anyone who wanted to buy more than 558 systems would have to order the tablets separately from the keyboards. This wouldn’t be a bad thing as the complete unit costs $399, the tablet costs $299, and the keyboard costs $99 – so ordering the keyboard and tablet separately would save $1 per unit! Of course anyone who really wanted to buy 600 computers wouldn’t use a web site; they would call the sales people and get a discount significantly greater than $1 per unit. But these web sales limits seem strange enough to be worthy of comment.

It’s an interesting system; it would be handy for reading documents when on the move and for light sysadmin work (basic logging in to a server to restart a crashed daemon). If I was after a new system I would probably buy one.

The Lenovo U1 Hybrid – an example of how Proprietary OSs Suck

Lenovo have announced their innovative new U1 “Hybrid” laptop [1]. It consists of a tablet-style device with a resistive touch-screen that runs Linux on a 1GHz ARM processor, which attaches to a base computer that has a keyboard and a Core2 processor running Windows 7. They apparently have some special software to synchronise web browsing on both computers so you can maintain your web sessions when you detach the computers. How they manage to use the power of the Core2 CPU for javascript and flash intensive web sites while allowing the active browser sessions to migrate to a different OS must be a neat technical trick. Revision3 have a youtube review of the U1 which shows what it can do in terms of the hardware interface [2].

Running two computers requires having two batteries, two motherboards, etc, which means more weight or less battery life for the weight; whichever way you think of it the user loses because of this choice. The idea of having two computers in one is one of those cool technical ideas that just won’t provide a benefit for the users.

The best thing for users would be to have a single light-weight computer that provides an adequate amount of performance. A 1GHz ARM processor should give good performance when running Linux for most tasks (web browsing, office applications, and most games). So it seems that a good tablet with some USB ports and a matching USB keyboard would be a better option. Maybe you could have a spare battery integrated with the keyboard, as most times when you want a full sized keyboard weight isn’t such a big problem.

The problem with MS-Windows in regard to such machines is that performance is poor and processors other than i386 and AMD64 have never really been supported (sure you can buy MS-Windows-based PDAs that have ARM and PPC CPUs, but the application support for them is minimal).

These problems will never be solved. MS wants to compel users to continually upgrade their software, which requires always adding new features (bloat), while an older or less featureful OS is often a better option for lesser hardware. Vendors of proprietary applications will never support the range of CPU architectures that a free software distribution such as Debian or NetBSD supports (not that NetBSD is an ideal tablet OS – but it does demonstrate what can be done if you want a portable OS). These inherent flaws in the proprietary software environment lead to some unusual hardware being designed to work around them – such as the Lenovo U1.

I don’t have any reason to believe that the Lenovo is a bad machine; it sounds like a reasonable work-around for some of the problems that MS-Windows has forced on the industry. But I believe that it would be a much better machine if it was lighter, had a better battery life, and had a choice of keyboards – all of which could have been achieved if it had been designed as an ARM-only tablet machine that you can connect to an external keyboard. They could even have used a multi-core ARM CPU. But the market for such systems apparently isn’t large enough for Lenovo (or anyone else) to ship ARM laptops for serious use. There have been a couple of netbooks released recently with CPUs that don’t support the i386 or AMD64 instruction set, but they were aimed at web browsing rather than general purpose computing. Almost everything that I do on computers could be done at least as well with a CPU that doesn’t implement an Intel instruction set; the only exception is virtualisation, which doesn’t seem to be well supported on architectures other than AMD64 nowadays.

Another Hot Summer

Yesterday was ~30C in my area, and today was well over 30C during the day (although cooler in the evening). The forecast for tomorrow in Melbourne is 33C, but that means where I live it will probably be about 36C (it’s always a few degrees hotter than the overall forecast for the city). Monday is predicted to be 41C and Tuesday may be 32C.

I turned off my SE Linux Play Machine this morning and will probably leave it off until at least Tuesday evening. The hardware I use for the Play Machine is fairly energy efficient, but it’s in a confined space with some other electronic gear so it’s best not to take chances. I’ll probably leave it offline for about half of January and February.

I miss Amsterdam and London weather.

Ext4 and Debian/Lenny

I want to use the Ext4 filesystem on Xen DomUs. The reason for this is that the problem of long fsck times (which Ext4 greatly reduces, as described in my previous post about Ext4 [1]) is compounded if you have multiple DomUs running fsck at the same time.

One issue that makes this difficult is the fact that it is very important to be able to mount a DomU filesystem in the Dom0, and it is extremely useful to be able to fsck a DomU filesystem from the Dom0 (for example when you want to resize the root filesystem of the DomU).
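To make this concrete, this is the sort of operation I mean. With the DomU shut down, its root filesystem (here an LVM volume – the names are hypothetical) can be checked and mounted from the Dom0, provided the Dom0 kernel and tools understand the filesystem:

  xm shutdown -w lenny01             # wait for the DomU to stop
  e2fsck -f /dev/vg0/lenny01-root    # needs an Ext4-aware e2fsprogs
  mount /dev/vg0/lenny01-root /mnt   # needs an Ext4-aware kernel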

I have Dom0 systems running CentOS5, RHEL5, and Debian/Lenny, and I have DomU systems running CentOS5, RHEL4, Debian/Lenny, and Debian/Unstable. So to get Ext4 support on all my Xen servers I need it for Debian/Lenny and RHEL4 (Debian/Unstable has full support for Ext4 and RHEL5 and CentOS5 have been updated to support it [2]).

The Debian kernel team apparently don’t plan to add kernel support for Ext4 in Lenny (they generally don’t do such things) and even backports.debian.org doesn’t have a version of e2fsprogs that supports ext4. So getting Lenny going with Ext4 requires a non-default kernel and a back-port of the utilities. In the past I’ve used CentOS and RHEL kernels to run Debian systems and that has worked reasonably well. I wouldn’t recommend doing so for a Dom0 or a non-virtual install, but for a DomU it works reasonably well and it’s not too difficult to recover from problems. So I have decided to upgrade most of my Lenny virtual machines to a CentOS 5 kernel.

When installing a CentOS 5 kernel to replace a Debian/Lenny kernel you have to use “console=tty0” as a kernel parameter instead of “xencons=tty”, you have to use /dev/xvc0 as the name of the terminal for running a getty (IE xvc0 is a parameter to getty), and you have to edit /etc/rc.local (or some other init script) to run “killall -9 nash-hotplug”, as a nash process from the Red Hat initrd goes into an infinite loop. Of course upgrading a CentOS kernel on a Debian system is a little more inconvenient (I upgrade a CentOS DomU and then copy the kernel modules to the Debian DomUs and the vmlinuz and initrd to the Dom0).
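In concrete terms the two Debian-side changes are small, something like the following (the getty line goes in /etc/inittab; the runlevels and baud rate are just what I’d normally use):

  # /etc/inittab: run a getty on the Xen console device used by the CentOS kernel
  co:2345:respawn:/sbin/getty 38400 xvc0

  # /etc/rc.local: kill the looping nash process from the Red Hat initrd
  killall -9 nash-hotplug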

The inconvenience of this can be an issue in an environment where multiple people are involved in running the systems; if a sysadmin who lacks skills or confidence takes over they may be afraid to upgrade the kernel to solve security issues. Also “apt-get dist-upgrade” won’t show that a CentOS kernel can be updated, so a little more management effort is required to track which machines need to be upgraded.

deb http://www.coker.com.au lenny misc

To backport the e2fsprogs package I first needed to backport util-linux, debhelper, libtool, xz-utils, base-files, and dpkg. This is the most significant and invasive back-port I’ve done. The above apt repository has all those packages for the AMD64 and i386 architectures.

For a Debian system, after the right kernel is installed and e2fsprogs (and its dependencies) are upgraded, the command “tune2fs -O flex_bg,uninit_bg /dev/xvda” can be used to enable the Ext4 filesystem features. At the next reboot the system will prompt for the root password and allow you to manually run “e2fsck -y /dev/xvda” to do the real work of transitioning the filesystem (unlike Red Hat based distributions, which do this automatically).
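Putting the steps together for a Lenny DomU (a sketch – /dev/xvda is the DomU’s disk as seen from inside the DomU, and the repository line is the one given above):

  echo "deb http://www.coker.com.au lenny misc" >> /etc/apt/sources.list
  apt-get update
  apt-get install e2fsprogs               # pulls in the backported version
  tune2fs -O flex_bg,uninit_bg /dev/xvda
  # remember to change the filesystem type to ext4 in /etc/fstab
  reboot                                  # boot will drop to a maintenance shell
  e2fsck -y /dev/xvda                     # run manually to complete the transition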

So the current state of my Debian systems is that the DomUs run the CentOS kernel and my backported utilities, while the Dom0 runs just the backported utilities with the Lenny kernel. Thus the Debian Dom0 can’t mount filesystems from the DomUs – which makes things very difficult when there is a problem that needs to be fixed in a DomU; I have to either mount the filesystem from another DomU or boot with “init=/bin/bash”.
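The “init=/bin/bash” trick can be done by temporarily adding it to the kernel command line in the DomU’s Xen config file and booting with the console attached, along these lines (the config file name is hypothetical):

  # in /etc/xen/lenny01.cfg
  extra = "init=/bin/bash"

  # then boot the DomU with its console attached
  xm create -c /etc/xen/lenny01.cfg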