Storage vs RAM Size

In a comment on my post Shared Objects and Big Applications about memlockd [1], mic said that they use memlockd to lock the entire root filesystem in RAM. Here is a table showing my history of desktop computers with the amounts of RAM, disk capacity, and CPU power available. All systems newer than the 386-33 are laptops – a laptop has been my primary desktop system for the last 12 years. The columns for the maximum RAM and disk are the amounts that I could reasonably have afforded if I had used a desktop PC instead of a laptop with the best available technology of the day. I’m basing disk capacity on having four hard drives (the maximum that can be installed in a typical PC without extra power cables and drive controller cards) and running RAID-5. For the machines before 2000 I base the maximum disk capacity on not using RAID, as Linux software RAID used to not be that good (no online rebuild, for starters) and hardware RAID options have always been too expensive or too lame for my use.
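As a side note on the arithmetic behind the maximum-disk column: RAID-5 over n drives devotes one drive’s worth of space to parity, so the usable capacity is (n-1) times the drive size. A quick sketch (the 2000G drive size is my assumption about what was typical in 2010, not a figure from the table’s source):

```python
def raid5_usable(drive_count, drive_size_gb):
    """Usable capacity of a RAID-5 array: one drive's worth of space holds parity."""
    if drive_count < 3:
        raise ValueError("RAID-5 needs at least 3 drives")
    return (drive_count - 1) * drive_size_gb

# Four 2000G drives give the 6000G usable in the 2010 row:
print(raid5_usable(4, 2000))  # 6000
```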

Year  CPU                    RAM    Disk  Maximum RAM  Maximum Disk
1988  286-12                 4M     70M   4M           70M
1993  386-33                 16M    200M  16M          200M
1998  Pentium-M 233          96M    3G    128M         6G
1999  Pentium-2 400          256M   6G    512M         40G
2000  Pentium-2 600          384M   10G   512M         150G
2003  Pentium-M 1700         768M   60G   2048M        400G
2009  Pentium-M 1700         1536M  100G  8192M        4500G
2010  Core 2 Duo T7500 2200  5120M  100G  8192M        6000G

[Graph of RAM and disk sizes from the above table]

The above graph shows how modern RAM capacities have overtaken older disk capacities. So it seems that a viable option on modern systems is to load everything that you need to run into RAM; locking it there will save spinning up the hard drive on a laptop. With a modern laptop it should be possible to lock most of the hard drive contents that are regularly used (i.e. the applications) into RAM and run with /home on an SD flash storage device. Then the hard drive would only need to be used if something uncommon was accessed or if something large (like a movie) was needed. It also shows that there is potential to run diskless workstations that copy the entire contents of their root filesystem at boot time so that they can run independently of the server and only access the server for /home.

Note that the size of the RAM doesn’t need to be larger than the disk capacity of older machines (some of the disk was used for swap, /home, etc). But when it is larger it makes it clear that the disk doesn’t need to be accessed for routine storage needs.

I generated the graph with GnuPlot [2]; the configuration files I used are in the directory that contains the images, and the command used was “gnuplot command.txt”. I find the GnuPlot documentation difficult to use, so I hope that this example will be useful for other people who need to produce basic graphs – I’m not using 1% of the GnuPlot functionality.
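For readers who can’t fetch the actual config files, a minimal command.txt along these lines (the file names, image size, and data layout are my reconstruction, not the original) produces a similar graph with a log-scale capacity axis:

```gnuplot
# command.txt – run as: gnuplot command.txt
# data.txt has one line per year: year ram_megabytes disk_megabytes
set terminal png size 640,480
set output "ram-disk.png"
set logscale y
set xlabel "Year"
set ylabel "Capacity (megabytes)"
set key top left
plot "data.txt" using 1:2 title "RAM" with linespoints, \
     "data.txt" using 1:3 title "Disk" with linespoints
```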

I Just Bought a new Thinkpad and the Lenovo Web Site Sucks

I’ve just bought a Thinkpad T61 at auction for $AU796. My Thinkpad T41p has cooling problems which I have previously described [1]. It’s also started to rattle a bit when I hold it upside down since I took it apart, so I guess I didn’t do a great job of trying to fix it (probably the fan is getting obstructed). Now it has developed a screen problem where the back-light will periodically turn off and stay off until I press and release the lid-close button (to turn the screen off and on again); this is apparently the symptom of a broken inverter [2]. I was quoted $160 to fix the inverter and $250 to replace the entire screen by laptop.com.au (a very reliable laptop sales and repair company that I’ve dealt with before) [3]. The system also has the red screen problem where the screen intermittently turns reddish, so paying $250 for a replacement screen is worth considering. I decided not to do this as I have seen refurbished Thinkpad T41p systems on sale for about $400, and spending $160 now and possibly $250 later on a $400 system didn’t seem like a good idea. One thing that has annoyed me about my Thinkpad for a long time is the lack of PAE support in the Pentium-M CPU which makes it impossible to run Xen [4], so upgrading to a newer system will allow me to use virtualisation for the purpose of fixing bugs in Debian/Unstable, among other things.

As I want a Trackpoint it seems that a Thinkpad is the best option (Thinkpads are also great in many other ways). So really all I want is a new Thinkpad with an equal or higher resolution screen, more than 1.5G of RAM (what I’ve currently got), and at least PAE (but ideally hardware virtualisation for KVM) as rumor has it that ACPI doesn’t work well with Xen and Xen has a history of being a little unreliable at the best of times. I’m after a portable desktop replacement system, so I’m not after an X series or anything else light either.

Even though the new prices on Thinkpads are generally more than I want to pay I first checked the Lenovo web site. It sucks in a magnificent way. First there were some basic site navigation issues, such as the fact that I often can’t click my middle mouse button on a link to open it in a new tab (I get some sort of Javascript error) – it seems that the Lenovo web team didn’t consider the possibility that I might want to have the details of different series of Thinkpad open in different tabs for the purpose of comparison. But the kicker is the fact that most Thinkpads don’t have the screen resolution displayed! It seems to me that one of the most important factors in purchasing a laptop is the screen resolution – and Lenovo generally don’t provide it!

The Ideapad is described as having a resolution of 1024*600 (a netbook, not a laptop), the Thinkpad Edge has 1366*768 (not that good), and the R400 and R500 are WXGA, which is anything between 1280*720 (sucky) and 1366*768 (slightly less sucky). So it seems that the low end models have technical details which could allow a potential customer to reject them, while the high end models don’t have the technical details needed to justify the purchase price! Fortunately a friend who works for IBM was able to find me the necessary information: there is a site that allows you to enter the part number of any Thinkpad and receive a reasonably complete set of specifications (including display resolution) [5]. With the information on that site I was able to successfully bid on the single Thinkpad in a Lenovo auction of refurbished systems that had a satisfactory resolution.

The fact that the Lenovo auctions of refurbished systems also lack these details is another thing that Lenovo do wrong. In this case I started bidding one minute before the auction closed and had to push the price up by $125 to win it. Given the number of auctions that Lenovo runs world-wide they would probably benefit from fixing their web site just to get the occasional Thinkpad price increased by $125. Not to mention the number of people who are discouraged from buying new Thinkpads because they can’t get information on what they might be paying for.

Which People are Stupid on the Internet?

I don’t think that the answer is “everyone” or even “everyone other than my geeky friends”, but obviously it is a large number of people.

Many people apparently type “facebook” into Google and try to login to the first thing that they see; if it happens to not be Facebook then they whine – this became known after a Google search for “facebook login” happened to not return the Facebook login page as the first link [1]. This blog post claims that they are not stupid [2] – the specific claim is that URLs etc are just too complex. I disagree: if my mother and my mother-in-law can both do better than that then I think that we should expect a significant portion of retirees to do so, and we should also expect younger people to do better than older people.

In a more specific sense, when I was in primary school I was taught the Dewey Decimal System aka Dewey Decimal Classification. With the DDC a primary school student can look up the location of a book in the library index system (cardboard files when I was at school) and then know where to find it. After looking up a book on one occasion no-one would want to repeat the effort, so the sensible thing to do is to write down the DDC index of any interesting book. The same mental processes can be used for dealing with URLs: someone might find Facebook etc through Google on the first occasion, but they can then use browser bookmarks and written notes to track the URLs that interest them. I think we should expect that a typical adult nowadays should be able to complete any task that would be expected of a 10yo when I was young, and I don’t think it’s unreasonable to call an adult stupid if they can’t match a 10yo from the early 80s! As a specific example, while 10yo children were given assignments to look up various books in the DDC, I think that an adult can be expected to work out the value of an index on their own – young children should be expected to require a little more training than adults!

Some people will claim that it’s not stupidity but ignorance. What exactly is supposed to have prevented these people from learning? Have the primary school libraries stopped teaching the DDC and most other things related to storing written knowledge? Is there supposed to be such an utter lack of computer skills in the general population that anyone who wants to learn will be unable to do so? I’m sure that there are plenty of retirees who could seek advice from my mother or my mother-in-law if they wanted to learn about such things. NB I’m not making any general comment about gender specific computer skills here, my father and my father-in-law don’t seem to use the Internet much and they aren’t the ones to complain to me when things break – so I can’t assess their skills. I am talking about four individuals and the only generalisation that I am making is that 2/4 retirees I know well seem to have good Internet skills and therefore I expect there to be a reasonable number of retirees who successfully use the Internet.

The Making Light analysis and discussion of the issue has a lot of good points (Making Light does in fact “make light”) [3]. But it does have some claims that I find really strange; one example concerns a woman who misunderstood the way the up/down buttons work to call an elevator. Misunderstanding the buttons is one thing, but she also shared her “knowledge” of elevators with others, and presumably she had more than a few people try to correct her and she ignored their advice. I think that someone who ignores advice from a variety of people, ignores advice that can easily be tested (just push the elevator buttons and observe what happens), and then goes around sharing their wrong ideas has clearly crossed the line separating cluelessness from stupidity.

One of the Making Light comments references the Clients From Hell blog – a summary of strange, stupid, and amusing requests that clients have made to web design companies [4]. It seems to me that there are two noteworthy categories of anecdote on that site. One is requests that demonstrate ignorance of the work, such as requesting something significant and complex to be done in an hour. The other is requests that demonstrate contempt for the people doing the work, such as offering to pay $10 per hour. Misjudging the time taken to complete work is forgivable – if someone had the skill to accurately estimate the time required then they would be able to do the work and wouldn’t be asking for a quote for someone else to do it. Demonstrating contempt for someone that you are about to hire is stupid no matter how or why it’s done. Clients From Hell also documents people who have requests that are obviously silly. It’s understandable that someone might expect a blurry image to be sharpened as done on “CSI”, but wanting to use image editing to reveal the face of a person who was facing away from the camera is simply assigning magical powers to the computer – the fact that this sort of thing is done in shows such as Star Trek says a lot about the shows in question and their viewers.

Car metaphors are often used for computers: you can be a good driver without knowing the details of how a car works – but you do have to know how the pedals, switches, and steering wheel work, as well as the meanings of the various dials. You can be competent at using the Internet without knowing much about bits, bytes, assembler code, or how a CPU works – but you do need to know how the controls work, and this means knowing how to type a URL.

The basic operations of browsing the web require considerably less skill than driving a car, and less skill than is commonly used in operating the telephone system (including PABX systems, mobile phones, and international calls). Anyone who is unable (not unwilling) to drive a car or make any phone call other than a local direct call and yet is reasonably intelligent could be used as an example of how an intelligent person can be unable to understand some aspects of technology. I don’t think that there are many people in that situation – it’s difficult to find an adult in Australia who can’t drive a car.

Finally, while it’s reasonable to be uninterested in some things, it’s not reasonable to be interested in doing something without wanting to learn how to do it properly. If typing “facebook.com” is so difficult that it exceeds someone’s level of interest in the service then they shouldn’t complain if they find that they can’t access the service. Really, typing “facebook.com” into the address bar of a web browser is easier than starting the engine of a car with a manual transmission, easier than filling the fuel tank of a car with the correct fuel, and easier than figuring out when a car is due for service.

Now there are serious security issues revealed by this event. I’m sure that lots of people use similar methods to access their online banking etc. I just did a quick Google search for online banking with Australian banks, and I noticed that a few of the search results have adverts from rival banks. So it seems quite plausible that someone could trick Google into thinking that they run a bank (there are many thousands of banks in the world), run adverts competing against established banks, and phish the people who click on them.

I wonder whether the best solution would be for the banks to test the security of their customers. Then any customer who gets phished by the bank’s anti-fraud division would receive increased bank fees for the next few years, and the rest of us who pose less risk to the bank could receive lower fees. The current situation seems to be that my bank fees are partly determined by the need to recoup the money that the bank loses from customers who just use Google to find their bank’s web site. I would rather not pay for the stupidity of such people.

In the end all security comes down to people issues, technology just helps people do the right thing. I believe that one of the groups of stupid people on the Internet are those who believe that the Internet should be made safe for people who want to know nothing about it – not even the basic library skills that are taught to primary school students.

My Ideal Mobile Phone

Based on my experience testing the IBM Seer software on an Android phone [1] I have been considering what type of mobile phone to get when my current contract expires. Here are the features, beyond what is common in current smart phones, that I think most people will sorely miss if they don’t have them in the 2011-2012 period:

  1. Camera that takes reasonable quality pictures at a 5MP resolution.
  2. High resolution screen (VGA or better).
  3. GPS (for navigation and augmented reality).
  4. Digital compass for augmented reality.
  5. An open market for applications which allows free software to be installed – such as OpenSSH.

The first two items shouldn’t be a problem; there has been a constant trend towards better cameras and higher resolution screens in phones. The difficult ones are the GPS and digital compass, which require phone software to use them. I get the impression that Android and the iPhone are going to share the market for fully functional smart phones (because they have the markets of applications). So I predict that by 2012 the phone market will have iPhone and Android fully functional smart phones as well as budget phones that don’t support running applications (and will probably lack a compass and GPS).

Here are the features that while not essential, will greatly increase the experience of using a phone for serious users:

  1. At least 2G of storage built in – installing a 2G micro-SD card is not adequate.
  2. A screen that can be easily read during the day – maybe Pixel Qi.
  3. Good quality sound for playing video and audio recordings through a regular headphone jack (so I can use my Bose headset).

For my use a hardware keyboard (such as is used in the Motorola A855 “Droid”) is essential. I want to have a pocket sized ssh client for emergencies, and I want to be able to type notes reasonably quickly.

I wonder what portion of the smart-phone user base actually needs a keyboard. I’ve seen many people who use a smart-phone as just a regular phone that can exchange photos. Even among people who are moderately serious about smart-phone use there are probably many who only want to take high resolution photos and tag them with GPS data. Currently there are no Android phones on sale in Australia that have a hardware keyboard, I’m worried that this may be an ongoing trend which will result in people with my requirements being forced to either pay significantly more or compromise on features due to the market meeting the needs of average people.

Finally I would like to have a smart-phone that has a regular USB port for plugging in devices (which would of course require an adapter as the size of a phone doesn’t permit a regular USB port). That would permit copying files from USB flash devices, driving a digital SLR camera, and printing photos directly to a USB printer. It would also allow connecting a USB video device, keyboard, and mouse to make a mobile phone work as a desktop workstation. Current smart phones have a lot more compute power than the desktop machines I was using in 1998, so there’s no reason that one couldn’t be used as a workstation with the appropriate peripherals.

Old Mobile Phones as Toys

In the past I have had parents ask for advice on buying a digital camera for a young child. For some years there have been digital cameras on sale for much less than $100 – cheap enough that no-one will be THAT bothered if the child breaks it, so digital photography is a good hobby for a young child. Such cameras are however quite bulky and require AA batteries – which often don’t last that long between charges. Some of the cheap cameras are large enough that a 3yo child can have trouble carrying them.

I recently gave an old LG U8110 phone to a young child for use as a camera. The phone has a 640*480 resolution camera and a display that is a few centimeters wide. It’s no good for any remotely serious photography, and among other problems I never managed to get its USB connection to work, so the only way I ever managed to get a photograph off it was to MMS it to a newer phone. But it’s quite adequate for a child to play with: it’s small, light, and the battery stays charged for ages. Also the phone has a clock built in, which is a handy feature – it seems that nowadays the trend in society is away from wearing a watch and towards using a mobile phone to discover the time.

Also a phone is a fairly capable computer; I think that the first two computers that I owned had significantly less CPU power and RAM than an LG U8110, and lots of newer phones compare well to PCs that were manufactured in the mid 90’s. The trend has been towards having an increasing number of applications and games on phones, which of course gives more things for a child to play with. I believe that playing with computers that have a variety of different user interfaces and sets of applications is good for the education of young children.

Now to make a phone work you need to have a SIM. If a phone was designed by someone who was intelligent and who was acting on behalf of the owner of the phone then it would support the camera etc without a SIM. But it seems that mobile phones are either designed by idiots or they are designed to act on behalf of the phone companies to the exclusion of the customer’s interests, so I haven’t seen a camera-phone that is usable for any purpose other than calling the emergency services when there is no SIM installed. Fortunately it is possible to get old SIMs, I had one that was replaced due to an intermittent fault that caused calls to drop out. I also have some SIMs from other telcos that would probably work (I’m not sure whether a phone that is locked to one carrier will take photos if a SIM from another carrier is installed).

Update: It seems that there is a range of phones that operate without a SIM: a Nokia N900 (if you consider it to be a phone rather than an Internet tablet), an Android phone, or a phone running the Symbian OS. I suspect that the majority of phones that are currently in use and due to be replaced soon will require a SIM though.

One final notable aspect of giving a phone to a child is the possibility of it being used to call emergency services (which will work even when there is no SIM or a SIM that is not associated with an account). If you are planning to give a phone to someone else’s child then you should ask the parents first, some parents believe (either correctly or incorrectly) that the chance of their child making prank calls to the emergency services is too great. A present that a child receives which is undesired by their parents will probably get lost or broken quickly…

When such a phone gets broken by a child (they are tough, but almost everything that is used without restriction by a child gets broken) the next thing to do is to disassemble it. With modern design and manufacturing probably all that a child could really learn from a phone is how the keyboard works – and not even that for a touch-screen phone. But it’s still a good experience for a child to take apart old machines. When I was young my father gave me many old machines to take apart, I had a lot of fun and learned some interesting things.

I find it really sad to see those boxes for recycling old phones at the mobile phone stores which are full of 2yo phones that are mostly in good condition. Almost everyone has some young relatives or friends who have children who could find a good use for that stuff. Send the bits to be recycled AFTER the children nearest to you have finished doing things to the old phone!

Taking my Thinkpad Apart and Cooling Problems

I’ve been having some cooling problems with my Thinkpad recently. It’s an old model T41p which is outside the service period so IBM/Lenovo won’t help me (at least not unless I give them more money). If I run it for a few minutes at maximum CPU when the ambient temperature is about 20C then it gets to 90C, apparently 93C is the temperature at which it turns itself off, so obviously I need to do something to keep it cool. On the really hot days of summer my air-conditioners can’t keep any part of my house below 30C, so on such days I can’t do any compiles on my Thinkpad or watch videos.

My Thinkpad seems to idle at a temperature that is about 35C higher than the ambient temperature. At this rate the system could get close to its maximum temperature on a 45C day by just idling! Not that I plan to have a warm Thinkpad on my lap if I ever happen to be outside on such a hot day.

I suspect that a large part of the problem is the dust that has accumulated inside the machine. I asked about this on the LUV mailing list and Andrew Chalmers suggested The Chaos Manor review of taking a Thinkpad T41p apart [1]. The Chaos Manor guy wanted to replace his CPU with a faster one so he had to get access to all the same bits.

I followed the instructions until I got to the stage of prying the heatsink off the video chip. I figured that I would never be able to reattach it as well as it is currently attached, so I would get different cooling problems if I went any further. Taking it apart to that degree was a moderate amount of work: getting the keyboard bezel off was the most difficult part, and taking the palm-rest off required removing bits of plastic that were stuck in place to cover screw holes – which will probably fall off in a week or two.

Probably everyone who owns a T41p that they regularly use has a similar problem to me, as all such machines have been out of support long enough to have accumulated a lot of dust. So I recommend that other T41p owners not disassemble their machine as much as I did, but instead go for my plan B, which is to blow compressed air through the CPU cooling system. Doing this merely requires removing the keyboard. One tip that I have heard is that you should hold the fan in place when blowing compressed air, as the pressure of the air may spin the fan fast enough to generate enough electricity to damage the motherboard, or damage its bearings. But you might want to wait until I’ve got my Thinkpad done before you blow compressed air through yours, it could very well be destroyed.

The other option is to try and use software to control the temperature. Patricia Fraser suggested controlling the fan speed [2]. I did some experiments and found that increasing the fan speed dramatically slowed the increase in the temperature. A 10 minute build is almost certain to bring the temperature to dangerous levels in a default configuration, but it seems that if I set the fan to maximum speed before starting the build then I can mitigate this problem. Most programs that I work on will compile in significantly less than 10 minutes.
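For reference, setting the fan speed on a ThinkPad goes through the ibm-acpi /proc interface. A minimal sketch, assuming the module is loaded with fan control enabled (writes to this file fail unless the driver was loaded with the fan_control=1 option, and the path may differ between kernel versions):

```python
def set_fan(level, path="/proc/acpi/ibm/fan"):
    """Set the ThinkPad fan speed: level is 0-7, "auto", or "full-speed"."""
    # Writing "level N" to the fan file is the ibm-acpi control interface.
    with open(path, "w") as f:
        f.write("level %s\n" % level)

# set_fan(7)       # maximum speed before starting a long build
# set_fan("auto")  # hand control back to the firmware afterwards
```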

Another possibility that occurred to me is that of limiting the speed of the system. It seems that ACPI has support for reducing the CPU frequency when the temperature rises too much, but Matthew Garrett has pointed out that this effectively increases the amount of energy used (and heat dissipated) for any given quantity of work [3]. So what I want to do is to cause the CPU to idle periodically when it gets too hot. I’ve been idly considering writing a program that uses SIGSTOP and SIGCONT to control the operation of programs such as make, or writing a program that creates a new pty (like script) and pauses the output whenever the CPU gets too hot.
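A sketch of the SIGSTOP/SIGCONT idea in Python, assuming Linux; the thermal-zone path and the 85C threshold are placeholders that vary between kernels and machines, so treat them as assumptions to adjust:

```python
import os
import signal
import subprocess
import time

def read_cpu_temp(path="/proc/acpi/thermal_zone/THM0/temperature"):
    # Parses a line like "temperature:  78 C"; the path is machine-specific.
    with open(path) as f:
        return int(f.read().split()[1])

def throttle(cmd, threshold=85, poll=2.0, get_temp=read_cpu_temp):
    """Run cmd, pausing it with SIGSTOP while the CPU is too hot."""
    proc = subprocess.Popen(cmd)
    stopped = False
    while proc.poll() is None:
        hot = get_temp() >= threshold
        if hot and not stopped:
            os.kill(proc.pid, signal.SIGSTOP)  # pause e.g. a make run
            stopped = True
        elif not hot and stopped:
            os.kill(proc.pid, signal.SIGCONT)  # cool enough, resume
            stopped = False
        time.sleep(poll)
    return proc.returncode

# throttle(["make", "-j2"])
```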

Of course the easy option would be to figure out how to set the threshold temperature at which the CPU speed is limited. This is made slightly more complicated by my choice of kernel 2.6.30 for Ext4 support. Now the kernel doesn’t work with my acpid and I’m starting to get forced into an upgrade to Debian/Testing.

Does anyone have any ideas?

My Ideal Netbook

I have direct knowledge (through observation or first-hand reports) of the following use cases for Netbooks:

  1. System administrator’s emergency workstation – something light to carry when you might get an SMS about a problem.
  2. A really small laptop for a serious technical user, can be used for programming and other serious tasks. Only someone who is really dedicated to the hobby of computing would choose a system with a tiny screen for their main computer so that they can take it everywhere.
  3. A computer for a child. Children are less demanding about some aspects of their computer experience, no-one wants to buy a really expensive toy for a young child, and children find it difficult to carry full size laptops.

1 and 2 are the options that interest me. But it would be good if the extensive children’s toy market could drive economies of scale and reduce the expenses of my hobby and profession. I have found my EeePC 701 to provide adequate CPU performance (Celeron-M 630MHz) for light compiling tasks and I have used it for some Debian development work. The built-in SSD is very fast, and because of it OpenOffice load times compare well to my Thinkpad T41p.

The main problems I have with my EeePC for sysadmin and coding tasks are the small keyboard (which can’t be fixed – the overall size of the machine needs to remain small), the small screen (which could be expanded without changing the case size), the low resolution screen (which has been fixed in newer netbooks), and the fact that my 3G dongle sticks out and is likely to get broken. More recent Netbooks address these issues – while having the trade-off of being heavier and larger.

There are also some tasks which are generally not performed on a Netbook but that could be if it was properly designed. Here are some examples of things that I think should be done on a Netbook:

  1. Basic image editing and blogging – something that a lot of people do on smart-phones nowadays. It can be done more effectively and with less effort on a general purpose computer – I can’t imagine the GIMP or Inkscape running on a smart-phone.
  2. Reading electronic books. The number of people who want a Netbook but don’t want to read electronic books would be quite small, as would the number of people who only want to read electronic books and never want to use a general purpose computer while travelling. No-one really wants to carry both a Netbook and an ebook reader at the same time.
  3. Watching movies.
  4. Everything that you might want to do on a terminal in an Internet cafe – those machines are always 0wned, just say no for security reasons.
  5. Games. The Nintendo DS has two ARM CPUs running at 66MHz and 33MHz and a combined screen resolution of 256*384*18bpp [1] and the Sony PSP has a 333MHz MIPS R4000 CPU and a screen resolution of 480*272*24bpp [2]. The original EeePC had more CPU power and more screen resolution than the DS and the PSP so it IS suitable for games – even though it won’t run the latest 3D games. There are lots of great games like Wesnoth that don’t require much video performance.
  6. Educational Software. Portable educational devices include the awful V.smile system for young children [3] and the educational software for the DS (I’ve seen a demonstration of a training program in rapid completion of basic maths problems for elderly people and presume it’s not the only educational software on that platform).

Image editing requires a color screen of high resolution. Effective blogging requires a platform with a resolution that compares to the typical web user – according to Hitslink 1024*768 is the most popular resolution of web browsing systems at 28.34% and the most popular resolution that is less than 1024*768 is 800*600 at 3.28% [4]. So even the most casual blogger will have an incentive to get a screen that is of higher resolution than all but the latest Netbooks. The recently released EeePC 1201 has a screen resolution of 1366*768 which should be barely adequate for those tasks.

Reading electronic books requires a reasonable resolution. Based on my experience with the 1400*1050 display in my Thinkpad it seems that a resolution of 1366*768 would be barely adequate for reading an academic paper that has two columns in a small font. But as the original Kindle had a resolution of 600*800 and the latest Kindle has a resolution of 824*1200 [5], it seems that perhaps the epaper displays are good enough to allow reading the text at a lower resolution. A display that can draw little or no power when idling (as epaper does) is simply required for an ebook reader. The Pixel-Qi hybrid displays are claimed to offer the best features of TFT and epaper displays [6] but they haven’t been released yet. I think it’s reasonable to assume that someone will achieve what Pixel-Qi is attempting and that it will become the standard display for a Netbook.

Watching movies and playing games (even games like Wesnoth) requires better video performance than epaper can deliver, so we just have to hope that Pixel-Qi release something soon.

Watching movies and reading ebooks are both things that are best done without a keyboard in the way. The Always Innovating “Touch Book” [7] seems like a good solution to this problem. It’s a tablet PC that can be connected to a keyboard base if/when you desire. It should also be good for web browsing and reading email while on the move – I find that my EeePC is unreasonably heavy and awkward for typing email while walking.

Intel CPUs are not particularly energy efficient. As there are ARM CPUs with clock speeds as high as 2GHz and with as many as four CPU cores, it seems that the ARM architecture can provide as much CPU power as is required. Debian currently supports two versions of the ARM architecture; if another one became commonly used it wouldn’t be that difficult to run Debian build servers for it.

Given a screen resolution equal to that of the latest Kindle, CPU power greater than that of the early Netbooks, and the ability to run a free software OS, the range of educational and gaming software should be adequate.

So it seems that the ideal netbook would have a detachable keyboard and base and a touch-screen in the computer part. It would have a Pixel-Qi display (or equivalent) with a resolution of 1400*1050 or better. It would have USB, Gig-E, Wifi, and Bluetooth connectivity and the ability to have an internally mounted USB dongle (as the Always Innovating Touch Book does). I think that this is not overly difficult to achieve – it is basically an Always Innovating system with a better display.

Update: Another criterion is the ability to start operating quickly when requested. Even mobile phones are often limited in their utility by the time taken to activate them (I can’t get my mobile phone to take a photo in much less than 7 seconds after removing it from my pocket). The Always Innovating system is apparently always in suspend to RAM mode when it’s not being used so that it can start quickly. That, combined with fast application load times and a good menu system, could allow turning the system on and launching an application in less than 2 seconds.

If I was buying a Netbook right now the only thing that would stop me from buying an Always Innovating device is the shipping delay. But as my EeePC is working quite well I’m not going to buy another system unless I am going to get significant benefits – such as a high resolution Pixel-Qi display.

6

The Lenovo U1 Hybrid – an example of how Proprietary OSs Suck

Lenovo have announced their innovative new U1 “Hybrid” laptop [1]. It consists of a tablet-style device with a resistive touch-screen that runs Linux on a 1GHz ARM processor, which attaches to a base computer that has a keyboard and a Core2 processor running Windows 7. They apparently have some special software to synchronise web browsing on both computers so you can maintain your web sessions when you detach the computers. How they manage to use the power of the Core2 CPU for javascript and flash intensive web sites while allowing the active browser sessions to migrate to a different OS must be a neat technical trick. Revision3 have a youtube review of the U1 which shows what it can do in terms of the hardware interface [2].

Running two computers requires having two batteries, two motherboards, etc, which means either more weight or less battery life for the weight – whichever way you look at it, the user loses because of this choice. The idea of having two computers in one is one of those cool technical ideas that just won’t provide a benefit for the users.

The best thing for the users would be to have a single light-weight computer that provides an adequate amount of performance. A 1GHz ARM processor should give good performance when running Linux for most tasks (web browsing, office applications, and most games). So it seems that a good tablet with some USB ports and a matching USB keyboard would be a better option. Maybe you could have a spare battery integrated with the keyboard as most times when you want a full sized keyboard weight isn’t such a big problem.

The problem with MS-Windows in regard to such machines is that performance is poor and processors other than i386 and AMD64 have never really been supported (sure you can buy MS-Windows-based PDAs that have ARM and PPC CPUs, but the application support for them is minimal).

These problems will never be solved. MS wants to compel users to continually upgrade their software, this requires always adding new features (bloat) while an older or less featureful OS is often a better option for lesser hardware. Vendors of proprietary applications will never support the range of CPU architectures that a free software distribution such as Debian or NetBSD supports (not that NetBSD is an ideal tablet OS – but it does demonstrate what can be done if you want a portable OS). These inherent flaws in the proprietary software environment lead to some unusual hardware being designed to work around them – such as the Lenovo U1.

I don’t have any reason to believe that the Lenovo is a bad machine; it sounds like a reasonable work-around for some of the problems that MS-Windows has forced on the industry. But I believe that it would be a much better machine if it was lighter, had a better battery life, and had a choice of keyboards – all of which could have been achieved if it had been designed as an ARM-only tablet machine that you can connect to an external keyboard. They could even have used a multi-core ARM CPU. But the market for such systems apparently isn’t large enough for Lenovo (or anyone else) to ship ARM laptops for serious use. There have been a couple of netbooks released recently with CPUs that don’t support the i386 or AMD64 instruction set, but they were aimed at web browsing use, not general purpose computing. Almost everything that I do on computers could be done at least as well with a CPU that doesn’t run an Intel-based instruction set; the only exception is virtualisation, which doesn’t seem to be well supported on architectures other than AMD64 nowadays.

2

Another Hot Summer

Yesterday was ~30C in my area, today was well over 30C during the day (although cooler in the evening). They forecast 33C for tomorrow in Melbourne, but that means where I live it will probably be about 36C (it’s always a few degrees hotter than the overall forecast for the city). Monday is predicted to be 41C and Tuesday may be 32C.

I turned off my SE Linux Play Machine this morning and will probably leave it off until at least Tuesday evening. The hardware I use for the Play Machine is fairly energy efficient, but it’s in a confined space with some other electronic stuff so it’s best not to take chances. I’ll probably leave it offline for about half the duration of January and February.

I miss Amsterdam and London weather.

5

Shared Objects and Big Applications

Some time ago I wrote a little utility named memlockd [1]. Memlockd locks files into memory, which allows significantly faster access when the system pages heavily. In my simulated tests I found that having the programs and shared objects needed for logging in locked into memory can make it possible to login without a timeout during heavy paging. This can make the difference between recovering a system with some processes that are out of control and having to reboot it (often without discovering the root cause).

As always happens, some people use my software in ways that I never planned. One guy is using it to try and make OpenOffice.org load faster. I’m not sure that this is a good idea. In a typical installation, when configured for the purpose that I intended (system recovery from a rabbit process), memlockd will take a bit less than 10M of RAM on an i386 platform (that is for bash, login, sshd, getty, busybox, and all necessary shared objects and a few data files). Since RHEL 4, Red Hat distributions have whinged at boot time if there was less than 256M of RAM available, installation of a Red Hat based system on anything less than 128M of RAM has been impossible for some years, and Debian systems perform very poorly with less than about 128M of RAM when you run apt-get. I initially designed memlockd to run on my SE Linux Play Machine, which has 128M of RAM in its current incarnation. Locking 7.5% of RAM on the system may impact performance, but as a large part of that RAM is used for things like libc and bash, which tend to be partially paged in at all times, the impact shouldn’t be noticeable. But locking 100M or more of OpenOffice seems much more likely to hurt performance. I often run OpenOffice on a machine with 512M of RAM, and the biggest desktop machine I use has 1.5G of RAM – for me it wouldn’t make sense to lock OpenOffice into memory.

But it could be that there is some unusual aspect of his system that makes running memlockd with OpenOffice likely to give a worthwhile performance benefit without significantly hurting the performance of other programs – for example it could have 4G of RAM and a really slow disk. It is also possible that the usage of the system makes OpenOffice so much more important than other programs that any decrease in performance in other areas is irrelevant. In any case I’m happy to help people use my software to do unusual things, so I’ll support this use.

I’ve been asked why memlockd doesn’t seem to give much benefit when starting OpenOffice when run with “+/opt/openoffice.org3/program/soffice.bin” in the config file, where the leading + tells memlockd to also lock all the shared objects that ldd reports the binary needs.
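To make the distinction concrete, here is a sketch of what such a config file might contain. The paths are only examples and I’m sketching the layout from memory – one file per line, with a leading + on entries whose ldd-reported shared objects should also be locked – so check the memlockd documentation for the exact syntax:

```
# lock these files directly
/bin/bash
/bin/login
# the + prefix also locks every shared object ldd reports for the binary
+/opt/openoffice.org3/program/soffice.bin
```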

for n in `ldd /usr/lib/openoffice/program/soffice.bin | sed -e 's/^.*=..//' -e 's/ .*//' | sort -u` ; do readlink -f "$n" ; done | sort -u > ldd.txt

I used the above command to get a list of all shared objects that ldd reports for the soffice.bin program. On my system (Debian/Lenny i386) it reports 77 shared objects. When memlockd locks all of those, ps reports that its RSS is 51944K.
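To get a rough idea of how much data such a list represents you can sum the on-disk sizes of the shared objects. Here is a sketch that does this for /bin/sh (a small stand-in for soffice.bin – substitute your own binary); note that on-disk size only approximates RSS, since pages are shared between processes and not all of each file is resident:

```shell
# Sum the on-disk sizes of the shared objects that ldd reports for a binary.
# /bin/sh is used here as a small, always-present stand-in for soffice.bin.
total=0
for n in $(ldd /bin/sh | sed -e 's/^.*=> //' -e 's/ (.*//' | grep '^/' | sort -u); do
  f=$(readlink -f "$n")              # resolve symlinks such as libc.so.6
  total=$((total + $(stat -c %s "$f")))
done
echo "total bytes in shared objects: $total"
```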

cat /proc/1234/maps | sed -e 's/^.* //' | sort -u > /tmp/map.txt

Then I used the above command to get a list of the files that are memory mapped by OpenOffice when running OpenOffice Calc, where 1234 is the PID of the soffice.bin process (I expect that the numbers will be similar for Writer, Impress, etc – I just happened to have a spreadsheet open). It reports 172 memory mapped files, which include 9 font related files and 64 shared objects under /usr/lib/openoffice/program that are not found by ldd, among other things. It’s quite common for a large application to use dlopen(3) at run-time to map shared objects instead of linking against them. Running memlockd with this list gave an RSS of 118644K, which is more likely to give a useful performance boost to OpenOffice load times.
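The same technique works for any process. A self-contained variant examines the current shell via /proc/self/maps instead of a hard-coded PID, and filters out the anonymous and [heap]/[stack] mappings by keeping only lines that end in an absolute path:

```shell
# List the distinct files mapped into the current process.  File-backed
# entries in /proc/self/maps end with an absolute path; the sed keeps
# only that path and -n/p suppresses the non-file mappings.
mapped=$(sed -n 's/^.* \(\/.*\)$/\1/p' /proc/self/maps | sort -u)
echo "$mapped"
```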