Advice on Buying a PC

A common topic of discussion on computer users’ group mailing lists is advice on buying a PC. I think that most of the advice on offer isn’t particularly useful, as it focuses excessively on building or upgrading PCs and on getting the latest and greatest. So I’ll blog about it instead of getting involved in more mailing-list debates.

A Historical Perspective – the PC as an Investment

In the late 80’s a reasonably high-end white-box PC cost a bit over $5,000 in Australia (or about $4,000 without a monitor). That was cheaper than name-brand PCs which cost upwards of $7,000 but was still a lot of money. $5,000 in 1988 would be comparable to $10,000 in today’s money. That made a PC a rather expensive item which needed to be preserved. There weren’t a lot of people who could just discard such an investment so a lot of thought was given to upgrading a PC.

Now a quite powerful desktop PC can be purchased for a bit under $400 (maybe $550 if you include a good monitor) and a nice laptop is about the same price as a desktop PC and monitor. Laptops are almost impossible to upgrade apart from adding more RAM or storage but hardly anyone cares because they are so cheap. Desktop PCs can be upgraded in some ways but most people don’t bother apart from RAM, storage, and sometimes a new video card.

If you have the skill required to replace a CPU or motherboard then your time is probably worth enough that it isn’t a good investment to spend hours getting more value out of a PC that was worth $400 when new and is worth maybe $100 when it’s a couple of years old.

Times have changed and PCs just aren’t worth enough to be bothered upgrading. A PC is a disposable item not an investment.

Buying Something Expensive?

There is a range of things that you can buy. You can spend $200 on a second-hand PC that’s a couple of years old, $400 on a new PC that’s OK but not really fast, or $1000 or more on a very high-end PC. The $1000 PC will probably perform poorly when compared to a PC that sells for $400 next year. The $400 PC will probably perform poorly when compared to the second-hand systems that are available next year.

If you spend more money to get a faster PC then you are only getting a faster PC for a year until newer cheaper systems enter the market.

As newer and better hardware is continually released at prices low enough to make upgrades a bad deal, I recommend not buying expensive systems at all. For my own use I find that e-waste is a good source of hardware. If I couldn’t do that then I’d buy from an auction site that specialises in corporate sales; such sites have some nice name-brand systems in good condition at low prices.

One thing to note is that this is more difficult for Windows users due to “anti-piracy” features. With recent versions of Windows you can’t just put an old hard drive in a new PC and have it work. So the case for buying faster hardware is stronger for Windows than for Linux.

That said, $1,000 isn’t a lot of money. So spending more money for a high-end system isn’t necessarily a big deal. But we should keep in mind that it’s just a matter of getting a certain level of performance a year before it is available in cheaper systems. Getting a $1,000 high-end system instead of a $400 cheap system means getting that level of performance maybe a year earlier and therefore at a price premium of maybe $2 per day. I’m sure that most people spend more than $2 per day on more frivolous things than a faster PC.

Understanding How a Computer Works

As so many things are run by computers I believe that everyone should have some basic knowledge about how computers work. But a basic knowledge of computer architecture isn’t required when selecting parts to assemble into a system; one can know all about selecting a CPU and motherboard to match without understanding what a CPU does (apart from a vague idea that it’s something to do with calculations). Equally, one can have a good knowledge of how computers work without knowing anything about the part numbers that could be assembled to make a working system.

If someone wants to learn about the various parts on sale then sites such as Tom’s Hardware [1] provide a lot of good information that allows people to learn without the risk of damaging expensive parts. In fact the people who work for Tom’s Hardware frequently test parts to destruction for the education and entertainment of readers.

But anyone who wants to understand computers would be better off spending their time using any old PC to read Wikipedia pages on the topic instead of spending their time and money assembling a PC. To learn about the basics of computer operation the Wikipedia page for “CPU” is a good place to start, the Wikipedia page for “hard drive” is a good start for learning about storage, and the page for “Graphics Processing Unit” covers graphics processing. Anyone who reads those three pages as well as a selection of the pages that they link to will learn a lot more than they could ever learn by assembling a PC. Of course there are lots of other things to learn about computers, but Wikipedia has pages for every topic you can imagine.

I think that the argument that people should assemble PCs to understand how they work was not well supported in 1990 and ceased to be accurate once Wikipedia became popular and well populated.

Getting a Quality System

There are a lot of arguments about quality and reliability, most without any supporting data. I believe that a system designed and manufactured by a company such as HP, Lenovo, NEC, Dell, etc is likely to be more reliable than a collection of parts uniquely assembled by a home user – but I admit to a lack of data to support this belief.

One thing that is clear however is that ECC RAM can make a significant difference to system reliability, as many types of error (including power problems) show up as corrupted memory. The cheapest Dell PowerEdge server (which has ECC RAM) is advertised at $699 so it’s not a feature that’s out of reach of regular users.

I think that anyone who makes claims about PC reliability and fails to mention the benefits of ECC RAM (as used in Dell PowerEdge tower systems, Dell Precision workstations, and HP XW workstations among others) hasn’t properly considered their advice.

Also when discussing overall reliability the use of RAID storage and a good backup scheme should be considered. Good backups can do more to save your data than anything else.

Conclusion

I think it’s best to use a system with ECC RAM as a file server. Make good backups. Use ZFS (in future BTRFS) for file storage so that data doesn’t get corrupted on disk. Use reasonably cheap systems as workstations and replace them when they become too old.
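
For anyone who hasn’t set up ZFS, here is a minimal sketch of creating a mirrored pool on Linux. This assumes the ZFS utilities are installed and uses hypothetical device names for two spare disks:

# create a mirrored pool named “tank” from two spare disks
zpool create tank mirror /dev/sdb /dev/sdc
# data is checksummed on every read, scrub periodically to find latent errors
zpool scrub tank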

Update: I find it rather ironic when a discussion about advice on buying a PC gets significant input from people who are well paid for computer work. It doesn’t take long for such a discussion to take enough time that the people involved could have spent that time working instead, put enough money in a hat to buy a new PC for the user in question, and still have had money left over.

Geographic Sorting – Lessons to Learn from Ingress

I’ve recently been spending a bit of my spare time playing Ingress (see the Wikipedia page if you haven’t heard of it). A quick summary is that Ingress is an Android phone game based on the geo-location of “portals” that you aim to control, and most operations on a portal can only be performed when you are within 40 meters – so you do a lot of travelling to get to portals at various locations. One reasonably common operation that can be performed remotely is recharging a portal by using its key; after playing for a while you end up with a collection of keys which can be difficult to manage.

Until recently the set of portal keys was ordered alphabetically. This isn’t particularly useful given that portal names are made up by random people who photograph things that they consider to be landmarks. Even if people tried to use a consistent geographic naming system it would be really difficult to make one usable while keeping names short enough to fit in large print on a phone display. And as joke names are accepted there’s just no benefit in sorting by name.

A recent update to the Ingress client (the program which runs on the Android phone and is used for all game operations) changed the sort order to be by distance. This makes it really easy to see the portals which are near you (which is really useful) but also means that the order changes whenever you move – which isn’t such a good idea on a mobile phone. It’s quite common for Ingress players to recharge portals while on public transport, but with the new client the list order changes as the train moves, and it’s really difficult to find items in a list which is in a different order each time you look at it.

This problem of ordering by location has a much greater scope than Ingress. One example is a collection of GPS-tagged photographs; it wouldn’t make any sense to mix the pictures from two different sets of holiday photos just because they were both taken in countries that are the same distance from my current location (as the current Ingress algorithm would do).

It seems to me that the best way of sorting geo-tagged items (Ingress portals, photos, etc) is to base it on the distance from a fixed point which the user can select, as sketched below. It could default to the user’s current location, but in that case the order of the list should remain unchanged at least until the user returns to the main menu, and I think it would be ideal for the order to remain unchanged until the user requests a re-sort.
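
As a minimal sketch of the idea (not how the Ingress client actually works), the following script sorts a hypothetical portals.txt with one “latitude longitude name” entry per line by the distance from a fixed reference point, so the order only changes when the user supplies a new point:

#!/bin/bash
# usage: sort-by-distance.sh LAT LON < portals.txt
awk -v lat="$1" -v lon="$2" '
function rad(d) { return d * 3.14159265358979 / 180 }
{
  # equirectangular approximation - good enough for ranking nearby points
  x = rad($2 - lon) * cos(rad((lat + $1) / 2))
  y = rad($1 - lat)
  printf "%.3f %s\n", 6371 * sqrt(x * x + y * y), $0 # distance in km
}' | sort -n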

I think that most Ingress players would agree with me that fixing annoying mis-features of the Ingress client such as this one would be better for the game than adding new features. While most computer games have some degree of make-work (in almost every case a computer could do things better than a person) I don’t think that finding things in a changing list should be part of the make-work.

Also it would be nice if Google released some code for doing this properly, to reduce the incidence of other developers making the same mistakes as the Ingress developers in this regard.

Conversion of Video Files

To convert video files between formats I use Makefiles, which means I can run “make -j2” on my dual-core server to get both cores going at once. avconv uses 8 threads for its computation and I’ve seen it take up to 190% CPU time for brief periods, but overall it seems to average a lot less; if nothing else, running two copies at once allows one to calculate while the other is waiting for disk IO.

Here is a basic Makefile to generate a subdirectory full of mp4 files from a directory full of flv files. I used to use this to convert my Youtube music archive for my Android devices until I installed MX Player which can play every type of video file you can imagine [1]. I’ll probably encounter some situation where this script becomes necessary again so I keep it around. It’s also a very simple example of how to run a batch conversion of video files.

# list of targets: mp4/foo.mp4 for each foo.flv in the current directory
MP4S:=$(shell for n in *.flv ; do echo $$n | sed -e s/^/mp4\\// -e s/flv$$/mp4/ ; done)

all: $(MP4S)

# convert one file, with the bitrate chosen by my video-encoding-rate script
mp4/%.mp4: %.flv
        avconv -i $< -strict experimental -b $$(~/bin/video-encoding-rate $<) $@ > /dev/null
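
The video-encoding-rate script is one of mine and isn’t included in this post. As a guess at the approach, a minimal sketch (assuming a bitrate proportional to pixel count with a hypothetical baseline of 1000kbit/s for 640*480, and using midentify.sh in the same way as the resolution script below) might look like this:

#!/bin/bash
# hypothetical sketch of a video-encoding-rate helper - not the real script
set -e
# midentify.sh sets ID_VIDEO_WIDTH and ID_VIDEO_HEIGHT for the input file
eval $(/usr/share/mplayer/midentify.sh $1)
echo "$(echo "$ID_VIDEO_WIDTH * $ID_VIDEO_HEIGHT * 1000 / (640 * 480)" | bc)k"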

Here is a more complex Makefile. I use it on my directory of big videos (more than 1280*720 resolution) to scale them down for my favorite Android devices (Samsung Galaxy S3, Samsung Galaxy S, and Sony Ericsson Xperia X10). My Galaxy S3 can’t play a FullHD version of Gangnam Style without going slow so I need to do this even for the fastest phones. This Makefile generates three subdirectories of mp4 files for the three devices.

# lists of output files, one per device, with the device name in the file name
S3MP4S:=$(shell for n in *.mp4 ; do echo $$n | sed -e s/^/s3\\// -e s/.mp4$$/-s3.mp4/ -e s/.flv$$/-s3.mp4/; done)
XPERIAMP4S:=$(shell for n in *.mp4 ; do echo $$n | sed -e s/^/xperiax10\\// -e s/.mp4$$/-xperiax10.mp4/ -e s/.flv$$/-xperiax10.mp4/; done)
SMP4S:=$(shell for n in *.mp4 ; do echo $$n | sed -e s/^/galaxys\\// -e s/.mp4$$/-galaxys.mp4/ -e s/.flv$$/-galaxys.mp4/; done)

all: $(S3MP4S) $(XPERIAMP4S) $(SMP4S)

# scale each video to fit the device screen, keeping the aspect ratio
s3/%-s3.mp4: %.mp4
        avconv -i $< -strict experimental -s $(shell ~/bin/video-scale-resolution 1280 720 $<) $@ > /dev/null

galaxys/%-galaxys.mp4: %.mp4
        avconv -i $< -strict experimental -s $(shell ~/bin/video-scale-resolution 800 480 $<) $@ > /dev/null

xperiax10/%-xperiax10.mp4: %.mp4
        avconv -i $< -strict experimental -s $(shell ~/bin/video-scale-resolution 854 480 $<) $@ > /dev/null

The following script is used by the above Makefile to determine the resolution to use. Some Youtube videos have unusual combinations of width and height (Linkin Park seems to like doing this) so I scale them to fit the phone in one dimension with the other dimension scaled to preserve the aspect ratio. For example a 1920*800 source targeted at 1280*720 comes out as 1280x532. This requires a script from the Mplayer package and expects it to be in the location used by the Debian package; for distributions other than Debian a minor change will be required.

#!/bin/bash
# usage: video-scale-resolution WIDTH HEIGHT FILE
# prints the WIDTHxHEIGHT that scales FILE to fit within WIDTH*HEIGHT
set -e
OUT_VIDEO_WIDTH=$1
OUT_VIDEO_HEIGHT=$2

# midentify.sh sets ID_VIDEO_WIDTH and ID_VIDEO_HEIGHT for the input file
eval $(/usr/share/mplayer/midentify.sh $3)
# how far each dimension is over (or under) the target, as a percentage
XMULT=$(echo $ID_VIDEO_WIDTH*100/$OUT_VIDEO_WIDTH | bc)
YMULT=$(echo $ID_VIDEO_HEIGHT*100/$OUT_VIDEO_HEIGHT | bc)
if [ $XMULT -gt $YMULT ]; then
  # width is the limiting dimension, scale the height to match
  NEWX=$OUT_VIDEO_WIDTH
  NEWY=$(echo $OUT_VIDEO_WIDTH*$ID_VIDEO_HEIGHT/$ID_VIDEO_WIDTH/2*2|bc)
else
  # height is the limiting dimension, scale the width to match
  NEWX=$(echo $OUT_VIDEO_HEIGHT*$ID_VIDEO_WIDTH/$ID_VIDEO_HEIGHT/2*2|bc)
  NEWY=$OUT_VIDEO_HEIGHT
fi
# the /2*2 rounds down to an even number of pixels as video codecs require
echo ${NEWX}x${NEWY}

Note that I can’t preserve TAB characters in a blog post. So those Makefiles won’t work until you replace strings of 8 spaces with a TAB character.

The Death of the Netbook

The Age has an interesting article about how Apple supposedly killed the Netbook [1]. It’s one of many articles with a similar spin on the news that the last two companies making Netbooks are going to cease production. The main point of these articles is that Apple decided that Netbooks were crap and killed the market for them by producing tablets and light laptops that squeezed them out of the market.

Is the Macbook Air a Netbook?

According to the Wikipedia page the Macbook Air [2] weighs 1080g for the 11″ version and 1340g for the 13″ version. According to Wikipedia the EeePC 701 (the first EeePC) weighs 922g and the last EeePC weighs 1460g [3]. The last EeePC produced is heavier than ANY Macbook Air while the first (and lightest) EeePC is only 158g lighter than the 11″ Macbook Air.

The 11″ Macbook Air is 300*192*17mm (979cm^3) in size while the EeePC 701 is 225*165*35mm (1299cm^3) and the biggest EeePC was 266*191*38mm (1931cm^3). So the 11″ Macbook Air is 13% wider than the widest EeePC but takes less volume than any EeePC. The 13″ Macbook Air is 325*227*17mm (1254cm^3) which is still less volume than any EeePC. The Wikipedia page about Netbooks defines them as being small, lightweight, legacy-free (in terms of hardware not software) and cheap [4]. The Macbook Air clearly meets all the criteria apart from price.

The Apple US web site offers the version of the 11″ Macbook Air with 64G of storage for $999 with free shipping; for comparison the EeePC 701 was on sale in stores for $500 in 2008. The CPI adjusted price for the EeePC 701 would be at least $550 in today’s money. The Macbook is a bit less than twice as expensive as the EeePC was, but that’s more of an issue of Apple being expensive – a few years ago companies like HP were also selling Netbooks that were more expensive than the EeePC.

Unless having an awful keyboard is a criterion for being a Netbook I think that the Macbook Air meets the criteria.

As an aside, a relative recently asked me for advice on a device that is like a Macbook Air but cheaper. Does anyone know of a good option?

Is Netbook Production Ceasing?

Officeworks currently sells an ASUS “Notebook” that has an 11.6″ display and weighs 1.3kg for $398; it’s got a metal body that looks a bit like a Macbook Air (which is the latest fashion and is good for heat dissipation). It’s not advertised as a Netbook or an “Eee” product but it’s cheap, lighter than the heaviest EeePC, and not much bigger than an EeePC.

It seems that the general prices of laptops other than Apple products (which have always had higher prices) have been dropping a lot recently. There are lots of good options if you want a laptop that costs $500 or less. Even Thinkpads (one of the most expensive and best designed ranges of laptops) are well below $1000.

Do the Articles about Netbooks Make Sense?

The claims being made are that Apple skipped Netbooks because they couldn’t make a good profit on them. This disregards the fact that the iPhone and iPad (which are very profitable) are in the high end of the price range that was occupied by Netbooks. While Apple does make a good deal of money from the iPhone App Market, it would be possible to make a Netbook with a lower production price than an iPhone because making things smaller requires more engineering work and often more expensive parts. It also disregards the fact that there is a range of devices which work as an iPad case with keyboard; an iPad with such a keyboard meets most criteria for being a Netbook, so Apple is one iPad keyboard device away from selling Netbooks.

It’s interesting to note that I haven’t yet seen an article about the profits from Netbooks which didn’t make an issue of the MS-Windows license fees. The first Netbooks only ran Linux but later models switched to Windows, which had to make a big impact on profits. An article about Netbooks which just assumes that everyone has to pay a MS license fee is missing too much of the Netbook history to be useful. I wonder if anyone could make products as profitable as the iPhone and Macbook Air if they had to pay MS license fees and design their hardware to work with MS software (as opposed to Apple who can change their software to allow a cheaper hardware design).

The articles also claim that Netbooks give a bad user experience. When I bought my EeePC 701 it was the fastest system I owned for loading OpenOffice; SSD random read speeds were really good (writes sucked but that didn’t matter so much). The keyboard on an EeePC 701 is not nearly as good as a full size laptop but it is also a lot better than using a tablet; I’ve used both a 10″ Android tablet and an EeePC as an ssh client and there is no comparison. When I’m going somewhere that requires random sysadmin work (or other serious typing) and I can’t carry much weight then I still take my EeePC 701 and I don’t consider taking a tablet. The low resolution of the screen is a major issue, but it’s about the same as a Macbook Air so that’s not an advantage for Apple. I knew some people who used an EeePC 701 for the majority of their work; I couldn’t do that but obviously some people have different requirements.

I now use my phone for many tasks that I used to do on my EeePC (even light sysadmin work) so my EeePC sometimes goes unused for months. But it’s still an important part of my collection of computers. It works well for what it does and I don’t feel any need to buy a replacement. When it wears out I’ll probably buy something similar to an 11″ Macbook Air to replace it unless there’s a good option of a tablet with a detachable keyboard.

My plans for computer ownership for the near future are based on a reasonably large Android phone (currently a Samsung Galaxy S3 but maybe a Galaxy Note 2 or similar next year), a small laptop or large tablet with hardware keyboard (currently an EeePC 701), a large laptop (currently a Thinkpad T61), and a workstation (currently a NEC system with an Intel E4600 CPU and a Dell U2711 27″ monitor). A reasonably small and light system with a hardware keyboard and solid state storage is an important part of my computer needs. If tablet computers with hardware keyboards replace traditional Netbooks that’s not really killing Netbooks but introducing a new version of the same thing.

But a good way of getting web hits on an article is to claim that a once popular product is dead.

Servers in the Office

I just had a conversation with someone who thinks that their office should have no servers.

The office in question has four servers: an Internet gateway/firewall system, the old file server (which is also a Xen server), the new file server, and the VOIP server.

The Internet gateway system could possibly be replaced by a suitably smart ADSL modem type device, but that would reduce the control over the network and wouldn’t provide much of a benefit.

The VOIP server has to be a separate system for low latency IMHO. In theory you could use a Xen DomU for running Asterisk or run Asterisk on the Dom0 of the file/Xen server, but that just makes things difficult. A VOIP server needs to be reliable and is something that you typically don’t want to touch once it’s working; in this case the Asterisk server has gone a few more years without upgrades than the Xen server. An Asterisk system could be replaced by a dedicated telephony device, which some people might consider to be removing a server, but really a dedicated VOIP device is just as much of a server as a P4 running Asterisk, only at greater expense. A major advantage of a P4 running Asterisk is that you can easily replace the system at no cost if there is a hardware problem.

Having two file servers is excessive for a relatively small office, but running two servers is the common practice when one server is being replaced. The alternative is to cut things over immediately, which has the potential for a lot of people to arrive at work on Monday and find multiple things not working as desired. Having two file servers is a temporary problem.

File Servers

The first real problem when trying to remove servers from an office is the file server.

ADSL links with Annex M can theoretically upload data at 3Mb/s, which means almost 400KB/s. So if you have an office with a theoretically perfect ADSL2+ Annex M installation then you could save a 4MB file to a file server on the Internet in not much more than 10 seconds, if no-one else is using the Internet connection. Note that 4MB isn’t THAT big by today’s standards; the organisation in question has many files which are considerably bigger than that. Large files include TIFF and RAW files used for high quality image processing, MS-Office documents, and data files for most accounting programs. Saving a 65MB QuickBooks file in 3 minutes (assuming that your Annex M connection is perfect and no-one else is using the Internet) would have to suck.

Then there’s the issue of reading files. Video files (which are often used for training and promotion) are generally larger than 100MB, which means more than 30 seconds of download time at ADSL2+ speed – and if someone sends an email to everyone in the office saying “please watch this video” then the average time to load it would be a lot more. Quickly examining my collection of Youtube downloads I found a video which averaged 590KB/s; if an office using a theoretically perfect ADSL2+ connection giving 24Mb/s (3MB/s) download speed had such a file on a remote file server then a maximum of five people could view it at one time, and that’s if no-one else in the office was using the Internet.

Now when the NBN is connected (which won’t happen in areas like the Melbourne CBD for at least another 3 years) it will be possible to get speeds like 100Mb/s download and 25Mb/s upload. That would allow up to 20 people to view videos at once, and a 65MB QuickBooks file could be saved in a mere 22 seconds if everyone else was idle. Of course that relies on the size of data files remaining the same for another 3 years, which seems unlikely; currently no Youtube videos use resolutions higher than 1920*1080 (so they don’t take full advantage of a $400 Dell monitor) and there’s always potential for storing more financial data. I expect that by the time we all have 100Mb/25Mb speeds on the NBN it will be as useful to us as 24Mb/3Mb ADSL2+ Annex M speeds are today (great for home use but limited for an office full of people).
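
The arithmetic above is easy to script. Here is a trivial calculator along the same lines (a hypothetical link-maths.sh, assuming the link runs at its theoretical speed with no protocol overhead, and defaulting to the 590KB/s video stream mentioned above):

#!/bin/bash
# usage: link-maths.sh FILE_MB LINK_MBIT [STREAM_KBS]
FILE_MB=$1
LINK_MBIT=$2
STREAM_KBS=${3:-590}
# 8 bits per byte, 1Mbit = 125KB
echo "transfer time: $(echo "$FILE_MB * 8 / $LINK_MBIT" | bc) seconds"
echo "concurrent streams: $(echo "$LINK_MBIT * 125 / $STREAM_KBS" | bc)"

For example “link-maths.sh 65 3” gives 173 seconds for the QuickBooks save on a perfect Annex M upload, while “link-maths.sh 100 24” gives 33 seconds for a 100MB video and 5 concurrent streams on ADSL2+.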

There are of course various ways of caching data, but all of them involve something which would be considered to be a “server” and I expect that all of them are more difficult to install and manage than just having a local file server.

Of course instead of crunching the numbers for ADSL speeds etc you could just think for a moment about the way that 100baseT networking to the desktop has been replaced by Gigabit networking. When people expect each workstation to have 1000Mb/s send and receive speed it seems quite obvious that one ADSL connection shared by an entire office isn’t going to work well if all the work that is done depends on it.

Management could dictate that there is to be no server in the office, but if that was to happen then the users would create file shares on their workstations so you would end up with ad-hoc servers which aren’t correctly managed or backed up. That wouldn’t be an improvement and technically wouldn’t achieve the goal of not having servers.

Home Networking Without Servers

It is becoming increasingly common to have various servers in a home network. Due to a lack of space and power, and the low requirements, a home file server will usually be a workstation with some big disks, but there are cheap NAS devices which some people are installing at home. I don’t recommend the cheap NAS devices, I’m merely noting that they are being used.

Home entertainment is also something that can benefit from a server. A MythTV system for recording TV and playing music has more features than a dedicated PVR box. But even the most basic PVR ($169 for a 1TB device in Aldi now) is still a fairly complex computer which would probably conflict with any aim to have a house free of servers.

The home network design of having a workstation run as a file and print server can work reasonably well as long as the desktop tasks aren’t particularly demanding (IE no games) and the system doesn’t change much (IE don’t track Debian/Testing or otherwise have new versions of software). But this is really something that only works if you only have a few workstations.

Running an office without servers seems rather silly when none of my friends are able to have even a home without a server.

Running Internet Services

Hypothetically speaking, if one were to run an office without servers then that would require running all the servers in question somewhere on the Internet. For some things this can work better than a local server; for example most of my clients who insist on running a mail server in their office would probably get a better result if they had a mail server running on Linode or Hetzner – or one of the “Hosted Exchange” offerings if they want a Windows mail server. But for a file server, even if you could get around the bandwidth required to access the files in normal use, there’s the issue of managing the server (which is going to take more effort and expense than for a server on the LAN).

Then there’s the issue of backups. In my previous post about Hard Drives for Backup [1] I considered some of the issues related to backing data up over the Internet. The big problem however is a complete restore; if you have even a few dozen gigs of data that you need to transfer in a hurry it can be a difficult problem, and if you have hundreds of gigs then it becomes a very difficult problem. I’m sure that I could find a Melbourne based Data Center (DC) that gives the option of bringing a USB attached SATA disk for a restore – but even that would involve a significant delay when compared to restoring a backup on a LAN. If a server on the office LAN breaks in the afternoon my client can make arrangements to let me work in their office in the evening to fix it, but sometimes DCs don’t allow 24*7 access and sometimes when they do allow access there are organisational problems that make it impossible when you want it (EG the people at the client company who are authorised become unavailable).

The Growth of Servers

Generally it’s a really bad idea to build a server that has exactly the hardware you need. The smart thing to do is to install more of every resource (disk, RAM, CPU, etc) than is needed and to allow expansion where possible (EG have some RAM slots and drive bays free). No matter how well you know your environment and its users you can be surprised by the way that requirements change. Buying a slightly bigger server at the start costs hardly any money but upgrading a server later will cost a lot.

Once you have a server that’s somewhat over-specced you will always find other things to run on it. Many things could be run elsewhere at some cost, but if you have unused hardware then you may as well use it. Xen and other virtualisation systems are really good in this regard as they allow you to add more services without making upgrades difficult. This means that it’s quite common to have a server that is purchased for one task but which ends up being used for many tasks.

Anyone who would aspire to an office without servers would probably regard adding extra features in such a manner to be a problem. But really if you want to allow the workers to do their jobs then it’s best to be able to add new services as needed without going through a budget approval process for each one.

Conclusion

There probably are some offices where no-one does any serious file access and everyone’s work is based around a web browser or some client software that is suited to storing data on the Internet. But for an office where the workers use traditional “Office” software such as MS-Office or Libre-Office a file server is necessary.

Some sort of telephony server is necessary no matter how you do things. If you have a traditional telephone system then you might try not to call the PABX a “server”, but really that’s what it is. Then when the traditional phone service becomes too expensive you have to consider whether to use Asterisk or a proprietary system, in either case it’s really a server.

In almost every case the issue isn’t whether to have a server in the office, but how many servers to have and how to manage them.

Breaking SATA Connectors

I’ve just broken my second SATA connector. This isn’t a lot considering the number of hard drives I’ve worked with, but it’s still really annoying as I generally don’t break things.

The problem is that unplugging a SATA cable requires pushing a little clip. This isn’t overly difficult but it unfortunately doesn’t fit with habits formed on previous hardware. The power connector used for hard drives based on the ST-506 interface (and copied for the IDE interface) was large and had a fairly tight fit. Removing such a cable requires a significant amount of force – which is about the same as the amount of force required to break a SATA connector.

When I first started using PCs a reasonably configured AT system cost over $5,000 (maybe something like $10,000 in today’s money). With that sort of price hardly anyone had a set of test PCs. When hardware prices dropped such that hard drives of reasonable size became reasonably affordable on the second-hand market I bought more disks and used some for extra storage and some for testing software. As there was nothing like VMWare for testing OS images the way to test a new OS was to plug in a different hard drive and boot it. So I got a lot of practice at removing IDE power cables with as much force as was necessary.

Now I own a pile of test PCs, SATA disks less than 100G are free, I use Xen for a lot of my testing, and generally I have much less need to swap hard drives around. In most situations in which I would swap hard drives in the 90’s I will now swap PCs and I have piles of PCs ready for this purpose. So I haven’t had enough practice with SATA disks to develop habits for safely removing them.

So far this lack of habit development has resulted in damaging two disks due to changing drives while not concentrating enough. Fortunately duct-tape works well for holding a SATA connector in place when the plastic that attaches to the clip is broken.

Long Term Adverts

I’ve just seen a mailing list post from someone who needs an ancient printer to work with their old software. As the printer is no longer manufactured and changing the software is expensive this puts them in a difficult situation – which can be profitable for someone who happens to own an ancient printer that still works. This sort of thing is not uncommon at all.

Ebay is a nice auction and online store site but it doesn’t cater for long term personal adverts. I’ve got a lot of old computer equipment that I keep because it might be useful at some time and it’s a shame to throw away working equipment. I’d like to be able to list that stuff on a sale site and have the adverts stay online for years just in case someone wants to pay a decent amount of money for it. If there was such a site I would also list all the systems in my test network, I can test software just as well with different hardware if someone wants to pay decent money for what I’ve currently got.

Storage space is pretty cheap and searching for keywords isn’t that difficult either. The cost of running an online personal sale site for items that sell every few years isn’t going to be much greater than running one that sells items after 10 days. But the profit in many cases will be a lot greater; an old printer that sells for $10 on Ebay could go for $200 or more if the seller could wait for a buyer who has some enterprise software that absolutely depends on that particular printer.

Does anyone know of such an online sale site? If not does anyone want to start one?

Standardising Android

Don Marti wrote an amusing post about the lack of standards for Android phones and the fact that the iPhone has a better accessory market as a result [1].

I’d like to see some Android phones get standardised in a similar manner to the PC. The big thing about the IBM PC compatible market was that they all booted the same way, ran the same OS and applications, had the same expansion options, connectors, etc. The early PCs sucked in many ways (there were many other desktop computers in the 80’s that were better in various ways) but the larger market made the PC win.

The PC even killed the Mac! This is something we should remember now when discussing the iPhone.

I’d like to see different Android phones that can run the same OS with the same boot loader. Having HTC, LG, Samsung, and others all sell phones that can run the same version of CyanogenMod, with the same recovery options if a mistake is made when loading CyanogenMod, shouldn’t be any more difficult than having IBM, Compaq, HP, DEC, Dell, and others sell PCs that ran the same versions of all the OSs of the day and had the same recovery options.

Then there should be options for common case sizes. From casual browsing in phone stores it seems that most phones on sale in Australia are of a tablet form without a hardware keyboard, they have a USB/charger socket, an audio socket, and hardware buttons for power, volume up/down, and “home” – with the “settings” and “back” buttons being through the touch-screen on the Galaxy S but hardware in most others. A hardware button to take a picture is available in some phones.

The variation in phone case design doesn’t seem to be that great and there seems to be a good possibility for a few standards for common formats, EG large tablet, small tablet, and large tablet with hardware keyboard. The phone manufacturers are currently competing on stupid things like how thin a phone can be while ignoring real concerns of users such as having a phone that can last for 24 hours without being charged! But they could just as easily compete on ways of filling a standard case size, with options for screen resolution, camera capabilities, CPU, GPU, RAM, storage, etc. There could also be ways of making a standard case with several options, EG having an option for a camera that extends from the back of the case for a longer focal length – such an option wouldn’t require much design work for a second version of anything that might connect to the phone.

Also standards would need to apply for a reasonable period of time. One advantage that Apple has is that it has only released a few versions of the iPhone and each has been on sale for a reasonable amount of time (3 different sizes of case in 4 years). Some of the Android phones seem to only be on sale in mass quantities for a few months before being outdated, at which time many of the stores will stop getting stock of matching accessories.

Finally I’d be a lot happier if there was good support for running multiple Android phones with the same configuration. Then I could buy a cheap waterproof phone for use at the beach and synchronise all the configuration before leaving home. This is a feature that would be good for manufacturers as it would drive the average rate of phone ownership to something greater than 1 phone per person.

Desktop Equivalent Augmented Reality

Augmented reality is available on all relatively modern smart phones. I’ve played with it on my Android phone but it hasn’t delivered the benefits that I hoped for; there is a game where you can walk through a virtual maze which didn’t work for me, and a bunch of programs which show me the position of stars, pizza restaurants, and other things which are cool but not really useful.

It has been shown that increasing monitor size can make a surprising difference to productivity. The general concept seems to be that ideally everything you are thinking about at one time should be on the screen at once. I’m not aware of any research comparing phones to desktop monitors but it is obvious that some tasks become extremely difficult or nearly impossible when attempted on the tiny screen of a phone. One significant example is coding. One noteworthy thing about coding is that the amount of typing is often quite small when compared to the amount of time spent looking at code, so the lack of good keyboard options on phones isn’t always a serious problem.

The iPhone 4 has a resolution of 640*960 which seems to be the best available phone resolution (with 480*854 being the highest resolution that is available in many phones). The Dell Streak at 5 inches seemed to have the largest screen in a phone, but they have stopped selling them. It seems that the largest screen available in a phone is about 4.2 inches. Probably the minimum that would be considered usable for development would be a resolution of about 1280*1024 and a screen size of about 14 inches, while opinion will vary a lot about this I think that the vast majority of programmers will agree that the bigger tablet computers and Netbooks (at about 10 inches and something like 1366*768 resolution) are well below the minimum size.

It seems to me that a possible solution to this problem involves using augmented reality to provide a virtual desktop that is significantly larger and has a significantly higher resolution. The advantage of augmented reality over merely scrolling is that it should allow faster and more reliable seeking to the section of the virtual desktop that is of interest, and seek speed is probably the bottleneck with small screens. One problem would be turning corners when on public transport, but the camera button could be used to reset the current phone position to the middle of the viewing area; if the process of resetting the angle is fast enough it wouldn’t be a great distraction.

I don’t think that a mobile phone will ever be a great device for software development and I don’t think that the places where a serious computer isn’t available are good places to work. But sometimes I get inspiration for tracking down a difficult bug when on the move and it would be really good to be able to read the code immediately.

I won’t have any time to work on such things myself. I’m just publishing the idea in case someone who likes it happens to have a lot of spare time…

Donating old Hardware

On a recent visit to my local e-waste disposal place I noticed an open PC on the top of the pile with a pair of DIMMs that were begging to be removed. I also noticed three PCI Ethernet cards that were stacked in a manner that made them convenient to grab – possibly some nice person deliberately placed them so someone like me could take them. The DIMMs turned out to be 3G of DDR2-800 RAM and were regarded as good by Memtest86+ – a nice upgrade for one of my test systems that previously only had 1G of RAM.

If you have old hardware to dispose of then please try to take the RAM to your local computer users’ group meeting. In any such gathering there’s always someone who wants old RAM; anything better than PC-133 will find a good home unless it’s very small (128M sticks of DDR-266 and 256M sticks of anything faster probably won’t get any interest). RAM is small and light so you can carry it in your pocket without inconvenience. Ethernet cards of all vintages are in demand due to people reusing old desktop systems as routers, and PCIe video cards are in great demand; PCI and PCIe cards are small enough that it’s usually not a great inconvenience to transport them.

Hard drives larger than about 100G are in demand, as are ATX power supplies, but these are really inconvenient to transport unless you travel by car.

For computer systems, anything that can use DDR2-800 RAM will probably be of use to some member of a computer users’ group; if you offer it on the mailing list then you can expect that someone will want to collect it from you at your home or a meeting.

There are organisations such as Computerbank that take donations of old hardware and make systems for disadvantaged people [1], it’s worth considering them if you have hardware to dispose of. But for me the hardware I use every day is quite close to the minimum specs for donations that Computerbank will accept so there’s no possibility of me discarding systems that are useful to them.

I’ve created a page listing hardware that I need, if anyone in my area has such hardware that they don’t need then please let me know [2].