Occupy Main Street

The Occupy Wall St blog has an informative summary of attempts to reclaim the American political process, which has been pwned badly by financiers in recent times [1]. The basic concept is that people who represent the 99% of the population who aren’t super rich hold protests on Wall St and now in other business areas. Care2 has an interesting article about US marines opposing the brutal actions of police against Occupy Wall St protesters [2]; apparently they treated Iraqis better than US police are treating Americans.

The movement has spread to other locations, the OccupyTogether.org site has information on related events all around the world [3].

We have an ongoing event in Melbourne, Australia. It’s been going for a week, and yesterday Robert Doyle (the Mayor of Melbourne) ordered police to disperse the protest, so riot police and mounted police forced the protesters out of the City Square [4]. According to the news reports there were only 100 people there at the time; here is a Google Maps link to the location, and as you can see 100 people would not take up much of that space, not even with banners etc. The smart move would have been for the government to ignore it all until the protesters got bored.

Now of course we will have more and bigger protests. The use of riot police will probably be considered a good thing by some of the more aggressive protesters, but anyone who doesn’t want to make the government (and the corporations that control it) look bad would consider it a gross error. Robert Doyle needs to be replaced. The liberal reason for replacing him is that we just don’t want unnecessary force used against peaceful protesters. The “conservative” reason for replacing him is that he’s grossly incompetent: he transformed a small protest that wasn’t getting much media attention and appeared to be losing momentum into a large protest with a lot of media attention.

It will be interesting to see what happens next.

Dedicated vs Virtual Servers

A common question about hosting is whether to use a dedicated server or a virtual server.

Dedicated Servers

If you use a dedicated server then you will face the risk of problems which interrupt the boot process. It seems that all the affordable dedicated server offerings lack good remote management, so when the server doesn’t boot you either have to raise a trouble ticket with the company running the Data-Center (DC) or use some sort of hack. Hetzner is a dedicated server company that I have had good experiences with [1]; when a server in their DC fails to boot you can use their web based interface (at no extra charge or delay) to boot a Linux recovery environment which can then be used to fix whatever the problem may be. They also charge extra for hands-on support, which could be used if the Linux recovery environment revealed no errors but the system just didn’t work. This isn’t nearly as good as using something like IPMI, which permits remote console access to see error messages and more direct control of rebooting.

The up-side of a dedicated server is performance. Some people think that avoiding virtualisation improves performance, but in practice most virtual servers use virtualisation technologies that have little overhead. A bigger performance issue than the virtualisation overhead is the fact that most companies running DCs have a range of hardware, and your system (whether a virtual server or a dedicated server) will be on a random system from their DC. I have observed hosting companies give different speed CPUs and, for dedicated servers, different amounts of RAM for the same price. I expect that disk IO performance also varies a lot, but I have no evidence. As long as the hosting company provides everything that they offered before you signed the contract you can’t complain. It’s worth noting that CPU performance is either poorly specified or absent from most offers, and disk IO performance is almost never specified. One advantage of dedicated servers in this regard is that you get to know the details of the hardware and can therefore refuse certain low spec hardware.

The real performance benefit of a dedicated server is that disk IO performance won’t be hurt by other users of the same system. Disk IO is the real issue as CPU and RAM are easy to share fairly but disk performance is difficult to share and is also a significant bottleneck on many servers.

Dedicated servers also have a higher minimum price due to the fact that a real server is being used, which involves hardware purchase and rack space. Hetzner’s offers, which start at €29 per month, are about as cheap as it’s possible to get. But it appears that the €29 offer is for an old server; new hardware starts at €49 per month, which is still quite cheap. But no dedicated server compares to the virtual servers which can be rented for less than $10 per month.

Virtual Servers

A virtual server will typically have an effective management interface. You should expect to get web based access to the system console as well as ssh console access. If console access is not sufficient to recover the system then there is usually an option to boot from a recovery device. This allows you to avoid many situations that could potentially result in down-time, and when things go wrong it allows you to recover faster. Linode is an example of a company that provides virtual servers with a great management interface [2]. It would take a lot of work with performance monitoring and graphing tools to match the performance overview that comes for free with the Linode interface.

Disk IO performance can suck badly on virtual servers and it can happen suddenly and semi-randomly. If someone else who is using a virtual server on the same hardware is the target of a DoS attack then your performance can disappear. Performance for CPU is generally fairly reliable though. So a CPU bound server would be a better fit on the typical virtual server options than a disk IO bound server.
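As a sanity check when you first get a virtual (or dedicated) server, a rough sequential write test shows whether disk IO is in a sane range. This is just a sketch; a real benchmark such as bonnie++ gives more useful numbers, and on a VPS you should repeat it at different times of day:

```shell
# write 64MB with a final fdatasync so the result reflects the disk, not the cache
dd if=/dev/zero of=testfile bs=1M count=64 conv=fdatasync
rm -f testfile
```

dd reports the elapsed time and throughput on stderr; a sudden drop from one run to the next usually means a noisy neighbour on the same hardware.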

Virtual servers are a lot cheaper at the low end so if you don’t need the hardware capabilities of a minimal dedicated server (with 1G of RAM for Hetzner and a minimum of 8G of RAM for some other providers) then you can save a lot of money by getting a virtual server.

Finally the options for running a virtual machine under a virtual machine aren’t good. AFAIK the only options that would work on a commercial VPS offering are QEMU (an x86 CPU instruction emulator), Hercules (an S/370, S/390, and Z series IBM mainframe emulator), and similar CPU emulators. Please let me know if there are any other good options for running a virtual machine on a VPS. While these emulators are apparently good for debugging OS development they aren’t generally useful for running a virtual machine. I knew someone who ran his important servers under Hercules so that x86 exploits couldn’t be used to attack them, but apart from that CPU emulation isn’t generally useful for servers.

Summary

If you want complete control of the hardware, or if you want to run your own virtual machines that suit your needs (EG one with lots of RAM and another with lots of disk space), then a dedicated server is required. If you want minimal expense or the greatest ease of sysadmin use then a virtual server is a better option.

But the cheapest option for virtual hosting is to rent a server from Hetzner, run Xen on it, and then rent out DomUs to other people. Apart from the inevitable pain that you experience if anything goes wrong with the Dom0 this is a great option.
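For reference, each rented-out DomU on such a Hetzner/Xen setup is just one config file in /etc/xen; a minimal sketch (all names, paths, and sizes here are hypothetical):

```
# /etc/xen/customer1.cfg
name    = "customer1"
memory  = 512
vcpus   = 1
kernel  = "/boot/vmlinuz-2.6-xen"
ramdisk = "/boot/initrd-2.6-xen"
disk    = ['phy:/dev/vg0/customer1,xvda,w']
vif     = ['bridge=xenbr0']
```

Then "xm create customer1.cfg" boots it.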

As an aside, if anyone knows of a reliable company that offers some benefits over Hetzner then please let me know.

What I Would Like to See

There is no technical reason why a company like Linode couldn’t make an offer of a single DomU on a server taking up all available RAM, CPU, and disk space. Such an offer would be really compelling if it wasn’t excessively expensive. That would give you Linode’s ease of management and also a guarantee that no-one else could disrupt your system by doing a lot of disk IO. This would be really easy for Linode (or any virtual server provider) to implement.

There is also no technical reason why a company like Linode couldn’t allow their customers to rent all the capacity of a physical system and then subdivide it among DomUs as they wish. I have a few clients who would be better suited by Linode DomUs that are configured for their needs rather than stock Linode offerings (which never seem to offer the exact amounts of RAM, disk, or CPU that are required). Also if I had a Linode physical server that only had DomUs for my clients then I could make sure that none of them had excessive disk IO that affected the others. This would require many extra features in the Linode management web pages, so it seems unlikely that they will do it. Please let me know if someone is doing this; it’s obvious enough that someone must be.

Update:

A Rimuhosting employee pointed out that they offer virtual servers on dedicated hardware, which meets this criterion [3]. Rimuhosting allows you to use their VPS management system with all the resources of a single server (so no-one else can slow your VM down) and also allows custom partitioning of a server into as many VMs as you desire.

Servers vs Phones

Hetzner have recently updated their offerings to include servers with 16G and 24G of RAM [1]. You can get a dedicated server with two 3TB SATA disks, an i7-2600 quad-core CPU, and 16G of RAM for €49 per month plus a €149 setup fee. That is a good deal and I’ll probably soon be running a few more servers at Hetzner because of it.

HTC is currently offering five different Android phones that have 1G of RAM [2]. So while Hetzner is offering some great deals on dedicated servers, the affordable option has 16* the RAM of a modern phone and the biggest option has a mere 24* the RAM of a phone!

Linode is a virtual server provider that I’m using for some of my clients; they offer virtual servers with 20GB of RAM for $US800 per month [3]. That doesn’t compare well to Hetzner’s offer of 24G for €59 ($US81) per month. Admittedly the management interface for Linode is really good while the process of recovering a Hetzner server from a serious configuration issue is painful – but getting two Hetzner servers in some sort of HA configuration would be 1/5 the cost of a Linode virtual server.

HTC offers two Android phones with 16G of built-in flash storage that can take a 32G microSD card for a total of 48G of storage. Linode’s three smallest virtual server plans have 20G, 30G, and 40G of storage, so a modern phone can store more than twice as much data as the smallest Linode plan and more data than any of the three smallest plans.

While it’s obvious that phones don’t perform well for any real server use (lack of fast network access, disk IO speed, and CPU power being obvious issues) it does seem that the recent announcements of newer cheaper server plans aren’t that exciting when compared to mobile phones. For a similar monthly rate I could get a mobile phone “free” on a two year contract or a Hetzner server that has 16* the RAM.

It’s a pity that Hetzner doesn’t offer servers with up to 128G of RAM and more than four disks. RAM isn’t THAT expensive nowadays, and their business model includes having the customer pay for various options that other companies don’t tend to offer in a similar price range (such as SSD and other hardware customisation).

Also see XKCD’s comparison of HDTV and mobile phones [4].

Akonadi on a MySQL Server

Wouter described how to get Akonadi (the back-end for KDE PIM) to use PostgreSQL [1].

I don’t agree with his “MySQL is a toy” sentiment. But inspired by his post I decided to convert some of my systems to use a MySQL instance running on a server instead of one instance for each user. In the default configuration each user has 140M of disk space and 200M of RAM used by a private MySQL installation which holds about 24K of data (at least at the moment on the systems I run, maybe more in future).

Here’s some pseudo shell script to dump the database and get a new config:

# dump the existing per-user Akonadi database while it is still running
mysqldump --socket=$HOME/.local/share/akonadi/socket-$HOSTNAME/mysql.socket akonadi > dump.sql
akonadictl stop
cd $HOME
rm -rf .config/akonadi
rm -rf .local/share/akonadi
mkdir -p .config/akonadi
# USER is set by the shell; PASS must be set to the desired MySQL password first
cat > .config/akonadi/akonadiserverrc <<EOF
[%General]
Driver=QMYSQL
SizeThreshold=4096
ExternalPayload=false

[QMYSQL]
Name=${USER}_akonadi
Host=IP_OR_HOST
User=$USER
Password=$PASS
StartServer=false
Options=
ServerPath=/usr/sbin/mysqld

[Debug]
Tracer=null
EOF

Then with DBA privs you need to run the following in the mysql client (substitute the real user name and password; the mysql client won’t expand shell variables):

CREATE DATABASE ${USER}_akonadi;
GRANT ALL PRIVILEGES ON ${USER}_akonadi.* TO '$USER'@'%' IDENTIFIED BY '$PASS';

Then run the following to import the SQL data:

mysql -h IP_OR_HOST -u $USER -p ${USER}_akonadi < dump.sql

Ideally that would be it, but on my test installation (Debian/Squeeze MySQL server and Debian/Unstable KDE workstations) I needed to run the following SQL commands to deal with some sort of case problem:

rename table schemaversiontable to SchemaVersionTable;
rename table resourcetable to ResourceTable;
rename table collectionattributetable to CollectionAttributeTable;
rename table collectionmimetyperelation to CollectionMimeTypeRelation;
rename table collectionpimitemrelation to CollectionPimItemRelation;
rename table collectiontable to CollectionTable;
rename table flagtable to FlagTable;
rename table mimetypetable to MimeTypeTable;
rename table parttable to PartTable;
rename table pimitemflagrelation to PimItemFlagRelation;
rename table pimitemtable to PimItemTable;
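My assumption (not verified) is that the case problem comes from the lower_case_table_names setting differing between the per-user MySQL instances and the server; if so it can be pinned in the server’s my.cnf so the renames aren’t needed for future migrations:

```
# /etc/mysql/my.cnf - 0 means table names are stored and compared case-sensitively
[mysqld]
lower_case_table_names = 0
```

Note that changing this setting on an installation that already has data can cause its own problems, so check what the existing databases were created with first.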

I am not using any PIM features other than the AddressBook (which hasn’t been working well for a while), so I’m not sure that this is working correctly. But I would prefer that something I don’t use (and which is probably broken) take up disk space and RAM on a server instead of a workstation anyway.

Modern Laptops Suck

One of the reasons why I’m moving from a laptop to a cloud lifestyle [1] is that laptops suck nowadays.

Engineering Trade-offs

Laptops have always had disadvantages when compared to desktop systems. The screen has to be smaller, the keyboard is inconveniently small on the smaller laptops and netbooks, you don’t get PCI slots (CardBus isn’t nearly as good), you usually can’t have multiple hard drives, and expansion options for other things are limited. Also, due to the difficulty of designing a computer that fits in a small volume, it’s very difficult to repair a laptop and there are no realistic options for upgrading the motherboard to use a faster CPU etc. This is OK; these are engineering trade-offs that we have to deal with.

CPU Speed

Modern laptops however have some bad design choices. Firstly they appear to be trying to compete with desktop systems on CPU speed. This was reasonable when desktop systems had 200MHz CPUs which dissipated about 15W (see the Wikipedia page about CPU power dissipation), but now that desktop CPUs dissipate 65W at the low end and more than 100W at the high end it’s really not practical to try to compete. My Thinkpad T61 has a T7500 CPU that can dissipate 35W; getting that much heat out of a laptop case is a significant engineering challenge no matter how you do it.

It’s a pity that no-one seems to be making laptops with large screens that have a low-power CPU. Sure it takes a moderate amount of CPU power to use a large display for games or playing video, but if you want to use a laptop for work purposes then not much CPU power is required. Tasks which take a lot of CPU power can be offloaded to the cloud; I can ssh to a server to do compiles, and one of my clients is setting up an Adobe After Effects render farm [2] (in the broadest sense the word “Cloud” can include a server accessed by ssh and a few servers on the LAN running After Effects).

Thin Laptops

The next problem is laptops being thin. It is really convenient to have a thin laptop, but the thinner it is the smaller the fans have to be and the faster the cooling air has to travel through small heat sinks. At the best of times this results in more noise from the cooling fan (which really isn’t so bad). But it also increases the rate at which dust builds up inside the case and insulates the heat sink. When a laptop is thin and light for convenience and also wide enough to have a large display it just can’t be that strong, so laptops tend to bend. If I put an Australian 10c piece (the size of a US Quarter) under one of the feet of my Thinkpad T61 the other three feet still touch the desk! Presumably the laptop would bend in every way imaginable if you were to put it on your lap – which of course you can’t do because there are cooling vents in the bottom, so it can give you a hot lap and an overheated laptop.

My first Thinkpad was 61mm high according to the IBM spec sheet. I measured my latest one at 34mm. As 61mm wasn’t too bad I think I could survive now with a laptop that was 45mm high and had more strength and fewer cooling problems.

My Thinkpad T61 currently has some serious cooling problems; I suspect that something is broken inside. As it’s out of warranty I took it apart but couldn’t find anything wrong, so I guess I’ll have to pay to get it repaired. This will be the third time I’ve had a Thinkpad repaired because of cooling problems, but the first time one has been out of warranty. I blame the engineering trade-offs required to make them thin.

Portable Desktop/Server systems

If you want a small portable computer that delivers great performance then a Mac Mini seems to be a good option [3]. The people who use a laptop at their desk at work and their desk at home would probably be better served by a Mac Mini. The Mac Mini can be purchased with SSD storage to reduce the risk of data loss due to being dropped. Admittedly the Mac Mini needs to be plugged in before it can be used, but if you had a USB Ethernet device and a USB hub then only three cables would be required: power, USB, and video – one more cable than typical office laptop use with Ethernet and power.

Some modern laptop/netbook systems (such as the Thinkpad T61 and the EeePC 701) seem to be designed to use the keyboard as part of the cooling system. If you run it with the lid closed then it becomes significantly hotter. This makes laptops unsuitable for use as a portable server. Probably one exception to this is the Apple laptops which have a rubbery keyboard that doesn’t allow air flow – of course anyone who likes the feel of a real keyboard won’t buy an Apple laptop for that reason (but a keyboard that has one really hot section above the CPU doesn’t feel great either). In the past I’ve used laptops as servers once they become unsuitable for their primary use, probably in future I won’t be able to do that.

ARM Laptops

There are some laptops and tablets with ARM CPUs that should dissipate little heat. But I’m not aware of any such devices that I consider to be practical Linux laptops. I’ve done some work with iPaQs running Familiar in the past; it was a nice system but it was a niche market and everything was different from every other system I’ve ever used. That made all the work take longer.

What would be ideal is an ARM based laptop (not a netbook – a big screen is good) that boots from a regular CF or SD card (so the main storage can be installed in another machine to fix any boot failures) and which is supported by a major Linux distribution. Does anyone know of any work towards such a goal?

Moving from a Laptop to a Cloud Lifestyle

My Laptop History

In 1998 I bought my first laptop, a Thinkpad 385XD with a PentiumMMX 233MHz CPU, 96M of RAM, and an 800*600 display. This was less RAM than I could have afforded in a desktop system and the 800*600 display didn’t compare well to the 1280*1024 resolution 17 inch Trinitron monitor I had been using. Having only 1/3 the pixels is a significant loss, and a 12.1 inch TFT display of that era compared very poorly with a good Trinitron monitor.

In spite of this I found it a much better system to use because it was ALWAYS with me; I used it for many things that were probably better suited to a PDA (there probably aren’t many people who have carried a 7.1 pound (3.2Kg) laptop to as many places as I did). But some of my best coding was done on public transport.

But I didn’t buy my first laptop for that purpose, I bought it because I was moving to another country and there just wasn’t any other option for having a computer.

In late 1999 I bought my second laptop, a Thinkpad 600E [1]. It had twice the CPU speed, twice the RAM, and a 1024*768 display that displayed color a lot better. Since then I have had another three Thinkpads: a T21, a T43, and now a T61. One of the ways I measure a display is the number of 80*25 terminal windows that I can display at one time; my first Thinkpad could display four windows with a significant amount of overlap. My second could display four with little overlap, my third (with 1280*1024 resolution) could display four clearly and another two with overlap, and my current Thinkpad does 1680*1050 and can display four windows clearly and another five without excessive overlap.

For most of the last 13 years my Thinkpads weren’t that far behind what I could afford to get as a desktop system, until now.

A Smart Phone as the Primary Computing Device

For the past 6 months the Linux system I’ve used most frequently is my Sony Ericsson Xperia X10 Android phone [2]. Most of my computer use is on my laptop, but the many short periods of time using my phone add up. This has forced some changes to the way I work. I now use IMAP instead of POP for receiving mail so I can use my phone and my laptop with the same mail spool. This is a significant benefit for my email productivity, instead of having 100 new mailing list messages waiting for me when I get home I can read them on my phone and then have maybe 1 message that can’t be addressed without access to something better than a phone. My backlog of 10,000 unread mailing list messages lasted less than a month after getting an Android phone!

A few years ago I got an EeePC 701 that I use for emergency net access when a server goes down. But even a 920g EeePC is more weight than I want to carry; as I need to have a mobile phone anyway there is effectively no extra mass or space used by having a phone capable of running a ssh client. My EeePC doesn’t get much use nowadays.

A Cheap 27 inch Monitor from Dell

Dell Australia is currently selling a 27 inch monitor that does 2560*1440 (WQHD) for $899AU. Dell Australia offers a motor club discount which pretty much everyone in Australia can get as almost everyone is either a member of such a club or knows a member well enough to use their membership number for the discount. This discount reduces the price to $764.15. The availability of such a great cheap monitor has caused me to change my working habits. It doesn’t make sense to have a reasonably powerful laptop used in one location for almost all the time when a desktop system with a much better monitor can be used.

The Plan

Now that my 27 inch monitor has arrived I have to figure out a way of making things work. I still need to work from a laptop on occasion but my main computer use is going to be a smart-phone and a desktop system.

Email is already sorted out, I already have three IMAP client systems (netbook, laptop, and phone), adding a desktop system as a fourth isn’t going to change anything.

The next issue is software development. In the past I haven’t used version control systems that much for my hobby work, I have just released a new version every time I had some significant changes. Obviously to support development on two or three systems I need to use a VCS rigorously. I’m currently considering Subversion and Git. Subversion is really easy to use (for me), but it seems to be losing popularity. Git is really popular so if I use it for my own projects then I could allow anonymous access for anyone who’s interested – maybe that will encourage more people to contribute.
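Publishing a Git project for anonymous read-only access takes only a few commands; this is a sketch (the /srv/git path and the git-daemon approach are my choices, serving over HTTP would work too):

```shell
cd myproject
git init
git add .
git commit -m "initial import"
# export a bare clone; git-daemon only serves directories containing this marker file
git clone --bare . /srv/git/myproject.git
touch /srv/git/myproject.git/git-daemon-export-ok
git daemon --base-path=/srv/git --detach
```

Anyone interested can then run "git clone git://yourserver/myproject.git".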

One thing I haven’t even investigated yet is how to manage my web browsing work-flow in a distributed manner. My pattern when using a laptop is to have many windows and tabs open at the same time for issues that I am researching and to only close them days or weeks later when I have finished with the issue. For example if I’m buying some new computer gear I will typically open a web browser window with multiple tabs related to the equipment (hardware, software, prices, etc) and keep them all open until I have received it and got it working. Chromium, Mozilla, and presumably other modern web browsers have a facility to reopen windows after a crash. It would be ideal for me if there was some sort of similar facility that allowed me to open the windows that are open on another system – and to push window open commands to another system. For example when doing web browsing on my phone I would like to be able to push the URLs of pages that can’t be viewed on a phone to my desktop system and have them open waiting for me when I get home.

It would be nice if web browsing could be conceptually similar to a remote desktop service in terms of what the user sees.

Finally in my home directory there are lots of random files. Probably about half of them could be deleted if I was more organised (disk space is cheap and most of the files are small). For the rest it would be good if they could be accessed from other locations. I have read about people putting the majority of their home directory under version control, but I’m not sure that would work well for me.

It would be good if I could do something similar with editor sessions, if I had a file open in vi on my desktop before I left home it would be good if I could get a session on my laptop to open the “same” file (well the same named file checked out of the VCS).

Configuring the Desktop System

One of the disadvantages of a laptop is that RAID usually isn’t viable. With a desktop system software RAID-1 is easy to configure but it results in two disks making heat and noise. For my new desktop system I’m thinking of using a DRBD device for /home to store the data locally as well as almost instantly copying it to RAID-1 storage on the server. The main advantage of DRBD over NFS, NBD, and iSCSI is that I can keep working if the server becomes unavailable (EG use the desktop system to ask Google how to fix a server fault). Also with DRBD it’s a configuration option to allow synchronous writes to return after the data is written locally which is handy if the server is congested.
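A DRBD resource definition for this would look something like the following sketch (DRBD 8 syntax; the host names, devices, and addresses are hypothetical, and protocol A is the mode where a write returns once it hits the local disk):

```
# /etc/drbd.d/home.res
resource home {
  protocol A;
  on desktop {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   192.168.1.10:7789;
    meta-disk internal;
  }
  on server {
    device    /dev/drbd0;
    disk      /dev/md2;
    address   192.168.1.2:7789;
    meta-disk internal;
  }
}
```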

Another option that I’m considering is a diskless system using NBD or iSCSI for all storage. This will prevent using swap (you can’t swap to a network device to avoid deadlocks) but that won’t necessarily be a problem given the decrease in RAM prices as I can just buy enough RAM to not need swap.

The Future

Eventually I want to be able to use a tablet for almost everything including software development. While a tablet display isn’t going to be great for coding I’m sure that I can make use of enough otherwise wasted time to justify the expense. I will probably need a tablet that acts like a regular Linux computer – not an Android tablet.

Links August 2011

Alex Steffen gave an interesting TED talk summarising the ways that greater urban density can reduce energy use while increasing our quality of life [1].

Geoffrey West gave an interesting TED talk about the way animals, corporations, and cities scale [2]. The main factor is the way that various variables scale in proportion to size. On a logarithmic graph the growth of a city shows a steady increase in both positive factors such as wages and inventions and in negative factors such as crime as it grows larger. So it seems that we need to decrease the crime rate significantly to permit the growth of larger cities and therefore gain more efficiency.

The Mankind Project (MKP) has a mission of “redefining mature masculinity for the 21st Century” [3]. They have some interesting ideas.

Phillip Zimbardo gave a provocative TED talk about the demise of men [4]. He provided little evidence to support his claims though.

Digital Cameras

In May I gave a talk for LUV about the basics of creating video on Linux. As part of the research for that I investigated which cameras were good for such use. I determined that 720p was a good enough resolution, as nothing that does 1080p was affordable and 1080i is lower quality. One thing to note is that 854*480 and 850*480 are both common resolutions for mobile phones, and either of those resolutions can be scaled up to full screen on a 1920*1080 monitor without looking too blocky. So it seems that anything that’s at least 850*480 will be adequate by today’s standards. Of course, as Dell is selling a 27 inch monitor that can do 2560*1440 resolution for a mere $899, in the near future 720p will be the minimum that’s usable.

Cheap Digital Video Cameras

The cameras I suggested at the time of my talk (based on what was on offer in Melbourne stores) were the Panasonic Lumix DMC-S3, which has 4* optical zoom, for $148 from Dick Smith [1] and the Olympus MJU 5010, which has 5* optical zoom, for $168 (now $128) from Dick Smith [2]. Both of them are compact cameras that do 720p video. They are fairly cheap cameras, but at the time I couldn’t find anything on offer that had significantly better specs for video without being unreasonably expensive (more than $600).

Update: In the comments Chris Samuel pointed out that Kogan has a FullHD digital video camera for $289 [13]. That’s a very tempting offer.

More Expensive Digital Video Cameras

Teds Cameras has a good range of Digital Video Cameras (including wearable cameras, and cameras that are designed to be attached to a helmet, surfboard, or car) [3]. These are specifically designed as video cameras rather than having the video function be an afterthought.

Ted sells the Sony Handycam HDR-CX110 which does 1080p video, 3MP photos, and 25* optical zoom for $450 [4].

They also sell the pistol-style Panasonic HX-WA10 which is waterproof to 3M, does 1080p video, 11MP pictures, and 5* optical zoom for $500 [5].

For my use I can’t justify the extra expense of the dedicated digital video cameras (as opposed to digital cameras that can take video); I don’t think that they offer enough. So a cheap $128 Olympus MJU 5010 is what I will probably get if I buy a device for making video. I can afford to replace a $128 camera in a year or two, but a device that costs $500 or more needs to last a bit longer. I expect that in a year or two I will be able to buy something that does 1080p for $200.

Features to look for in Great Digital Cameras

The other option when buying a camera is to buy something that is designed to be a great camera. It seems that RAW file capture [6] is a requirement for good photography. RAW files don’t just contain uncompressed data (which is what I previously thought), they contain raw sensor data which may not even be in a Cartesian grid. Some processing of the data is best done with raw sensor data (which may be in a hexagonal array) and can’t be done properly once it’s been converted to a Cartesian array of pixels. Image Magick can convert RAW files to JPEG or TIFF. I haven’t yet investigated the options on Linux for processing a RAW file in any way other than just generating a JPEG. A client has several TB of RAW files and has found Image Magick to be suitable for converting them, so it should do.

The next issue is the F number [7]. A brief summary of the F number is that the amount of light that gets to the CCD is inversely proportional to the square of the F number, which determines the possible shutter speed. For example a camera set to F1 would have a 4* faster shutter speed than a camera set to F2. The F rating of a camera (or lens, for interchangeable lens cameras) is a range on many good cameras; if you want to take long exposure shots then you increase the F number. A casual scan of some web sites indicates that anything less than F3 is good, approaching F1 is excellent, and less than F1 is rare. But you don’t want to only use low F numbers: a higher F number gives a larger Depth of Field, which means that the distance between the nearest and furthest objects that appear to be in focus is greater. So increasing the F number and using a flash can result in more things being in focus than using a low F number without a flash.
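The shutter speed claim above is easy to verify: the light reaching the sensor is proportional to 1/F², so going from F2 to F1 quadruples it:

```shell
# relative light (and thus shutter speed factor) between f/2 and f/1: (2/1)^2
awk 'BEGIN { f1 = 1; f2 = 2; printf "%.0f*\n", (f2 / f1) ^ 2 }'
# prints: 4*
```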

Another important issue is the focal length. Cheap cameras are advertised as having a certain “optical zoom”, which apparently isn’t quite how things work, as the magnification varies depending on the distance to the object. Expensive cameras/lenses are specified with the range of focal lengths, which can be used to calculate the possible magnification. According to DPReview.com, optical zoom = maximum focal length / minimum focal length, so a 28mm-280mm lens would be “10* optical zoom” [8]. Finally, it seems that the specified focal length of cameras is usually the “35mm” equivalent. So a lens described as “280mm” won’t be 28cm long, it will be some fraction of that based on the size of the CCD as a proportion of the 35mm film standard (which is 36*24mm for the image/CCD size).
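The arithmetic above is simple enough to sketch in Python. The “crop factor” used for the 35mm-equivalent conversion depends on the sensor; the 1.6 value below is the common APS-C figure and is an assumption for illustration only:

```python
def optical_zoom(min_focal_mm: float, max_focal_mm: float) -> float:
    """Optical zoom as defined by DPReview: max / min focal length."""
    return max_focal_mm / min_focal_mm

def equivalent_focal_length(actual_mm: float, crop_factor: float) -> float:
    """35mm-equivalent focal length for a smaller sensor."""
    return actual_mm * crop_factor

print(optical_zoom(28, 280))              # 10.0, i.e. "10* optical zoom"
# A 175mm lens on a 1.6-crop sensor frames like a 280mm lens on 35mm film:
print(equivalent_focal_length(175, 1.6))
```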

Update: In the comments Aigars Mahinovs said: Don’t bother too much with the zoom. The view of a normal person is equivalent to 50mm lens (in 35mm film equivalent). Anything under 24mm is for landscapes and buildings – it is for sights where you would actually have to move your head to take in the view. Zooms are rarely useful. Something in 85-100mm range is perfectly fine to capture a bird or a person some distance away or some interesting piece of landscape, but anything more and you are in the range of silly stuff for capturing portraits of football players from the stands or for paparazzi photos. And the more zoom is in the lens the crappier the lens optics will be (or more expensive, or both) that is why the best optics are prime lenses with no zoom at all and just one specific optical length each. For example almost all my Debconf photos of the last two years are taken with one lens – Canon 35mm f/2.0 (a 50mm equivalent on my camera) and only the group shots are taken with a lens that is equivalent to 85mm.

So I guess if I was going to get an interchangeable lens camera then I could get prime (fixed focal length) lenses for things that are close and far away and one with a small zoom range for random other stuff. Of course that would get me way outside my budget unless I got some good deals on the second hand market. Also having a camera that can fit into a pocket is a real benefit, and the ability to rapidly get a camera out and take a picture is important!

A final item is the so-called ISO number, which specifies the sensitivity of the film or sensor. A higher number means that a photograph can be taken with less light but that the quality will generally be lower. It seems that you have a trade-off between a low F number (and therefore a shallow Depth of Field), good lighting (maybe a flash), a long exposure time (blurry if the subject or camera isn’t still), and a grainy picture from a high ISO number.
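That trade-off can be sketched numerically: exposure time scales with the square of the F number and inversely with the ISO number. This is a simplified model of my own for illustration, ignoring real-world details like reciprocity and metering:

```python
def equivalent_shutter_time(base_time: float, base_f: float, base_iso: float,
                            new_f: float, new_iso: float) -> float:
    """Shutter time giving the same exposure after changing the F number
    and/or ISO: time scales with F^2 and inversely with ISO."""
    return base_time * (new_f / base_f) ** 2 * (base_iso / new_iso)

# 1/100s at F2.8 ISO 100; stopping down to F5.6 needs 4* the time (1/25s),
# but raising the ISO to 400 as well gets back to 1/100s (grainier picture).
print(equivalent_shutter_time(0.01, 2.8, 100, 5.6, 100))
print(equivalent_shutter_time(0.01, 2.8, 100, 5.6, 400))
```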

Comparing Almost-Affordable Great Digital Cameras

I visited Michaels camera store in Melbourne [9] and asked for advice about affordable cameras that support RAW capture (every DSLR does but I don’t want to pay for a DSLR). The first option they suggested was the Samsung EX1 that does 10MP, F1.8-F2.4 with a 24-72mm equivalent focal range (3* optical zoom), and 640*480 video [10] for $399.

The next was a Nikon P7000 that does 10MP, F2.8-5.6 with 7* optical zoom (28-200mm equivalent), and 720p video [11] for $599.

The final option they had was the Canon G12 that does 10MP, F2.8-4.5 with 5* optical zoom (28-140mm equivalent), and 720p video [12] for $599.

3* optical zoom isn’t really enough, and $599 is a bit too much for me, so it seems that RAW format might not be an option at this time.

Conclusion

I can’t get what I want for great photography at this time; there seems to be nothing that meets my minimum desired feature set and costs less than $550. A client who’s a professional photographer is going to lend me an old DSLR that he has hanging around for some photography I want to do on the weekend.

I am also considering buying an Olympus MJU 5010 for making videos and general photography; it’s better than anything else I own at this time and $128 is no big deal.

Please let me know if I made any errors (as opposed to gross simplifications) in the above summary of the technical issues, also let me know if there are other things to consider. I will eventually buy a camera that can capture RAW images.

Name Server IP and a Dead Server

About 24 hours ago I rebooted the system that runs the secondary DNS for my zone and a few other zones. I’d upgraded a few things and the system had been running for almost 200 days without a reboot so it was time for it. Unfortunately it didn’t come back up.

Even more unfortunately, the other DNS server for my zone is ns.sws.net.au, which is also the only other server for the sws.net.au zone. Normally this works because the servers for the net.au zone have a glue record containing the server’s IP address, so a reply to a query for the NS records of the sws.net.au domain will include the IP address of ns.sws.net.au. The unfortunate part was that the glue record had the old IP address from before the sws.net.au servers changed to a new IP address range. I wonder whether this was due to the recovery process after the Distribute IT hack [1], as updating a glue record is not something that I or the other guy who runs that network would forget. But it is possible that we both stuffed up.
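To illustrate why the glue record matters (this is a toy model of my own, not a real resolver, and the address is a made-up documentation IP): when a zone’s name server lives inside the zone it serves, the parent zone must hand out the server’s address, because you can’t query the zone to find the address of the server you’d query.

```python
# Toy model of a parent zone's delegation data for an in-bailiwick server.
PARENT_ZONE = {
    "sws.net.au": {
        "NS": ["ns.sws.net.au"],
        # A stale address here breaks resolution of the whole zone:
        "glue": {"ns.sws.net.au": "203.0.113.10"},
    },
}

def resolve_ns_address(zone: str) -> str:
    """Find the address of a zone's name server from the parent's delegation."""
    delegation = PARENT_ZONE[zone]
    ns = delegation["NS"][0]
    if ns == zone or ns.endswith("." + zone):
        # In-bailiwick: the parent's glue record is the only way to
        # bootstrap, since the zone itself can't be queried yet.
        return delegation["glue"][ns]
    raise LookupError("out-of-bailiwick server: resolve its address separately")

print(resolve_ns_address("sws.net.au"))  # 203.0.113.10
```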

The DNS secondary was an IBM P3-1GHz desktop system with two IDE disks in a RAID-1 array. It’s been quite reliable, it’s been running in the same hardware configuration for about four years now with only one disk replacement. It turned out that the cooling fan in the front of the case had seized up due to a lot of dirt and the BIOS wouldn’t let the system boot in that state. Also one of the disks was reporting serious SMART problems and needed to be replaced – poor cooling tends to cause disk errors.

It seems that Compaq systems are good at informing the user of SMART problems: two different Compaq desktop systems (one from before the HP buyout and one from after) made very forceful recommendations that I replace the disk. It’s a pity that the BIOS doesn’t allow a normal boot process after the warning, as following the recommendation to back up the data is difficult when the system won’t boot.

I have a temporary server running now, but my plan is to install a P3-866 system and use a 5400rpm disk to replace the 7200rpm disk that’s currently in the second position in the RAID array. I’ve done some tests on power use and an old P3 system uses a lot less than most new systems [2]. Power use directly maps to heat dissipation, and a full size desktop system with big fans that dissipates less than 50W is more likely to survive a poorly cooled room in summer. Laptops dissipate less heat, but as their vents are smaller (thus less effective at the best of times and more likely to get blocked) this doesn’t provide a great benefit. Also my past experience of laptops as servers is that they don’t want to boot up when the lid is closed, and getting RAID-1 and multiple ethernet ports on a laptop is difficult.

Finally I am going to create a third DNS server for the sws.net.au domain. While it is more pain to run extra servers, for some zones it’s just worth it.

Links July 2011

The Reid Report has an article about the marriage pledge that Michele Bachmann signed which implies that slavery wasn’t so bad [1]. Greg Carey has written an interesting article for the Huffington Post about marriage and the Bible [2]. I always knew that the so-called “conservatives” weren’t basing their stuff on the Bible, but the truth surprised me.

Geoff Lemon has written an interesting blog post about the carbon tax debate in Australia [3]. He focuses on how small it is and how petty the arguments against it are.

Lord Bacon wrote an interesting list of the top 100 items to disappear first in a national emergency [4]. Some of them are specific to region and climate but it is still a good source of ideas for things to stockpile.

Markus Fischer gave an interesting TED talk about the SmartBird that he and his team built [5]. A flying machine that flaps its wings isn’t that exciting (my local department store sells toys that implement that concept), but having one closely match the way a bird’s wings work is interesting.