Debian SE Linux Status June 2012

It’s almost time for the Wheezy freeze and I’ve been working frantically to get things working properly.

Policy Status

At the moment I’m preparing an upload of the policy which will support logins under KDE (and probably most other desktop environments) and includes many little fixes related to server operations (particularly MTAs). I would like to get another version done before Wheezy is released, but if Wheezy releases with version 2.20110726-6 of the policy that will be OK. It will work well enough for most things, and users will be able to use local changes for the things that don’t work.

One significant gap in the current policy is that systemd won’t work. I’ve included most of the policy changes needed, but haven’t done the testing and tweaking necessary to make it work properly.

I would like to see policy support for systemd in a Wheezy update if I don’t get it done in time for the first release. If I don’t get it done in time for the release and the release team doesn’t accept it as an update, then I’ll put it in my own repository so anyone who needs it can get it.

/run Labelling

One significant change for Wheezy is the use of a tmpfs mounted on /run instead of /var/run. This means that lots of daemon start scripts create subdirectories of /run at boot time, and those directories need SE Linux labels applied for correct operation. Usually the daemon writes to the directory immediately after the init script creates it, so I can’t just have my own script recursively relabel all of /run.

Some packages that need to be patched are x11-common #677831, clamav-daemon #677686, sasl2-bin #677685, dkim-filter #677684, and cups #677580. I am sure that there are others.

[ -x /sbin/restorecon ] && /sbin/restorecon -R "$DIR"

Generally, if you are writing an init script that creates a directory under /run, you need shell code like the above immediately after the directory is created. The same applies to directories under /tmp and any other significant directories that are created at boot time.
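As a concrete sketch of that pattern (the daemon name and directory here are hypothetical, and I’ve used /tmp since the same rule applies there):

```shell
# Hypothetical init script fragment: create the daemon's working directory
# and relabel it immediately, before the daemon writes anything to it.
DIR=/tmp/mydaemon
mkdir -p "$DIR"
# The -x test keeps the script working on systems without SE Linux installed.
if [ -x /sbin/restorecon ]; then
    /sbin/restorecon -R "$DIR"
fi
```

The important part is ordering: the relabel has to happen between the mkdir and the daemon’s first write.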

Upgrading

Currently there are some potential problems with the upgrade process, which I’m working on at the moment. Ideally an “apt-get dist-upgrade” would cleanly upgrade everything, but at the moment it seems likely that the upgrade might initially go wrong and then work on the second try. There are some complications, such as the selinux-policy-default package owning a config file which is used by mcstransd (part of the policycoreutils package); when the config file format changes you get ordering dependencies for the upgrade.

Kernel Support

My aim when developing a new SE Linux release for Debian is that the policy should work as much as possible with the user-space from the previous release. So if you upgrade from Squeeze to Wheezy you should be able to start the process by upgrading the SE Linux policy (which drags in the utilities and lots of libraries). This means that if you have a server running you don’t have to put it out of action for the entire upgrade, you can get the policy going and then get other things going. I haven’t tested this yet but I don’t expect any problems (apart from all the dependencies).

Also the policy should work with the kernel from the previous release. So if you have a virtual server where it’s not convenient to upgrade the kernel, that shouldn’t stop you from upgrading the user-space and the SE Linux policy. I’ve tested this and found one bug: the sepolgen-ifgen utility (which you need to run before audit2allow -R) won’t work if the kernel is older than the utilities #677730. I don’t know if it will be possible to get this fixed. Anyway it’s not that important, as you can always copy the audit log to another system running the same policy to run audit2allow; it’s not convenient but not THAT difficult either.
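A hedged sketch of that workaround (the log path is the usual auditd default and the module name is made up; the guard is there so the sketch does something sensible on machines without the SE Linux utilities):

```shell
# Run this on a second machine with the same policy but a newer kernel,
# after copying audit.log across from the old-kernel system.
LOG=/var/log/audit/audit.log
if command -v audit2allow >/dev/null 2>&1 && [ -r "$LOG" ]; then
    # -R uses the interface definitions generated by sepolgen-ifgen
    audit2allow -R -i "$LOG" -m local > /tmp/local.te
    echo "wrote policy module source to /tmp/local.te" > /tmp/audit2allow.status
else
    echo "audit2allow or $LOG not available on this machine" > /tmp/audit2allow.status
fi
```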

The End Result

I think that the result of using SE Linux in Wheezy will be quite good for the people who get the upgrade done and who modify a few init scripts that don’t get the necessary changes in time. I anticipate that someone who doesn’t know much about SE Linux will be able to get a basic workstation or small server installation done in considerably less than an hour if they read the documentation and someone who knows what they are doing will get it done in a matter of minutes (plus download and install time which can be significant on old hardware).

At the moment I’m in the process of upgrading all of my systems to Unstable (currently Testing has versions of some SE Linux packages that are too broken). While doing this I will keep discovering bugs and fix as many of them as possible. But it seems that I’ve already fixed most things that affect common users.

Also BTRFS works well. Not that supporting a new filesystem is a big deal (all that’s needed is XATTR support), but having all the nice new features on one system is a good thing. Now I just need to get systemd working.

New Version of Memlockd

I’ve just released a new version of Memlockd, a daemon to lock essential files in RAM to increase the probability of recovering a system that is paging excessively [1].

The new features are:
Updated to the Debian/Wheezy paths for shared objects on i386 and amd64.

Added a new config file option to not log file-not-found errors, so we don’t see i386 errors on amd64 and amd64 errors on i386.

Added a systemd service file which I haven’t tested and won’t get to test for a while, so for the moment I’ve released it and hope that the person who submitted the file got it right and that my minor change didn’t break it.

Added a run-parts style config directory (default /etc/memlock.d); the config file now uses a % to chain to another file or directory.
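A minimal sketch of what a chained config might look like, based only on the features described above (the exact syntax apart from the % chaining is an assumption on my part, so check the man page):

```
# /etc/memlockd.cfg (sketch) -- one file to lock per line
/bin/sh
/lib/x86_64-linux-gnu/libc.so.6
# chain to a run-parts style directory of additional config fragments
%/etc/memlock.d
```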

So I fixed all but one of the Debian bugs in time for Wheezy, provided that the systemd stuff works. If someone has time to test it with systemd for me then that would be great!

The Financial Value of a University Degree

I’ve read quite a few articles about the value of a degree. Most of them come from the US, where the combination of increasing tuition fees and an uncertain job market makes a degree seem like a risky investment. I think that most analyses of the value of a degree miss some important points.

The Value of Money at Different Times

The value of money is different at various stages of your life. The impression that I get is that when a married couple have their house fully paid off and they either don’t/won’t have children or their children are old enough to leave home the amount of money that they earn seems to matter a lot less. Doing a university degree involves 3 or 4 years not earning money (or more if doing post-graduate studies), which is usually starting at the age of 18. Effectively getting a degree involves giving up some money while young for the opportunity to earn more when older. Any analysis based on directly comparing the money spent on the degree to the amount of financial return without considering when money is needed is not very useful.

I think that a reasonable analysis would exclude income earned after the age of about 45. By that age most people have either achieved a solid financial position and learned to live within their means or messed up their finances so badly that they won’t live long enough to recover.

A Degree as a Signal

The Wikipedia page on economic signalling gives education as an example of a signal. A signal in this case means something that doesn’t inherently mean anything but which signifies something else. So completing a degree doesn’t necessarily mean that you learned anything relevant to work, but if you are able to do it then it means that you can probably also do things which are economically useful for an employer. This raises the question of how else one might signal their ability to work. One obvious answer is by working: someone who has remained steadily employed for 3 or 4 years has demonstrated their ability to work reliably and get along with other people, which should be at least as useful a signal.

It’s Not Only the Degree

Most analyses seem to compare the average income of people with degrees with the average income of people who didn’t attend university. That is based on the assumption that the degree was the only difference.

When I was young my parents spent a moderate amount of money on a full set of paper encyclopedias (about 2 meters of shelf space). I’m sure that this gave me some educational benefit as they intended, and it was something that was apparently quite rare – I don’t recall seeing a full encyclopedia in anyone else’s house before the Wikireader [1].

My parents also bought me quite a lot of computer gear (back when hardware was really expensive), were always available to drive me to computer users’ group meetings etc, and did everything else that seemed likely to have an educational benefit. The value of such learning opportunities is significant.

I think that almost everyone who had similar learning opportunities to me when they were young will probably have experienced similar support and pressure to attend university. I also think that almost everyone who receives such opportunities will be able to earn more than the median income even if they don’t attend university.

To a large extent people who are going to be successful attend university. A university degree doesn’t make anyone successful if they couldn’t succeed without a degree. There are some careers that just aren’t options if you don’t have a relevant degree (such as medicine and law). But I believe that anyone who is capable of completing a difficult course such as medicine or law (or any other career that has legal requirements for a degree) is capable of being successful without a degree in many other fields. So comparing the wages of a doctor or a lawyer to an average person doesn’t make sense, it makes more sense to try and compare their wages to someone of similar skill who didn’t have such a qualification.

Conclusion

It seems to me that the question is, of the people who had great learning opportunities when they were young and who wanted to succeed, would they have earned much less if they hadn’t attended university?

The next question is, of the people who might earn significantly less without getting a degree, would that salary difference really have mattered, or would it just be a matter of earning some luxury money when they are too old to really need it?

Take Off that Stupid Helmet

Recently I was walking through a park and heard a woman call out “Take off that stupid helmet”. Usually I ignore what other people are saying, but that seemed noteworthy. It turned out that a young boy (maybe 4yo) was being taught to ride a bike and his parents seemed to think that wearing a helmet was a bad idea. There is ongoing debate about the benefit to an adult of wearing a helmet while riding a bike, but it seems clear that for a young child riding on a concrete path a helmet is a really good thing. When it became apparent that everyone in the park was watching, the parents decided to have him ride on the grass instead.

On a related note I was recently talking to an employee of a roadside assistance company about what happens when a child is locked in a car. Apparently if a child is locked in a car with the keys, the emergency services people won’t smash a window as long as the child is kicking and screaming; the attitude seems to be that while the child is obviously in distress they aren’t going to die immediately, but when they go quiet it’s time to damage the car to save them! I can imagine situations where it’s OK for the emergency services people to wait for a car expert to open the car without damage: if the weather is cool and the child seems happy then a delay probably doesn’t matter much. But if the child is in distress then the attitude that anything which doesn’t kill the kid is OK seems wrong.

Links May 2012

Vijay Kumar gave an interesting TED talk about autonomous UAVs [1]. His research is based on helicopters with 4 sets of blades and his group has developed software to allow them to develop maps, fly in formation, and more.

Hadiyah wrote an interesting post about networking at TED 2012 [2]. It seems that giving every delegate the opportunity to have their bio posted is a good conference feature that others could copy.

Bruce Schneier wrote a good summary of the harm that post-911 airport security has caused [3].

Chris Neugebauer wrote an insightful post about the drinking culture in conferences, how it excludes people and distracts everyone from the educational purpose of the conference [4].

Matthew Wright wrote an informative article for Beyond Zero Emissions comparing current options for renewable power with the unproven plans for new nuclear and fossil fuel power plants [5].

The Free Universal Construction Kit is a set of design files to allow 3D printing of connectors between different types of construction kits (Lego, Fischer Technic, etc) [6].

Jay Bradner gave an interesting TED talk about the use of Open Source principles in cancer research [7]. He described his research into drugs which block cancer by converting certain types of cancer cell into normal cells and how he shared that research to allow the drugs to be developed for clinical use as fast as possible.

Christopher Priest wrote an epic blog post roasting everyone currently associated with the Arthur C. Clarke awards, he took particular care to flame Charles Stross who celebrated The Prestige of such a great flaming by releasing a t-shirt [8]. For a while I’ve been hoping that an author like Charles Stross would manage to make more money from t-shirt sales than from book sales, Charles is already publishing some of his work for free on the Internet and it would be good if he could publish it all for free.

Erich Schubert wrote an interesting post about the utility and evolution of Facebook likes [9].

Richard Hartmann wrote an interesting summary of the problems with Google products that annoy him the most [10].

Sam Varghese wrote an insightful article about the political situation in China [11]. The part about the downside of allowing poorly educated people to vote seems to apply to the US as well.

Sociological Images has an article about the increased rate of Autism diagnosis as social contagion [12]. People who get their children diagnosed encourage others with similar children to do the same.

Vivek wrote a great little post about setting up WPA on Debian [13]. It was much easier than expected once I followed that post. Of course I probably could have read the documentation for ifupdown, but who reads docs when Google is available?
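For reference, the ifupdown approach that post describes boils down to a stanza like this in /etc/network/interfaces (the interface name, SSID, and passphrase are placeholders, and it relies on the wpasupplicant package’s ifupdown hooks):

```
# /etc/network/interfaces fragment for a WPA-PSK network
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid ExampleNetwork
    wpa-psk ExamplePassphrase
```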

Another USB Flash Failure

I previously wrote about a failure of a USB flash device in my Internet gateway [1]. I have since had another failure in the same system, so both the original 4G devices are now dead. That’s two dead devices in 10 weeks. It could be that the USB devices that I got for free at an exhibition were just really cheap; I’m sure that the makers weren’t expecting them to be used in that way. The devices from the same batch which are used for their intended purpose (sneaker-net file sharing) are still working well. But in any case I’m not going to resume this experiment until warmer weather. At this time of year some extra heat dissipation from computer gear in my home is more like a feature and less like a bug.

The second USB device to fail appeared to have its failure in the Ext4 journal (the errors were reported at around sector 2000). I didn’t keep a record of the problem with the first device, but from memory I think it was much the same.

Rumor has it that cheap flash storage devices don’t implement wear-levelling to avoid patent infringement. If that rumor is correct then any filesystem that uses a fixed journal in the same way as Ext3/4 is probably unsuitable for any serious use on such devices, while a filesystem based on Copy On Write will probably perform better. In Spring I’ll try using BTRFS on cheap USB flash devices and see if that works better. I have another spare device from the same batch to test so I can eliminate hardware differences. I can’t do enough tests to be a good statistical sample, but if a device lasts from Spring to Autumn using BTRFS with the same use that caused failures with Ext4 in a few weeks then I will consider it a strong indication that BTRFS is better than Ext3/4 for such uses.

For the next 5 months or so I’ll be using a hard drive in my Internet gateway system again.

What I REALLY Want from the NBN

Generally I haven’t had a positive attitude towards the NBN. It doesn’t seem likely to fulfill the claims of commercial success and would be a really bad thing to privatise anyway. Also it hasn’t seemed to offer any great benefits either. The claim that it will enable lots of new technical developments which we can’t even imagine yet that aren’t possible with 25Mb/s ADSL but which also don’t require more than the 100Mb/s speed of the NBN never convinced me.

But one thing it could really do well is to give better Internet access in remote areas, ideally with static or near-static IPv6 addresses (because we have already run out of IPv4 addresses). Currently 3G networks do all sorts of nasty NAT things to deal with the lack of IPv4 addresses, which causes a lot of needless pain if you have a server connected via 3G. One of the NBN plans is for wireless net access to remote homes; with some sanity among the people designing the network, such NBN connections would all have static IPv6 subnets as long as they don’t move.

I’m currently working on a project that involves servers on 3G links. I don’t have a lot of options on implementation due to hardware and software constraints. So if the ISPs using the NBN and the NBN itself (for the wireless part) could just give us all IPv6 static ranges then lots of problems would be solved.

Of course I don’t have high hopes for this. One of the many ways that the NBN has been messed up is in allowing the provision of lower speed connections. As having an ADSL2+ speed NBN connection is the cheapest option a lot of people will choose it. Therefore the organisations providing services will have to do so with the expectation that most NBN customers have ADSL2+ speed and thus they won’t provide services to take advantage of higher speeds.

A Quick Review of the Mac Mini with OS/X Lion compared to Linux

A client just lent me a new Mac Mini with OS/X Lion to play with. I think it’s interesting to compare it with regular PCs running Linux.

Hardware

The Mac Mini is tiny; its volume can be compared to that of a laptop. The entire outside apart from the base is made from aluminium, which helps dissipate heat; it’s not as effective as copper but a lot better than plastic. The ports on the system are sound input/output, 4*USB, Ethernet, Firewire, Thunderbolt (replacement for Firewire), SDXC, and HDMI. It ships with an HDMI to DVI-D adapter, which is convenient if you have an older monitor (or a recent monitor but no HDMI cable, as I do).

To open the case you unscrew the bottom, much like opening a watch. Also like a watch, it’s not particularly easy to screw it back on tightly; I will probably return the Mac Mini without managing to completely screw the base in.

The hardware is very stylish and intricately designed, which is what we expect from Apple. It’s also quiet. In every way it’s a much better system than the workstation I’m using to write this blog post. The difference, of course, is that this workstation was free while the Mac Mini cost just over $1000 including the RAM upgrade. A Mac Mini could be a decent Linux workstation, and if I see one about to be recycled I’ll be sure to grab it!

Installation

The Mac OS comes pre-installed so I didn’t get to do a full installation. When I first booted it up it asked me whether I wanted to migrate the configuration from an existing system; I don’t know how well this works as I don’t have a second Mac, but the concept is a good one. Maybe having full support for such a migration process would be a good release goal for a Linux distribution.

After determining that the installation is a fresh one I was asked for a mac.com email address or other form of registration. I skipped this step as I don’t have such an email address, but it could be useful. Red Hat has “Kickstart” to allow configuration of an OS install based on a file from a server (via NFS or HTTP). Debian supports “preseeding” to take OS configuration options from a file at install time [1] and the same option can be used for later stages of OS autoconfiguration.
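As a tiny illustration, a preseed file is just a list of debconf answers fetched at install time (the values here are examples, not a recommended configuration):

```
# preseed.cfg fragment, fetched via a boot parameter such as
#   preseed/url=http://example.com/preseed.cfg
d-i debian-installer/locale string en_AU.UTF-8
d-i keyboard-configuration/xkb-keymap select us
d-i mirror/country string manual
d-i mirror/http/hostname string ftp.au.debian.org
```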

One thing that would be really useful is to allow the user to enter a URL for configuration data for an individual account or for all accounts, so someone with an account on one workstation could upload the configuration (which would be either encrypted or sanitised to not have secret data) and then download it when first logging in to a new system. I can easily take a tar archive of my home directory to a new system, but people like my parents don’t have the skill to do that.
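Moving a home directory by tar is a one-liner each way; here’s the idea demonstrated on a throwaway directory (all paths are examples):

```shell
# Pack up a "home" directory and unpack it on the new system.
mkdir -p /tmp/old-home /tmp/new-home
echo "example setting" > /tmp/old-home/.examplerc
tar czf /tmp/home.tar.gz -C /tmp/old-home .
tar xzf /tmp/home.tar.gz -C /tmp/new-home
```

In real use you would tar $HOME on the old machine and extract the archive into the new account’s home directory.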

One of the final stages of system configuration was to identify the keyboard. The system asked me to press the key to the right of the left shift key and then the key to the left of the right shift key and then offered me three choices of keyboard. That was an interesting way of reducing the list of possible keyboards offered to the user and thus preventing the user from selecting one that is grossly incorrect.

Cloud Storage

When first logging in I was asked for an iCloud [2] login. iCloud doesn’t seem like a service that should be trusted, it’s based in the US and has been designed to facilitate access by government agencies. Ubuntu One [3] is a similar service that is run by a more reputable organisation, but the data is still stored by Amazon (a US corporation) which seems like a security risk. Ubuntu One isn’t in Debian (which is strange as Ubuntu is based on Debian) so it was too much effort for me to determine whether it encrypts data in a way that protects the users against US surveillance.

The cost of Ubuntu One storage is $4 per month with music streaming. A better option is to use a self-hosted OwnCloud installation for a private or semi-private cloud [4]. A cheap server from someone like Hetzner (€49 per month for 3TB of RAID-1 storage) [5] is a good option for OwnCloud hosting. A cheap Hetzner server is about $US64 per month (at current conversion rates), which is equivalent to about 16 users of Ubuntu One with music streaming. So if 20 people shared a Hetzner server they could save money compared to Ubuntu One while also getting a lot more storage. I’ve got about 300G of unused disk space on the Hetzner server that hosts my blog, and when the system is migrated to a newer Hetzner server with 3TB disks it will have 2.5TB of unused space; I could store a lot of cloud data in that!
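The break-even arithmetic, using the figures above:

```shell
# Figures from the post: Hetzner ~$US64/month, Ubuntu One $4/user/month.
HETZNER_USD=64
UBUNTU_ONE_USD=4
echo $(( HETZNER_USD / UBUNTU_ONE_USD ))   # users needed to match the cost
```

which gives 16, hence the estimate that about 20 people sharing a server would comfortably beat Ubuntu One on price while getting far more storage.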

The main features of iCloud and Ubuntu One seem to be distribution of random data files (anything you wish), streaming music to various playing systems, and copying pictures from phones as soon as they are taken. These are all great features but it’s a pity that they don’t appear to support distributed document storage.

Apple Pages apparently allows documents to be immediately saved to the cloud. I’d like to be able to save a file with Libre Office at home and then access it from my netbook using the cloud; of course that would require encryption for secret files, but that’s not so hard to do. One advantage with such distributed storage is that when combined with offline-IMAP for email it would almost entirely remove the need for backups of the desktop systems I maintain for my relatives. I could have all their pictures and documents go to the cloud and all their email stay on the server, so if their desktop PC dies I could just give them a new PC and get it all back from the cloud!

OwnCloud supports replication, so if I got two servers I would be covered against a server failure. But I think that for a small server with less than a dozen users it’s probably better to just take some down-time when things go wrong and do regular backups to an array of cheap SATA disks.

App Store

Apple has an “App Store” in the OS. The use of such a store on a desktop OS is a new thing for me. It’s basically the same as the Android Market (Google Play) but on the desktop. I think that there is a real scope for an organisation such as Canonical to provide such a market service for Linux. I think that there is a lot of potential for apps to be sold for less than $10 to a reasonable number of Linux users. A small payment would be inconvenient for the seller if they have to interact with the customer in any way and also inconvenient for the buyer if they are entering all their credit card details into a web site for the sale. But for repeat sales with one company being an intermediary it would be convenient for everyone. A market program for a desktop Linux system could provide a friendly interface to selecting free apps from repositories (for Debian, Ubuntu, Fedora, or other distributions) and also have the same interface used for selecting paid applications.

Conclusion

This isn’t much of a review of Apple OS/X or the Mac Mini. Thinking about ways of implementing the best features of Lion on Linux is a lot more interesting. I admire Apple in the same way that I admire sharks, they are really good at what they do but they don’t care about my best interests any more than a hungry shark cares about me.

Update

I got the currency conversion wrong in the first version of this article. It seems that to save money via a shared Hetzner server instead of Ubuntu One about 20 users would be needed instead of 10. But that’s still not too many and would still give a lot more storage. It would be a little more difficult to arrange though, probably anyone who is seriously into computers knows 10 people who would want to share such a service (including people like their parents who want things to just work and don’t understand what’s happening). But getting 20 people would be more difficult.

Liberty and Mobile Phones

I own two mobile phones at the moment, I use a Samsung Galaxy S running Cyanogenmod [1] (Android 2.3.7) for most things, and I have a Sony Ericsson Xperia X10 running Android 2.1 that I use for taking photos, some occasional Wifi web browsing, and using some applications.

Comparing Android Hardware

The hardware for the Xperia X10 is better than that of the Galaxy S in many ways. It has a slightly higher resolution (480*854 vs 480*800), a significantly better camera (8.1MP with a “flash” vs 5MP without), and a status LED which I find really handy (although some people don’t care about it).

The only benefits of the Galaxy S hardware are that it has 16G of internal storage (of which about 2G can be used for applications) and 512M of RAM, while the Xperia X10 has 1G of internal storage and 384M of RAM. These are significant issues: I have had applications run out of RAM on the Xperia X10 and have been forced to uninstall applications to make space.

Overall I consider the Xperia X10 to be a significantly better piece of hardware as I am willing to trade off some RAM and internal storage to get a better resolution screen and a better camera. The problem is that Sony Ericsson have locked down their phones as much as possible and they don’t even give users the option of making a useful backup – they inspired my post about 5 principles of backups [2].

The fact that the Galaxy S allows installing CyanogenMod which then gives me the liberty to do whatever I want with my phone is a massive feature. It outweighs the hardware benefits of the Sony Ericsson phones over Samsung phones prior to the Galaxy Nexus and Galaxy Note.

For an individual user the ability to control their own hardware is a feature. Such an ability wouldn’t be much use if there wasn’t a community of software developers, so if you buy an Android phone that isn’t supported by CyanogenMod or another free Android distribution then whether it is locked probably won’t matter to you. But for any popular Android phone that’s sold on the mass market it seems that if it’s not locked then it will get a binary distribution of Android in a reasonable amount of time.

Comparing with Apple

It seems that Apple is the benchmark for non-free computing at the moment. The iPhone is locked down and Apple takes steps to re-lock phones that can be rooted – as opposed to the Android vendors who ship phones and then don’t bother to update the firmware for any reason. The Apple app market is more expensive and difficult to enter and if an app isn’t in the market then you have to pay if you want to install it on a small number of development/test phones. This compares to Android where the Google market is cheaper and easier to enter and anyone can distribute an app outside the market and have people use it.

But for an individual this doesn’t necessarily cause any problems. I have friends and clients who use iPhones and are very happy with them. In terms of software development it’s a real benefit to have a large number of systems running the same software. As Apple seems to have higher margins and larger volume than any other phone vendor as well as shipping only one phone at any time (compared to every other phone vendor which seems to ship at least 3 different products for different use cases) they are in a much better economic position to get the software development right. As far as I can tell the hardware and software of the iPhone is of very high quality. The iPad (which has a similar market position) is also a quality product. The fact that the Apple app market is more difficult to enter (both in terms of Apple liking the application and the cost of entry) also has its advantages: I get the impression that the general quality of iPhone apps is quite high, as opposed to Android where there are a lot of low quality apps and many more fraudulent apps than there should be.

The lack of choice in Apple hardware (one phone and one tablet) is a disadvantage for the user. There is no option for a phone with a slide-out keyboard, a large screen (for the elderly and people with fat fingers), or any of the other features that some Android phones have. The lack of a range of sizes for the iPad is also a disadvantage. But it seems that Apple has produced hardware that is good enough for most users so there aren’t many complaints about a lack of choice.

It seems to me that the biggest disadvantage of the closed Apple ecosystem is for society in general. Anyone who wants to write a mobile app to do something which might be considered controversial would probably think twice about whether to develop for the iPhone/iPad as Apple could remove the app at a whim which would waste all the software development work that was invested in writing the app. Google seem to have much less interest in removing apps from their store and if they do remove an app then with some inconvenience it can be distributed on the web without involving them – so the work won’t be wasted.

How Much Freedom Should a Vendor Provide?

The Apple approach of locking everything down is clearly working for them at the moment. The Samsung approach of taking the Google prescribed code and allowing users to replace it is good for the users and works well. The Sony Ericsson approach of taking the Google code, adding some proprietary code, and then locking the phone down is bad for the users and I think it will be bad for Sony Ericsson. People are more likely to tell others about negative experiences and negative reviews are more likely to be noticed than positive reviews. So while many people are reasonably happy with Sony Ericsson products (until they find themselves unable to restore from a backup) it’s still not a good situation for Sony Ericsson marketing.

It seems that there are benefits to hardware vendors for being really open and for locking their users in properly. But being somewhat open isn’t a good choice, particularly for a vendor that ships poor quality proprietary apps such as the Sony Ericsson ones.

In terms of application distribution Google isn’t as nice as they appear. The Skyhook case revealed that Google will do whatever it takes to prevent apps that compete with Google apps from being installed by default [3]. Google is also trying to make money from DRM sales via YouTube, which it denies to rooted phones [4]. Again it seems to me that the best options here are being more open than Google is and being as closed as Apple. Google might gain some useful benefits from applying DRM (even though everyone with technical knowledge knows that it doesn’t work), but the Skyhook shenanigans have got to be costing Google more than it’s worth.

How to make Android devices more Free

The F-droid market is an alternative to the Google App market which only has free software [5]. On its web site there are links to download the source for the applications, including the source and binaries for old versions. In the Google App market if an upgrade breaks your system then you just lose; with F-droid you can revert to the old version.

A self-hosted OwnCloud installation for a private or semi-private cloud [6] can be used as an alternative to the Google Music store as well as for hosting any other data that you want to store online.

The Open Street Map for Android (Osmand) project provides an alternative to the Google Map service [7]. Osmand allows you to download all the vector data for the regions you will ever visit so it can run without Internet access. But it doesn’t have the ability to search for businesses, and the address search functionality is clunky and doesn’t accept plain text, which among other things precludes pasting an address copied from email or SMS. While Osmand provides some important features that Google Maps will probably never provide, it doesn’t provide some of the most used features of Google Maps, so uninstalling Google Maps isn’t a good option at the moment.

The K9mail project provides a nice IMAP client for Android [8]. Use K9 with a mail server that you run and you won’t need to use Gmail.

There are alternatives to all the Google applications. It seems that, apart from Osmand’s lack of commercial data and search ability, an Android device used for most serious purposes wouldn’t lack much if it had no Google apps.

Google seems to be going too far in controlling Android. Escaping from their control and helping others to do the same seems to be good for society and good for the users who don’t need apps which are only available in proprietary form.

Acoustiblok/Thermablok

Acoustiblok is an interesting product for blocking sound; it works by dissipating sound energy through friction within the sound barrier material [1]. They sell it in varieties that are designed for use within walls and for use as fences. As it isn’t solid it won’t reflect sound, so it can be used to line walls to stop sound being reflected back at you. Its design is based on NASA research.

The web site claims that a 3mm sheet of Acoustiblok gives a greater noise reduction than 12 inches (30.48cm) of poured concrete. I am a little dubious about that claim as I’ve read a report of someone using three layers of Acoustiblok to make a quiet room for recording music (and to be used as a play-room for an autistic child). I find it difficult to imagine someone needing a meter of concrete to stop any sort of noise that they might encounter in a residential area, so the fact that someone needed three layers of Acoustiblok is an indication that it might not be quite as good as they claim (although there is the possibility that the Acoustiblok was badly installed). I wonder whether the claims about concrete concern particular frequencies. The technical specifications and product comparisons page [2] shows that Acoustiblok is least effective at 130Hz, where it only reduces noise by 12dB, and that its effectiveness increases to 38dB at 5kHz. So perhaps a concrete wall to stop low frequencies and Acoustiblok to stop high frequencies would be the best solution.
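To get a feel for what the difference between 12dB and 38dB of attenuation actually means, the standard decibel relation can be used to convert the quoted figures into the fraction of sound intensity that gets through the barrier. This is a generic calculation based on the usual definition of the decibel, not anything from the Acoustiblok documentation:

```python
# Convert a quoted attenuation figure (in dB) into the fraction of
# sound intensity transmitted through the barrier.
# Standard decibel relation: I_out / I_in = 10 ** (-attenuation_dB / 10)

def intensity_fraction(attenuation_db):
    """Fraction of sound intensity transmitted for a given dB reduction."""
    return 10 ** (-attenuation_db / 10)

# The two figures from the spec page: 12dB at 130Hz, 38dB at 5kHz.
for freq, db in [("130Hz", 12), ("5kHz", 38)]:
    pct = intensity_fraction(db) * 100
    print(f"{freq}: {db}dB reduction -> {pct:.3f}% of intensity transmitted")
```

So the 12dB figure at 130Hz lets through roughly 6% of the sound energy, while the 38dB figure at 5kHz lets through less than 0.02% – which is why a barrier can feel very effective against high frequencies while bass still gets through.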

The Australian distributor for Acoustiblok is based in Brisbane [3].

The same company also sells Thermablok [4], which is the first aerogel-based insulation that I’ve seen being advertised for commercial sale. I guess that it must be rather expensive as they are mostly advertising it for use as thin strips to cover stud faces (steel studs conduct heat well and can cause a lot of heat loss). A note in their FAQ says that it’s available in rolls for insulating entire walls or floors. The FAQ also indicates that they sell samples suitable for science classes. They are also apparently looking for retailers; it would be nice if someone wanted to sell this in Australia.