Hostile Web Sites

I was asked whether it would be safe to open a link in a spam message with wget. So here are some thoughts about wget security and web browser security in general.

Wget Overview

Some spam messages are designed to attack the recipient’s computer. They can exploit bugs in the MUA, in applications that may be launched to process attachments (EG MS Office), or in a web browser. Wget is a very simple command-line program to download web pages; it doesn’t attempt to interpret or display them.

As with any network-facing software there is a possibility of exploitable bugs in wget. It is theoretically possible for an attacker to run a web server that detects the client and has attacks for multiple HTTP clients, including wget.

In practice wget is a very simple program and simplicity makes security easier. A large portion of security flaws in web browsers are related to plugins such as Flash, to rendering pages for display on a GUI system, and to JavaScript, all features that wget lacks.

The Profit Motive

An attacker that aims to compromise online banking accounts probably isn’t going to bother developing or buying an exploit against wget. The number of potential victims is extremely low, and the potential revenue from improving attacks against common web browsers is going to be a lot larger than the revenue from attacking the small number of people who use wget. In fact the potential revenue increase of targeting the most common Linux web browsers (Iceweasel and Chromium) might still be lower than that of targeting Mac users.

However if the attacker doesn’t have a profit motive then this may not apply. There are people and organisations who have deliberately attacked sysadmins to gain access to servers (here is an article by Bruce Schneier about the attack on Hacking Team [1]). It is plausible that someone who is targeting a sysadmin could discover that they use wget and then launch a targeted attack against them. But such an attack won’t look like regular spam. For more information about targeted attacks Brian Krebs’ article about CEO scams is worth reading [2].

Privilege Separation

If you run wget in a regular Xterm in the same session you use for reading email etc, then an exploitable bug in wget could be used to access all of your secret data. But it is very easy to run wget from another account. You can run “ssh otheraccount@localhost” and then run the wget command there so that it can’t attack you. Don’t run “su - otheraccount” as it is possible for a compromised program to escape from that.
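As a minimal sketch of that separation (the account name “download” and the URL are just examples), the process could look like this:

# create a dedicated low-privilege account for fetching untrusted URLs
sudo adduser download
# log in to a clean session as that user, ssh gives better isolation than "su -"
ssh download@localhost
# fetch the suspicious URL from within the isolated account
wget 'http://example.com/suspicious-link'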

I think that most Linux distributions have supported a “switch user” functionality in the X login system for a number of years. So you should be able to lock your session and then change to a session for another user to run potentially dangerous programs.

It is also possible to use a separate PC for online banking and other high value operations. A 10yo PC is more than adequate for such tasks so you could just use an old PC that has been replaced for regular use for online banking etc. You could boot it from a CD or DVD if you are particularly paranoid about attack.

Browser Features

Google Chrome has a feature to not run plugins unless specifically permitted. This requires a couple of extra mouse actions when watching a TV program on the Internet but prevents random web sites from using Flash and Java, which are two of the most common vectors of attack. Chrome also has a feature to check a web site against a Google blacklist before connecting. When I was running a medium size mail server I often had to determine whether URLs being sent out by customers were legitimate or spam; if a user sent out a URL that’s on Google’s blacklist I would lock their account without doing any further checks.

Conclusion

I think that even among Linux users (who tend to be more careful about security than users of other OSs) using a separate PC and booting from a CD/DVD will generally be regarded as too much effort. Running a full featured web browser like Google Chrome and updating it whenever a new version is released will avoid most problems.

Using wget when you have reason to be concerned is a possibility, but it is slightly inconvenient and it also often won’t download the content that you want (EG in the case of HTML frames).

Monitoring of Monitoring

I was recently asked to get data from a computer that controlled security cameras after a crime had been committed. Due to the potential issues I refused to collect the computer and insisted on performing the work at the office of the company in question. Hard drives are vulnerable to damage from vibration and there is always a risk involved in moving hard drives or systems containing them. A hard drive with evidence of a crime provides additional potential complications. So I wanted to stay within view of the man who commissioned the work just so there could be no misunderstanding.

The system had a single IDE disk. The fact that it had an IDE disk is an indication of the age of the system. One of the benefits of SATA over IDE is that swapping disks is much easier, SATA is designed for hot-swap and even systems that don’t support hot-swap will have less risk of mechanical damage when changing disks if SATA is used instead of IDE. For an appliance type system where a disk might be expected to be changed by someone who’s not a sysadmin SATA provides more benefits over IDE than for some other use cases.

I connected the IDE disk to a USB-IDE device so I could read it from my laptop. But the disk just made repeated buzzing sounds while failing to spin up. This is an indication that the drive was probably experiencing “stiction” which is where the heads stick to the platters and the drive motor isn’t strong enough to pull them off. In some cases hitting a drive will get it working again, but I’m certainly not going to hit a drive that might be subject to legal action! I recommended referring the drive to a data recovery company.

The probability of getting useful data from the disk in question seems very low. It could be that the drive had stiction for months or years. If the drive is recovered it might turn out to have data from years ago and not the recent data that is desired. It is possible that the drive only got stiction after being turned off, but I’ll probably never know.

Doing it Properly

Ever since RAID was introduced there has been no excuse for having important data on a single disk on its own. Linux Software RAID didn’t support online rebuild back when 10G was a large disk, but since the late 90s it has worked well and there’s no reason not to use it. The probability of a single IDE disk surviving long enough on its own to capture useful security data is not particularly good.
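As a rough sketch of how little work that takes (the device names and mount point below are examples only), a RAID-1 array can be created with mdadm like this:

# create a RAID-1 array from two disks (device names are examples)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
# make a filesystem on the array and mount it for the camera recordings
mkfs.ext4 /dev/md0
mount /dev/md0 /var/spool/cctv
# the state of the array, including any failed disk, shows up here
cat /proc/mdstat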

Even with 2 disks in a RAID-1 configuration there is a chance of data loss. Many years ago I ran a server at my parents’ house with 2 disks in a RAID-1 and both disks had errors on one hot summer. I wrote a program that’s like ddrescue but reads from the second disk if the first gives a read error, and AFAIK I ended up not losing any important data. BTRFS has some potential benefits for recovering from such situations but I don’t recommend deploying BTRFS in embedded systems any time soon.

Monitoring is a requirement for reliable operation. For desktop systems you can get by without specific monitoring, but that is because you are effectively relying on the user to monitor it themselves. Since I started using mon (which is very easy to set up) it has notified me of some problems with my laptop that I wouldn’t have otherwise noticed. I think that ideally desktop systems should have monitoring of disk space, temperature, and certain critical daemons that need to be running but whose absence the user wouldn’t immediately notice (such as cron and syslogd).
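I won’t reproduce a mon configuration here, but as a sketch of the kind of checks that are worth automating (the threshold and daemon names are just examples), a script like the following could be run from cron:

#!/bin/sh
# example checks for disk space and critical daemons, adjust names and thresholds to suit
# warn if the root filesystem is more than 90% full
usage=$(df --output=pcent / | tail -1 | tr -dc '0-9')
[ "$usage" -lt 90 ] || echo "root filesystem is ${usage}% full"
# warn if cron or the syslog daemon isn't running
for daemon in cron rsyslogd; do
  pgrep -x "$daemon" > /dev/null || echo "$daemon is not running"
done

When run from cron any output is usually mailed to the administrator, which is a crude approximation of what mon does properly.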

There are some companies that provide 3G SIMs for embedded/IoT applications with rates that are significantly cheaper than any of the usual phone/tablet plans if you use small amounts of data or SMS. For a reliable CCTV system the best thing to do would be to have a monitoring contract and have the monitoring system trigger an event if there’s a problem with the hard drive etc and also if the system fails to send an “I’m OK” message for a certain period of time.

I don’t know if people are selling CCTV systems without monitoring to compete on price or if companies are cancelling monitoring contracts to save money. But whichever is happening it’s significantly reducing the value derived from monitoring.

Basics of Backups

I’ve recently had some discussions about backups with people who aren’t computer experts, so I decided to blog about this for the benefit of everyone. Note that this post will deliberately avoid issues that require great knowledge of computers. I have written other posts that will benefit experts.

Essential Requirements

Everything that matters must be stored in at least 3 places. Every storage device will die eventually. Every backup will die eventually. If you have 2 backups then you are covered for the primary storage failing and the first backup failing. Note that I’m not saying “only have 2 backups” (I have many more) but 2 is the bare minimum.

Backups must be in multiple places. One way of losing data is if your house burns down; if that happens all backup devices stored there will be destroyed. You must have backups off-site. A good option is to have backup devices stored by trusted people (friends and relatives are often good options).

It must not be possible for one event to wipe out all backups. Some people use “cloud” backups; there are many ways of doing this with Dropbox, Google Drive, etc. Some of these even have free options for small amounts of storage, for example Google Drive appears to have 15G of free storage which is more than enough for all your best photos and all your financial records. The downside to cloud backups is that a computer criminal who gets access to your PC can wipe the PC and the backups. Cloud backup can be a part of a sensible backup strategy but it can’t be relied on alone (also see the paragraph about having at least 2 backups).

Backup Devices

USB flash “sticks” are cheap and easy to use. The quality of some of those devices isn’t too good, but the low price and small size means that you can buy more of them. It would be quite easy to buy 10 USB sticks for multiple copies of data.

Stores that sell office-supplies sell USB attached hard drives which are quite affordable now. It’s easy to buy a couple of those for backup use.

The cheapest option for backing up moderate amounts of data is to get a USB-SATA device. This connects to the PC by USB and has a cradle to accept a SATA hard drive. That allows you to buy cheap SATA disks for backups and even use older disks as backups.

When choosing backup devices consider the environment that they will be stored in. If you want to store a backup in the glove box of your car (which could be good when travelling) then a SD card or USB flash device would be a good choice because they are resistant to physical damage. Note that if you have no other options for off-site storage then the glove box of your car will probably survive if your house burns down.

Multiple Backups

It’s not uncommon for data corruption or mistakes to be discovered some time after they happen. Also in recent times there is a variety of malware that encrypts files and then demands a ransom payment for the decryption key.

To address these problems you should have older backups stored. It’s not uncommon in a corporate environment to have backups every day stored for a week, backups every week stored for a month, and monthly backups stored for some years.

For a home use scenario it’s more common to make backups every week or so and take backups to store off-site when it’s convenient.

Offsite Backups

One common form of off-site backup is to store backup devices at work. If you work in an office then you will probably have some space in a desk drawer for personal items. If you don’t work in an office but have a locker at work then that’s good for storage too, if there is high humidity then SD cards will survive better than hard drives. Make sure that you encrypt all data you store in such places or make sure that it’s not the secret data!

Banks have a variety of ways of storing items. Bank safe deposit boxes can be used for anything that fits, including hard drives. If you have a mortgage your bank might give you free storage of “papers” as part of the service (Commonwealth Bank of Australia used to offer that). A few USB sticks or SD cards in an envelope could fit the “papers” criteria. An accounting firm may also store documents for free for you.

If you put a backup on USB or SD storage in your wallet then that can also be a good offsite backup. For most people losing data from disk is more common than losing their wallet.

A modern mobile phone can also be used for backing up data while travelling. For a few years I’ve been doing that. But note that you have to encrypt all data stored on a phone so that an attacker who compromises your phone can’t steal the backup. In a typical phone configuration the mass storage area is much less protected than application data. Also note that customs and border control agents for some countries can compel you to provide the keys for encrypted data.

A friend suggested burying a backup device in a sealed plastic container filled with desiccant. That would survive your house burning down and in theory should work. I don’t know of anyone who’s tried it.

Testing

On occasion you should try to read the data from your backups and compare it to the original data. It sometimes happens that backups are discovered to be useless after years of operation.
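As a minimal sketch of such a check (the directory names are examples only, assuming the backup is mounted at /media/backup):

# list any files that differ between the original data and the backup
diff -r ~/photos /media/backup/photos
# or use rsync to compare checksums without copying anything
rsync -ainc ~/photos/ /media/backup/photos/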

Secret Data

Before starting a backup it’s worth considering which of the data is secret and which isn’t. Data that is secret needs to be treated differently and a mixture of secret and less secret data needs to be treated as if it’s all secret.

One category of secret data is financial data. If your accountant provides document storage then they can store that, generally your accountant will have all of your secret financial data anyway.

Passwords need to be kept secret but they are also very small. So making a written or printed copy of the passwords is part of a good backup strategy. There are options for backing up paper that don’t apply to data.

One category of data that is not secret is photos. Photos of holidays, friends, etc are generally not that secret and they can also comprise a large portion of the data volume that needs to be backed up. Apparently some people have a backup strategy for such photos that involves downloading from Facebook to restore; that will help with some problems but it’s not adequate overall. But any data that is on Facebook isn’t that secret and can be stored off-site without encryption.

Backup Corruption

With the amounts of data that are used nowadays the probability of data corruption is increasing. If you use any compression program for the data that is backed up (even for data that can’t be compressed much, such as JPEGs) then errors will be detected when you extract the data. So if you have backup ZIP files on 2 hard drives and one of them gets corrupted you will easily be able to determine which one has the correct data.
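For example (the file names are hypothetical), the integrity of a ZIP backup can be checked without extracting it:

# test every file in the archive against its stored checksum
unzip -t /media/backup1/photos-2016.zip
# run the same test on the copy on the other drive to see which one is good
unzip -t /media/backup2/photos-2016.zip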

Failing Systems – update 2016-08-22

When a system starts to fail it may limp along for years and work reasonably well, or it may totally fail soon. At the first sign of trouble you should immediately make a full backup to separate media. Use different media from your regular backups in case the data is corrupt, so you don’t overwrite good backups with bad ones.

One traditional sign of problems has been hard drives that make unusual sounds. Modern drives are fairly quiet so this might not be loud enough to notice. Another sign is hard drives that take unusually large amounts of time to read data. If a drive has some problems it might read a sector hundreds or even thousands of times until it gets the data which dramatically reduces system performance. There are lots of other performance problems that can occur (system overheating, software misconfiguration, and others), most of which are correlated with potential data loss.

A modern SSD storage device (as used in a lot of recent laptops) doesn’t tend to go slow when it nears the end of its life. It is more likely to just randomly fail entirely and then work again after a reboot. There are many causes of systems randomly hanging or crashing (of which overheating is common), but they are all correlated with data loss so a good backup is a good idea.

When in doubt make a backup.

Any Suggestions?

If you have any other ideas for backups by typical home users then please leave a comment. Don’t comment on expert issues though, I have other posts for that.

SSD and M.2

The Need for Speed

One of my clients has an important server running ZFS. They need a filesystem that detects corruption; while regular RAID is good for the case where a disk gives read errors, it doesn’t cover the case where a disk returns bad data and claims it to be good (which I’ve witnessed in BTRFS and ZFS systems). BTRFS is good for the case of a single disk or a RAID-1 array but I believe that the RAID-5 code for BTRFS is not sufficiently tested for business use. ZFS doesn’t perform very well due to the checksums on data and metadata requiring multiple writes for a single change, which also causes more fragmentation. This isn’t a criticism of ZFS, it’s just an engineering trade-off for the data integrity features.

ZFS supports read-caching on a SSD (the L2ARC) and write-back caching (ZIL). To get the best benefit of L2ARC and ZIL you need fast SSD storage. So now with my client investigating 10 gigabit Ethernet I have to investigate SSD.
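Adding those devices to a pool is a small operation. The following is only a sketch, assuming a pool named “tank” and spare NVMe devices (the pool and device names are examples):

# add a mirrored pair as the ZIL, mirrored because losing the log can lose recent writes
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
# add a read cache (L2ARC), no redundancy needed as it only holds copies of data
zpool add tank cache /dev/nvme2n1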

For some time SSDs have been in the same price range as hard drives, starting at prices well below $100. Now there are some SSDs on sale for as little as $50. One issue with SATA for server use is that SATA 3.0 (which was released in 2009 and is most commonly used nowadays) is limited to 600MB/s. That isn’t nearly adequate if you want to serve files over 10 gigabit Ethernet. SATA 3.2 was released in 2013 and supports 1969MB/s but I doubt that there’s much hardware supporting that. See the SATA Wikipedia page for more information.

Another problem with SATA is getting the devices physically installed. My client has a new Dell server that has plenty of spare PCIe slots but no spare SATA connectors or SATA power connectors. I could have removed the DVD drive (as I did for some tests before deploying the server) but that’s ugly and only gives 1 device while you need 2 devices in a RAID-1 configuration for ZIL.

M.2

M.2 is a new standard for expansion cards; it supports SATA and PCIe interfaces (and USB, but that isn’t useful at this time). The Wikipedia page for M.2 is interesting to read for background knowledge but isn’t helpful if you are about to buy hardware.

The first M.2 card I bought had a SATA interface, but I was then unable to find a local company that could sell me a SATA M.2 host adapter. So I bought an M.2 to SATA adapter which made it work like a regular 2.5″ SATA device. That’s working well in one of my home PCs but isn’t what I wanted. Apparently systems that have an M.2 socket on the motherboard will usually take either SATA or NVMe devices.

The most important thing I learned is to buy the SSD storage device and the host adapter from the same place, so that you are entitled to a refund if they don’t work together.

The alternative to the SATA (AHCI) interface on an M.2 device is known as NVMe (Non-Volatile Memory Express), see the Wikipedia page for NVMe for details. NVMe not only gives a higher throughput but it gives more command queues and more commands per queue which should give significant performance benefits for a device with multiple banks of NVRAM. This is what you want for server use.

Eventually I got an M.2 NVMe device and a PCIe card for it. A quick test showed sustained transfer speeds of around 1500MB/s which should permit saturating a 10 gigabit Ethernet link in some situations.

One annoyance is that the M.2 devices have a different naming convention to regular hard drives. I have devices /dev/nvme0n1 and /dev/nvme1n1, apparently that is to support multiple storage devices on one NVMe interface. Partitions have device names like /dev/nvme0n1p1 and /dev/nvme0n1p2.
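A quick and crude way of measuring sustained read speed (this is just a sketch, the device name is an example and a proper benchmark would use a tool like fio):

# read 8G sequentially from the NVMe device, bypassing the page cache
dd if=/dev/nvme0n1 of=/dev/null bs=1M count=8192 iflag=direct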

Power Use

I recently upgraded my Thinkpad T420 from a 320G hard drive to a 500G SSD which made it faster but also surprisingly quieter – you never realise how noisy hard drives are until they go away. My laptop seemed to feel cooler, but that might be my imagination.

The i5-2520M CPU in my Thinkpad has a TDP of 35W but uses a lot less than that as I almost never have 4 cores in use. The z7k320 320G hard drive is listed as having 0.8W “low power idle” and 1.8W for read-write, maybe Linux wasn’t putting it in the “low power idle” mode. The Samsung 500G 850 EVO SSD is listed as taking 0.4W when idle and up to 3.5W when active (which would not be sustained for long on a laptop). If my CPU is taking an average of 10W then replacing the hard drive with a SSD might have reduced the power use of the non-screen part by 10%, but I doubt that I could notice such a small difference.

I’ve read some articles about power use on the net which can be summarised as “SSDs can draw more power than laptop hard drives but if you do the same amount of work then the SSD will be idle most of the time and not use much power”.

I wonder if the SSD being slightly thicker than the HDD it replaced has affected the airflow inside my Thinkpad.

From reading some of the reviews it seems that there are M.2 storage devices drawing over 7W! That’s going to create some cooling issues on desktop PCs but should be OK in a server. For laptop use they will hopefully release M.2 devices designed for low power consumption.

The Future

M.2 is an ideal format for laptops due to being much smaller and lighter than 2.5″ SSDs. Spinning media doesn’t belong in a modern laptop and using a SATA SSD is an ugly hack when compared to M.2 support on the motherboard.

Intel has released the X99 chipset with M.2 support (see the Wikipedia page for Intel X99) so it should be commonly available on desktops in the near future. For most desktop systems an M.2 device would provide all the storage that is needed (or 2*M.2 in a RAID-1 configuration for a workstation). That would give all the benefits of reduced noise and increased performance that regular SSDs provide, but with better performance and fewer cables inside the PC.

For a corporate desktop PC I think the ideal design would have only M.2 internal storage and no support for 3.5″ disks or a DVD drive. That would allow a design that is much smaller than a current SFF PC.

802.1x Authentication on Debian

I recently had to setup some Linux workstations with 802.1x authentication (described as “Ethernet authentication”) to connect to a smart switch. The most useful web site I found was the Ubuntu help site about 802.1x Authentication [1]. But it didn’t describe exactly what I needed so I’m writing a more concise explanation.

The first thing to note is that the authentication mechanism works the same way as 802.11 wireless authentication, so it’s a good idea to have the wpasupplicant package installed on all laptops just in case you need to connect to such a network.

The first step is to create a wpa_supplicant config file, I named mine /etc/wpa_supplicant_SITE.conf. The file needs contents like the following:

network={
 key_mgmt=IEEE8021X
 eap=PEAP
 identity="USERNAME"
 anonymous_identity="USERNAME"
 password="PASS"
 phase1="auth=MD5"
 phase2="auth=CHAP password=PASS"
 eapol_flags=0
}

The first difference between what I use and the Ubuntu example is that I’m using “eap=PEAP”; that is an issue of the way the network is configured, and whoever runs your switch can tell you the correct settings for that. The next difference is that I’m using “auth=CHAP” while the Ubuntu example has “auth=PAP”. The difference between those protocols is that CHAP has a challenge-response while PAP just sends the password (maybe encrypted) over the network. If whoever runs the network says that they “don’t store unhashed passwords” or makes any similar claim then they are almost certainly using CHAP.

Change USERNAME and PASS to your user name and password.

wpa_supplicant -c /etc/wpa_supplicant_SITE.conf -D wired -i eth0

The above command can be used to test the operation of wpa_supplicant.

Successfully initialized wpa_supplicant
eth0: Associated with 00:01:02:03:04:05
eth0: CTRL-EVENT-EAP-STARTED EAP authentication started
eth0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=25
TLS: Unsupported Phase2 EAP method 'CHAP'
eth0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected
eth0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject=''
eth0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject=''
EAP-MSCHAPV2: Authentication succeeded
EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed
eth0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
eth0: CTRL-EVENT-CONNECTED - Connection to 00:01:02:03:04:05 completed [id=0 id_str=]

Above is the output of a successful test with wpa_supplicant. I replaced the MAC of the switch with 00:01:02:03:04:05. Strangely it doesn’t like “CHAP” but automatically selects “MSCHAPV2” and works; maybe anything other than “PAP” would do.

auto eth0
iface eth0 inet dhcp
  wpa-driver wired
  wpa-conf /etc/wpa_supplicant_SITE.conf

Above is a snippet of /etc/network/interfaces that works with this configuration.

Nexus 6P and Galaxy S5 Mini

Just over a month ago I ordered a new Nexus 6P [1]. I’ve had it for over a month now and it’s time to review it and the Samsung Galaxy S5 Mini I also bought.

Security

The first noteworthy thing about this phone is the fingerprint scanner on the back. The recommended configuration is to use your fingerprint for unlocking the phone which allows a single touch on the scanner to unlock the screen without the need to press any other buttons. To unlock with a pattern or password you need to first press the “power” button to get the phone’s attention.

I have been considering registering a fingerprint from my non-dominant hand to reduce the incidence of accidentally unlocking it when carrying it or fiddling with it.

The phone won’t complete the boot process before being unlocked. This is a good security feature.

Android version 6 doesn’t assign permissions to apps at install time, they have to be enabled at run time (at least for apps that support Android 6). So you get lots of questions while running apps about what they are permitted to do. Unfortunately there’s no “allow for the duration of this session” option.

A new Android feature prevents changing security settings when there is an “overlay running”. The phone instructs you to disable overlay access for the app in question but that’s not necessary; all that is necessary is for the app to stop using the overlay feature. I use the Twilight app [2] to dim the screen and use redder colors at night. When I want to change settings at night I just have to pause that app and there’s no need to remove the access from it – note that all the web pages and online documentation saying otherwise are wrong.

Another new feature is to not require unlocking while at home. This can be a convenience feature but fingerprint unlocking is so easy that it doesn’t provide much benefit. The downside of enabling this is that if someone stole your phone they could visit your home to get it unlocked. Also police who didn’t have a warrant permitting search of a phone could search it anyway without needing to compel the owner to give up the password.

Design

This is one of the 2 most attractive phones I’ve owned (the other being the sparkly Nexus 4). I think that the general impression of the appearance is positive as there are transparent cases on sale. My phone is white and reminds me of EVE from the movie Wall-E.

Cables

This phone uses the USB Type-C connector, which isn’t news to anyone. What I didn’t realise is that full USB-C requires that connector at both ends, as it’s not permitted to have a data cable with USB-C at the device end and USB-A at the host end. The Nexus 6P ships with a 1M long charging cable that has USB-C at both ends and a ~10cm charging cable with USB-C at one end and type A at the other (for the old batteries and the PCs that don’t have USB-C). I bought some 2M long USB-C to USB-A cables for charging my new phone with my old chargers, but I haven’t yet got a 1M long cable. Sometimes I need a cable that’s longer than 10cm but shorter than 2M.

The USB-C cables are all significantly thicker than older USB cables. Part of that would be due to having many more wires but presumably part of it would be due to having thicker power wires for delivering 3A. I haven’t measured power draw but it does seem to charge faster than older phones.

Overall the process of converting to USB-C is going to be a lot more inconvenient than USB SuperSpeed (which I could basically ignore as non-SuperSpeed connectors worked).

It will be good when laptops with USB-C support become common; it should allow thinner laptops with more ports.

One problem I initially had with my Samsung Galaxy Note 3 was that the Micro-USB SuperSpeed socket on the phone was fiddly to use with the regular Micro-USB charging plug I had. After a while I got used to that but it was still an annoyance. Having a symmetrical plug that can go into the phone either way is a significant convenience.

Calendars and Contacts

I share most phone contacts with my wife and also have another list that is separate. In the past I had used the Samsung contacts system for the contacts that were specific to my phone and a Google account for contacts that are shared between our phones. Now that I’m using a non-Samsung phone I got another Gmail account for the purpose of storing contacts. Fortunately you can get as many Gmail accounts as you want. But it would be nice if Google supported multiple contact lists and multiple calendars on a single account.

Samsung Galaxy S5 Mini

Shortly after buying the Nexus 6P I decided that I spend enough time in pools and hot tubs that having a waterproof phone would be a good idea. Probably most people wouldn’t consider reading email in a hot tub on a cruise ship to be an ideal holiday, but it works for me. The Galaxy S5 Mini seems to be the cheapest new phone that’s waterproof. It is small and has a relatively low resolution screen, but it’s more than adequate for a device that I’ll use for an average of a few hours a week. I don’t plan to get a SIM for it, I’ll just use Wifi from my main phone.

One noteworthy thing is the amount of bloatware on the Samsung. Usually when configuring a new phone I’m so excited about fancy new hardware that I don’t notice it much. But this time buying the new phone wasn’t particularly exciting as I had just bought a phone that’s much better, so I had more time to notice all the annoyances of having to download updates to Samsung apps that I’ll never use. The Samsung device manager facility has been useful for me in the past and the Samsung contact list was useful for keeping a second address book until I got a Nexus phone. But most of the Samsung apps and 3rd party apps aren’t useful at all.

It’s bad enough having to install all the Google core apps. I’ve never read mail from my Gmail account on my phone. I use Fetchmail to transfer it to an IMAP folder on my personal mail server and I’d rather not have the Gmail app on my Android devices. Having any apps other than the bare minimum seems like a bad idea; more apps in the Android image means larger downloads for an over-the-air update and also more space used in the main partition for updates to apps that you don’t use.

Not So Exciting

In recent times there hasn’t been much potential for new features in phones. All phones have enough RAM and screen space for all common apps. While the S5 Mini has a small screen it’s not that small, I spent many years with desktop PCs that had a similar resolution. So while the S5 Mini was released a couple of years ago that doesn’t matter much for most common use. I wouldn’t want it for my main phone but for a secondary phone it’s quite good.

The Nexus 6P is a very nice phone, but apart from USB-C, the fingerprint reader, and the lack of a stylus there’s not much noticeable difference between that and the Samsung Galaxy Note 3 I was using before.

I’m generally happy with my Nexus 6P, but I think that anyone who chooses to buy a cheaper phone probably isn’t going to be missing a lot.

Coalitions

In Australia we are about to have a federal election, so we inevitably have a lot of stupid commentary and propaganda about politics.

One thing that always annoys me is the claim that we shouldn’t have small parties. We have two large parties, the Liberal party (right-wing, somewhere between the Democrats and Republicans in the US) and the Labor party (somewhat similar to the Democrats in the US). In the US the first past the post voting system means that votes for smaller parties usually don’t affect the outcome. In Australia we have Instant Runoff Voting (sometimes known as “The Australian Ballot”) which has the side effect of encouraging votes for small parties.

The Liberal party almost never wins enough seats to form government on its own, so it forms a coalition with the National party. Election campaigns are often based on the term “The Coalition” being used to describe a Liberal-National coalition, and the expected result if “The Coalition” wins the election is that the leader of the Liberal party will be Prime Minister and the leader of the National party will be the Deputy Prime Minister. Liberal party representatives and supporters often try to convince people that they shouldn’t vote for small parties and that small parties are somehow “undemocratic”, seemingly unaware of the irony of advocating for “The Coalition” while opposing the idea of a coalition.

If the Liberal and Labor parties wanted to form a coalition they could do so in any election where no party has a clear majority, and do it without even needing the National party. Some people claim that it’s best to have the major parties take turns in having full control of the government without having to make a deal with smaller parties and independent candidates but that’s obviously a bogus claim. The reason we have Labor allying with the Greens and independents is that the Liberal party opposes them at every turn and the Liberal party has a lot of unpalatable policies that make alliances difficult.

One thing that would be a good development in Australian politics is to have the National party actually represent rural voters rather than big corporations. Liberal policies on mining are always opposed to the best interests of farmers and the Liberal policies on trade aren’t much better. If “The Coalition” wins the election then the National party could insist on a better deal for farmers in exchange for their continued support of Liberal policies.

If Labor wins more seats than “The Coalition” but not enough to win government directly then a National-Labor coalition is something that could work. I think that the traditional interest of Labor in representing workers and the National party in representing farmers have significant overlap. The people who whinge about a possible Green-Labor alliance should explain why they aren’t advocating a National-Labor alliance. I think that the Labor party would rather make a deal with the National party, it’s just a question of whether the National party is going to do what it takes to help farmers. They could make the position of Deputy Prime Minister part of the deal so the leader of the National party won’t miss out.

Sysadmin Skills and University Degrees

I think that a major deficiency in Computer Science degrees is the lack of sysadmin training.

Version Control

The first thing that needs to be added is the basics of version control. CVS (which is now regarded as obsolete) was initially released when I was in the first year of university, but SCCS and RCS had been in use for some time. I think that the people who designed my course were remiss in not adding any mention of version control (not even strategies for saving old versions of your work); one could say that they taught us about version control by letting us accidentally delete our assignments. :-#

If a course is aimed at just teaching programmers (as most CS degrees are) then version control for group assignments should be a standard part of the course. Having some marks allocated for the quality of comments in the commit log would also be good.

A modern CS degree should cover distributed version control, which means covering Git as it’s the most popular distributed version control system nowadays.
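For example, the basic workflow for a group assignment (the repository URL and file name below are hypothetical) would be something like this:

# get a copy of the group's repository
git clone git@example.com:group/assignment.git
cd assignment
# record a change with a commit message that explains why, not just what
git add report.tex
git commit -m "Fix off-by-one error in the merge sort analysis"
# share the change and pick up what the rest of the group has done
git push
git pull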

For people who want to work as sysadmins (as opposed to developers who run their own PCs) a course should have an optional subject for version control of an entire system. That includes tools like etckeeper for version control of system configuration and tools like Puppet for automated configuration and system maintenance.
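As a small illustration of the etckeeper part of such a subject (a sketch assuming a Debian-style system where etckeeper defaults to git):

# put /etc under version control
apt-get install etckeeper
etckeeper init
etckeeper commit "initial import of /etc"
# later, review what a package upgrade or a colleague changed
cd /etc && git log -p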

Dependencies

It’s quite reasonable for a CS degree to provide simplified problems for the students to solve so they can concentrate on one task. But in the real world the problems are more complex. One of the more difficult parts of managing real systems is dependencies. You have issues of header files etc at compile time and library versions at deployment. Often you need a program to run on systems with different versions of the OS which means making it compile for both and deal with differences in behaviour.

There are lots of hacky things that people do to deal with dependencies in systems. People link compiled programs statically, install custom versions of interpreters in user home directories or /usr/local for daemons, and do many other things. These things can have bad consequences including data loss, system downtime, and security problems. It’s not always wrong to do such things, but it’s something that should only be done with knowledge of the potential consequences and a plan for mitigating them. A CS degree should teach the potential advantages and disadvantages of these options to allow graduates to make informed decisions.

Backups

I’ve met many people who call themselves computer professionals and think that backups aren’t needed. I’ve seen production systems that were designed in a way that backups were impossible. The lack of backups is a serious problem for the entire industry.

Some lectures about backups could be part of a version control subject in a general CS degree. For a degree that majors in Sysadmin at least one subject about backups is appropriate.

For any backup (even backing up your home PC) you should have offsite backups to deal with fire damage, multiple backups of different ages (especially important now that encryption malware is a serious threat), and a plan for how fast you can restore things.

The most common use of backups is to deal with the case of deleting the wrong file. Unfortunately this case seems to be the most rarely mentioned.

Another common situation that should be covered is a configuration error that results in a system that won’t boot correctly. It’s a very common problem and one that can be solved quickly if you are prepared but which can take a long time if you aren’t.

For a Sysadmin course it is important to cover backups of systems in remote datacenters.

Hardware

A good CS degree should cover the process of selecting suitable hardware. Programmers often get to advise on the hardware used to run their code, especially at smaller companies. Reliability features such as RAID, ECC RAM, and clustering should be covered.

Planning for upgrades is a very important part of this which is usually not taught. Not only do you need to plan for an upgrade without much downtime or cost but you also need to plan for what upgrades are possible. Will your system require more capacity next year than the most powerful hardware you can buy will provide? If so you need to plan for a cluster now.

For a Sysadmin course some training about selecting cloud providers and remote datacenter hosting should be provided. There are many complex issues that determine whether it’s most appropriate to use a cloud service, hosted virtual machines, hosted physical servers managed by the ISP, hosted physical servers purchased by the client, or on-site servers. Often a large system will involve 2 or more of those options, even some small companies use 3 or more of those options to try and provide the performance and reliability they need at a price they can afford.

We Need Sysadmin Degrees

Covering the basic coding skills takes a lot of time. I don’t think we can reasonably expect a CS degree to cover all that and also give good coverage to sysadmin work. While some basic sysadmin skills are needed by every programmer I think we need to have separate majors for people who want a career in system administration.

Sysadmins need some programming skills, but that’s mostly scripting and basic debugging. Someone whose main job is as a sysadmin can probably expect to never make any significant change to a program that’s more than 10,000 lines long. A large amount of the programming in a CS degree can be replaced by “file a bug report” for a sysadmin degree.

This doesn’t mean that sysadmins shouldn’t be doing software development or that they aren’t good at it. One noteworthy fact is that it appears that the most common job among developers of the Debian distribution of Linux is System Administration. Developing an OS involves some of the most intensive and demanding programming. But I think that more than a few people who do such work would have skipped a couple of programming subjects in favour of sysadmin subjects if they were given a choice.

Suggestions

Did I miss anything? What other sysadmin skills should be taught in a CS degree?

Do any universities teach these things now? If so please name them in the comments; it is good to help people find universities that teach what they want to learn and help them in their careers.

I Just Ordered a Nexus 6P

Last year I wrote a long-term review of Android phones [1]. I noted that my Galaxy Note 3 only needed to last another 4 months to be the longest I’ve been happily using a phone.

Last month (just over 7 months after writing that) I fell on my Note 3 and cracked the screen. The Amourdillo case is good for protecting the phone [2] so it would have been fine if I had just dropped it. But I fell with the phone in my hand, the phone landed face down and about half my body weight ended up in the middle of the phone which apparently bent it enough to crack the screen. As a result of this the GPS seems to be less reliable than it used to be so there might be some damage to the antenna too.

I was quoted $149 to repair the screen, I could possibly have found a cheaper quote if I had shopped around but it was a good starting point for comparison. The Note 3 originally cost $550 including postage in 2014. A new Note 4 costs $550 + postage now from Shopping Square and a new Note 3 is on ebay with a buy it now price of $380 with free postage.

It seems like bad value to pay 40% of the price of a new Note 3 or 25% of the price of a Note 4 to fix my old phone (which is a little worn and has some other minor issues). So I decided to spend a bit more to get a better phone and give my old phone to one of my relatives who doesn’t mind having a cracked screen.

I really like the S-Pen stylus on the Samsung Galaxy Note series of phones and tablets. I also like having a hardware home button and separate screen space reserved for the settings and back buttons. The downsides to the Note series are that they are getting really expensive nowadays and the support for new OS updates (and presumably security fixes) is lacking. So when Kogan offered a good price on a Nexus 6P [3] with 64G of storage I ordered one. I’m going to give the Note 3 to my father, he wants a phone with a bigger screen and a stylus and isn’t worried about cracks in the screen.

I previously wrote about Android device service life [4]. My main conclusion in that post was that storage space is a major factor limiting service life. I hope that 64G in the Nexus 6P will solve that problem, giving me 3 years of use and making it useful to my relatives afterwards. Currently I have 32G of storage of which about 8G is used by my music video collection and about 3G is free, so 64G should last me for a long time. Having only 3G of RAM might be a problem, but I’m thinking of trying CyanogenMod again so maybe with root access I can reduce the amount of RAM use.

Xen CPU Use per Domain again

8 years ago I wrote a script to summarise Xen CPU use per domain [1]. Since then changes to Xen required changes to the script. I have new versions for Debian/Wheezy (Xen 4.1) and Debian/Jessie (Xen 4.4).

Here’s a new script for Debian/Wheezy:

#!/usr/bin/perl
# summarise Xen CPU use per domain for Debian/Wheezy (Xen 4.1), parsing "xm list --long"
use strict;

open(LIST, "xm list --long|") or die "Can't get list";

my $name = "Dom0";
my $uptime = 0.0;
my $cpu_time = 0.0;
my $total_percent = 0.0;
my $cur_time = time();

open(UPTIME, "</proc/uptime") or die "Can't open /proc/uptime";
my @arr = split(/ /, <UPTIME>);
$uptime = $arr[0];
close(UPTIME);

my %all_cpu;

while(<LIST>)
{
  chomp;
  if($_ =~ /^\)/)
  {
    my $cpu = $cpu_time / $uptime * 100.0;
    if($name =~ /Domain-0/)
    {
      printf("%s uses %.2f%% of one CPU\n", $name, $cpu);
    }
    else
    {
      $all_cpu{$name} = $cpu;
    }
    $total_percent += $cpu;
    next;
  }
  $_ =~ s/\).*$//;
  if($_ =~ /start_time /)
  {
    $_ =~ s/^.*start_time //;
    $uptime = $cur_time - $_;
    next;
  }
  if($_ =~ /cpu_time /)
  {
    $_ =~ s/^.*cpu_time //;
    $cpu_time = $_;
    next;
  }
  if($_ =~ /\(name /)
  {
    $_ =~ s/^.*name //;
    $name = $_;
    next;
  }
}
close(LIST);

sub hashValueDescendingNum {
  $all_cpu{$b} <=> $all_cpu{$a};
}

my $key;

foreach $key (sort hashValueDescendingNum (keys(%all_cpu)))
{
  printf("%s uses %.2f%% of one CPU\n", $key, $all_cpu{$key});
}

printf("Overall CPU use approximates %.1f%% of one CPU\n", $total_percent);

Here’s the script for Debian/Jessie:

#!/usr/bin/perl
# summarise Xen CPU use per domain for Debian/Jessie (Xen 4.4), parsing "xl uptime" and "xl list"

use strict;

open(UPTIME, "xl uptime|") or die "Can't get uptime";
open(LIST, "xl list|") or die "Can't get list";

my %all_uptimes;

while(<UPTIME>)
{
  chomp $_;

  next if($_ =~ /^Name/);
  $_ =~ s/ +/ /g;

  my @split1 = split(/ /, $_);
  my $dom = $split1[0];
  my $uptime = 0;
  my $time_ind = 2;
  if($split1[3] eq "days,")
  {
    $uptime = $split1[2] * 24 * 3600;
    $time_ind = 4;
  }
  my @split2 = split(/:/, $split1[$time_ind]);
  $uptime += $split2[0] * 3600 + $split2[1] * 60 + $split2[2];
  $all_uptimes{$dom} = $uptime;
}
close(UPTIME);

my $total_percent = 0;

while(<LIST>)
{
  chomp $_;

  my $dom = $_;
  $dom =~ s/ .*$//;

  if ( $_ =~ /(\d+)\.[0-9]$/ )
  {
    my $percent = $1 / $all_uptimes{$dom} * 100.0;
    $total_percent += $percent;
    printf("%s uses %.2f%% of one CPU\n", $dom, $percent);
  }
  else
  {
    next;
  }
}

printf("Overall CPU use approximates  %.1f%% of one CPU\n", $total_percent);