
icmptx – Tunneling IP over ICMP Echo

I’ve just been playing with icmptx, a system for tunneling IP over ICMP Echo, which could be handy if I ever find myself blocked by firewalls. Unfortunately the documentation is lacking. Below is a sample configuration that works for me; all you have to do is put the correct IP address in for SERVERIP in both scripts and it should work. I’m not sure what the ideal value for the MTU is; 65535 is the largest possible. For transmission it usually won’t make any difference, as the occasions when I need such things will usually be download-only sessions and the ACK packets will be quite small. For receiving data, the server has an MTU of 1500 on the Ethernet port so nothing bigger than that will come in. Presumably when downloading data the packets will be transmitted in two ICMP fragments.

One interesting feature of the program is that it doesn’t match requests and replies. I presume this is because any firewall that only allows one reply per echo request will probably ensure that the reply contents match the request contents, so they just assume that a firewall will let all ICMP echo/reply traffic through. The upside of this is that it should give lower round trip times than any tunneling system that polls for return data.

I’ve filed some Debian bug reports about it, bug #609413 is a request for it to set icmp_echo_ignore_all when it’s running and also emulate the regular PING functionality. Bug #609412 is a request for it to assign the IP address to the tun0 interface. Bug #609414 is a request for the server side of it to call daemon(0,0).

I won’t leave this running. Having to run a virtual server with the regular ICMP functionality disabled is too much effort for the small benefit that using ICMP tunneling may offer over DNS tunneling.

My configuration scripts (with the IP address removed) are below.

Configuration

Server

#!/bin/sh
set -e
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
icmptx -s SERVERIP &
sleep 0.5
ifconfig tun0 mtu 65535 10.10.10.1 netmask 255.255.255.0

Client

#!/bin/sh
set -e
killall icmptx || true
icmptx -c SERVERIP &
sleep 0.5
ifconfig tun0 mtu 65535 10.10.10.2 netmask 255.255.255.0
wait


Dynamic DNS

The Problem

My SE Linux Play Machine has been down for a couple of weeks. I’ve changed to a cheaper Internet access plan which also allows me to download a lot more data, but I don’t have a static IP address any more – and my ISP seems to change the IP a lot more often than I’ve experienced in the past (I’m used to a non-static IP address remaining unchanged for months, not hours). So I needed to get Dynamic DNS working. Naturally I wasn’t going to use one of the free or commercial Dynamic DNS solutions; I prefer to do things myself. So my Play Machine had to remain offline until I fixed this.

The Solution

dyn    IN      NS      ns.sws.net.au.
        IN      NS      othello.dycom.com.au.
play    IN      CNAME  play.dyn.coker.com.au.

The first thing I did was to create a separate zone file. I put the above records in my main zone file to make play.coker.com.au a CNAME for play.dyn.coker.com.au and to delegate dyn.coker.com.au as a dynamic zone. I have SE Linux denying BIND the ability to write to the primary zone file for my domain to make it slightly more difficult for an attacker to insert fake DNS records (they could of course change the memory state of BIND to make it serve bogus data). The dynamic zone file is stored where BIND can write it – and therefore a BIND exploit could easily replace it (but such an attack is out of the scope of the Play Machine project so don’t get any ideas).

Another reason for separating the dynamic data is that BIND journals changes to a dynamic zone, so if you want to manually edit it you have to delete the journal, stop BIND, edit the file, and then restart BIND. One of the things that interests me is setting up dynamic DNS for some of my clients; as a constraint is that my clients must be able to edit the zone file themselves, I have to keep the editing process for the main zone file relatively simple.
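As an aside, recent BIND 9 releases offer a less disruptive way to hand-edit a dynamic zone: rndc can suspend dynamic updates and sync the journal into the zone file, then resume afterwards. A sketch, using the zone name from this article:

```sh
rndc freeze dyn.coker.com.au   # sync the journal into the zone file and suspend updates
# edit /var/cache/bind/dyn.coker.com.au by hand here
rndc thaw dyn.coker.com.au     # reload the zone file and resume dynamic updates
```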

dnssec-keygen -a hmac-md5 -b 128 -n host foo-dyn.key

For newer versions of BIND use the following command instead:

tsig-keygen -a hmac-sha512 foo-dyn

I used the dnssec-keygen command to create the key files. It created Kfoo-dyn.key.+X+Y.key and Kfoo-dyn.key.+X+Y.private, where X and Y stand in for numbers that might be secret.
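Note that tsig-keygen behaves differently: rather than creating a pair of files it prints a ready-to-use key clause to standard output, something like the following (the secret here is a placeholder):

```
key "foo-dyn" {
  algorithm hmac-sha512;
  secret "XXXXXXXX";
};
```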

key "foo" { algorithm hmac-md5; secret "XXXXXXXX"; };
zone "dyn.coker.com.au" {
  type master;
  file "/var/cache/bind/dyn.coker.com.au";
  allow-update { key "foo"; };
  allow-transfer { key ns; };
};

I added the above to the BIND configuration to create the dynamic zone and allow it to be updated with this key. The value which I replaced with XXXXXXXX in this example came from Kfoo-dyn.key.+X+Y.key. I haven’t found any use for the .private file in this mode of operation. Please let me know if I missed something.

Finally I used the following shell script to take the IP address from the interface that is specified on the command-line and update the DNS with it. I chose a 120 second TTL because I will sometimes change IP address often and because the system doesn’t get enough hits for anyone to care about DNS caching.

#!/bin/bash
set -e
IP=$(ip addr list $1|sed -n -e "s/\/.*$//" -e "s/^.*inet //p")
nsupdate -y foo:XXXXXXXX << END
update delete play.dyn.coker.com.au A
update add play.dyn.coker.com.au 120 A $IP
send
END

Update

It is supposed to be possible to use the -k option to nsupdate to specify a file containing the key. Joey’s comment gives some information on how to get it working (it sounds like it’s buggy).

rhesa pointed out another way of doing it, so I’ve now got a script like the following in production which solves the security issue of the key appearing on the command line where other users could see it with ps (as long as the script is mode 0700) and avoids using other files.

#!/bin/bash
set -e
IP=$(ip addr list $1|sed -n -e "s/\/.*$//" -e "s/^.*inet //p")
nsupdate << END
key foo XXXXXXXX
update delete play.dyn.coker.com.au A
update add play.dyn.coker.com.au 120 A $IP
send
END

Update

Added a reference to the tsig-keygen command for newer versions of BIND.


Is Pre-Forking any Good?

Many Unix daemons use a technique known as “pre-forking”. To save the time taken to fork a child process they keep a pool of processes waiting for work to come in. When a job arrives one of the existing processes is used and the overhead of the fork() system call is saved. I decided to write a little benchmark to see how much overhead a fork() really has. I wrote the below program (which is released under the GPL 3.0 license) to test this. It gives the performance of a fork() operation followed by a waitpid() operation in fork()s per second, and also the performance of running a trivial program via system(), which uses /bin/sh to execute the given command.

On my Thinkpad T61 with an Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz I could get 2429.85 forks per second when running Linux 2.6.32 in 64bit mode. On a Thinkpad T20 with a 500MHz P3 CPU I could get 341.74 forks per second. In both cases it seems that the number of forks per second is significantly greater than the number of real-world requests. If each request on average took one disk seek then neither system would have fork performance as any sort of bottleneck. Also if each request took more than a couple of milliseconds of CPU time on the T7500, or 10ms of CPU time on the 500MHz P3, then the benefits of pre-forking would be very small. Finally it’s worth noting that the overhead of fork() + waitpid() in a loop will not be the same as the overhead of just fork()ing off processes and calling waitpid() when there’s nothing else to do.

I had a brief look at some of my servers to see how many operations they perform. One busy front-end mail server has about 3,000,000 log entries in mail.log per day, which is about 35 per second. These log entries include calling SpamAssassin and ClamAV, which are fairly heavy operations. The system in question averages one Intel(R) Xeon(R) CPU L5420 @ 2.50GHz core being used 24*7. I can’t do a good benchmark run on that system as it’s always busy, but I think it’s reasonable to assume for the sake of discussion that it’s about the same speed as the T7500 (it may be 5* faster, but that won’t change things much). At 2429 forks per second (or 0.4ms per fork/wait), even if that time were entirely reduced to zero it won’t make any noticeable difference to a system where the average operation takes 1000/35 = 28ms!

Now if a daemon was to use fork() + system() to launch a child process (which is a really slow way of doing it) then the T7500 gets 248.51 fork()+system() operations per second with bash and 305.63 per second with dash. The P3-500 gets 24.48 with bash and 33.06 with dash.

So it seems that if every log entry on my busy mail server involved a fork()+system() operation, and that was replaced with pre-forked daemons, then it might be possible to save almost 10% of the CPU time on the system in question.
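That estimate is easy to sanity-check with a little arithmetic, using the dash figures from above (the bash figures give about 14%):

```shell
# Convert the benchmark rates into per-operation costs and CPU fractions.
awk 'BEGIN {
  printf "%.2f ms per fork()+waitpid()\n", 1000 / 2429.85
  printf "%.2f ms per fork()+system()\n", 1000 / 305.63
  # 35 mail.log operations per second, each paying one fork()+system():
  printf "%.1f%% of one core spent on fork()+system()\n", 35 * 100 / 305.63
}'
```

The first figure matches the 0.4ms per fork/wait quoted above.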

Now it is theoretically possible that the setup of a daemon process can take more CPU time than fork()+system(), e.g. a daemon could have some really complex data structures to initialise. If the structures in question were initialised in the same way for each request then a viable design would be to have the master process initialise all the data which would then be inherited by the children. The only way I can imagine for a daemon child process to take any significant amount of time on modern hardware is for it to generate a session encryption key, and there’s really nothing stopping a single master process from generating several such keys in advance and then passing them to child processes as needed.

In conclusion I think that the meme about pre-forking is based on hardware that was used at a time when a 500MHz 32bit system (like my ancient Thinkpad T20) was unimaginably fast and when operating systems were less efficient than a modern Linux kernel. The only corner case might be daemons which do relatively simple CPU bound operations – such as serving static files from a web server where the data all fits into the system cache, but even then I expect that the benefit is a lot smaller than most people think and the number of pre-forked processes is probably best kept very low.

One final thing to note is that if you compare fork()+exec() with an operation to instruct a running daemon (via Unix domain sockets perhaps) to provide access to a new child (which may be pre-forked or may be forked on demand) then you have the potential to save a moderate amount of CPU time. The initialisation of a new process has some overhead that is greater than calling fork(), and when you fork() a new process there are usually lots of data structures which are never written to after that time, which means that on Linux they remain as shared copy-on-write memory and thus reduce the system memory use (and improve cache efficiency when they are read).

#include <unistd.h>
#include <stdio.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <stdlib.h>

#define NUM_FORKS 10000
#define NUM_SHELLS 1000

int main()
{
  struct timeval start, end;
  if(gettimeofday(&start, NULL) == -1)
  {
    fprintf(stderr, "Can't get time of day\n");
    return 1;
  }

  int i = 0;
  while(i < NUM_FORKS)
  {
    pid_t pid = fork();
    if(pid == 0)
      return 0;
    if(pid > 0)
    {
      int status;
      pid_t rc = waitpid(-1, &status, 0);
      if(rc != pid)
      {
        fprintf(stderr, "waitpid() failed\n");
        return 1;
      }
    }
    else
    {
      fprintf(stderr, "fork() failed\n");
      return 1;
    }
    i++;
  }

  if(gettimeofday(&end, NULL) == -1)
  {
    fprintf(stderr, "Can't get time of day\n");
    return 1;
  }

  printf("%.2f fork()s per second\n", (double)NUM_FORKS / ((double)(end.tv_sec - start.tv_sec) + (double)(end.tv_usec - start.tv_usec) / 1000000.0));

  if(gettimeofday(&start, NULL) == -1)
  {
    fprintf(stderr, "Can't get time of day\n");
    return 1;
  }

  i = 0;
  while(i < NUM_SHELLS)
  {
    pid_t pid = fork();
    if(pid == 0)
    {
      if(system("id > /dev/null") == -1)
        fprintf(stderr, "system() failed\n");
      return 0;
    }
    if(pid > 0)
    {
      int status;
      pid_t rc = waitpid(-1, &status, 0);
      if(rc != pid)
      {
        fprintf(stderr, "waitpid() failed\n");
        return 1;
      }
    }
    else
    {
      fprintf(stderr, "fork() failed\n");
      return 1;
    }
    i++;
  }

  if(gettimeofday(&end, NULL) == -1)
  {
    fprintf(stderr, "Can't get time of day\n");
    return 1;
  }

  printf("%.2f fork() and system() calls per second\n", (double)NUM_SHELLS / ((double)(end.tv_sec - start.tv_sec) + (double)(end.tv_usec - start.tv_usec) / 1000000.0));
  return 0;
}


Ethernet Interface Naming

As far as I recall the standard for naming Linux Ethernet devices has always been ethX where X is a number starting at 0. Until fairly recently the interface names were based on the order that device drivers were loaded or the order in which the PCI bus was scanned. This meant that after hardware changes (replacing network cards or changing the BIOS settings related to the PCI bus) it was often necessary to replace Ethernet cables or change the Linux network configuration to match the renumbering. It was also possible for a hardware failure to cause an Ethernet card to fail to be recognised on boot and thus change the numbers of all the others!

In recent times udev has managed the interface naming. In Debian the file /etc/udev/rules.d/70-persistent-net.rules can be edited to change the names of interfaces, so no matter the scanning order, as long as an interface retains its MAC address it will get the correct name – or at least the name it initially had. One of the down-sides to the way this operates is that if you remove an old Ethernet card and replace it with a new one then you might find that eth1 is your first interface and there is no eth0 on the system – this is annoying for many humans but computers work quite well with that type of configuration.
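For reference, the entries in 70-persistent-net.rules look something like the following (the MAC address and the driver named in the comment are made-up examples). Renaming an interface is just a matter of editing the NAME= value; the KERNEL=="eth*" match applies to the kernel’s initial name, so it can stay as-is even when the new name is something like mb0:

```
# PCI device 0x8086:0x10d3 (e1000e)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```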

I’ve just renamed the interfaces on one of my routers by editing the /etc/udev/rules.d/70-persistent-net.rules file and rebooting (really we should have a utility like /sbin/ip with the ability to change this on a running system).
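Newer versions of iproute2 can in fact rename an interface on a running system, with the caveat that the interface has to be brought down first (so it’s of limited use on a busy router). A sketch using the interface names from this article:

```sh
# Requires root; the interface must be down while it is renamed.
ip link set dev eth4 down
ip link set dev eth4 name pcia0
ip link set dev pcia0 up
```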

I have decided to name the Ethernet port on the motherboard mb0. The PCI slots are named A, B, and C with A being the bottom one, and when there are two ports on a PCI card the one closest to the left side of the system (when viewed from the front – the right side when viewed from the rear) is port 0 on that card. So I have interfaces pcia0, pcia1, pcib0, pcib1, and pcic0. Now when I see a kernel message about the link going down on one of my ports I won’t have to wonder which port has the interface name eth4.

I did idly consider naming the Ethernet devices after their service, in which case I could have given names such as adsl and voip (appending a digit is not required). Also as the names which are permitted are reasonably long I could have used names such as mb0-adsl, although a hyphen character might cause problems with some of the various utilities and boot scripts – I haven’t tested out which characters other than letters and digits work. I may use interface names such as adsl for systems that run at client sites, if a client phoned me to report Internet problems and messages on the console saying things like “adsl NIC Link is Down” then my process of diagnosing the problem would become a lot easier!

Does anyone else have any good ideas for how to rename interfaces to make things easier to manage?

I have filed Debian bug report #592607 against ppp requesting that it support renaming interfaces. I have also filed Debian bug report #592608 against my Portslave package requesting that it provide such support – although it may be impossible for me to fix the bug against Portslave without fixing pppd first (I haven’t looked at the pppd code in question for a while). Thanks to Rusty for suggesting this feature during the Q/A part of my talk about Portslave at the Debian mini-conf at LCA 2002 [1].


libcsoap/libnanohttp

Recently I have been doing a bit of work on libcsoap (the C library for making SOAP XML calls over HTTP) and the libnanohttp library that it depends on. The most important part of my work on it was making it thread-safe with the technique I described in my post about finding thread unsafe code [1]. But I also did some work to make the code faster; reading data one byte at a time is very inefficient.

There has been no upstream release of this software for years, email to one of the maintainers bounced and the other one indicated that they are no longer involved in the project. So I’m thinking of taking over upstream development.

The previous Debian maintainer for the packages in question has recently resigned so I’ve taken over the packaging. But for this one I think I can do better work in an upstream capacity, so I’d like to get a co-maintainer for the Debian package and possibly someone who will help with upstream work. I would appreciate any offers of assistance with these things.


Mailing List Meta-Discussions

It seems that most mailing lists occasionally have meta-discussions about what is on-topic, the few that don’t are the ones that have very strong moderation – authoritarian moderators who jump on the first infraction and clearly specify the rules.

I don’t recall the list of acceptable topics for any mailing list including “also discussions about what is on-topic”. As this is the Internet I’m sure that someone will immediately point out an example of such a list, but apart from the counter-example that someone will provide it seems obvious that for the majority of mailing lists a meta-discussion is not strictly on topic.

Regardless of a meta-discussion not being on-topic I don’t think there’s anything wrong with such a discussion on occasion. But if a meta-discussion is to be based on the volume of off-topic messages it would be nice if the people who advocate such a position could try and encourage the discussion in a way that reduced the number of messages. Replying to lots of messages is not a good strategy if your position is that there are too many messages.

If a meta-discussion is going to be about moving off-topic discussions to other forums that are more appropriate then it would be nice to have the meta-discussion move to another forum if possible. My previous post which advocates creating a separate mailing list for chatty messages was an attempt to move a discussion to a different forum [1]. Anyone who believes that such discussions don’t belong on a list such as debian-private is free to commit their thoughts to some place that they consider more appropriate and provide the URL to any interested parties. I think that it’s worth noting that the only comment on my previous post is one that describes how to filter mail to split the traffic from the debian-private list into different mailboxes. I had hoped that other people would write blog posts advocating their positions which would allow us to consider the merits of various ideas without the he-said-she-said element of mailing list discussions.

Most mailing lists have a policy against profanity and some go further and ban personal abuse. Therefore it seems hypocritical to advocate a strict interpretation of the rules in regard to what is on-topic while also breaking the rules regarding profanity or personal abuse. I don’t think it’s asking a lot to suggest that the small minority of messages that someone writes on the topic of a list meta-discussion should obey the set of rules that they advocate – I’m not suggesting that someone should obey all the rules all the time, just when they are trying to enforce them. Also you can argue that a list policy against profanity doesn’t preclude sending profane messages off-list, but if the off-list messages are for the purpose of promoting the list rules it still seems hypocritical to use profanity.

It is a fair point that off-topic discussions and jokes can distract people from important issues and derail important discussions. It would be good if people who take such positions would implement them in terms of meta-discussions. If the purpose of a meta-discussion is to avoid distraction from important issues then it seems like a really good idea to try and avoid distraction in the meta-discussion thread.

I wonder whether a meta-discussion can provide anything other than a source of lulz for all the people who don’t care about the issue in question. The meta-discussions in the Debian project seem to always result in nothing changing, not even when the majority of people who comment agree that the current situation is not ideal. When an almost identical meta-discussion happens regularly it seems particularly pointless to start such a discussion for the purpose of reducing off-topic content. Revisiting an old discussion can do some good when circumstances change or when someone has some new insight. I know that it’s difficult to avoid being sucked into such discussions, when I was diagnosed with AS [2] I decided to try and minimise my involvement in such discussions – but I haven’t been as successful at doing so as I had hoped.


Does Every Serious Mailing List need a Non-Serious Counterpart?

One practice that seems relatively common is for an organisation to have two main mailing lists, one for serious discussions that are expected to be relatively topical and another for anything that’s not overly offensive. Humans are inherently incapable of avoiding social chatter when doing serious work. The people who don’t want certain social interactions with their colleagues can find it annoying to have both social and serious discussions on the same list, while the people who want social discussions get annoyed when people ask them to keep discussions on topic.

Organisations that I have been involved with have had mailing lists such as foo-chat and foo-talk for social discussions that involve the same people as the main list named “foo”, as well as having list names such as “memo-list” for random discussions that are separate from a large collection of lists which demand on-topic messages.

The Debian project has some similar issues with the debian-private mailing list which is virtually required reading for Debian Developers. One complication that Debian has is that the debian-private list has certain privacy requirements (messages will be declassified after 3 years unless the author requests that they remain secret forever) which make it more difficult to migrate a discussion. You can’t just migrate a discussion from a private list to a public list without leaking some information. So it seems to me that the best solution might be to have a list named debian-private-chat which has the same secrecy requirements but which is not required reading. As debates about what discussions are suitable for debian-private have been going on for more than 3 years I don’t think there’s any reason not to publish the fact that such discussions take place.

Also it seems that every organisation of moderate scale that has a similar use of email and for which members socialise with each other could benefit from a similar mailing list structure. Note that I use a broad definition of the word “socialise” – there’s a lot of people who will never meet in person and a lot of the discussions are vaguely related to the main topic.

I wonder whether it might be considered to be a best practice to automatically create a chat list at the same time as creating a serious discussion list.

New Portslave release after 5 Years

I’ve just uploaded Portslave version 2010.03.30 to Debian, it replaces version 2005.04.03.1. I considered waiting a few days to make the anniversary but I wanted to get the bugs fixed.

I had a bug report suggesting that Portslave should be removed from Debian because of being 5 years without a major release. It has been running well 24*7 on one of my servers for the last 5 years and hasn’t really needed a change. There were enough bugs to keep me busy for a few hours fixing things though.

The irony is that I started using dates as version numbers back when there were several forks of Portslave with different version numbering schemes. I wanted to show that my fork had the newer version and a recent date stamp was a good indication of that. But then when Portslave didn’t need an update for a while the version number showed it and people got the wrong idea.

The new project home page for Portslave is on my document blog [1].


3G Broadband for Home Use

I have just installed an old Three mobile phone with 3G broadband for my parents home network access for the reasons described in my cheap net access in Australia post [1].

The first problem I had was that the pre-paid Three SIM just wouldn’t work at all. I ended up phoning the Three support line and had a guy guess at which version of Windows I was running; after guessing every version of Windows from the last 10 years and Mac OS/X he finally asked what OS I use and then told me that Linux isn’t supported. I said “I HAVE TWO SIMS FROM THREE, ONE WORKS AND THE OTHER DOESN’T, IT’S ON THE SAME PC WITH THE SAME 3G ACCESS DEVICE, THE PROBLEM IS WITH THE SIM OR THE SERVER NOT MY OS”. When the support guy discovered that one SIM was pre-paid he said that there is a configuration difference: instead of an APN of “3netaccess” for post-paid (contract) you have to use “3services” for pre-paid.

There are a bunch of web pages describing how to get Three 3G broadband working on Linux in Australia; some say to use 3netaccess and some say 3services. None of the pages I read stated correctly that 3netaccess is for when you are on a contract and 3services is for pre-paid. I’ve submitted a suggestion for Ross Barkman’s GPRS Info Page (which seems to be the best reference for such things) [2].

After getting the pre-paid 3G SIM working for net access from the Huawei E1553 USB 3G modem I was unable to get it working from my LG U890 mobile phone. I never figured out how to solve this problem, so I left my parents with the SIM that is connected to my $15 per month contract plan for 3G net access and am now using the pre-paid SIM for my own use. Of course this means that as I’m using a SIM registered to my mother and she’s using one registered to me, I’ll surely have some problems getting the support center to help me with problems in future.

I found that the 3G net access got better reception when the phone was higher than the computer, so I used a USB extension cable to allow it to be placed on a shelf above the computer. The extension cable also allows it to be easily unplugged and plugged in again – I’ve already seen one situation where Linux got confused about the state of the USB device and replugging it was necessary to solve the problem. I was using Debian/Lenny.

Here is my chatscript for connecting to Three with my 3G modem on a pre-paid SIM – which also allows roaming to Telstra (I haven’t tested whether pre-paid allows roaming, I’ve only tested Telstra roaming with a contract SIM):

ABORT 'BUSY'
ABORT 'NO CARRIER'
ABORT 'ERROR'
'' AT
OK ATQ0V1E1S0=0&C1&D2+FCLASS=0
OK 'AT+COPS=0,0,"3TELSTRA",2'
OK AT+CGATT=1
#OK AT+CGDCONT=1,"IP","3netaccess"
OK AT+CGDCONT=1,"IP","3services"
OK ATDT*99**3#

Here is the ppp configuration for connecting via the USB 3G modem. For use as a permanent connection you also want to include persist and “maxfail 0”:

/dev/ttyUSB0
230400
noauth
defaultroute
logfile /var/log/ppp.log
connect "/usr/sbin/chat -v -f /etc/chatscripts/three"

For connecting with an LG U890 mobile phone you need to use “ATDT*99***1#” as the dial command and the device is /dev/ttyACM0.


Choosing an Australian Mobile Telco for use with Android

Since playing with the IBM Seer augmented reality software [1] I’ve been lusting after a new mobile phone which can do such things. While the implementation of Seer that I tried was not of great practical use to me (not being a tennis fan I was only there to learn about computers) it was a demonstration of an exciting concept. It will surely be implemented by IBM in other venues that are of more immediate interest, and we can probably expect other vendors to write similar systems to compete with IBM.

So the question is how to get a phone that will run such things well. The answer is probably not to rely on a contract plan for this; Vodaphone [2] is currently the only Australian telco that sells a phone that can run Seer, and it is offering a HTC Magic (which was released in April 2009) on a $29 per month plan. A phone that is 9 months old isn’t necessarily a bad thing, but a newer model has been out for more than 6 months and has some significant benefits (such as a 5MP camera).

  • My current provider is Three [3] and their cheapest plan is $29 per month (with or without a phone) which allows 200 minutes ($160) of free calls to other Three phones every month as well as up to $150 of other calls per month and 1GB of data. Calls cost 40c per 30 seconds plus 35c connection fee. Currently I’m on a plan that gives me the same thing for the same price without data transfer but which includes a “free” phone. So it seems that the 1GB of data per month has an equal cost to a mobile handset (such as an LG Viewty).
  • Virgin [6] has a $25 per month plan that gives $60 worth of calls and 300MB of Internet data with unlimited talk and text between members with the added bonus of unused talk and text credit being rolled over to the next month. The cost for calls is 90c per minute plus 40c connection fee, video calls are the same cost as voice calls!
  • Vodaphone [2] has a $20 per month phone plan that allows up to $150 of calls per month with the option of either free calls to a single specified Vodaphone number or free calls in the evenings and weekends. They have a $4.95 special offer for 200MB of Internet data per month. Calls cost 44c per 30 seconds plus 35c connection fee.
  • Telstra [4] has a $20 per month plan that only includes $20 worth of calls and which has call fees of 47c per 30 seconds plus 27c connection fee. They clearly don’t compete on price; I think that there is no reason for using Telstra unless you live in some of the rural regions where they are the only provider to offer good service.
  • Savvytel [7] charges $3.07 for GPRS Internet data so they can’t be considered for an Internet enabled phone. But they do seem very economical for basic phone service.
  • Optus [5] has a hopelessly broken web site that wouldn’t give me any information on mobile phone pricing. My previous experience with Optus Internet makes me unlikely to do business with them again anyway.

So it seems that Three (my current provider) is probably the best option at this time. Virgin would save me $4 per month, but would only give me 300MB of Internet data per month, and the Virgin limit of $60 per month of calls might not be enough for me. Vodaphone offers a deal for $25 that only includes 200MB of data; that might be enough for just phone use, but wouldn’t be enough for tethering for laptop net access.

I wonder how well tethering works on an Android phone, can you make a phone call while transmitting data from a tethered laptop? I find that with my Viewty when I receive an SMS or phone call it stops the net access. That makes a tethered Viewty impractical for some support tasks as it’s fairly common that I need to talk to someone while logging in to their server – I’m sure that most people who use mobile Internet services regularly need to phone someone while using them.

My current Three bill is $29 per month for the phone plan and $15 per month for Internet access. If I’m going to buy phones outright instead of getting them with the plan then I want to reduce the overall amount of money I spend on phone plans and using tethering instead of a 3G USB dongle would allow this. I think that I can get something that comes close to my ideal mobile phone [8] (apart from being able to connect a keyboard, mouse, and monitor) if I import it from overseas.

We really need more competition in the Australian mobile phone market. We have only two phone companies offering Android phones, Three is sold out of the obsolete model that they offer while Vodaphone has stock of an obsolete model.