I’ve recently been diagnosed with Asperger Syndrome (AS) [1]. Among other things this means that I am genetically predisposed to have an interest in solving technical problems and to give lectures about how I solved them, but that I tend not to be a “people-person”.
AS is generally regarded as an Autism Spectrum Disorder (ASD), but there is a lot of debate among the experts about the exact relationship. Some people (such as the psychologist who assessed me) believe that AS is a synonym for High Functioning Autism (HFA). However, one theory I’ve heard is that HFA people are more sensory oriented while Aspies are information oriented – in that case I would not be regarded as HFA. I’m not bothered by this issue; I’m sure that in a few years’ time the experts will have some consistent definitions for such things that most people can agree on.
There is no Boolean assessment for AS; the assessment is based on a sliding scale across a number of criteria. People who are almost Aspies but don’t quite pass the test (or, more commonly, don’t get assessed because they don’t think they would pass) are sometimes referred to as Asperger Cousins (AC) – a slang term that is not formally recognised but is often used in online discussions. I’m sure that a significant portion of the readers of my blog would regard themselves as being at least ACs if they investigated the issue. The test at Glenn Rowe’s web site [2] can give you an idea of how you rate by some criteria – but note that it is not at all conclusive, and it’s based on the theories of Professor Simon Baron-Cohen [3], not all of which have general agreement. Leif Ekblad is running a project to analyse long-term changes in the Aspie scores of adults [4]. The main quiz for that project seems quite popular for self-diagnosis, but again it’s not conclusive.
I think that diagnosing oneself with an ASD is not nearly as crazy as most things which might fall into the category of being one’s own psychologist, but I still strongly recommend getting a formal assessment if you believe that you are an Aspie. In Australia it costs about $600 and there’s a waiting list of about 3 months. Chaotic Idealism has an insightful post about the pros and cons of self-diagnosis [5]; if you suspect that you may be an Aspie then I recommend reading it before doing the tests.
Update:
Thanks to Sven Joachim and Andrew Pollock for informing me about /etc/init.d/mountoverflowtmp which exists to mount a tmpfs named overflow if /tmp is full at boot time. It appears that the system was not compromised. But regular reinstalls are always a good thing.
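As an illustration, here’s a minimal sketch of the idea – this is my own simplification, not the actual Debian init script, and the 1MB threshold and tmpfs options are guesses based on the size of the overflow filesystem in the df output below:

# if /tmp has almost no free space at boot, mount a small tmpfs
# named "overflow" over it so that boot can proceed
avail=$(df -kP /tmp | awk 'NR==2 {print $4}')
if [ "$avail" -lt 1024 ]; then
    mount -t tmpfs -o size=1m,mode=1777 overflow /tmp
fi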
On the 24th of August this year I noticed the following on my SE Linux Play Machine [1]:
root@play:/root# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda 1032088 938648 41012 96% /
tmpfs 51296 0 51296 0% /lib/init/rw
udev 10240 24 10216 1% /dev
tmpfs 51296 4 51292 1% /dev/shm
/dev/hdb 516040 17128 472700 4% /root
overflow 1024 8 1016 1% /tmp
The kernel message log had the following:
[210511.546152] su[769]: segfault at 0 ip b7e324e3 sp bfa4b064 error 4 in libc-2.7.so[b7dbb000+158000]
[210561.527839] su[778]: segfault at 0 ip b7eb14e3 sp bfec84d4 error 4 in libc-2.7.so[b7e3a000+158000]
[210585.270372] su[784]: segfault at 0 ip b7e044e3 sp bff1b534 error 4 in libc-2.7.so[b7d8d000+158000]
[210595.855278] su[789]: segfault at 0 ip b7e014e3 sp bfd18324 error 4 in libc-2.7.so[b7d8a000+158000]
[210639.496847] su[796]: segfault at 0 ip b7e874e3 sp bf99e7b4 error 4 in libc-2.7.so[b7e10000+158000]
Naturally this doesn’t look good: the filesystem known as “overflow” indicates a real problem. It appeared that the machine had been compromised, so I made archival copies of all the data and reinstalled it.
As the weather here is becoming warmer I’ve used different hardware for my new Play Machine. The old system was a 1.8GHz Celeron with 1280M of RAM and two IDE disks in a RAID-1 array. The new system is a P3-800 with 256M of RAM and a single IDE disk. It’s a Compaq Evo which runs from a laptop PSU and is particularly energy efficient and quiet. The down-side is that there is no space for a second disk and only one RAM socket, so I’m limited to 256M – that’s just enough to run a Xen server with a single DomU.
I put the new play machine online on Friday the 23rd of October after almost two months of down-time.
Anand Kumria has an ongoing dispute with Exetel, the latest is that a director of Exetel has libeled him in a blog comment [1].
Having public flame-wars with customers generally isn’t a winning move for a corporation. But doing so in the context of the blog world is a particularly bad idea. The first issue is that almost everyone who regularly reads Anand’s blog will trust him instead of a corporation (Anand is well regarded in the free software community). So it’s not as if accusing Anand of lying will gain anything.
But when a director of the company starts doing this it makes the issue more dramatic and interesting to many people on the net. Now Anand’s side of the story will get even more readers – of course it was always going to get more readers than Exetel’s, as I’m sure that Anand’s blog is more popular than Steve Waddington’s. I wouldn’t be surprised if my blog was more popular than Anand’s, and now my readers will be following the Exetel saga for the Lulz. I’m sure that I won’t be the last person to comment on this.
The most amazing thing is that Steve Waddington talks about having to pay for the TIO complaint to be handled. So I guess that means I should start complaining whenever I get bad service from an ISP and cost them some money! I should have stayed with Optus and complained every time they caused me problems!
One thing that Steve and people like him should keep in mind is that members of our community are not only heavy users of the Internet, we generally recommend ISPs to other people, and many of us make money working for ISPs. If you want your ISP to get good reviews and to be able to hire good staff then attacking people like Anand is not the way to go.
I’ve just added the WordPress Minify [1] plugin to my blog. Its purpose is to combine CSS and Javascript files and to optimise them for size; it’s based on the Minify project [2]. On my documents blog this takes the main page from 313KB uncompressed, 169KB compressed, and a total of 23 HTTP transfers to 306KB uncompressed, 117KB compressed, and 21 HTTP transfers. In each case 10 of the HTTP transfers are from Google for advertising. It seems that a major obstacle to optimising web page load times is Google adverts – of course Google has faster servers than I do, so I guess it’s not that much of a performance problem. The minify plugin caches its data files, and I had to really hack at the code to make it use /var/cache/wordpress-minify – a subdirectory of the plugins directory was specified in many places.
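If anyone wants to make the same change, the cache directory has to exist and be writable by the web server – something like the following, assuming Apache runs as www-data as on a standard Debian install:

mkdir -p /var/cache/wordpress-minify
chown www-data:www-data /var/cache/wordpress-minify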
deb http://www.coker.com.au lenny wordpress
I’ve added a wordpress-minify package to my repository of WordPress packages for Debian/Lenny, which can be used with the above APT line (see the installation example after the package list below). I’ve also got the following packages:
adman
all-in-one-seo-pack
google-sitemap-generator
openid
permalink-redirect
stats
subscribe-to-comments
yubikey
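As an example, installing the minify package from that repository would look something like this (assuming you accept installation from my repository, which may require importing a key or confirming an unsigned source):

echo 'deb http://www.coker.com.au lenny wordpress' >> /etc/apt/sources.list
apt-get update
apt-get install wordpress-minify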
The Super Cache [3] plugin has some nice features. It generates static HTML files that are served to users who aren’t logged in and haven’t entered a comment, which saves significant amounts of CPU time under high load. The problem is that installing it requires modifying the main .htaccess file, adding a new .htaccess file in the plugins directory, and lots of other hackery. The main reason for this is to avoid running any PHP code in the most common cases, so it would be good for really heavy use. Also PHP “safe mode” has to be disabled for some reason, which is something I’d rather not do.
The Cache [4] plugin was used as the base for the Super Cache plugin. It seems less invasive, but requires the ability to edit the config file. Getting it into a shape that would work well in Debian would take more time than I have available at the moment. This, combined with the fact that my blog will soon be running on a system with two quad-core CPUs that won’t be very busy, means that I won’t be packaging it.
If anyone would like to Debianise the Cache or Super Cache plugin then I would be happy to give them my rough initial efforts as a possible starting point.
I’m not planning to upload any of these packages to Debian, it would just add too much work to the Debian security team without adding enough benefit.
I’ve just been setting up new virtual servers at Linode [1] and Slicehost [2]. I have previously written a review of both those services [3]; based on that review (and some other discussions) one of my clients now has a policy of setting up pairs of virtual servers for various projects – one server at Linode and one at Slicehost.
Now both virtual hosting providers work very well and I’m generally happy with both of them.
But Linode seems to be a better offering.
Linode has graphs of various types of usage: I can look at graphs of disk IO, CPU use, and network IO for the last 24 hours, the last 30 days, or previous months. The three graphs have the same X-axis scale so I can correlate them. The stats on Slicehost just give you the current raw numbers, which doesn’t help if I want to know what happened last night when performance sucked.
When I build a Linode instance I can have multiple filesystems configured (Slicehost can’t do any of this), and I can allocate less disk space than is available to reserve space for other filesystems. Separating filesystems makes it easier to track IO performance and also allows some bounds to be set on the amount of disk space used for various tasks. Nowadays the use of multiple partitions is not as popular as it once was, but it’s still a real benefit. One of the benefits is that I can have two partitions on Linode that are suitable for running as the root filesystem, so if an upgrade fails I have the option of booting from the other filesystem (I haven’t actually done this but it’s good to have the option).
I believe that this feature of Linode could do with some improvements. Firstly, when creating or resizing a filesystem it should be possible to specify the number of inodes when using Ext3 – the fsck time for a large Ext3 filesystem with the default number of inodes is quite unreasonable. It would also be good if other filesystems such as XFS were supported; for some use cases XFS can significantly outperform Ext3, and choice is always good. When BTRFS becomes stable I expect that every hosting provider will be compelled to support it (any provider that wants my continued business will do so).
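As an illustration of the inode issue, this is the sort of command I’d want the Linode tools to be able to run – a sketch with a hypothetical device name, allocating one inode per 1MB of space instead of the Ext3 default of one per 16KB:

# fewer inodes means much faster fsck on a big filesystem
mke2fs -j -i 1048576 /dev/xvdb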
Now Linode and Slicehost both allow sharing bandwidth allowances between virtual servers, so if you run one server that uses little bandwidth you can run a second server that needs a lot of bandwidth and reduce the risk of excess bandwidth charges. The next logical extension would be to allow sharing disk and RAM allocations between servers on the same physical system. For example, I might want to run a web server for sending out large files: the 360M of RAM provided by the Linode 360 offering would be plenty, but the 128G of storage and 1600GB per month of bandwidth that come with the Linode 2880 plan would be really useful. At the same time a system that does computationally expensive tasks (such as a build and test server) might require a large amount of RAM, such as 2880MB, while needing little disk space or bandwidth. Currently Linode allows sharing bandwidth arbitrarily between the various servers but not disk space or RAM. I don’t think that this would be a really difficult feature to implement.
Finally Linode has a “Pending Jobs Queue” that shows the last few requests to the management system and their status. It’s not really necessary, but it is handy to see what has been done and it gives the sysadmin a feeling of control over the process.
These management features provide enough value to me that if I was going to use a single virtual hosting provider then I would choose Linode. For certain reliability requirements it simply wouldn’t be a responsible decision to trust any single hosting company. In that case I’m happy to recommend both Linode and Slicehost as providers.
One thing I noticed when I got my new LG U990 Viewty [1] mobile phone is the way the core telephony functionality has suffered while features for web browsing etc have been added. The core phone functionality (making and receiving calls and maintaining a list of names and phone numbers) seems to have generally declined since about 2004, when I got my first camera-phone. The Nokia GSM phones that I used before getting a 3G phone had a combination of signal reception, voice quality, and basic telephony features that beat all the 3G phones I’ve used. The only way the 3G phones were better for core telephony is in managing the list of recently called numbers. On my past two LG phones I was able to easily call an alternate number of the person I last called – this feature was dropped in the Viewty.
Some of my relatives have camera-phones with an extremely poor ability to get a signal – they can’t get a GSM signal in places where my Viewty can get 3G! Obviously making a usable phone was not a design priority for those devices!
Then there’s the issue of battery life. Early mobile phones had NiCd batteries that lasted a week, and later phones had Li batteries that also lasted a week as standard. Nokia sold phones with replaceable batteries, so if you wanted to make lots of phone calls while on the move you could have a second battery charged and ready for use. Now the latest BlackBerry [5] apparently has a battery that only lasts for one day – I haven’t investigated the options for carrying a second battery, but a casual glance indicates that changing a battery will be a lot more difficult than on an old Nokia phone.
I’ve been wondering: why don’t they just sell some mobile phones that don’t support making phone calls? Smart phones that aren’t very good at telephony are really only going half way – do it properly and just rip out the phone functionality! Or they could apply the word “phone” to devices that already exist to do mobile stuff. You could have an Amazon Kindle phone [2] that allows you to read documents, and a Nokia N800 tablet phone [3] for general Internet access including web browsing and email – really the only “smart-phone” feature that is missing from the Nokia is a camera. For that matter my EeePC 701 [4] is probably about twice as heavy as my first mobile phone, so maybe it could be called a phone too. If you have two phones, one for making phone calls and the other for doing smart-phone stuff, then it won’t matter so much if the smart-phone (which can’t make phone calls) has its battery run out.
One likely objection to the idea of selling phones that can’t be used for making phone calls is that it might confuse the users. However the current situation is that there are significant differences in the signal reception ability of mobile phones, the people who sell them don’t know what the differences are, and the rare reviews that analyse signal strength (as done by Choice [6]) become outdated rapidly and never cover all phones on the market. So I think it would be a great improvement if the phone sales people could say “don’t buy phone A if you want to make phone calls because it can’t do that”, because currently anyone who just wants to make phone calls has to rely on luck to get a phone that works well.
The really sad thing, however, is that some people apparently have usage patterns that are similar to my satire above. I have heard of people having two phones: one for smart-phone functionality and another for making calls.
What we need is for manufacturers to put more effort into making hardware that can receive weak signals. From now on I will consult the Choice review of this before making any mobile phone purchase or recommendation; if only a few million other people would do the same then the manufacturers would improve their products.
The next thing we need is better software to run the phones. The deficiencies in the software on my Viewty could easily be fixed if everyone had source code access. Benjamin Mako Hill writes about some of the problems with closed-source software on mobile phones [7]. He mentions security (in terms of our trust in the phone manufacturers) and the general ideal of having control over your own device.
One specific problem he doesn’t mention is the way that mobile phones are deliberately crippled by the manufacturers: 3G phones have precious main menu space occupied by the services that are most profitable to the telephone company, without regard to what the users desire. Another problem for people who desire free software is file format support. Camera-phones that save video in AVI format instead of OGG reduce our ability to use free software elsewhere – as a general rule every time you transcode a video you either lose some quality or increase the file size, so the format that the phone uses will be carried through many other computers and devices. Smart-phones generally have the ability to view a range of data types, and the ability to view MS file formats is common (which excludes free competitors).
My Viewty has an entire menu section dedicated to Google services (Gmail, Blogger, YouTube, etc). That’s nice for Google, who presumably paid well for it, but not so good for me as I don’t use any of the Google features on my phone. A menu that had a caching IMAP client, an RSS feed reader, a WordPress API client, a Jabber client, and a caching Wikipedia client would be really useful.
My current phone is just under a year old, so I won’t be buying a new phone until January 2011 (unless I break or lose my Viewty). Hopefully then there will be some better options. Before anyone suggests that I buy another phone to help with the coding, my current free software coding projects are all behind schedule…
[7] http://mako.cc/copyrighteous/20091017-00
Garik Israelian gave an interesting TED talk about spectrography of stars and SETI [1]. He assumes that tectonic activity is a pre-requisite for the evolution of life (when discussing the search for elements that are needed for life) and that life which is based on solar energy will have a spectrographic signature similar to that of the chlorophyll-based plants we are familiar with. I doubt both of those assumptions, but I still found the talk very interesting and I learned a lot.
Julian Dibbell wrote an interesting Wired article about an ongoing battle between the Cult of Scientology and 4chan [2]. I don’t often barrack for 4chan, but they seem to be doing some good things here – but of course they do it in their own unique manner. The article also links to a hilarious video of Tom Cruise being insane, among other things he claims that Scientologists are “only ones who can help” at an accident site. Has Tom Cruise ever provided assistance at a car crash?
The Independent has an article by Robert Fisk about the impending shift away from the US dollar for the oil trade [3]. This is expected to cause a significant loss in the value of the US dollar.
Robin Marantz Henig wrote an interesting article for the NY Times about the causes of anxiety [4]. It focusses on Jerome Kagan’s longitudinal studies of babies and young people. One thing that I found particularly interesting was the research issue of distinguishing between brain states, internal emotional states, and the symptoms of emotions that people display (including their own descriptions of their emotions, which may be misleading or false). The tests on teenage social interactions that involved fake myspace pages and an MRI were also interesting.
Juan Cole wrote an insightful Salon article titled “The top ten things you didn’t know about Iran” [5]. The Iranian government doesn’t seem to be a threat to anyone outside their country.
Clay Shirky wrote an insightful post about TV being a heat-sink for excess spare time, and considers how many projects of the scale of Wikipedia could be created with a small portion of that time [6]. It seems that the trend in society is to spend less time watching TV and more time doing creative things. On a related note, Dan Meyer has an interesting blog post about trolls who say “You Have No Life” [7].
The Making Light blog post about Barack Obama’s Nobel Peace Prize has some insightful comments [8]. I doubted that he had achieved enough to deserve it, but the commentators provide evidence that he has achieved a lot. I wonder if he will receive a second Peace Prize sometime in the next 10 years.
The Making Light blog post about bullies and online disputes predictably got diverted into discussing school bullying [9]. The comments are interesting.
Making Light has a mind-boggling post about homosexuality and porn [10]. US Senator Tom Coburn’s (R-OK) chief of staff Michael Schwartz made the case against pornography. “All pornography is homosexual pornography”, said Schwartz, quoting an ex-gay friend of his. Among other things there are many good puns in the comments.
Cubicle Jungle is an amusing satire of the computer industry [11]. It’s shocking how long it goes before it gets to the part that’s obviously fiction.
The WikiReader is an interesting new device [12]. It costs US$99 and has a copy of Wikipedia on an SD card; there is a subscription service that involves posting you a new SD card every 6 months, or you can download an image from their server. They state that they have a filtered version of Wikipedia for children – I wonder how they manage that, and whether they have an unfiltered version for adults. The device runs on two AAA batteries and is designed to be cheap and easy to use. Naturally it doesn’t support editing, but most of the time that you need Wikipedia you don’t need edit access – or access to content that is less than 6 months old.
Exetel are scum, customers who complain are cut off [13].
Making Light has an interesting post about a New Age scumbag who killed at least two of the victims who paid $10,000 for a sweat-lodge workshop (others are in hospital and may die in the near future) [14].
The NY Times has an interesting article about Jamie Oliver and his latest project [15]. He is trying to reform the eating habits of the unhealthiest area in the US. The 15 pound burger sounds interesting though, I wouldn’t mind sharing one of those with 14 friends…
Shortly before 9AM this morning I discovered that the IP address for my mail server was not being routed; according to my logs the problem started shortly after midnight. It’s on a TPG ADSL connection with one IP address for the PPPoE link and 6 addresses in a /29 routed to it – one of the addresses in the /29 is for my mail server.
It wasn’t until 3PM that I was able to visit the server to sort the problem out. It turned out that the main IP address was working but the /29 wasn’t being routed to it – TPG had somehow stopped routing the /29. I pinged all the addresses from a 3G broadband connection on my EeePC while running tcpdump on the server: no packets for the /29 came through, but the IP address of the PPP link worked fine. I was even able to ssh in to the server once I knew the IP address of the ppp0 device – for future use I need to keep ALL the IP addresses of my network gear on my EeePC, not just the ones used for providing services.
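For anyone who wants to run the same sort of test, the procedure is roughly this (the addresses shown are documentation examples, not my real ones):

# on the server: watch for incoming ICMP
tcpdump -n -i eth0 icmp
# from the 3G-connected laptop:
ping -c 3 203.0.113.10   # an address in the /29 - nothing appears in tcpdump
ping -c 3 198.51.100.1   # the PPP link address - replies arrive fine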
So I phoned the helpdesk, and naturally they asked me inane questions. My patience extended to telling them the broadcast address etc that was being used on the Ethernet device (actually a bridge for Xen, but I wasn’t going to confuse them). The system had been power-cycled before I got there in the hope that it might fix the problem – so I could honestly answer the question “have you rebooted it” (usually I lie – rebooting systems to fix network problems is a Windows thing). But my patience started to run out when they asked me to check my DNS settings; I explained very clearly that my problem was that IP packets couldn’t get through, that I wasn’t using DNS, and demanded that they fix it.
I didn’t get anyone technical to look at the problem until I firmly demanded that the help-desk operator test the routing by pinging my systems. The help-desk people don’t have Internet access, so actually testing the connection required escalating the issue. It seems that the algorithm used by help-desk people is to repeatedly tell customers to check various things on their own systems until the customer’s patience runs out – either the customer goes away or demands firmly enough to get something done.
So their technician did some tests and proclaimed that there was no problem. While said tests were being done things started working, so obviously their procedure is to fix problems and then blame them on the customer. It is not plausible that a problem in their network which had persisted for more than 15 hours would accidentally disappear during the 5 minute window in which the technician was investigating it.
In the discussion that followed the help-desk operator tried to trick me into admitting that it was my fault. They claimed that because I had used multiple IP addresses I must have reconfigured my system and had therefore fixed a problem on my end, my response was “I HAVE A HEAP OF MACHINES HERE RUNNING ALL THE TIME, I USE WHICHEVER ONE I FEEL LIKE, I CHANGED NOTHING“. I didn’t mention that the machines in question are DomUs on the same Xen server, someone who doesn’t understand how ping works or what routing is wouldn’t have been able to cope with that.
I stated clearly several times that I don’t like being lied to. Either the help-desk operator was lying to me or their technician was lying to them. In either case they were not going to trick me – I know more about how the Internet works than they do.
TPG was unable to give me any assurance that such problems won’t happen again. The only thing I can be sure of is that when they lie they will stick to their story regardless of whether it works.
NewServers.com [1] provides an interesting service. They have a cloud computing system that is roughly comparable to Amazon EC2, but for which all servers are physical machines (blade servers with real disks). This means that you get the option of changing between servers and starting more servers at will, but they are all physical systems so you know that your system is not going to go slow because someone else is running a batch job.
New Servers also has a bandwidth limit of 3GB per hour, with $0.10 per GB if you transfer more than that. Most people should find that 3GB/hour is enough for a single server. This compares to EC2, where you pay $0.10 per GB to receive data and $0.17 per GB to transmit it. If you actually need to transmit 2100GB per month then the data transfer fees from EC2 alone would be greater than the cost of renting a server from New Servers.
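A quick sanity check of that comparison (the New Servers “Small” price is from the table below, and 3GB/hour works out to about 2160GB per month):

echo "2100 * 0.17" | bc    # EC2 outbound fees: $357 per month
echo "0.11 * 24 * 30" | bc # New Servers Small rental: $79.20 per month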
When running Linux the EC2 hourly charges are as follows (1 ECU provides the equivalent CPU capacity of a 1.0-1.2GHz 2007 Opteron or Xeon processor):
NAME                 | Cost  | Description
Small                | $0.10 | 1.7G RAM, 160G disk, 32bit, 1 ECU, 1 core
Large                | $0.20 | 7.5G RAM, 850G disk, 64bit, 4 ECU, 2 cores
Extra Large          | $0.40 | 15G RAM, 1690G disk, 64bit, 8 ECU, 4 cores
High CPU Medium      | $0.20 | 1.7G RAM, 350G disk, 32bit, 5 ECU, 2 cores
High CPU Extra Large | $0.80 | 7G RAM, 1690G disk, 64bit, 20 ECU, 5 cores
The New Servers charges are:
NAME   | Cost  | Description
Small  | $0.11 | 1G RAM, 36G disk, 32bit, Xeon 2.8GHz
Medium | $0.17 | 2G RAM, 2*73G disks, 32bit, 2*Xeon 3.2GHz
Large  | $0.25 | 4G RAM, 250G disk, 64bit, E5405 quad-core 2GHz
Jumbo  | $0.38 | 8G RAM, 2*500G disks, 64bit, 2*E5405 quad-core 2GHz
Fast   | $0.53 | 4G RAM, 2*300G disks, 64bit, E5450 quad-core 3GHz
The New Servers prices seem quite competitive with the Amazon prices. One down-side to New Servers is that you have to manage your own RAID: the cheaper servers have only a single disk (bad luck if it fails), while the better ones have two disks on which you can set up your own RAID. The upside is that if you want a fast server and don’t need redundancy then you have the option of RAID-0 for better performance.
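Setting up software RAID on one of the two-disk servers is simple enough – a rough sketch with hypothetical device and partition names, using RAID-1 for redundancy (substitute --level=0 for the RAID-0 case I mentioned):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.ext3 /dev/md0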
Also I don’t think that there is anything stopping you from running Xen on a New Servers system, so you could have a bunch of Xen images and a varying pool of Dom0s to run them on. If you were to choose the “Jumbo” option with 8G of RAM and share it among some friends, with everyone getting a 512M or 1G DomU, then the cost per user would be a little better than Slicehost or Linode while giving better management options. One problem I sometimes have with virtual servers for my clients is that the disk IO performance is poorer than I expect. When running the server that hosts my blog (which is shared with some friends) I know the performance requirements of all the DomUs and can diagnose problems quickly. I can deal with a limit on the hardware capacity, and I can deal with trading off my needs against the needs of my friends. But having a server just go slow, not knowing why, and having the hosting company say “I can move you to a different physical server” (which may be better or worse) doesn’t make me happy.
I first heard about New Servers from Tom Fifield’s LUV talk about using EC2 as a cluster for high energy physics [2]. According to the detailed analysis Tom presented, using EC2 systems on demand can compete well with the cost of buying Dell servers and managing them yourself: EC2 wins if you have to pay Japanese prices for electricity, but if you get cheap electricity then Dell may win. Of course a major factor is the amount of time that the servers are used; a cluster that is used for short periods with long breaks in between will have a higher cost per used CPU hour and thus make EC2 a better option.
The NYT has an interesting article about research into treating insomnia over the internet [1]. I wonder how many other psychological issues can be effectively treated over the net.
From next year all Cadbury Dairy Milk chocolate sold in Australia will be made from fair-trade cocoa [2]. Cadbury Dairy Milk is the most popular type of chocolate sold in Australia, so this is a significant market shift. For a long time Cadbury has sold fair-trade chocolate under the Green & Black’s name. Of course we now have to wait for Cadbury to use fair-trade cocoa in all their other chocolate varieties.
Mike Rowe gave an interesting TED talk about the value of manual labour [3]. He suggested that there should be a PR campaign for skilled manual labour jobs and noted that his observation (through his work on his “Dirty Jobs” TV series) was that the people who do some of the less popular jobs appear to be happier.
The GapMinder.org web site has some interesting analysis of statistical information on countries and regions [4]. It is based on the work of Hans Rosling, who is well known for his high quality TED talks [5]. Unfortunately the web site requires Flash; I will probably try it out with Gnash some time.
Miru Kim gave a TED talk about her work photographing herself nude in abandoned buildings and industrial spaces [6]. Among other things she photographed herself lying naked on a pile of bones in the crypt underneath Paris, which is fairly dangerous. I’ve visited the crypt; it’s an interesting experience, but I was very careful to touch nothing – you never know which of the bones came from victims of smallpox or other nasty diseases. Strangely they have an ongoing problem of visitors stealing bones; when I visited there were several bones at the exit that had been confiscated from visitors – some of which had mummified flesh attached…
Steinar H. Gunderson wrote a good description of the basics of how the TCP protocol works [7]. He also links to a web page he wrote that will measure your potential TCP throughput and give you information on the link. This is really handy if you are behind some sort of firewall and want to know what is being done to your TCP stream when it’s in transit.
Apparently Christian couples tend to use a shared email account to reduce the risk of cheating [8]. It’s hardly a surprise that Christians have a much higher divorce rate than atheists and agnostics [9].
The NY Times has an interesting article about iPhones overloading the AT&T network [10]. Recently I’ve been having some problems sending MMS with my Three phone, some relatives who use Three have been having connection problems in certain areas with marginal signal quality, and the download speed of my Three data connection is significantly reduced (used to be ~70KB/s, now I’m lucky to get 20KB/s). I suspect that the new smart phones that are being sold are largely to blame. But the up-side is that when they engineer their network to work properly with the smart phones then my Internet use (ssh and basic web browsing) will work really well.
Michael Tiemann wrote an interesting blog post about software patents which compares them to land mines [11]. Of course this analogy falls down badly while the US is still leading the world in manufacturing land mines.
Rebecca Saxe gave an interesting TED talk about how brains make moral judgements [12]. In her research she did some tests using magnetic pulses to decrease the function of the region of the brain that allows people to judge others, and she was able to significantly affect the results of judgement tests.
Brendan Scott analyses the netbook wars and concludes that they have been a significant loss for Microsoft [13]. ArsTechnica has an analysis of real word-processing requirements [14]; they suggest that in most cases MS-Word (and other word-processor) documents could be replaced with HTML or Wiki pages for a better end result.