I have just read an interesting post about Gear Acquisition Syndrome [1] as applied to the guitar industry. Apparently it's common for people to spend a lot of time and money buying guitar equipment instead of actually playing a guitar. I think that this problem extends way beyond guitars to most aspects of human endeavour, and that actively trying to avoid it is a key to getting things done. I believe, however, that the author makes a strategic error by then going on to advise people on how to buy gear that won't become obsolete. Sure, it's good to have gear that will suit your future needs and not require replacement, but if you are repeatedly buying new gear then your problem usually is not that the gear doesn't do the job but that you want to buy more.
I used to suffer from this problem to a degree with my computer work, and I still have trouble controlling myself when I see tasty kit going cheap at auction.
Here is a quick list of things to do to avoid GAS:
- Recognise the problems with getting new gear. It costs money (thus requiring you to do more paid work or skip something else that you enjoy). It needs to have the OS installed and configured which takes time away from other things (unless your job is related to installing software on new machines). Finally it might be flawed. Every time you buy a new computer you risk a failure, and a failure that happens some time after deploying the machine can cause data loss and down-time, which is really annoying.
- Keep in mind what you do. I primarily do software work (programming and sys-admin). While some knowledge of hardware design is required for sys-admin work, and the ability to make my own hardware work is required for my own software development, I don't need to be an expert on this. I don't need to have the latest hardware with new features; the old stuff worked well when I bought it and still works well now. My main machine (which I am using to write this post) is a Thinkpad T41p; it's a few years old and a little slow by today's standards, but for everything that really matters to me it performs flawlessly. If your job really requires you to have experience with all the latest hardware then you probably work in a computer store and get access to it for free!
- When you have a problem think about whether new gear is the correct solution. There are a couple of areas in which performance on my Thinkpad is lower than I desire, but they are due to flaws in software that I am using. As I am primarily a programmer and the software in question is free it’s better for me (and the world) if I spend my time fixing the software rather than buying new hardware.
- Buy decent (not hugely expensive) gear so that you don’t need to continually buy new stuff. EG if a machine is going to store a moderate amount of data then make sure it has space for multiple hard drives so you can easily add new drives.
- Don’t buy the biggest and baddest machine out there. New hardware is developed so quickly that the fastest gear available now will be slow by next-year’s standards. Buy the second-fastest machine and it’ll be a lot cheaper and often more reliable.
- Determine your REAL requirements that match what you do. As I do software it makes sense for me to have the most reliable hardware possible so I can avoid stuffing around with things that don't interest me so much (and which I'm not so good at). So I need reliable machines, and I will continue buying Thinkpads (I plan to keep my current one until it's 5 years old and then buy another). I believe that the Thinkpad is the Rolls-Royce of laptops (see the Lenovo Blogs site for some interesting technical information [2]) and that continuing to use such hardware will keep me using my time effectively on software development rather than fooling with hardware. For desktop machines I have recently wasted unreasonable amounts of time due to memory errors, which inspired me to write a post about what a company like Dell could do to address what I consider the real needs of myself and other small business owners [3] (note that Dell is actually producing more suitable hardware in this regard than most companies – they just don't market it as such).
- Keep in mind the fact that most things you want to do don’t require special hardware. In fact for most tasks related to computers people were doing similar things 10 years ago with much less hardware. If you believe that it’s just the lack of hardware that prevents you from doing great work then your problem is self-confidence not hardware availability.
It's interesting that a sports-shoe company has the slogan "Just Do It" while trying to convince people that special shoes are required for sporting success. Most professional athletes started training with minimal equipment. Get some basic gear and Just Do It!
References:
- http://www.harmony-central.com/Guitar/Articles/Avoiding_GAS/
- http://www.lenovoblogs.com/insidethebox – feed: http://feeds.feedburner.com/lenovoblogs/insidethebox
- http://etbe.coker.com.au/2007/08/25/designing-computers-for-small-business/
A common question is how to compare Fedora [1] and Debian [2] in terms of recent updates and support. I think that Fedora Rawhide and Debian/Unstable are fairly equivalent in this regard, new upstream releases get packaged quickly, and support is minimal. They are both aimed at developers only, but it seems that a reasonable number of people are running servers on Debian/Unstable.
Fedora releases (previously known as "Fedora Core" and now merely as "Fedora") can be compared to Debian/Testing. The aim is that Fedora releases every 6 months and each release is supported until a release two versions greater is about to be released (which means about a year of support). The updates, however, often involve rebasing a package to a new upstream version rather than back-porting fixes (EG Fedora Core 5 went from kernel 2.6.15 to kernel 2.6.20). I believe that the delays involved in migrating a package from Debian/Unstable to Debian/Testing, as well as the dependency requirements, mean that you can get a similar experience running Debian/Testing as you might get from Fedora.
Stable releases of Debian are rare and the updates are few in number and small in scope (generally back-porting fixes not packaging new upstream versions). This can be compared to Red Hat Enterprise Linux (RHEL) [3] or CentOS [4] (a free re-compile of RHEL with minor changes).
Regarding stability and support (in terms of package updates) I think that Debian/Stable, RHEL, and CentOS are at about the same level. RHEL has some significant benefits in terms of phone support (which is of very high quality). But if you don’t want to pay for phone support then CentOS and Debian/Stable are both good choices. Recently I’ve been rolling out a bunch of CentOS 5 machines for clients who don’t want to pay for RHEL and don’t want to pay for extensive customisation of the installation (a quick kickstart install is what they want). The benefit of Fedora and Debian/Testing over RHEL, CentOS, and Debian/Stable is that they get newer packages sooner. This is significant when using programs such as OpenOffice which have a steady development upstream that provides features that users demand.
If you want to try new features then Fedora and Debian/Testing are both options that will work. One reason I had been avoiding serious use of Debian/Testing is that it had no strategy for dealing with security fixes, but it seems that there are now security updates for Testing [5] (I had not realised this until today).
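For anyone who wants to try it, enabling the Testing security updates should just be a matter of adding the repository published by the secure-testing project to /etc/apt/sources.list and updating. The archive line below is only my understanding of its current form, so check the project site [5] for the authoritative version:

  # add the testing security repository (check [5] for the exact line)
  echo "deb http://secure-testing.debian.net/debian-secure-testing testing/security-updates main contrib non-free" >> /etc/apt/sources.list
  apt-get update && apt-get upgrade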
References:
- http://fedoraproject.org/
- http://www.debian.org/
- http://www.redhat.com/rhel/
- http://www.centos.org/
- http://secure-testing-master.debian.net/
After dealing with Optus phone support [1] in regard to a routine request for a password change I have been thinking about better ways of managing password changes for a large ISP. The first criterion is that the user must have a password that is difficult to brute-force at all times. Changing a password to a supposedly temporary value that can be easily guessed (such as "changeme") is never acceptable. The next criterion is that the help-desk operator should be trusted as little as possible; it would be ideal if they never knew the password.
One possibility that occurred to me is that each bill could have a six-digit pseudo-random number printed on it. This number could be used as an alternate password. When a customer calls up because they lost their password they almost always have their last bill (this is why they print the number for phone support on each bill). The help-desk operator could then push a button on a web based form that makes this pseudo-random number be their new password, thus the help-desk operator would not know the new password and the user would also have it printed out clearly which avoids the confusion from having a password read out in a foreign accent.
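Generating such a number at bill-printing time is trivial; the real work is in the billing and account systems. As a minimal sketch (the storage and expiry of the code are left out, and reading /dev/urandom is just one way of doing it):

  # print a six-digit pseudo-random code for the bill
  head -c 4 /dev/urandom | od -An -tu4 | awk '{printf "%06d\n", $1 % 1000000}'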
Another possibility is to have the password change infrastructure integrated with the CTI [2] system, then the help-desk operator could push a button and have the computer dictate the password without the help-desk operator being able to listen in.
There will always be corner cases where a help-desk operator has to change the password, but if these are rare because the automated system handles most cases then the potential for damage would be limited. Of course it would also be a really good idea to do some statistical tracking of the number of password change requests performed by each operator and investigate those who do significantly more than average. In the past AOL has experienced a variety of security problems related to trojans [3] which probably would have been discovered by such analysis.
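The analysis doesn't need to be complex. Assuming a log with one line per password change in the form "operator-id customer-id timestamp" (a format I'm inventing for the example), something like the following would give a first pass:

  # count password changes per operator, highest first
  awk '{count[$1]++} END {for (op in count) print count[op], op}' password-changes.log | sort -rn | head
  # operators with counts far above the rest are the ones to investigate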
Another possible option is that customers with mobile phones could have their new password sent to them by SMS. It’s quick, easy, cheap when done in bulk, and is much harder to intercept than most methods that might be used for transferring passwords.
References:
- http://etbe.coker.com.au/2007/09/03/optus-password-changeme/
- http://en.wikipedia.org/wiki/Computer_telephony_integration
- http://www.wired.com/techbiz/it/news/2003/02/57753
A fairly common request is to be able to duplicate a Xen instance. For example you might have a DomU for the purpose of running WordPress and want another DomU to run MediaWiki. The difference in configuration between two DomU’s for running web based services that are written in PHP and talking to a MySQL back-end is quite small, so copying the configuration is easier than a clean install.
It is a commonly held opinion that a clean install should be done every time and that Kickstart on Red Hat, FAI on Debian and comparable technologies on other distributions can be used for a quick automatic install. I have not yet got FAI working correctly or got Kickstart working on Xen (it’s on my todo list – I’ll blog about it when it’s done).
Regardless of whether it’s a good idea to copy a Xen DomU, there are often situations where clients demand it or when it’s impractically difficult to do a fresh install.
I believe that the most sensible way to store block devices for Xen is to use LVM. It is a requirement for a Xen system that you can easily create new block devices while the machine is running and that the size of block devices can be changed with minimal effort. This rules out using Linux partitions and makes it unreasonably difficult to use LUNs on a fiber-channel SAN or partitions on a hardware RAID. LVM allows creating new block devices and changing the size of block devices with minimal effort. Another option would be to use files on a regular filesystem to store the filesystem data for Xen DomU's; if choosing this option I recommend using the XFS [1] filesystem (which delivers good performance with large filesystems and large files).
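As an illustration of why LVM makes this easy (the VG name V0 matches the examples below, the sizes are arbitrary):

  lvcreate -n foo-root -L 4g /dev/V0    # create a new block device for a DomU
  lvextend -L +2g /dev/V0/foo-root      # later grow it with minimal effort
  # (the filesystem inside it then needs to be resized as well)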
If you use XFS to store the block devices for the DomU that you want to copy then you will need to halt the DomU for the duration of the copy as there is no other way of getting an atomic copy of the filesystem while it’s in use. The way of doing this would be to run the command “xm console foo ; cp /mnt/whatever/foo-root /mnt/whatever/bar-root ; xm create -c foo” where “foo” is the name of the DomU and “/mnt/whatever/foo-root” is the file that is used to store the root device for the DomU (note that multiple cp commands would be needed if there are multiple block devices). The reason for having the two xm commands on the one line is that you initially login to the DomU from the console and type halt and then the xm command will terminate when the DomU is destroyed. This means that there is no delay from the time the domain is destroyed to the time that the copy starts.
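Spelled out with comments (the paths are the same example paths as above):

  # connect to the console of DomU foo, log in and run "halt"; the xm console
  # command exits when the DomU is destroyed, then the copy runs, then foo is
  # started again
  xm console foo ; cp /mnt/whatever/foo-root /mnt/whatever/bar-root ; xm create -c foo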
If you use LVM to store the block device then things are a little easier (and you get no down-time). You simply run the command "lvcreate -s -L 300m -n foo-snap /dev/V0/foo-root" to create a snapshot with the device name /dev/V0/foo-snap which contains a snapshot of the LV (Logical Volume) /dev/V0/foo-root. The "-L 300m" option means to use 300Meg of storage space for the snapshot – if the writes to /dev/V0/foo-root exceed 300Meg of data then your snapshot breaks. There is no harm in setting the allocated space for the snapshot to be the same as the size of the volume that you are going to copy – it merely means that more disk space is reserved and unavailable for other LVM operations. Note that V0 needs to be replaced by the name of the LVM VG (Volume Group). Once you have created the snapshot you can create a new LV with the command "lvcreate -n new-root -L X /dev/V0" where X is the size of the device (it must be at least as big as the device you are copying) and then copy the data across with a command similar to "dd if=/dev/V0/foo-snap of=/dev/V0/new-root bs=1024k". After the copy is finished you must remove the snapshot with the command "lvremove /dev/V0/foo-snap" (please be very careful when running this command – you really don't want to remove an LV that has important data). Note that in normal operation lvremove will always prompt "Do you really want to remove active logical volume". If you made the new device bigger then you must perform the operations that are appropriate for your filesystem to extend its size to use the new space.
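Putting the LVM method together (V0, foo-root, and new-root are the example names from above; the 4g size and the ext3 resize commands at the end are assumptions, use whatever matches your setup):

  lvcreate -s -L 300m -n foo-snap /dev/V0/foo-root   # snapshot of the source LV
  lvcreate -n new-root -L 4g /dev/V0                 # new LV, at least as big as the source
  dd if=/dev/V0/foo-snap of=/dev/V0/new-root bs=1024k
  lvremove /dev/V0/foo-snap                          # remove the snapshot when done
  # if new-root is bigger than foo-root then extend the filesystem, EG for ext3:
  e2fsck -f /dev/V0/new-root && resize2fs /dev/V0/new-root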
There is no need to copy a swap device, it’s easier to just create a new device and run mkswap on it.
After copying the data you will need to create the new Xen config (by copying /etc/xen/foo to the new name). Make sure that you edit the Xen config file to use the correct block devices, and if you are specifying the MAC address [2] by a "vif" line in the config file make sure that you change the address to one that is unique on your LAN segment (reference [2] has information on how to select addresses).
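For example a vif line of the following form could be used (the 00:16:3e prefix is the range allocated for Xen guests; the last three octets and the bridge name are just examples):

  vif = [ 'mac=00:16:3e:12:34:56, bridge=xenbr0' ]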
Now you must mount the filesystem temporarily to change the IP address (you really don't want two DomU's with the same IP address). If your Dom0 has untrusted users or services that are accessed by untrusted users (IE any Internet facing service) then you want to mount the filesystem in question with the options nosuid and nodev so that if the DomU has been cracked it won't allow cracking of the Dom0. After editing the configuration files to change the IP address(es) of the DomU you can umount the filesystem and start it with the xm create command.
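A sketch of that process for a Debian style DomU (the mount point is arbitrary, the files to edit depend on the distribution inside the DomU, and "bar" is assumed to be the name of the copied config):

  mkdir -p /mnt/bar-root
  mount -o nosuid,nodev /dev/V0/new-root /mnt/bar-root
  vi /mnt/bar-root/etc/network/interfaces   # change the IP address(es)
  vi /mnt/bar-root/etc/hostname             # probably change the hostname too
  umount /mnt/bar-root
  xm create -c bar                          # /etc/xen/bar is the copied config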
If instead of creating the clone DomU on the same Dom0 you want to put it on a different system you can copy the block devices to files on a regular filesystem on removable media (EG an IDE disk with USB attachment). When copying the block devices you also need to copy the Xen configuration and edit it to reflect the new paths to block devices for the data once it’s copied to the new server, but you won’t necessarily need to change the MAC address if you are copying it to a different LAN segment.
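For example (assuming the removable disk is mounted on /mnt/usb, and that the DomU is halted or a snapshot is used as described above so the copy is consistent):

  dd if=/dev/V0/foo-root of=/mnt/usb/foo-root bs=1024k   # block device to a file
  cp /etc/xen/foo /mnt/usb/foo                           # take the Xen config too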
References:
- http://en.wikipedia.org/wiki/XFS
- http://en.wikipedia.org/wiki/MAC_address
A significant problem with the old-fashioned media is that as a general rule they don't cite references for anything. Some of the better TV documentaries and non-fiction books cite references, but this is the exception not the norm. Often documentaries only cite references in DVD extras, which are good for the people who like the documentary enough to buy it but not for people who want to rebut it (few people will pay for a resource if they doubt the truth and accuracy of its claims).
I can understand newspapers not wanting to publish much in the way of background information in the paper version as every extra line of text in an article is a line of advertising that they can't sell. So they have financial pressure to produce less content, and the number of people like me who want to check the facts and figures used in articles is probably a small portion of the readership. Another issue with newspapers is that they are often considered as primary authoritative sources (by themselves and by the readers). It is often the case that journalists will interview people who have first-hand knowledge of an issue and the resulting article will be authoritative and a primary source, in which case all they need to do is to note that they interviewed the subject. However the majority of articles published will be sourced from elsewhere (news agencies [ http://en.wikipedia.org/wiki/News_agency ] such as Reuters are commonly used). Also articles will often be written based on press releases – it is very interesting to read press releases and see how little work is done by some media outlets to convert them into articles; through a well written press release a corporation or interest group can almost write its own articles for publication in the old media.
One way of partially addressing the problem of citing references in old media would be to create a web site of references, then every article could have a URL that is a permanent link to the references and calculations to support the claims and numbers used. Such a URL could be produced by any blogging software, and a blog would be an ideal way of doing this.
For bloggers, however, it's much easier to cite references, and readers have much higher expectations: links to other sites to support claims and mathematical calculations to show how numbers are determined. But there is still room for improvement. Here are some of the most common mistakes that I see in posts by people who are trying to do the right thing:
- Indirect links. When you refer to a site you want to refer to it directly. In email (which is generally considered a transient medium) a service such as TinyURL [ www.TinyURL.com ] can be used to create short URLs to refer to pages that have long URLs. This is really good for email as there are occasions when people will want to write the address down and type it in to another computer. For blogging you should assume that your reader has access to browse the web (which is the case most of the time). Another possibility is to have the textual description of a link include a reference to the TinyURL service but to have the HREF refer to the real address. Any service on the net may potentially go away at some future time. Any service on the net may have transient outages, and any reader of your blog may have routing problems that make parts of the net unavailable to them. If accessing a reference requires using TinyURL (or a similar service) as well as the target site then there are two potential things that might break and prevent your readers from accessing it.
One situation where indirect links are acceptable is for the printed version. So you could have a link in the HTML code for readers to click on to get to the reference page directly and a TinyURL link for people who have a printed version and need to type it in.
Also when linking to a blog it’s worth considering the fact that a track-back won’t work via TinyURL and track-backs may help you get more readers…
- Links that expire. For example never say “there’s a good article on the front page of X” (where X is a blog or news site). Instead say “here’s a link to a good article which happens to be on the front page now” so that someone who reads your post in a couple of years time can see the article that you reference.
Another problem is links to transient data. For example if you want to comment on the features of a 2007 model car you should try to avoid linking to the car manufacturer page, next year they will release a new car and delete the old data from their site.
A related potential problem is Google cache pages, which translate PDF to HTML and highlight relevant terms, and can make it much easier to extract certain information from web pages. It can provide value to readers to use such links, but AFAIK there is no guarantee that they will remain forever. I suggest that if you use them you should also provide the authoritative link so that if the Google link breaks at some future time the reader will still be able to access the data.
- Not giving the URLs of links in human readable form. Print-outs of blog pages will lose links and blog reading by email will also generally lose links (although it would be possible to preserve them). This counts for a small part of your readership but there’s no reason not to support their needs by also including links as text (either in the body or at the end of the post). I suggest including the URL in brackets, the most important thing is that no non-URL text touch the ends of the URL (don’t have it in quotes and have the brackets spaced from it). Email clients can generally launch a web browser if the URL is clear. Note that prior to writing this post I have done badly in this regard, while thinking about the best advice for others I realised that my own blogging needed some improvement.
I am not certain that the practice I am testing in this post of citing URLs inline will work. Let me know what you think via comments, I may change to numbering the citations and providing a list of links in the footer.
- Non-specific links. For example saying "Russell Coker wrote a good post about SE Linux" and referring to my main blog URL is not very helpful to your readers as I have written many posts on that topic and plan to write many more (and there is a chance that some of my future posts on that topic may not meet your criteria of being "good"). Saying "here is a link to a good post by Russell Coker, his main blog URL is here" is more useful; it gives both the specific link (indicating which post you were referring to) and the general information (for people who aren't able to find it themselves, for the case of deleted/renamed posts, and for Google). The ideal form would be "<a href="http://etbe.coker.com.au/whatever">here is a link to a good post by Russell Coker [ http://etbe.coker.com.au/whatever ]</a>, his main blog URL is <a href="http://etbe.coker.com.au/">here [ http://etbe.coker.com.au ]</a>" (note that this is an example of HTML code as a guide for people who are writing their own HTML, people who use so-called WYSIWYG editors will need to do something different).
- Links that are likely to expire. As a rule of thumb if a link is not human readable then the chance of it remaining long-term is low. Companies with content management systems are notorious for breaking links.
- Referencing data that you can’t find. If you use data sourced from a web site and the site owner takes it down then you may be left with no evidence to support your assertions. If data is likely to be removed then you should keep a private copy off-line (online might be an infringement of copyright) for future reference. It won’t let you publish the original data but will at least let you discuss it with readers.
- Referencing non-public data. The Open Access movement [ http://en.wikipedia.org/wiki/Open_access ] aims to make scholarly material free for unrestricted access. If you cite papers that are not open access then you deny your readers the ability to verify your claims and also encourage the companies that deny access to research papers.
An insidious problem is with web sites such as the New York Times [ www.nytimes.com ] which need a login and store cookies. As I have logged in to their site at some time in the past I get immediate access to all their articles. But if I reference them in a blog post many readers will be forced to register (some readers will object to this). With the NYT this isn’t such a problem as it’s free to register so anyone who is really interested can do so (with a fake name if they wish). But I still have to keep thinking about the readers for such sites.
I should probably preview my blog posts from a different account without such cookies.
- Failing to provide calculations. My current procedure is to include the maths in my post. For example, if you have a 32bit data type used to store a number of milliseconds then it can store 2^32/1000 seconds, which is 2^32/1000/60/60/24 = 49.7 days; in this example you can determine with little guessing what each of the numbers represents. For more complex calculations an appendix could be used. A common feature of blogs is the ability to have a partial post sent to the RSS feed, with the author determining where the post gets cut. So you could cut the post before the calculations: the people who want to see them will find they are only one click away, and the people who are happy to trust you will have a shorter post.
- Linking with little reason. Having a random word appear highlighted with an underline in a blog post is often not very helpful for a reader. It sometimes works for Wikipedia links where you expect that most readers will know what the word means but you want to link to a reference for the few who don't (my link for the word Wikipedia is an example). When most readers are expected to know what you are referring to, citing the link fully (with a description of the link and a human-readable form for an email client) is overkill and reduces the readability of the text.
The blogging style of “see here and here for examples” does not work via email and does not explain why a reader should visit the sites. If you want to include random links in a post then having a section at the footer of related links would probably be best.
- Linking to a URL as received. Many bloggers paste URLs from Google, email, and RSS feeds into their blog posts. This is a bad idea because it might miss redirection to a different site. If a Google search or an email gives you a URL that is about to go away then it might redirect to a different site. In that case citing the new URL instead of the old one is a service to your readers and will decrease the number of dead-links in your blog over the long-term. Also using services such as www.feedburner.com may cause redirects that you want to avoid when citing a blog post, see my previous post about Feedburner [ http://etbe.coker.com.au/2007/08/20/feedburner-item-link-clicks/ ].
Here are some less common problems in citing posts:
- Inappropriately citing yourself. Obviously if there is a topic that you frequently blog about then there will be benefit to linking to old posts instead of covering all the background material, and as long as you don’t go overboard there should not be any problems (links to your own blog are assumed to have the same author so there is no need for a disclaimer). If you write authoritative content on a topic that is published elsewhere then you will probably want to blog about it (and your readers will be interested). But you must mention your involvement to avoid giving the impression that you are trying to mislead anyone. This is particularly important if you are part of a group that prepares a document, your name may not end up on the list of authors but you have a duty to your readers to declare this.
Any document that you helped prepare cannot be used by itself to support claims that you make in a blog post. You can certainly say "I have previously demonstrated how to solve this problem, see the following reference". But links with comments such as "here is an example of why X is true" are generally interpreted as partly demonstrating the popular support for an idea.
- Citing secret data. The argument “if you knew what I know then you would agree with me” usually won’t be accepted well. There are of course various levels of secrecy that are appropriate. For example offering career advice without providing details of how much money you have earned (evidence of one aspect of career success) is acceptable as the readers understand the desire for some degree of financial secrecy (and of course in any game a coach doesn’t need to be a good player). Arguing the case for a war based on secret data (as many bloggers did) is not acceptable (IMHO), neither is arguing the case for the use of a technology without explaining the science or maths behind it.
- Not reading the context of a source. For example I was reading the blog of a well regarded expert in an area of computer science, and he linked to another blog to support one of his claims. I read the blog in question (more than just the post he cited) and found some content that could be considered to be racially offensive and much of the material that I read contained claims that were not adequately supported by facts or logic. I find it difficult to believe that the expert in question (for whom I have a great deal of respect) even casually inspected the site in question. In future I will pay less attention to his posts because of this. I expect a blogger to pay more attention to the quality of their links than I do as a reader of their blog.
While writing this post I realised that my own blogging can be improved in this regard. Many of my older posts don’t adequately cite references. If you believe that any of my future posts fail in this regard then please let me know.
I’ve found a reasonably good free spider for checking for dead links, www.dead-links.com. When I told it to spider my main site www.coker.com.au it worked well and informed me of several dead links that I fixed. When I told it to spider my blog it reported an error 404 and didn’t give any useful output. Strangely it was able to display a thumbnail picture of my main blog page that correctly had all data.
I was wondering if it didn't like WordPress so I tested it out on Chris Samuel's blog [ www.csamuel.org ]. It had no problems with his blog, so I suspected that the issue was the time taken to serve my pages. I reduced the number of blog posts shown on the front page of my blog to 1 to speed things up (which resulted in my page loading faster than his) but the dead-links.com spider still didn't like my blog.
I now suspect that it’s something related to me using WordPress 2.2. I would be interested in feedback from other people who try checking their blogs for dead links, whether they are using WordPress 2.2 or something else.
In any case it’s a useful service and I recommend using it.
I previously posted about Interesting Ideas from George Monbiot, one of which was to establish individual emissions trading.
Gyros Geier disagrees with this and cites the current emission trading schemes as evidence. There are several fundamental differences between George’s idea and the current implementations of emission trading.
The biggest flaw in current emission trading schemes is that the emission credits are assigned to the worst polluters. George is proposing that an equal amount be assigned to all citizens. Assigning credits to the worst polluters is another form of Rent Seeking by the polluting industries. The way to solve these problems through emission trading is to start by fairly assigning the credits (and what better way than to equally distribute them among all citizens) and to then reduce the amounts assigned over time.
Gyros claims that emission trading which gives large credits to people who produce few emissions will cause resources to be used in their name which would not otherwise be used. The solution to this is to assign to each citizen in a country a set of credits equal to the use of someone on the median income. Note specifically that setting credits equal to average use is not the right thing to do, as the vast majority of the population produce significantly fewer emissions than average. The result of such a policy would be that people who produce median emissions (most of whom would be close to the median income) would reduce their emissions as much as possible so that they could sell the credits; they would even have an incentive to spend money to reduce their emissions (for example by installing better insulation in their home) as it would be an investment. Then people who produce more emissions than the median would be forced to buy credits to support their extravagant lifestyle. This would give a significant reduction in emissions (the median income is about half the average income and I presume that the emissions produced are roughly in line with income).
Gyros also makes the startling claim that emissions trading increases emissions. I can't imagine that being possible; in fact I can't imagine how the coal industry could do more damage to the environment if they tried.
Finally, taking a positive approach to blogging is a really good idea. I welcome discussion with people who want to claim that my ideas (and the ideas that I quote) are bad, but if you are going to do this please describe something that you consider to be better.
World News Australia reports that Police forced three tourists to delete photos of a fence. Apparently the officers in question believed that such photos would be a threat to security.
It's interesting to note that the first sentence of the World News Australia report is "Officials say police who forced three tourists to delete photos of a huge security fence erected in Sydney for the APEC summit were not over-reacting" while later in the article it says "The police action against the tourists may have been "over the top" but was necessary, said New South Wales state Transport Minister John Watkins". So which was it? Was it "over the top" or was it "not over-reacting"?
Strangely the World News Australia web site shows a picture of the fence. If this fence is so secret then why are the pictures being published so that millions of people will see it?
For that matter, why even try censoring the pictures? Censoring them effectively issues a challenge to everyone who has a digital camera and plenty of spare time (which means most university students among others) to get the best photos of the fence (feel free to leave comments with the URLs of your best pics).
While this is happening protesters from Real Action On Climate Change have been protesting at the Loy Yang power station, according to their blog posts it seems that they were partially inspired by the APEC meeting.
It seems that APEC leaders are keen on nuclear power. If they really believe that nuclear power is safe then maybe they should have their meeting at Maralinga. There would be little effort or expense required to secure Maralinga and it wouldn’t disrupt a major city. :-# But seriously if they wanted a secure location for a meeting with no protesters then an aircraft carrier would make an ideal location. The people who need to attend the meeting could get flown to a carrier that’s off-shore in the territorial waters of one of the countries concerned with a full battle group to deter any other ships from entering the area.
It is reported that Sydney whores are expecting to do a lot of business during the APEC meeting. Maybe this is the reason why they wanted to have their meeting in Sydney (which is rumoured to have the best brothels in Australia). I wonder if they are planning for a future APEC meeting in Bangkok…
I have just given my parents a new computer, and part of the upgrade process lost their email passwords (which were stored in KDE preferences – this seems to happen every time KDE is upgraded). The only password that is not under my control is the password for their Optus account, so I had to get it changed. I got my mother to phone Optus, but they decided that my father had to make the request (not that I would have had any difficulty impersonating him – any of the details that they asked for which I might not have known could have been provided by my mother). So my father requested that my mother be listed as someone who is authorised to make changes to the account.
It turned out that changing a password for a mailbox is a difficult operation and I needed to talk to the phone-support guy. He needed to get permission from my father for this (he didn't seem to realise that my father had just granted my mother full access so that she could authorise such things on future calls). After my father had given permission for the second time I got the password changed; the new password was "changeme". The call-centre guy advised me to change it as "it's a very common password"! An ISP with any clue in their call centre would have the password changed to a semi-random string.
Then there was the process of changing the password. The web site didn’t work at all with Konqueror (the call-centre guy told me that only IE is supported). I used Iceweasel (Firefox) and it allowed me to change the password. The Optus web site is one of the worst I’ve ever seen, and redirecting to the MS site live.com as part of the process was only part of it.
Another bad thing that Optus does is punish customers who do things that they don't like. My original Optus contract specified unlimited uploads, but the first time I actually uploaded a moderate amount of data (running BitTorrent) they disconnected me and claimed that it was a DOS attack. On a later occasion they disconnected me on a Wednesday because I had uploaded some files on the weekend. They alleged that the upload was continuing on the Sunday (although I had a very clear recollection of ending it on the Saturday), but we all agreed that the upload had ceased before the Monday. Apparently I still needed to be disconnected on the Wednesday to teach me a lesson.
The lesson of course is that Optus sucks really badly. They do have reasonable prices and at the time their cable network offered download speeds significantly higher than any ADSL plans could match, which is why I have continued using them (and subscribed again after buying a new house).
Recently ADSL2+ has started to become common and prices have been falling. For my use and for most people who I know ADSL is now either cheaper than cable or close enough that it’s worth paying the extra to escape Optus. My parents are on an Optus plan that gives them 100MB per month of data transfer (which counts both download and upload) before they are limited to modem speed. This is enough for them, but I expect that when their Optus contract runs out there will be an ADSL plan that’s not much more expensive and they can cease using Optus too.
I just wish there was a real choice of providers for the base phone service. Currently there is only Telstra and Optus, both of which are expensive and suck. Fortunately Telstra has an insane CEO who is determined to make sure that everyone in Australia learns that Telstra is a nasty monopoly that needs to be broken up or severely constrained.
I’ve been having problems with one of my Xen virtual servers crashing with kernel error messages regarding OOM conditions. One thing I had been meaning to do is to determine how to make a core dump of a Xen domain and then get data such as the process list from it. But tonight I ended up catching the machine after the problem occurred but before the kernel gave OOM messages so I could log in to fix things.
I discovered the following issues:
- 10+ instances of spf-policy.pl (a Postfix program to check the Sender Policy Framework data for a message that is being received), most in D state.
- All my Planet news feeds being updated simultaneously (four of them taking 20M each takes a bite out of 256M for a virtual machine).
- Over 100 Apache processes running in D state.
I think that there is a bug with having so many instances of spf-policy.pl; I've also been seeing warning messages from Postfix about timeouts when running it.
For the Planet feeds I changed my cron jobs to space them out. Now unless one job takes 40 minutes to run there will be no chance of having them all run at the same time.
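For example (the command names are placeholders for whatever updates each Planet installation, and the times only illustrate the idea of staggering the minute fields):

  # fragment of the crontab, with each job starting at a different minute
  5  * * * * update-planet-one
  25 * * * * update-planet-two
  45 * * * * update-planet-three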
For Apache I changed the maximum number of server processes from 150 to 40 and changed the maximum number of requests that each child process may serve to 100 (it used to be a lot higher). If more than 40 requests come in at the same time then the excess ones will wait in the TCP connection backlog (of 511 entries) until a worker process is ready to service the request. While keeping the connections waiting is not always ideal, it's better than killing the entire machine!
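For reference, the corresponding directives (assuming the prefork MPM, and that the per-child request limit is the one that was changed) would be:

  MaxClients           40    # was 150
  MaxRequestsPerChild 100    # was much higher
  # ListenBacklog defaults to 511, which is where the excess connections queue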
Finally I installed my memlockd program so that next time I have paging problems the process of logging in and fixing them will be a little faster. Memlockd locks a specified set of files into RAM so that they won’t be paged out when memory runs low. This can make a dramatic difference to the time taken to login to a system that is paging heavily. It also can run ldd on executables to discover the shared objects that they need so it can lock them too.