It’s generally accepted that certain things need redundancy. RAID is regarded as essential for every server except the corner case of compute clusters where a few nodes can go offline without affecting the results (e.g. Google’s servers). Having redundant network cables with some sort of failover system between big switches is regarded as a good idea, and multiple links to the Internet are regarded as essential for every serious data-center and are gaining acceptance in major corporate offices.
Determining whether you need redundancy for a particular part of the infrastructure comes down to the cost of the redundant device (in terms of hardware and the staff costs of installing it), the cost of not having it available, and the extent to which the expected down-time will be reduced by having some redundancy.
It’s also regarded as a good idea to have more than one person who knows how to run the servers. Jokes are often made about what might happen if a critical person “fell under a bus”, but more mundane things such as the desire to take an occasional holiday or a broken mobile phone can require a backup person.
One thing that doesn’t seem to get any attention is redundancy in the machine used for system administration. I’ve been using an EeePC [1] for supporting my clients, and it’s been working really well for me. Unfortunately I have misplaced the power supply, so I need to replace the machine (if only for the time it takes to find the PSU). I have some old Toshiba Satellite laptops; they are quite light by laptop standards (though still heavier than the EeePC) and they only have 64M of RAM, but as mobile SSH clients they will do well. So my next task is to set up a Satellite as a backup machine for my network support work.
It seems that this problem is fairly widespread. I’ve worked in a few companies with reasonably large sysadmin teams. The best managed one had a support laptop that was assigned to the person who was on-call outside business hours. That laptop was not backed up (to the best of my knowledge, it was never connected to the corporate LAN so it seems that no-one had an opportunity to do so) and there was no second machine.
One thing I have been wondering is what happens to laptops with broken screens when the repair price exceeds the replacement cost. I wouldn’t mind buying an EeePC with a broken screen if it comes with a functional PSU, I could use it as a portable server.
I was doing some routine sysadmin work for a client when I had to read mail in the system administration mailbox. This mailbox is used for cron job email, communication with ISPs that run servers for the company, and other important things. I noticed that the account was subscribed to some mailing lists related to system administration, the following is from one of the monthly messages from a list server:
Passwords for sysadmin@example.com:
List                         Password // URL
----                         ---------------
whatever-users@example.org   victoria3
That doesn’t seem terribly exciting, unless you know that the password used for the list server happens to be the same as the one used for POP and IMAP access to the account in question, and that it is available as webmail… Of course I didn’t put the real password in my blog post; I replaced it with something conceptually similar and equally difficult to guess (naturally I’ve changed the password). The fact that the password wasn’t a string of 8 semi-random letters and digits is not a good thing, but not really bad on its own. It’s only when the password gets used for third-party servers that you have a real problem.
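As an aside, generating a string of 8 semi-random letters and digits is trivial. A minimal sketch in Python using the standard secrets module (the function name and character set are my own choices for illustration):

```python
import secrets
import string

def random_password(length=8):
    """Generate a password of random letters and digits."""
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

Using a different password for each third-party service matters more than the exact length or alphabet.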
I wonder how many list servers are run by unethical people who use the passwords to gain access to email accounts, and how many hostile parties use such lists of email addresses and passwords when they compromise servers that run mailing lists.
Now there would be an obvious security benefit to not having the list server store the password in clear-text or at least not send it out every month. Of course the down-side to doing that is that it doesn’t give someone like me the opportunity to discover the problem and change the password.

The people who made the above magazine advert gave the burger bun two top halves. But I think that there is actually a demand for such buns, and that it is possible to make them!
Traditional buns have a flat bottom where they rest on a baking tray. One solution to this problem would be to bake in outer space; another would be to develop a rapid baking process that allows baking in a free-fall aeroplane. But both of these would be unreasonably expensive.
It seems that it would be viable to bake double-ended buns by having a rapidly rising column of hot air to suspend the bun. The terminal velocity of a bun would probably not be that high (maybe 60 km/h) and it should be quite easy to have a pipe full of hot air that bakes the buns. As slight variations in the density and shape of the bun would affect the air-flow, it would be necessary to closely monitor the process and adjust the air speed to keep the bun afloat. Manufacturing cheap ovens that use lasers to monitor the position of the bun should not be difficult.
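For what it’s worth, the 60 km/h guess is in the right ballpark. A back-of-the-envelope sketch in Python using the standard drag equation, where the bun’s mass, size, and drag coefficient are all illustrative guesses rather than measurements:

```python
import math

# Assumed values -- illustrative guesses, not measurements:
mass = 0.06        # kg, a typical burger bun
radius = 0.05      # m, radius of the bun's cross-section
drag_coeff = 0.5   # dimensionless, roughly sphere-like
air_density = 1.2  # kg/m^3 at sea level
g = 9.81           # m/s^2

area = math.pi * radius ** 2
# Terminal velocity: v = sqrt(2 m g / (rho * Cd * A))
v = math.sqrt(2 * mass * g / (air_density * drag_coeff * area))
print(f"{v * 3.6:.0f} km/h")  # prints "57 km/h"
```

So an updraft of roughly that speed would be needed to keep the bun suspended.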
This might blow the sesame seeds off the bun, but this problem may also be solvable through careful design of the bun shape to make it less aerodynamic and by strongly attaching the seeds. I’m not sure how you would do this.
P. W. Singer gave an interesting TED talk about the use of robots in war [1]. He briefly covered some of the ethical and social issues related to robot soldiers as well as showing many pictures of existing robots.
Since November 2007 there has been a request for Google Gears to support “Iceweasel” (the Debian name for Firefox due to trademark issues) [2]. Apparently supporting this different name is not easy for the Google people. If you visit the Google Gears Terms and Conditions page [3] then Gears will work with Iceweasel on the i386 platform – but not on AMD64 (or at least not on my Debian/Lenny AMD64 system).
Charles Moore gave a disturbing TED talk about the “Great Pacific Garbage Patch” [4]. Pollution in the oceans from waste plastic is worse than I realised.
Ressuka documented how to solve the “Time went backwards” problem on Xen DomUs [5]. Run “echo jiffies > /sys/devices/system/clocksource/clocksource0/current_clocksource” or add “clocksource=jiffies” to the DomU kernel boot parameter list.
Nassim Taleb [6] has written Ten principles for a Black Swan-proof world [7], this is in regard to the current US financial crisis. It’s worth noting that he made a significant amount of money due to successfully predicting some aspects of the crisis.
James Duncan Davidson has some good advice for speakers based on his experience in filming presentations [8]. Some of the ones that were not obvious to me were:
Take off your name-tag – it doesn’t look good
Stay in the part of the stage with the best light
Almost two years ago I blogged about a strange performance problem with SATA disks [1]. The problem was that certain regions of a disk gave poor linear read performance on some machines, but performed well on machines which appeared to be identical. I discovered what the problem was shortly after that but was prevented from disclosing the solution by an SGI NDA. The fact that SGI no longer exists as a separate company decreases my obligations under the NDA, and the fact that the sysadmins of the University of Toronto published all the most important data entirely removes my obligations in this regard [2].
In their Wiki they write “after SGI installed rubber grommits around the 5 or 6 tiny fans in the xe210 nodes, the read and write plots now look like” and then some graphs showing good disk performance appear.
The problem was that a certain brand and model of disk was particularly sensitive to vibrations. When that model of disk was installed in some machines then the vibrations would interfere with disk reads. It seems that there was some sort of harmonic frequency between the vibration of the disk and that of the cooling fans which explains why some sections of the disk were read slowly and some gave normal performance (my previous post has the graphs which show a pattern). Some other servers of the same make and model didn’t have that problem, so it seemed that some slight manufacturing differences in the machines determined whether the vibration would affect the disk performance.
One thing that I’ve been meaning to do is to test the performance of disks while being vibrated. I was thinking of getting a large bass speaker, a powerful amplifier, and using the sound hardware in a PC to produce a range of frequencies. Then having the hard disk securely attached to a piece of plywood which would be in front of the speaker. But as I haven’t had time to do this over the last couple of years it seems unlikely that I will do it any time soon. Hopefully this blog post will inspire someone to do such tests. One thing to note if you want to do this is that it’s quite likely to damage the speaker, powerful bass sounds that are sustained can melt parts of the coil in a speaker. So buy a speaker second-hand.
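If anyone does run such a test, the interesting part of the analysis is spotting regions whose throughput drops well below the disk’s norm. A minimal sketch in Python (the function name and the 70% threshold are my own invention), assuming you have already collected per-region throughput numbers from a tool such as dd or zcav:

```python
def slow_regions(throughputs, threshold=0.7):
    """Return indices of disk regions whose read throughput falls
    below a fraction of the median -- candidates for vibration effects."""
    ordered = sorted(throughputs)
    median = ordered[len(ordered) // 2]
    return [i for i, t in enumerate(throughputs) if t < threshold * median]

# Example: region 2 reads at well under 70% of the median speed (MB/s).
print(slow_regions([80.1, 79.5, 42.0, 80.3, 78.9]))  # prints "[2]"
```

Plotting the per-region numbers as in my earlier post would show the same pattern visually.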
If someone in my region (Melbourne) wants to try this then I can donate some old IDE disks. I can offer advice on how to run the tests for anyone who is interested.
Also it’s worth considering that systems which make less noise might deliver better performance.
While shopping at Highpoint [1] today I noticed that they had a new loyalty system. It’s called Mo Rewards [2], for which the real web site is at MoCoMedia.net [3]; it has no link from the main site because they didn’t care enough about their web presence.

The way that Mo works is that everyone gets a free RFID token similar to the two in the above photograph. The token comes with a pseudo-random seven letter code that you have to SMS to register it to your phone. You SMS the code and then receive a confirmation SMS. After that you can wave your token near a detector any time you visit the shopping center and you will receive three SMS messages with discount offers. You can send an SMS with your gender and birth-year to receive more targeted offers. To redeem offers you have to wave your token near a detector at the store so they know who is using the offers.
Then of course once the database knows that you are a regular customer at a certain shop they can send you targeted advertising to entice you to buy from that shop on every visit. I presume that they have some sort of bidding system for adverts from the shops of a similar nature to the Google advertising.
It’s an interesting system and a lot better than most loyalty programs.
One interesting thing about this is that high quality RFID devices are being given out for free. The tokens are quite solidly constructed and could be used for a variety of other purposes. With a quick Google search I couldn’t find anyone offering RFID tags at a reasonable price (the cheapest was $75 for 100 tags – and they were the fragile ones used for marking stock in shops). So a hobbyist who wanted to do some RFID work could buy a cheap reader under one of the demo offers (where you get a reader and a small quantity of keys for a reasonable price) and then collect free RFID tokens from shopping centers. I expect that the number of people who would do such things is small enough to not be statistically significant and therefore not affect the business model. The tags are given out freely with no requirement that you use them for the intended purpose (Mo Rewards) rather than for your own RFID projects.
My SE Linux Play Machine [1] has a file named thanks.txt for users to send messages to me [2].
On a number of occasions people have offered to give me things in exchange for the password for the bofh account (the one with sysadm_r privileges). I’ve been offered stolen credit cards, a Ponzi scheme of root access to servers on the net, and various other stuff. Today I received an amusing joke entry:
Hello Kind Sir,
I am Dr. Adamu Salaam, the the bank manager of bank of africa (BOA) Burkina Faso West
I am sending you this message about the $3.14159 million dollars in bank account number 2718281828450945. I will give you this money in exchange for the password to the ‘bofh’ account.
The amount of money is based on the value of Pi. The account number is based on the mathematical constant e [3].
It’s a pity that the author of that one didn’t sign their real name. Whoever created that should have claimed credit for their work.
TED published an interesting interview with Shai Agassi about electric cars [1]. One idea that I hadn’t heard before is that of moving car batteries between regions as they lose capacity. An old battery for an electric car that can only handle short journeys may be useful in a region where journeys are typically short. On a similar note I expect that in a few decades the less prosperous countries will import old electric vehicles and fit them with 4 or more batteries. Last time I checked the Prius battery pack weighed about 120kg, so the car would be usable with 4 battery packs if driven at low speeds.
Shai Agassi also gave a TED talk on this topic [2]. The real solution for the problem of providing convenient and affordable electric vehicles is to start by recharging the batteries whenever the vehicle is parked (at the office, shopping center, home, etc). Then on the rare occasions when the car is being driven for longer distances and the battery gets flat it can be swapped for a charged battery. They have apparently designed a robot for changing car batteries, so changing the battery would be like driving through a car-wash. He describes this as an economic model that decouples the expensive battery from the car, so you pay for the use of the battery not the ownership – just as with a petrol car you pay for the petrol you use not for a portion of the ownership of an oil well.
He also pointed out that cars produce 25% of the world’s CO2 emissions, so his plan for all electric cars everywhere seems to be an essential part of solving the environmental problems. He then compared this to the UK parliamentary discussion on ending slavery, at the time slaves provided 25% of the energy used by the UK. After a month of discussion the decision was made to make the moral choice and end slavery regardless of the cost.
The New York Times has an article about the Associated Press (AP) trying to gain more control over material that it distributes [1]. The article is not clear on the details.
One noteworthy fact is that the AP apparently don’t like search engines showing snippets of their articles. This should however be an issue for the organisations that license the AP content and redistribute it (newspapers etc); they can use a robots.txt file on their web servers to keep search engines away from their content, and once their traffic drops dramatically they can threaten to boycott AP if they can’t do things properly. Speaking for myself, the majority of the articles I read on major news sites come from Google results; if they stop Google from indexing AP content then I will read a lot less of it. The end of the article says that there is some sort of battle in Europe between Google and newspapers. Has Google stopped respecting robots.txt? How can this be a problem? If someone copies an entire article you can sue them, and you can ask Google not to index your site. That should cover it.
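For reference, blocking crawlers outright and merely suppressing snippets are different mechanisms. A site that licenses AP content could use either of these (illustrative fragments only; the /ap/ path is made up):

```
# robots.txt -- stop well-behaved crawlers fetching the pages at all
User-agent: *
Disallow: /ap/

<!-- Or, in each page's HTML head: stay in the index, show no snippet -->
<meta name="robots" content="nosnippet">
```

The second option keeps the search traffic while addressing the snippet complaint, which is presumably what the licensees would actually want.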
The AP will be going after sites that copy large portions of articles, but this is not news at all. I often see web sites copy my blog content in ways that breach the license terms. As I’m not well equipped to deal with such people I usually try to find an instance of the same splog (spam blog) copying articles from a major news site and report it to employees of the news organisation. They can often get the splogs shut down, sometimes rather quickly.
They are apparently after SEO, they want to get the top entries in search engines for their articles and not have a site that paraphrases the article or quotes it. I don’t think that my blog posts which paraphrase and quote from mainstream media articles are likely to do that, but the newspapers have to deal with the fact that when Slashdot and other popular sites reference their articles then they will lose on SEO. They should be happy that they can win most of the time.
Brendan Scott has a rather harsh take on this [2] which he unfortunately has not explained in any detail. The people who write the news articles for AP get paid for their work and then AP needs to get paid to run a viable business – which is in the public interest. It may be that the AP are doing something really bad, but the New York Times article that Brendan cites doesn’t seem to support any such claim.
The latest news in the Australian IT industry is the new National Broadband Network (NBN) plan [1]. It will involve rolling out Fiber To The Home for 90% of the population, and the plan is that it will cost the government $43,000,000,000, making it the biggest Australian government project to date. Kevin Rudd used Twitter to say “Just announced biggest ever investment in Australian broadband – really exciting, infrastructure for the future” [2].
Now whenever someone says that a certain quantity of a resource is enough, you can expect someone to try to refute the claim by citing Bill Gates’ supposed statement that “640K is enough”, referring to the RAM limits of the original IBM PC. As an aside, it’s generally believed that Bill Gates never actually claimed that 640K would be enough RAM; Wikiquote has him denying that he ever said any such thing [3]. He did however say that he had hoped it would be enough for 10 years. I think I needed that disclaimer before stating that broadband speeds in Australia seem high enough at the moment.
In any computer system you will have one or more limited resources that act as bottlenecks on overall performance. Adding more of other resources will often make no difference to performance that a user might notice.
On the machine I’m using right now to browse the web the bottleneck is RAM. A combination of bloated web pages and memory-inefficient web browsers uses lots of memory; I have 1.5G of RAM and currently there is 1.3G of swap in use, and performance suffers because of it. It’s not uncommon for the machine to page enough that the mouse cursor becomes unresponsive while browsing the web.
My options for getting faster net access on this machine are to add more RAM (it can’t take more than 2G – so that doesn’t gain much), to use a more memory-efficient web browser and X server, or to simply buy a new machine. Dell is currently selling desktop machines with 2G of RAM; as they are 64bit systems and will therefore use more memory than 32bit systems for the same tasks, they will probably give less performance than my 32bit machine with 1.5G of RAM for my usage patterns.
Also the latest EeePC [4] ships with 1G of RAM as standard and is limited to a maximum of 2G, I think that this is typical of Netbook class systems. I don’t use my EeePC for any serious work, but I know some people who do.
Does anyone have suggestions on memory efficient web browsers for Linux? I’m currently using Konqueror and Iceweasel (Firefox). Maybe the government could get a better return on their investment by spending a small amount of money sponsoring the development of free web browsers. A million dollars spent on optimising Firefox seems likely to provide good performance benefits for everyone.
My wife’s web browsing experience is bottlenecked by the speed of the video hardware in her machine (built-in video on a Dell PowerEdge T105 which is an ATI ES1000). The recent dramatic price reductions of large TFT monitors seem likely to make video performance more of an issue, and also increases the RAM used by the X server.
Someone who has reasonably good net access at the moment will have an ADSL2+ connection and a computer that is equivalent to a low-end new Dell machine (which is more powerful than the majority of systems in use). In that case the bottleneck will be in the PC used for web browsing if you are doing anything serious (e.g. having dozens of windows open, including PDFs and other files that are commonly loaded from the web). If however a machine was used simply for downloading web pages with large pictures in a single session then FTTH would provide a real benefit. Downloading movies over the net would also benefit a lot from FTTH. So it seems to me that browsing the web for research and education (which involves cross-referencing many sites) would gain more of a benefit from new hardware (which will become cheap in a few years) while porn surfing and downloading movies would gain significantly from FTTH.
The NBN will have the potential to offer great bi-directional speeds. The ADSL technology imposes a limit on the combination of upload and download speeds, and due to interference it’s apparently technically easier to get a high download speed. But the upload speeds could be improved a lot by using different DSLAMs. Being able to send out data at a reasonable speed (20Mbit/s or more) has the potential to significantly improve the use of the net in Australia. But if the major ISPs continue to have terms of service prohibiting the running of servers then that won’t make much difference to most users.
Finally there’s the issue of International data transfer which is slow and expensive. This is going to keep all affordable net access plans limited to a small quota (20G of downloads per month or less).
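To put that quota in perspective, a back-of-the-envelope calculation in Python (the 100Mbit/s link speed is an assumption for illustration, not a published NBN figure):

```python
quota_gb = 20      # monthly download quota in decimal gigabytes
speed_mbit = 100   # assumed fibre link speed, Mbit/s

quota_bits = quota_gb * 8 * 1000 ** 3
seconds = quota_bits / (speed_mbit * 10 ** 6)
print(f"{seconds / 60:.0f} minutes")  # prints "27 minutes"
```

A link that can exhaust a month’s quota in under half an hour of full-speed use makes the quota, not the speed, the effective limit.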
It seems to me that the best way of spending taxpayer money to improve net access would be to provide better connectivity to the rest of the world through subsidised International links.
Brendan makes an interesting point that the NBN is essentially a subsidy to the entertainment industry and that copyright law reform should be a higher priority [5].