|
|
I previously posted about Interesting Ideas from George Monbiot, one of which was to establish individual emissions trading.
Gyros Geier disagrees with this and cites the current emission trading schemes as evidence. There are several fundamental differences between George’s idea and the current implementations of emission trading.
The biggest flaw in current emission trading schemes is that the emission credits are assigned to the worst polluters. George is proposing that an equal amount be assigned to all citizens. Assigning credits to the worst polluters is another form of Rent Seeking by the polluting industries. The way to solve these problems through emission trading is to start by fairly assigning the credits (and what better way than to equally distribute them among all citizens) and to then reduce the amounts assigned over time.
Gyros claims that an emission trading scheme which grants large credits to people who currently produce few emissions will cause resources to be used in their name which they would not otherwise use. The solution to this is to assign to each citizen a set of credits equal to the usage of someone on the median income. Note that setting credits equal to average use is specifically not the right thing to do: the vast majority of the population produce significantly less emissions than the average. The result of such a policy would be that people who produce median emissions (most of whom would be close to the median income) would reduce their emissions as much as possible so that they could sell the credits; they would even have an incentive to spend money to reduce their emissions (for example by installing better insulation in their home) as it would be an investment. People who produce more emissions than the median would then be forced to buy credits to support their extravagant lifestyle. This would give a significant reduction in emissions (the median income is about half the average income and I presume that the emissions produced are roughly in line with income).
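To illustrate why a median-based allocation is tighter than a mean-based one, here is a small sketch (the income figures are invented for illustration, not real statistics): in a right-skewed distribution the median sits well below the mean, so crediting everyone at the median level leaves the big emitters needing to buy.

```shell
awk 'BEGIN {
  # Hypothetical incomes, pre-sorted; the one outlier at the top
  # skews the mean the way real income distributions are skewed
  split("20 25 30 35 40 45 50 60 80 400", v, " ")
  n = 10; sum = 0
  for (i = 1; i <= n; i++) sum += v[i]
  mean = sum / n
  median = (v[5] + v[6]) / 2
  printf "mean=%.1f median=%.1f\n", mean, median
}'
```

With these made-up numbers the median comes out at roughly half the mean, matching the income relationship assumed above.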
Gyros also makes the startling claim that emissions trading increases emissions. I can’t imagine that being possible; in fact I can’t imagine how the coal industry could do more damage to the environment if they tried.
Finally, taking a positive approach to blogging is a really good idea. I welcome discussion with people who want to claim that my ideas (and the ideas that I quote) are bad, but if you are going to do this please describe something that you consider to be better.
World News Australia reports that police forced three tourists to delete photos of a fence. Apparently the officers in question believed that such photos would be a threat to security.
It’s interesting to note that the first sentence of the World News Australia report is “Officials say police who forced three tourists to delete photos of a huge security fence erected in Sydney for the APEC summit were not over-reacting” while later in the article it says “The police action against the tourists may have been “over the top” but was necessary, said New South Wales state Transport Minister John Watkins”. So which was it? Was it “over the top” or was it “not over-reacting”?
Strangely the World News Australia web site shows a picture of the fence. If this fence is so secret then why are the pictures being published so that millions of people will see it?
For that matter, why even try censoring the pictures? Censoring the pictures effectively issues a challenge to everyone who has a digital camera and plenty of spare time (which means most university students among others) to get the best photos of the fence (feel free to leave comments with the URLs for your best pics).
While this is happening protesters from Real Action On Climate Change have been protesting at the Loy Yang power station, according to their blog posts it seems that they were partially inspired by the APEC meeting.
It seems that APEC leaders are keen on nuclear power. If they really believe that nuclear power is safe then maybe they should have their meeting at Maralinga. There would be little effort or expense required to secure Maralinga and it wouldn’t disrupt a major city. :-# But seriously if they wanted a secure location for a meeting with no protesters then an aircraft carrier would make an ideal location. The people who need to attend the meeting could get flown to a carrier that’s off-shore in the territorial waters of one of the countries concerned with a full battle group to deter any other ships from entering the area.
It is reported that Sydney whores are expecting to do a lot of business during the APEC meeting. Maybe this is the reason why they wanted to have their meetings in Sydney (which is rumoured to have the best brothels in Australia). I wonder if they are planning for a future APEC meeting in Bangkok…
I have just given my parents a new computer, and part of the upgrade process lost their email passwords (which were stored in KDE preferences – this seems to happen every time KDE is upgraded). The only password that is not under my control is the password for their Optus account, so I had to get it changed. I got my mother to phone Optus, but they decided that my father had to make the request (not that I would have had any difficulty impersonating him – any of the details that they asked for which I might not have known could have been provided by my mother). So my father requested that my mother be listed as someone who is authorised to make changes to the account.
It turned out that changing a password for a mailbox is a difficult operation and I needed to talk to the phone-support guy. He needed to get permission from my father for this (he didn’t seem to realise that my father had just granted my mother full access so that she could authorise such things on future calls). After my father had given permission for the second time I got the password changed – the new password was “changeme”. The call-centre guy advised me to change it as “it’s a very common password”! An ISP with any clue in their call centre would set the password to a semi-random string.
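Generating a semi-random password takes one line of shell on any Linux box (a sketch; the choice of 10 alphanumeric characters is mine, an ISP could pick any length):

```shell
# Take raw kernel randomness, keep only alphanumeric characters,
# and truncate to a 10-character password
head -c 200 /dev/urandom | tr -dc 'A-Za-z0-9' | head -c 10; echo
```

There is no excuse for a call centre handing out “changeme” when this is all it takes.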
Then there was the process of changing the password. The web site didn’t work at all with Konqueror (the call-centre guy told me that only IE is supported). I used Iceweasel (Firefox) and it allowed me to change the password. The Optus web site is one of the worst I’ve ever seen, and redirecting to the MS site live.com as part of the process was only one of its problems.
Another bad thing that Optus does is punish customers who do things that they don’t like. My original Optus contract specified unlimited uploads, but the first time I actually uploaded a moderate amount of data (running BitTorrent) they disconnected me and claimed that it was a DOS attack. On a later occasion they disconnected me on a Wednesday because I had uploaded some files on the weekend. They alleged that the upload was continuing on the Sunday (although I had a very clear recollection of ending it on the Saturday), but we all agreed that the upload had ceased before the Monday. Apparently I still needed to be disconnected on the Wednesday to teach me a lesson.
The lesson of course is that Optus sucks really badly. They do have reasonable prices and at the time their cable network offered download speeds significantly higher than any ADSL plans could match, which is why I have continued using them (and subscribed again after buying a new house).
Recently ADSL2+ has started to become common and prices have been falling. For my use and for most people who I know ADSL is now either cheaper than cable or close enough that it’s worth paying the extra to escape Optus. My parents are on an Optus plan that gives them 100MB per month of data transfer (which counts both download and upload) before they are limited to modem speed. This is enough for them, but I expect that when their Optus contract runs out there will be an ADSL plan that’s not much more expensive and they can cease using Optus too.
I just wish there was a real choice of providers for the base phone service. Currently there is only Telstra and Optus, both of which are expensive and suck. Fortunately Telstra has an insane CEO who is determined to make sure that everyone in Australia learns that Telstra is a nasty monopoly that needs to be broken up or severely constrained.
I’ve been having problems with one of my Xen virtual servers crashing with kernel error messages regarding OOM conditions. One thing I had been meaning to do is to determine how to make a core dump of a Xen domain and then get data such as the process list from it. But tonight I ended up catching the machine after the problem occurred but before the kernel gave OOM messages so I could log in to fix things.
I discovered the following issues:
- 10+ instances of spf-policy.pl (a Postfix program to check the Sender Policy Framework data for a message that is being received), most in D state.
- All my Planet news feeds being updated simultaneously (four of them taking 20M each takes a bite out of 256M for a virtual machine).
- Over 100 Apache processes running in D state.
I think that there is a bug with having so many instances of spf-policy.pl; I’ve also been seeing warning messages from Postfix about timeouts when running it.
For the Planet feeds I changed my cron jobs to space them out. Now unless one job takes 40 minutes to run there will be no chance of having them all run at the same time.
For Apache I changed the maximum number of processes from 150 to 40 and changed the maximum number of requests that each child process may serve to 100 (it used to be a lot higher). If more than 40 requests come in at the same time then the excess ones will wait in the TCP connection backlog (of 511 entries) until a worker process is ready to service them. While keeping connections waiting is not always ideal, it’s better than killing the entire machine!
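Those settings correspond to something like the following Apache prefork configuration (a sketch from memory; the directive names are the standard Apache 2.x prefork ones, and 511 is the usual ListenBacklog default):

```
# Cap simultaneous worker processes so a request flood queues in the
# TCP backlog instead of exhausting the DomU's 256M of RAM
MaxClients           40
# Recycle each child after 100 requests to limit memory growth
MaxRequestsPerChild  100
# Excess connections wait here until a worker is free
ListenBacklog        511
```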
Finally I installed my memlockd program so that next time I have paging problems the process of logging in and fixing them will be a little faster. Memlockd locks a specified set of files into RAM so that they won’t be paged out when memory runs low. This can make a dramatic difference to the time taken to login to a system that is paging heavily. It also can run ldd on executables to discover the shared objects that they need so it can lock them too.
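A memlockd configuration for this purpose might look like the following (a sketch; I’m assuming the /etc/memlockd.cfg syntax of one file per line, with a leading + requesting that ldd be run on the executable so its shared objects get locked too):

```
# Programs needed to log in and diagnose a thrashing machine
+/bin/login
+/bin/bash
+/usr/bin/top
+/usr/sbin/sshd
# Data files needed for authentication
/etc/passwd
/etc/shadow
```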
One item on my todo list is to set up a bunch of email addresses on sub-domains of domains that I am responsible for (with the consent of all people involved of course) and perform various actions to get the addresses noticed by spammers and measure how effective the various anti-spam measures are. As part of such tests I would click on every URL in every message sent to some accounts and see what difference it makes. My plan is to run a set of Xen virtual machines with different configurations of some common anti-spam measures used in MTAs and see how they fare with sets of accounts with similar publicity. I am not aware of any work having been done in this area (a quick Google search turned up nothing). There are many honeypots for tracking spam sources, matching email address harvesting to spamming, etc. But I’m not aware of any research into the effectiveness of various methods of combatting spam by setting up multiple honeypots. Please inform me via comments if I have missed something!
The most common advice about spam is to NEVER click on the URL that supposedly removes you from a list. By clicking on such a URL the spammer can recognise that you actually read the email and therefore know that it’s a live address and a good target for more spam. I am not aware of any good studies proving this, which is why it’s one of the things I’d like to investigate. A counter theory (for which there is also a lack of evidence AFAIK) is that spammers used to measure delivery etc but now that bot-nets are large and cheap it’s easier to just send mail to all possible addresses.
Even though I am not aware of any great evidence to support the idea I avoid clicking on URLs in spam messages. Refraining from hitting the spam web-sites can’t do any harm (it’s not as if the meager contribution to their system load caused by my web browser will cause them a problem).
But today I was tricked. A spammer subscribed me to a mailman mailing list; as I am subscribed to many lists (about half of which use mailman), the fact that I didn’t recognise the list name didn’t necessarily mean that I hadn’t signed up to it. After signing in I saw the list archives, which had only one post – the spam in question. I unsubscribed (there was no other reasonable option open to me) and sent the mailman message to SpamCop.
This technique will probably be effective for a while. People will think that they subscribed to a list and forgot about it, and that it’s just another list that doesn’t have strong anti-spam measures. That should greatly increase the amount of time taken to black-list the spam server.
So from now on if I receive a spam via a mailing list that I am not familiar with then I’ll send it to SpamCop immediately. Also this is yet another good reason for not subscribing people to mailing lists without their consent (a practice that is far too common – it’s really not difficult to send someone an email asking whether they would like to join the list). If you subscribe me to a list without prior discussion and the first post I receive on the list is a spam then it will be sent to SpamCop and this might result in you being black-listed.
I recently had to decommission an old Linux server and replace it with a new machine. When I was about to turn it off I noticed a power cable of the type used for IDE hard drives leaving the Linux server and entering an NT server that was in the same rack! It turned out that a DAT tape drive used for backup had been shipped without a power cable and they had been forced to take power from another machine. Incidentally, is this likely to risk hardware damage?
So I had to take the NT machine down to fix it. The new cable had arrived so all I had to do was install it. One thing that wasn’t mentioned in the documentation was that the cable was designed to operate as a double-adapter and replace an existing cable. Once the phone support people had explained this (IBM support is really good – they solved the problem well within the SLA) I was able to correctly wire it.
However correct wiring in this case meant having a power cable go through the side of the storage bay and a SCSI cable coming from the back of the case underneath the cooling fan assembly (something like 16 separate hot-swap fans in one assembly that can be removed for maintenance). The DAT drive took up space that could otherwise have been used for three hot-swap SCSI hard drives.
What I would like to know is, why can’t they make hot-swap DAT drives that use the same power and SCSI connectors as the hard drives? I don’t expect a DAT drive to be any more reliable than a hard drive, and when the system backup is mission-critical then down-time is required for a replacement. Not to mention the effort involved in the installation, my fingers are significantly longer than average, I can’t imagine how anyone with average size hands could complete the job!
So IBM, congratulations on the great phone support. But please try to make everything hot-swappable when designing servers. Also while on the topic, I think that servers should be designed with external DVD drives connected via USB. I really hate it when I’ve got 10 * 2U servers in a rack, my system performance is limited by the number of disks, and every single server has space that could be used for at least one disk sitting idle because there is a DVD drive gathering dust. If they were smart about it, the IBM 2U servers in question could be designed with space for 12 hot-swappable disks, or 9 disks and one DAT drive; the current design supports 6 disks, or 3 disks and a DAT.
My post about Why Hydrogen Powered Cars Will Never Work has received a record number of comments. Some of them suggested that carbon geo-sequestration (storing carbon-dioxide at high pressure under-ground) is the solution to the climate change problem. The idea is that you can mix natural gas or coal gas with steam at high temperature to give carbon-dioxide and hydrogen. Then the carbon dioxide gets stored under-ground while the hydrogen is used for relatively clean fuel.
Beyond Zero Emissions has produced a media release about the fallacies expressed in the FutureGen document promoting so-called “clean coal”; the best content is in their PDF document titled FutureGen Conceptual Design Retort. Note that I did some research to support the preparation of the retort; I am not referencing them to support my arguments but as background information.
One overwhelming problem with geo-sequestration for coal based power plants is that it is significantly more expensive than the current coal-fired power plant design. Currently the price difference between coal power and wind power is quite small, and there are several technologies almost ready for production which will decrease the cost of wind power. It is expected that before so-called “clean coal” becomes viable (the first production plants are planned to go live in 2022) the cost of renewable energy will be lower than the current cost of coal power. There is no reasonable possibility of “clean coal” being cheaper than renewable energy.
The underground reservoirs that could be used for storing CO2 currently contain brine, which can contain toxic metals and radioactive substances (according to the Bureau of Land and Water Quality in the US). If toxic and radioactive substances need to be pumped out to make room for CO2 then it’s hardly a clean process!
The US Geological Survey has an interesting page about volcanic gas. Apparently it’s not uncommon for small animals to be killed when CO2 forms pools in low lying areas. If (when?) CO2 escapes from geo-sequestration the same might happen with humans. They also have a page about CO2 killing trees at Mammoth Mountain! Before I read this I never realised that plants could be killed by excessive CO2. Apparently tree roots need oxygen and CO2 in the ground will kill them. The release of 300 tons of CO2 per day killed 100 acres of trees. The FutureGen trial power plant is designed to support sequestration of over 1,000,000 tons of CO2 per year (that is over 2,700 tons per day). If it leaked at 1/9 that rate then damage comparable to Mammoth Mountain would be the result. Note that the FutureGen trial plant will be a fraction of the size of a real coal power station so an escape of significantly less than 1/9 of the CO2 from a real sequestration plant would have such a bad result. It’s interesting to note that tents and basements are documented as CO2 risks, so I guess we have to avoid camping in areas near power plants!
What would happen if a large geo-sequestration project had a sudden failure, i.e. if the reservoir broke and all the CO2 erupted suddenly? We already have an answer to this question because such things have happened in the past. In 1986 in Cameroon 1.2 cubic kilometers of CO2 gas was released from a volcanic lake, that is 2,400,000 tons (or just over two years of output from the proposed FutureGen plant). It killed over 2000 people. What might happen if 10 years of output from a commercial scale coal power plant was suddenly released into the atmosphere?
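The arithmetic behind these comparisons is easy to check (a sketch; the CO2 density of about 1.98 kg per cubic metre at surface conditions is my assumption):

```shell
awk 'BEGIN {
  # FutureGen: 1,000,000 tons of CO2 sequestered per year
  printf "FutureGen: %.0f tons/day\n", 1000000 / 365
  # Cameroon 1986: 1.2 cubic km of CO2 at ~1.98 kg/m^3, in tonnes
  printf "Cameroon release: %.0f tons\n", 1.2e9 * 1.98 / 1000
}'
```

That gives about 2,740 tons per day for FutureGen and about 2.4 million tons for the Cameroon release, matching the figures above.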
As far as I know there has been no research on de-sequestration of CO2. If a reservoir is discovered to be unstable after 20,000,000 tons of CO2 have been stored in it, what will we do?
Geo-sequestration of CO2 makes nuclear power plants seem safe by comparison.
Here is a transcript of a lecture by George Monbiot about climate change and what we need to do. The latest scientific evidence suggests that we need to cut emissions to zero by 2030 to avoid significant increases in the sea level over the next century, George describes some options that will form part of a solution to this problem. Below are my comments on what I consider the most interesting (the ideas that I hadn’t heard of before), I recommend reading the full article for the rest.
- Have a carbon ration for each citizen. Wealthy people who want to use more resources could buy carbon rations from poorer people on an open market. That way people who use less than their ration still have an incentive to save more because the extra savings are worth money! As everyone would then have a financial incentive to reduce emissions there would be a lot of new development of methods and technologies for eliminating or compensating for carbon emissions, capitalism works!
- Build battery powered cars with interchangeable batteries. The idea is that you rent a battery from a fuel company, and whenever it runs low you go to a service station and swap it for a fully charged battery (for a small fee). If doing this the service station could use cheap night-time electricity to charge the batteries, and the batteries that are charged could be used to put electricity back into the grid at times of peak demand. A common idea is to have Prius+ type vehicles charge from the grid when not being used and then sell electricity back to the grid at peak times. Implementing such a system for millions of homes is technically challenging and expensive. But having a much smaller number of service stations sell larger quantities of electricity back to the grid is easier to manage.
- Reduce air travel by 90%. I wonder how much of this can be achieved by using high-speed trains for all national travel systems and for most travel within the EU. I have often travelled between Amsterdam and London by train, it’s much more civilised than flying.
- Classic quote from George on John Howard: “if Howard believes a slight reduction in consumption is a recessionary measure he ought to see what a total reduction of land area would be as a result of the melting of the west Antarctic ice sheet. The two things are just completely out of proportion..”
George Monbiot has also recently released a new book, Heat: How to Stop the Planet from Burning.
I have just been asked for advice on whether SE Linux is Linux specific, and therefore whether code related to SE Linux should always be stored with other Linux specific code instead of being in the main branch of certain free software projects.
One example of SE Linux access controls being implemented on a different OS is the work to port SE Linux to Mac OS/X. Here is a paper on the topic presented at the SE Linux Symposium 2007, and the main site is at http://sedarwin.org. One thing I have been doing is trying to get some friends interested in doing similar work for GNU Hurd (there are some similarities between Darwin and the Hurd, so the work done on Mac OS/X “Darwin” will help the Hurd effort). I believe that the Hurd has the potential to offer significant security benefits due to its micro-kernel design. One significant problem area in computer security is kernel security flaws; if the kernel can be split into a set of independent processes that run with minimal privileges then the scope of such problems is dramatically decreased – and the possibility of upgrading parts of a kernel on a live machine is provided. As people such as Linus point out there is a performance overhead to micro-kernels, but most machines are idle most of the time anyway. I believe that reliability and security are more important than getting the last 10% of system performance on most machines. The success of Xen is evidence that features other than maximum performance are desired.
Another example of SE Linux access controls on a non-Linux platform is the MAC framework in the TrustedBSD project. This implements SE Linux access controls on top of FreeBSD. From reading the documentation it seems that the amount of changes required to the SE Linux code base for implementation on TrustedBSD was significantly smaller than the changes required for Darwin.
Sun is also apparently considering adding type-enforcement to Solaris. It’s yet to be seen whether this happens and if so whether it is compatible with SE Linux.
So it seems that a significant portion of the SE Linux code base is portable, and in particular the user-space code should port well. The interfaces and methods for labelling files etc should port well between platforms. Therefore I recommend not having SE Linux code split into Linux specific trees and instead having a compile option to enable SE Linux support.
I just needed to test something so I mounted the filesystem of one of my Xen domains in the Dom0 and chroot’d into it (I didn’t need the overhead of running a DomU for a quick test). Then strangely I found that my chroot environment had no apt-get and no dpkg installed.
After a small amount of thought I realised that I had accidentally mounted the filesystem for a Fedora image instead of a Debian image.
It seems to me that it might be a good idea for distributions to have shell scripts replacing the package tools for other distributions. For example dpkg on a Fedora box could be a shell script that runs echo “This system is running Fedora, you want to use rpm instead of dpkg”. Such a command would have saved me a couple of minutes of thinking.
This may sound trivial, but if you consider the number of people who make such mistakes (and the even larger number of people who don’t realise that there is even more than one package tool in existence) then wasting a few K of disk space on every system to help them seems like a good idea.
The minimum size of new hard drives that you can purchase seems to be 36G nowadays. So it’s not as if this would really cost anything.
Please leave comments to tell me if I’m being sensible or silly. If the idea is regarded as good I’ll start filing bug reports.
|
|