Links July 2009

Katherine Fulton gave a TED talk about the future of philanthropy [1]. She started out well with an overview of some of the technical methods, but I felt that the ending was lacking. At the end she made an emotional appeal for people to be philanthropic; it seems to me that you can’t easily convince people to do such things, so it’s best to try to develop the tendencies that people already have in that regard.

My friend Rik van Riel is a member of the Atheists, Agnostics, Skeptics, Freethinkers, Secular Humanists, and the Non-Religious lending team in Kiva [9]. Kiva is one of the philanthropic organisations that Katherine Fulton mentions.

Seth Godin gave an inspiring TED talk about “tribe” leadership [2]. His use of the term “tribe” differs from the common use in that he refers only to a cultural group rather than a kinship group (the more common definition of a tribe). So his definition seems to be best described as the followers of a meme – although it doesn’t sound as cool that way. He has some great ideas about how to motivate groups of people which should be useful for anyone who wants to influence people to get things done.

Phil Zimbardo (who is well known for the infamous Stanford Prison Experiment) gave an insightful TED talk about the psychology of evil [3]. His main point is that good or evil actions are largely determined by the environment. A secondary point he makes is that people should be taught that they can be heroes merely by deciding to refrain from following a crowd into evil, or by doing small things to help others. He suggests that “super heroes” give children bad ideas about heroism. He also co-wrote an interesting paper, The Banality of Heroism [4]. It’s published on the EveryDayHeroism.org site.

Steven Pinker gave an informative TED talk about the decline of violence throughout recorded history [5]. This is clear evidence that people who are nostalgic for a supposedly less violent past are wrong. One interesting part of his lecture concerned society’s changing standards regarding violence. The claims that society has a problem with violence are in part based on those changing standards: it used to be that genocide was well regarded by most people (he cites the Bible for an example of this), but now society is increasingly intolerant of violence, so things generally seem worse.

Steven Pinker also gave an interesting TED talk about his book “The Blank Slate” [6]. He describes some research that reveals the innate traits that are programmed into people and discusses some of the implications.

CNN has an interesting article about a drug that prevents the cravings that recovering alcoholics experience [7]. Of course that is only part of the solution to the problem of alcoholism – but it is a significant part. According to research by Canada’s Centre for Addiction and Mental Health published in The Lancet, 1 in 25 deaths worldwide can be attributed to alcohol [8], so it seems that many lives could be saved by this new use of the drug (it was previously prescribed for other ailments).

Google is about to release a new distribution of Linux with a new GUI that is designed for running the Chrome browser on netbooks [10]. This will be interesting to see; hopefully they will have developed some new way of making a GUI take good advantage of low-resolution screens such as the 800*480 display in the EeePC 701.

Cory Doctorow writes about the issues relating to unauthorised distribution of pre-release movies [11]. When movies are shown for review, cinemas are being forced to confiscate mobile phones from the viewers, which will supposedly prevent unauthorised copies from being made. But apparently most movies are leaked by insiders before the reviewers even get to see them. Also, for the duration of the movie the phones are not stored in a secure manner, which allows a variety of personal data to be accessed by security guards and anyone else who gets to play with the phones.

The Pope has written an encyclical criticising “excessive zeal for protecting knowledge through an unduly rigid assertion of the right to intellectual property, especially in the field of health care” [12]. It’s good to see a religious leader take a stand on a moral issue.

Hating Microsoft

In mailing list discussions I’ve seen Windows users get rather unhappy when people talk about “Hating Microsoft”; this often includes claims that it’s supposedly “unprofessional” to hate one vendor. Some go as far as to claim that it’s a good idea to avoid hiring someone who says that they Hate Microsoft – not that I would want to work for anyone who would reject someone’s CV based on a mailing list discussion.

The thing that they need to understand is that when someone says “I Hate Microsoft” it’s usually in a similar manner to someone saying “I Hate Broccoli” – it’s more of an expression of distaste than real hatred. The IHateMicrosoft.com site has animated pictures resembling nuclear explosions [1], which is good for a laugh (the site also lists some real reasons for avoiding MS). But there doesn’t seem to be any evidence of real hatred for MS; even in the US there doesn’t appear to be anyone wanting to use violence to solve the MS problem.

Abortion doctors are hated, MS isn’t.

The next thing that people need to know is that a significant portion of the “I Hate Microsoft” sentiment comes from people who spend about 40 hours a week being paid to use MS software. I am fortunate that it’s been a few years since I have had to use MS software in any way, and many years since I was forced to use it in any serious way (i.e. anything other than using Windows as an SSH and email client), so I have little immediate need to get angry at them. But people who are forced to use or support MS software on a daily basis will often get unhappy about the situation.

It’s little things like an ActiveX bug that exposes Outlook and Internet Explorer to remote compromise [2] that can really annoy people. There was never a need for ActiveX, and certainly never a need to have it work via email or be enabled by default. But MS released their software to work in that way, and now all the users have to wait patiently for a fix (or scramble for a work-around).

Another issue that seems to get some complaints is the use of terms such as “M$” and “Microsloth” to refer to Microsoft. If that annoys you then please get a grip on yourself! It’s a software company not a religion! Official company documents should have all trademarks spelled correctly, but for casual discussion on a mailing list I think that such slang terms are appropriate. If nothing else you can take it as a declaration of possible bias.

I don’t use such terms, but again that may be because I am fortunate enough to not use MS software. When someone is unable to avoid using inferior software due to the anti-competitive actions of MS it is understandable that they may vent their frustration by misusing trademarked names.

Remember that English is a lot different from any programming language. Using “M$” instead of “Microsoft” will not give a syntax error or an error about using an undeclared variable. The word “hate” has different meanings depending on context.

Sex and Lectures about Computers

I previously wrote about the appropriate references to porn in lectures about Computer Science [1]. It seemed that by providing a short list of all the appropriate ways that porn could be mentioned in a lecture, some people might get the idea that the infinite variety of other potential ways of mentioning it are mostly wrong.

In a separate response to the same incident Matt Bottrell wrote a list of the reasons why he thinks that porn is inappropriate for a conference [2]. One of Matt’s weaker points in that post was “As a parent, I would be outraged if my teenage child attended such a conference to be subjected to pornographic images“. I considered writing a post in response pointing out that I believe the social pressures on teenagers to perform various sex acts appear to be a much greater problem than the risk of occasionally seeing porn. But apart from rumours I heard at one conference regarding a distasteful incident at a party I couldn’t tie that issue to a free software conference, and I was not well enough connected to the gossip network to determine the facts of the party in question.

The free software community seems much more enlightened than the proprietary software community, and the conference environment sets higher standards; I believe that the general reaction to the incidents of porn demonstrates the character of the community. But surely no-one would give a lecture at a conference and advocate “relieving people of their virginity“. If such a thing were to happen then surely it would come from someone who is little known and who lacks experience in giving public lectures.

But it turns out that my expectations were not correct: Richard Stallman (RMS) seriously offended many people with exactly such antics [3]. It’s even more disappointing that people who admire him can’t admit that he stuffed up. I personally have great admiration for all the good work that RMS has done over the course of decades. But I have to say that he’s gone too far this time.

Matthew Garrett suggests either not inviting RMS to give a keynote speech or giving an apology to the audience beforehand [4]. I don’t think it’s a viable option to give an apology for allowing someone to speak at a conference, so I take Matthew’s post as a call to stop inviting RMS to speak at conferences.

Update: Matthew has updated his post to explain that he meant that RMS should give an apology before he is offered any future invitations – not that the conference organisers should apologise to the audience for any offense that he might cause. But as it seems extremely unlikely that RMS will ever back down I don’t think this makes a difference in the end.

I think that this is a very strong measure to take, refraining from inviting someone so influential who has contributed so much is unheard of. But one thing we know about RMS is that he is particularly stubborn. The positive side of this is that he has done a huge amount of work over 30+ years that has benefited many people. The negative side of his obstinacy is that it seems extremely unlikely that he will apologise or agree to amend his behavior. So it seems that there is no reasonable option other than to refrain from inviting him.

A major benefit that a keynote speaker provides to a conference is prestige. It seems to me that many people now regard RMS as a negative reference for the value of a conference. So even conference organisers who don’t think that RMS did anything wrong will probably be less likely to invite him.

I don’t think that I will ever attend another lecture by RMS.

PS If we are going to mention teenagers in regard to such issues, it would be best to mention the age – there is a huge difference between a 13yo and a 19yo, both socially and legally.

Journalism, Age, and Mono

Daniel Stone has criticised the IT journalist Sam Varghese for writing a negative article about a college student [1].

The student in question is 21 years old, which means he is legally an adult in almost every modern jurisdiction that I am aware of (the exception being Italy, where you must be 25 years old to vote in senatorial elections [2]). It’s well known that college students often do stupid things; it’s not at all uncommon for college parties to end up involving the police. When 21yo college students do foolish things that involve breaking the law, is it common for people to defend them because they are only students? I’m pretty sure that the legal system won’t accept such a defense. So while Sam was rather harsh in his comments (and did go a bit far with implying links to GNOME/Mono people), I don’t think it’s inappropriate on the basis of age. That said, a casual glance at life insurance premium tables will show that men who are less than 25 years old are prone to doing silly things, so I won’t hold this against the student in question as I’m sure he will be more sensible in future – I haven’t included his name in this post.

It’s often said that “you shouldn’t do anything that you would be ashamed of if it was described on the front page of a newspaper“, while I think that statement is a little extreme I do think it’s reasonable to try and avoid writing blog posts that you would be ashamed of if a popular blogger linked to it with a negative review. You have to expect that a post titled “Fuck You X” where X is the name of some famous person will get a significant reaction, and no-one can reasonably claim to have not wanted to offend anyone with such a post. Personally I would prefer that when people disagree with me they provide a list of reasons (as Sam did) rather than just a short negative comment with no content (as is more often the case).

Here is Sam’s article [3].

Here is the original version of “Fuck you, Richard Stallman and other GNU/Trolls” [4].

Here is an updated version titled “On Mono and the GPL” [5].

Here is a good rebuttal of the points made in the article [6] by “Cranky Old Nutcase”. Note that this rebuttal is linked from reference [5]; it is a positive sign when someone links to documents that oppose their ideas to allow the reader to get all the facts. One significant fact that Cranky Old Nutcase pointed out, and which was missed by Sam, is that the Indian student wrote “A mentor of mine told me that patents are to prevent companies from getting sued, not to sue companies”, while the Microsoft case against TomTom is conclusive proof that patents ARE for the purpose of suing other companies and they ARE used in such a manner by Microsoft! I wonder whether the “mentor” in question is a Microsoft employee…

On the topic of Mono, I think that Alexander Reichle-Schmehl has the most reasonable and sensible description of the situation regarding Mono in Debian [7].

In spite of the nice dinner they gave me I still don’t trust Microsoft [8].

Released Bonnie++ 1.96

I have released version 1.96 of Bonnie++ in the experimental branch [1].

The main changes are:

  1. Made it compile on Solaris again (version 1.95 broke that)
  2. Now supports more files for the small file creation test (16^10 files is the limit), and it handles an overflow better. Incidentally this will in some situations change the results so I changed the result version in the CSV file.
  3. Fixed some bugs in bon_csv2html and added some new features to give nicer looking displays and correct colors

I still plan to add support for semi-random data and validation of data when reading it back before making a 2.0 release. But 2.0 is getting close.

DomainKeys and OpenSSL have Defeated Me

I have previously written about an error that Valgrind reported in the STL when some string operations were performed by the DKIM library [1]. This turned out to be a bug; Jonathan Wakely filed GCC bug report #40518 [2] about it. Jonathan is one of many very skillful people who commented on that post.

deb http://www.coker.com.au lenny gcc

I’m still not sure whether that bug could actually harm my program. Nathan Myers strongly suggested that it would not impact the correct functionality of the program, but mentioned a possible performance issue (which will hurt me as the target platform is 8 or 12 core systems). Jaymz Julian seems to believe that the STL code in question can lead to incorrect operation and suggested STLport as an alternative. As I’m not taking any chances, I built GCC with a patch from Jonathan’s bug report for my development machines and then built libdkim with that GCC. I created the above APT repository for my patched GCC packages. I also included version 3.4.1 of Valgrind (back-ported from Debian/Unstable) in that repository.

Nathan Myers also wrote: “Any program that calls strtok() even once may be flagged as buggy regardless of any thread safety issues. Use of strtok() (or strtok_r()) is a marker not unlike gets() of ill thought out coding.” I agree; I wrote a program to find such code and have eliminated all such calls that are reachable from my program [3].

I think it’s unfortunate that I have to rebuild all of GCC for a simple STL patch. My blog post about the size of those packages and the time required to rebuild them [4] received some interesting comments; probably the most immediately useful one, from Jonathan Wakely, was to use --disable-bootstrap to get a faster GCC build. Joe Buck noted that the source is available in smaller packages upstream; this is interesting, but unless the Debian developers package it in the same way I will have to work with the large Debian source packages.

I have filed many bug reports against the OpenSSL packages in Debian based on the errors reported by Valgrind [5]. I didn’t report all the issues related to error handling as there were too many. Now my program often crashes when DomainKeys code calls those error functions, so one of the many Valgrind/Helgrind issues I didn’t report may be the cause of my problems. But I can’t report too many bugs at once; I need to give people time to work on the current bug list first.

Another problem I have is that sometimes the libdkim code will trigger a libc assertion on malloc() or free() if DomainKeys code has been previously called. So it seems that the DomainKeys code (or maybe the OpenSSL code it calls) is corrupting the heap.

So I have given up on the idea of getting DomainKeys code working in a threaded environment. Whenever I need to validate a DomainKeys message my program will now fork a child process to do that. If it corrupts the heap while doing so it’s no big deal, as the child process calls exit(0) after it has returned the result over a pipe. This causes a performance loss, but it appears to be less than a factor of 3, which isn’t too bad. From a programming perspective this was fairly easy to implement because a thread of the main program prepares all the data and then the child process can operate on it – it would be a lot harder to implement such things on an OS which doesn’t have fork().

DomainKeys has been obsoleted by DKIM for some time, so all new deployments of signed email should be based on DKIM and systems that currently use DomainKeys should be migrating soon. So the performance loss on what is essentially a legacy feature shouldn’t impact the utility of my program.

I am considering uploading my libdomainkeys package to Debian. I’m not sure how useful it would be as DomainKeys is hopefully going away. But as I’ve done a lot of work on it already I’m happy to share if people are interested.

Thanks again for all the people who wrote great comments on my posts.

Web Hosting After Death

Steve Kemp writes about his concerns for what happens to his data after death [1]. Basically everything will go away when bills stop being paid. If you have hosting on a monthly basis (i.e. a Xen DomU) then when the bank account used for the bill payment is locked (maybe a week after death) the count-down to hosting expiry starts. As noted in Steve’s post it is possible to pay for things in advance, but everything will run out eventually.

One option is to have relatives keep the data online. With hard drives getting bigger all the time it wouldn’t be difficult to backup the web sites for everyone in your family to a USB flash device and then put it online at a suitable place. Of course that relies on having relatives with the skill and interest necessary.

The difficult part is links: if the domain expires then links to the site will be broken. One way of alleviating this would be to host content with Blogger, LiveJournal, or other similar services. But then instead of the risk of a domain being lost you have the risk of a hosting company going bankrupt.

It seems to me that the ideal solution would be to have a hosting company take over the web sites of deceased people and put adverts on them to cover the hosting costs. As the amount of money being spent on Internet advertising will only increase while the costs of hosting steadily go down it seems that collecting a lot of content for advertising purposes would be a good business model. If the web sites of dead people are profitable then they will remain online.

It wouldn’t be technically difficult to extract the data from a blog server such as WordPress (either from a database dump or crawling the web site), change the intra-site links to point to a different domain name, and then put it online as static content with adverts. If a single company (such as Google) had a large portion of the market of hosting the web sites of dead people then when someone died and had their web site transferred the links on the other sites maintained by the same company could be automatically adjusted to match. A premium service from such a company could be to manage the domain. If they were in the domain registrar business it would be easy to allow someone to pay for 10 or 20 years after their death. Possibly with a portion of the advertising revenue going towards extending the domain registration. I think that this idea has some business potential, I don’t have the time or energy to implement it myself and my clients are busy on other things so I’m offering it to the world.

Cory Doctorow has written an article for the Guardian about a related issue – how to allow the next of kin to access encrypted data when someone is dead [2]. One obvious point that he missed is the possibility that he might forget his own password, a small injury from a car accident could cause that problem.

It seems strange to me that someone would have a great deal of secret data that needs strong encryption but yet has some value after they are dead. Archives of past correspondence to/from someone who is dead is one category of secret data that is really of little use to anyone unless the deceased was particularly famous. Probably the majority of encrypted data from a dead person would be best wiped.

For the contents of personal computers the best strategy would probably be to start by dividing the data into categories according to the secrecy requirements. Publish the things that aren’t secret; store a lot of data unencrypted (things that are not really secret but that you merely don’t want to share with the world); have a large encrypted partition whose contents will be lost when you die; and have a very small encrypted device that has bank passwords and other data that is actually useful for the executors of the will.

One thing that we really need is to have law firms that have greater technical skills. It would be good if the law firms that help people draw up wills could advise them on such issues and act as a repository for such data. It seems to me that the technical skills that are common within law firms are not adequate for the task of guarding secret electronic data for clients.

Valgrind and OpenSSL

I’ve just filed Debian bug report #534534 about Valgrind/Helgrind reporting “Possible data race during write” [1]. I included a patch that seems to fix that problem (by checking whether a variable is non-zero before setting it to zero). But on further testing with Valgrind 3.4.1 (back-ported from Debian/Unstable) it seems that my patch is not worth using; in any case I expect that Valgrind related patches won’t be accepted into the Lenny version of OpenSSL.

I would appreciate suggestions on how to fix this. The problem is basically a single static variable that is initialised to the value 1 but set to 0 the first time one of the malloc functions is called. Using a lock for this is not desirable as it will add overhead to every malloc operation. However, without the lock it does seem possible to have a race condition if one thread calls CRYPTO_set_mem_functions() and, before that operation is finished, a time slice is given to a thread that is allocating memory. So in spite of the overhead I guess that using a lock is the right thing to do.

deb http://www.coker.com.au lenny gcc

For the convenience of anyone who is testing these things on Debian and wants to use the latest valgrind, the above Debian repository has Valgrind 3.4.1 and a build of GCC to fix the problem I mentioned in my previous blog post about Valgrind [2].

if (default_RSA_meth == NULL)
        default_RSA_meth = RSA_PKCS1_SSLeay();

I have also filed bug #534656 about another reported race condition in the OpenSSL libraries [3]. Above is the code in question (with some C preprocessor stuff removed). This seems likely to be a problem only on an architecture where assignment of a pointer is not an atomic operation; I don’t know whether we even have any architectures that work in such a way.

static void impl_check(void)   {
        CRYPTO_w_lock(CRYPTO_LOCK_EX_DATA);
        if(!impl)
                impl = &impl_default;
        CRYPTO_w_unlock(CRYPTO_LOCK_EX_DATA);
}
#define IMPL_CHECK if(!impl) impl_check();

My bug report #534683 [4] is about a similar issue in the above code: the IMPL_CHECK macro tests impl without holding the lock. If the macro is changed to just call impl_check() then the problem will go away, but at some performance cost.

I filed bug report #534685 about a similar issue with the EX_DATA_CHECK macro [5].

I filed bug report #534687 about some code that has CRYPTO_w_lock(CRYPTO_LOCK_EX_DATA); before it [6], so it seems that the code may be safe and it may be an issue with how Valgrind recognises problems (maybe a Valgrind bug or an issue with how Valgrind interprets what the OpenSSL code is doing). Valgrind 3.3.1 reported many more issues that were similar to this, so it appears that version 3.4.1 improved the analysis of this but didn’t do quite enough.

I filed bug report #534706 about the cleanse_ctr global variable that is used as a source of pseudo-randomness for the OPENSSL_cleanse() function without locking [7]. It seems that they have the idea that memset() is not adequate for clearing memory. Does anyone know of a good research paper about recovering the contents of memory after memset()? I doubt that we need such things.

I filed bug report #534699 about what appears to be a potential race condition in int_new_ex_data() [8]. The def_get_class() function obtains a lock before returning a pointer to a member of a hash table. It seems possible for an item to be deleted from the hash table (and its memory freed) after def_get_class() has returned the pointer but before int_new_ex_data() accesses the memory in question.

I filed bug report #534889 about int_free_ex_data() and int_new_ex_data(), which call def_get_class() before obtaining a lock and then use the data returned from that function in a locked area [9] (it seems that obtaining the lock earlier would solve this).

I filed bug report #534892 about another piece of code which would have a race condition if pointer assignment isn’t atomic, this time in err_fns_check() [10]. In my first pass I didn’t bother filing bug reports about most of the issues Helgrind raised with the error handling code (there were so many that I just hoped that there was some subtle locking involved that eluded Helgrind and my brief scan of the source). But a new entry in my core file collection suggests that this may be a problem area for my code.

I think that it is fairly important for security related libraries to be clean when used with Valgrind and other debugging tools – if only to allow better debugging of the code that calls them. I would appreciate any assistance that people can offer in terms of fixing these problems. I know that there are security risks in changing code in such important libraries, but there are also risks in leaving potential race conditions in such code.

As an aside, I’ve filed wishlist bug report #534695 requesting that Valgrind have a feature to automatically add entries to the suppressions file [11]. As a function that is considered to be unsafe can be called from different contexts, and code that is considered unsafe can be in a macro that is called from multiple functions, there can be many different suppressions needed. Pasting them all into the suppressions file is tedious.

Microsoft Open Source Information Evening

I have just attended a Microsoft Open Source Information Evening. It was in some ways one of the stranger things that I have experienced in my computer career.

Firstly there was the location: it was in a function room in the CBD, convenient for public transport and with good service, but it seemed likely to be quite expensive. A MS employee said that they believed that some people wouldn’t want to enter an MS office – I can’t imagine why they think that they could convince people who refuse to enter the MS office of anything if they got them to attend. As there were only about 6 people who weren’t from MS, it seems likely that they paid something in excess of $200 per head for each non-MS delegate (I can’t imagine two function rooms, two dedicated hotel employees manning the bar, and a supply of food for a larger audience costing less than $1200).

If they had spent $100 per head for us all to have dinner at a good restaurant then I think that the result would have been better. They might want to consider running targeted meetings in future with a small number of people personally invited to dinner at a good restaurant. That said, the dinner of duck canapes and asian-style chicken noodles that they provided was pretty good.

I suggested that they should find other ways of promoting such events, as the audience was obviously smaller than they desired. One suggestion that I made was that they create a blog about what MS in Australia is doing in relation to Linux and offer the RSS feed URL to the people who run Planet Linux Australia. They were reluctant to accept that idea and stated that they don’t want to be seen to be forcing their presence where they are not wanted. That is a good approach (and a contrast to some activities of MS in the past), but I believe that it is misguided in terms of RSS feeds. When you create a blog you make the RSS feed available, and then the people who run syndication services have the option of using it. The Linux community is on the side of open discussion; I don’t think that we have anything to fear from hearing what MS people have to say. While my opinion of MS has improved this evening, I still have no interest in using any of their software. Linux just works really well and satisfies all of my needs.

There were a bunch of smart MS people there who seemed to really care about their work and want to improve things. Their pitch was about how Open Source software works on Windows: they showed demos of the installation process for a variety of PHP programs and showed Python code being used in an MS web environment. Most of the presentation time involved technologies developed outside of MS; while there was obviously a lot of MS code involved in getting Python, Ruby, PHP, etc working well, the focus was mostly on the free software. They also mentioned some of their work in opening APIs so that free software programs can access Exchange servers (among other things). I didn’t pay a great deal of attention to the technology as I’m never going to use it. I was more interested in their approach, which was positive and respectful, and the general trend of what they are doing.

It seems that there is an increasing number of people within MS who realise that free software is not going away and that their customers demand that things work together.

They also didn’t display any of the arrogance for which MS is known. When one of the delegates predicted that MS would take a fall the way IBM did there was no argument about that possibility, instead there was a discussion about how MS software can be used with software from other sources to meet the current and future needs of customers.

The discussion of software patents was generally not very productive; I got the impression that they were not permitted to give anything that I would have considered to be a good answer to any of the questions. They did show examples of software that they have released under RAND terms for patents, and other situations in which there would be no patent liabilities. But it seems that MS as a whole has no interest in getting any of the patent problems fixed. I can only hope that IBM, NEC, or one of the other big patent companies will give MS a demonstration of why software patents are bad.

Finally I was given a couple of 8GB USB sticks and a copy of MS Expression Studio 2. If anyone wants the unopened copy of Expression Studio they can make me an offer by email.

Unreasonably Large Source Packages

For the past few hours I’ve been doing a build of the GCC packages on a dual-core Opteron system with 2.5G of RAM and a pair of reasonably fast SATA disks in a RAID-1 array. The machine is reasonably powerful, so presumably such a build would take significantly longer on a laptop or an older machine – my primary development machine is an old laptop and is thus unsuitable for such things.

My aim is to do a build with the patch for GCC bug 40518 [1] – which is a small patch to the STL.

Presumably the people who are seriously involved in GCC development don’t do this; they would build only the small subset that matches the code they are working on. But as someone who is not involved in the project, such an approach doesn’t seem viable; by using the Debian build tools to rebuild all packages from the source package I can reliably get a good build.

It would be convenient if these large source packages could be split into smaller packages. It shouldn’t be necessary to compile the C compiler (presumably with the full double-compile bootstrap process), as well as the C++, Objective C, and Fortran compilers, when I only want to compile the STL (libstdc++). It also shouldn’t be necessary for me to hack around a build system when all I want to do is apply and test a single patch.

It seems to me that the current situation discourages contributions. If I can build a package in a reasonable amount of time on my laptop (Pentium-M 1.7GHz with 1.5G of RAM) then I can work on it at any time and in any place. If it requires hours of build time on my biggest machine then I can only work on it when at home and only when I have hours to spare (or if I have enough of a need to come back to it the next day).

So if there are two bugs that have equal importance to me and one of them happens to be in part of the GCC family then the probability that I will work on the GCC bug is close to zero.

I realise that packaging GCC etc is really hard work. But it seems that making it easier for more people to contribute would alleviate the burden slightly.