There has been ongoing debate in the Debian community for a number of years about what standards of behavior should be expected. Matthew Garrett sets a new low by making a joke about Jesus being molested as a child [1]. While I believe that debate and discussion about religion is a good thing, such comments about someone who is widely regarded as a God (part of the Holy Trinity) seem to provide no value and just needlessly offend people. I used to be a Christian, and while I have great disagreements with most Christians about issues of religion I still believe that Jesus (as described in the Bible) was a good person and deserves some respect. I don’t believe that blasphemy should be illegal, but some minimum standards should be observed when discussing religion.
Next there is the issue of child molestation: most people agree that there’s nothing amusing in that, so I hope that nothing more needs to be said about violating babies.
Finally there is the issue of rape in general often being treated as a joke in the computer industry (I am not sure how prevalent this is in the wider community). One example is from a Wired article: “We were raped by Microsoft. Bill Gates did it personally. Introducing Gates to the president of a small company is like introducing Mike Tyson to a virgin” [2]. I admit that finding examples of this on the web is not easy; part of the reason is that such slang is more common in spoken communication than in written communication, and another is the vast number of slang terms that are used.
A Google search for “male rape” [3] turns up some informative articles. One FAQ suggests that 3% of men will be raped as adults [4] – I expect that some guys will decide that it’s not so funny when they realise that it could happen to them.
For people whose knowledge of the English language is not as good as that of Matthew and me, here is the dictionary definition of the word “violated” [5].
An issue that causes ongoing discussion is the purpose of a Planet installation such as Planet Debian [1]. The discussion usually seems to take the less effective form of what is “appropriate” content for the Planet or what is considered to be “abuse” of the Planet. Of course it’s impossible to get anything other than a rough idea of what is appropriate if the purpose is not defined, and abuse can only be measured on the most basic technical criteria.
My personal use of Planet Debian and Planet Linux Australia [2] is to learn technical things related to Linux (how to use new programs, tricks and techniques, etc), to learn news related to Linux, and to read personal news about friends and colleagues. I think that most people have some desire to read posts of a similar nature (I have received a complaint that my blog has too many technical posts and not enough personal posts), but some people want to have a Planet with only technical articles.
In a quick search of some planets the nearest thing I found to a stated purpose of a Planet installation was in the Wiki page documenting Planet Ubuntu [3], which says ‘Subscribed feeds ought to be at least occasionally relevant to Ubuntu, although the only hard and fast rule is “don’t annoy people”‘. Planet Perl [4] has an interesting approach: they claim to filter on Perl related keywords. I initially interpreted this to mean that if you are on their list of blogs and you write a post which seems to refer to Perl then it will appear – but a quick browse of the Planet shows some posts which don’t appear to match any Perl keywords. Gentoo has implemented a reasonable system: they have a Universe [5] configuration which has all blog posts by all Gentoo bloggers as well as a Planet installation which only has Gentoo related posts.
It seems to me that a reasonable purpose for Planet Debian would be to have blog feeds which are occasionally specific to Debian and often relevant to Debian. Personal blog posts would be encouraged (but not required). Posts which are incomprehensible or have nothing to say (EG posts which link to another post for the sole purpose of agreeing or disagreeing) would be strongly discouraged, and bloggers would be encouraged to keep links-posts rare.
Having two installations of the Planet software, one for posts which are specific to Debian (or maybe to Debian or Linux) and one for all posts by people who are involved with Debian, would be the best option. Then people who only want to read the technical posts can do so, but other people can read the full list. Most blog servers support feeds based on tag or category (my blog already provides a feed of Debian-specific posts). If we were going to have a separate Planet installation for only technical posts then I expect that many bloggers would have to create a new tag for such posts (for example my posts related to Debian are in the categories Benchmark, Linux, MTA, Security, Unix-tips, and Xen, and the tag Debian is applied to only a small portion of them). But it would be easy to create a new tag for technical posts.
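To illustrate how little work this involves on the Planet side: the Planet aggregator software is driven by an INI style configuration file in which each subscribed feed is a section. A hypothetical fragment (the URL and name are illustrative, not anyone’s real feed) subscribing to a tag-specific feed rather than a full blog feed would look something like this:

```
# hypothetical excerpt from a Planet config.ini
# subscribing to a Debian tag feed instead of the full blog feed
[http://www.example.org/blog/tag/debian/feed/]
name = Example Developer
```

Moving a blogger between the “technical posts only” Planet and the “everything” Planet would then just be a matter of which feed URL appears in which configuration file.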
Ubuntu is also the only organisation I’ve found to specify conditions under which blogs might be removed from the feed, they say: We reserve the right to remove any feed that is inaccessible, flooding the page, or otherwise interfering with the operation of the Planet. We also have the right to remove clearly offensive content or content that could trigger legal action.
That is reasonable, although it would be good to have a definition for “flooding the page” (I suggest “having an average of more than two posts per day appear over the period of a week or having posts reappear due to changing timestamps”). Also the “could trigger legal action” part is a minor concern – product reviews are often really useful content on a Planet…
Some time ago my blog was removed from Planet Fedora for some reason. I was disappointed that the person who made that change didn’t have the courtesy to inform me of the reason for their action, and that there is no apparent way of contacting the person who runs the Planet to ask them about it. Needless to say this did not encourage me to write further posts about Fedora.
If a blog has to be removed from a feed due to technical reasons then the correct thing to do is to inform the blogger of why it’s removed and what needs to be fixed before it can be added again.
If a blog is not meeting the content criteria then I expect that in most cases the blogger could be convinced to write more content that matches the criteria and tag it appropriately. Having criteria for some aspects of blog quality and encouraging the bloggers to meet the criteria can only improve the overall quality.
Currently there is a Planet installation on debian.net being recommended which is based on Planet Debian, but with some blogs removed (with no information available publicly or on debian-private as to what the criteria are for removing the blogs in question). It seems to me that if it’s worth using Debian resources to duplicate Planet Debian then it should be done in a way that benefits readers (EG by going to the Planet vs Universe model that Gentoo follows), and that if blogs are going to be removed from the feed then there should be criteria for the removal so that anyone who wants their blog to be syndicated can make whatever changes might be necessary.
Recently a Debian Developer resigned from a position of responsibility in the project by writing a blog post. I won’t name the DD or the position he resigned from as I think that there are general issues which need discussion and specific examples will get in the way (everyone who is seriously involved will know who it is anyway – for those who don’t know, it’s not really exciting).
Also I think that the issue of the scope of a Planet installation is of wider importance than the Debian project, so it would be of benefit for outsiders who stumble upon this to see a discussion of general issues rather than some disagreements within the Debian project.
There has been some mild criticism of the DD in question for announcing his resignation via a blog post. I don’t think that such criticism is warranted. In the absence of evidence to the contrary I’ll assume that the DD in question announced his resignation to the relevant people (probably the team he worked with and the Debian Project Leader) via private email which was GPG signed (if he indeed intended to formally resign).
The resignation of one DD from one of the many positions of authority and responsibility in the project is not going to have a great effect on the work of most DDs. Therefore I don’t think that it was necessarily a requirement to post to the debian-private mailing list (the main list for communication between all developers regarding issues within the project) about this. It was however an issue that was bound to get discussed on debian-private (given that the circumstances of the resignation might be considered to be controversial), so it seems to me that sending an email of the form “here is a blog post I’ve written about my resignation” would have saved some pointless discussion (allowing us to skip the “why didn’t you send email” phase and get right on to the main discussion).
A resignation letter from a public position of responsibility is a significant document. Having such documents stored in publicly accessible places is good for the community. Having a record of all such documents that you have written stored on your own server for reference (by yourself and by other people you work with) is a good thing. Therefore it seems to me that a blog is an ideal place for a resignation letter. There used to be a certain formality in such things, and a letter of resignation was expected to be delivered in the most direct way possible (by hand if convenient) to the person who receives it. If such conventions were followed then a blog post would occur after the receipt of the letter of resignation had been confirmed (possibly in this case a confirmation email from the DPL). But in recent times things have become less formal, and the free software community is particularly informal. So it seems quite appropriate to me to have the blog post come first and the email notification merely contain the URL.
Now a letter of resignation is expected to contain certain specific details. It should say specifically what duties are being resigned (particularly important when a person performs many tasks), it should have a date from which it will take effect, and it might be appropriate to mention issues related to the hand-over of tasks (whether the person resigning is willing to work with their replacements).
The “resignation” (if we should call it that) in question did not contain any of the specific details that I would expect to see in a formal resignation. This indicates to me that it could be interpreted as not being a formal and official resignation, but instead being a post (possibly written in haste while angry) about a situation which may not end up being an official resignation. Until we get some more information we won’t know for sure either way.
This demonstrates one problem with blogs, people usually have a mixture of serious documents and trivial things on the one blog. It can be difficult to determine how seriously to take blog posts. I’m not sure that there can be a good solution to this.
At the moment some people are suggesting that every DD should read Planet Debian [1]. I disagree with that. If there is an issue which is significant and affects the entire project then it should be announced on one of the mailing lists such as debian-private, debian-announce, or debian-devel-announce (and will be announced on one of them eventually even if not by the person closest to the events). Forcing every DD to read a lot of blog posts is not in the best interests of the project. Now we could create a separate Planet installation for such things; there is already Debian Times [2] and Debian Administration [3] which serve as a proof of concept. If there was a Planet installation for important stuff related to Debian which had its content syndicated in various ways (including an email gateway – Feedburner.com provides a quite useful one) then requesting that everyone read its content in some way (either by web browsing, an RSS feed reader, syndication in another Planet, email, or something else) would not be unreasonable. The volume of posts on such a Planet would be quite small (similar to the current announcement mailing lists), so if received by email it wouldn’t fill anyone’s mailbox, and if people visited the web site they would only need to do so every second month if that suited them.
The issue of what types of posts are suitable for Planet Debian is probably going to get raised again soon as a result of this.
Rick Falkvinge (leader of the Swedish Pirate Party) has written his predictions about an economic crash in the US [1]. Predicting that the US economy will crash is no great stretch; its gross failures seem obvious. The Pirate Party [2] is a one-issue political party that is based on reform of intellectual property laws. It derived its name from the term Software Piracy [3] which originally referred to using software without paying for it, but in recent times has been broadened in scope to cover doing anything that copyright holders don’t like. The term “Piracy” is deprecated in the free software community based on the fact that it’s unreasonable to compare armed robbery and murder on the high seas (which still happens today and costs between $US13,000,000,000 and $US16,000,000,000 per year [4]) with copying some files without permission. But that battle has been conclusively lost, so it seems that the mis-use of the term “Piracy” will continue.
The majority of the acts which are considered to be “Piracy” are well accepted by the community, and the acts of the music industry in taking legal action against young children have only drawn more public support for the “Pirate” cause. Such support is increasing the chances of the Swedish Pirate Party getting a seat in parliament at the next election, and has caused the major Swedish parties to change their positions on IP legislation.
Now Rick’s background in Intellectual Property issues causes him to analyse the IP aspects of the current US problems. His claim is that the US economy was trashed during the Vietnam war, has been getting worse ever since, and that the US position on IP legislation is either intentionally or accidentally helping to finance the US while its production of useful things is steadily decreasing. He also claims that some multi-national financial customs (such as using the US dollar for the international oil trade) are propping up the US currency and effectively allowing the US government (and the US residents) to borrow money from the rest of the world.
Dmitry Orlov’s presentation titled “Closing the ‘Collapse Gap’: the USSR was better prepared for collapse than the US” [5] provides some interesting information on what happens during an economic collapse. He also has some specific advice on what can be done (by both governments and individuals) to prepare for an impending collapse. However he doesn’t mention some issues which are important to people like us (although not as important as food, water, and shelter).
On my document blog I’ve got a post with some ideas of how to run an Internet Infrastructure after a medium-scale collapse of the economy as we know it [6].
Wouter points out a mistake in one of my blog posts which was based on old data [1]. My original post was accurate for older distributions of Linux but since then the bug in question was fixed.
Normally when writing blog posts or email I do a quick test before committing the text to avoid making mistakes (it’s easy to mis-remember things). However in this case the bug would dead-lock machines which made me hesitant to test it (I didn’t have a machine that I wanted to dead-lock).
There are two lessons to be learned from this. The most obvious is to test things thoroughly before writing about them (and have test machines available so that tests which cause service interruption or data loss can be performed).
The next lesson is that when implementing software you should try not to have limitations that will affect user habits in a bad way. In the case of LVM, if the tool lvextend had displayed a message such as “Resizing the root LV would dead-lock the system, until locking is fixed such requests will be rejected – please boot from a rescue disk to perform this operation” then I would have performed a test before writing the blog post (as it would be a harmless test to perform). Also on the occasions when I really wanted to resize a root device without a reboot I would have attempted the command in the hope that LVM had been fixed.
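To show the kind of check I mean, here is a minimal sketch (not actual LVM code, and the device name is illustrative) of how a tool could detect that the target LV holds the mounted root filesystem and refuse with a useful message instead of dead-locking:

```
#!/bin/sh
# minimal sketch of a pre-flight check before resizing an LV
TARGET_LV=/dev/vg0/root                           # illustrative target device
# find the device that / is mounted from (a real tool would canonicalise
# names like /dev/mapper/vg0-root before comparing)
ROOT_DEV=$(awk '$2 == "/" {print $1}' /proc/mounts)
if [ "$ROOT_DEV" = "$TARGET_LV" ]; then
    echo "Resizing the root LV would dead-lock the system," \
         "please boot from a rescue disk to perform this operation" >&2
    exit 1
fi
```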
A bug that deadlocks a system is one that will really have an adverse effect on users, both on their habits in future use, and on the probability of them using the software in future. A bug (or missing feature) that displays a warning message will cause much less of a problem.
From now on I will still be hesitant in using lvextend on an LV for a root filesystem on any machines other than the very latest for fear that they will break. The fact that lvextend will sometimes work on the root filesystem and sometimes put the machine offline is a serious problem that impacts my use of that feature.
Most people won’t be in a position to have a bug or missing feature that deadlocks a system, but there are an infinite number of ways that software can potentially interrupt service or destroy data. Having software fail in a soft way such that data is not lost is a significant benefit for users and an incentive to use such software.
I’ve put this post in the WTF category, because having a dead-lock bug in a very obvious use-case of commonly used software really makes me say WTF.
I’ve just read an interesting post on TomsHardware.com about a solar powered PC [1]. It describes all the steps involved in creating a modern high-performance low-power computer.
They have a lot of interesting information. One surprising fact (from page 3) is that the PSUs tested (both for AC and DC input) were more efficient when idle (I expected the greatest efficiency to be when under load).
An AMD processor was chosen due in large part to the fact that chipsets in suitable motherboards used less power. For the CPU itself Intel had a competitive offering but no matching motherboard was power efficient enough (from page 7).
Page 8 documents how using a cooling fan (instead of passive cooling) reduced the power requirements of the CPU to such a degree that it always saved power use overall. Why do CPUs take less power when they are cooler?
Page 9 mentions that a small passively cooled video card can draw 88.5W when idle! That sucks pretty badly, it seems that having a video controller integrated with the motherboard is the way to go if you want to save power.
It’s interesting to note how much energy can be used by RAM. Page 13 shows that the difference between 2*1G and 2*512M can be as much as 3.4W and that the difference between different brands of RAM for the 2*1G can make as much as 1.2W difference. Their final system drew 61W when idle, my latest 64bit system takes 52W when idle [2] (which compares to the 38W of their system without a monitor), so we are talking about 9% of system power being saved by using less RAM or 3% being saved by using a different brand of RAM.
The summary of hard drive power use on page 14 is interesting, the fact that 2.5 inch laptop disks use less power than 3.5 inch desktop disks is hardly surprising, but the difference when idle is very surprising (apparently one of the 3.5 inch disks spends 8W on turbulence and friction in the bearings). It’s unfortunate that they didn’t compare any of the server-class 2.5 inch disks, it was about 6 months before the article was written that HP announced that in future they would cease shipping 3.5 inch disks and only use 2.5 inch disks (I wonder if this is related to all HP’s recent work on server cooling). Rumor has it that many server class 3.5 inch disks have platters that would fit into a 2.5 inch case because at high rotational speeds a larger diameter platter would not be strong enough.
The information on DVD power use on page 15 is quite shocking. From now on when I install machines as servers which don’t have a need for a CD-ROM drive I’ll remove the drive prior to deployment. Even if it saves only 0.47W then it’s still worth doing on a machine which uses less than 40W! An additional benefit of this is that it might speed up the boot process as the system won’t need to check for a bootable CD.
It’s unfortunate that most computer parts don’t have published documentation on how much power they draw. Even if you don’t want to run on solar power there are still significant benefits to saving electricity (including reducing the noise from cooling fans and heat problems in summer). If technical data was published then people could make informed decisions about which parts to buy.
Update: Changed the percentage savings for different types of RAM to be based on the system power use without the monitor. I’m most interested in saving power for servers and for idle desktops (running a desktop machine 24*7 is pretty common) so most of the time the monitor will be turned off.
It’s interesting to note that the power their system uses is about the same as that of a P3 system and could be less if they used a different hard drive.
Anthony Towns writes about using an improved version of jigdo to download CD/DVD images [1]. His improvement is basically to pipeline the operations for better performance.
Jigdo (the Jigsaw download) is a tool to download a set of files and then use them to create a CD or DVD image [2]. The idea is that most web sites that have CD or DVD images also have available the collection of files which comprise the image. This removes the need to store the data twice (wasting disk space on mirrors and in some situations breaking web caching).
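For anyone who hasn’t used it, the usual interface is the jigdo-lite wrapper script, which takes the URL of a .jigdo file, fetches the pieces, and assembles the image. The URL below is purely illustrative:

```
# illustrative invocation, the URL is a placeholder for a real .jigdo file
jigdo-lite http://cdimage.example.org/debian-cd/current/i386/jigdo-dvd/debian-DVD-1.jigdo
```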
I have never used jigdo, and for all Debian installations in recent times (the last few years at least) I download a small net-inst CD image and then have it download the other files from the net. I have Squid set up to cache large objects so this doesn’t waste too much of my precious network bandwidth (which is limited and expensive in Australia).
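For reference, the Squid directives involved in caching large objects are roughly the following; the sizes shown are just the values I would pick for this purpose, not a recommendation:

```
# squid.conf fragment for caching large objects such as .deb packages
# (the sizes are illustrative)
maximum_object_size 700 MB
cache_dir ufs /var/spool/squid 20000 16 256   # 20GB cache, default L1/L2 dirs
```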
Now I’m thinking about what the optimum method for doing installs might be. One thing that would be good would be to support multiple repositories; the packages have unique file names and checksums so it should be possible to check one repository and then check another if the file isn’t available from the first. I don’t mean multiple “deb” lines in the APT configuration. What I would like to do is to have an NFS file server or web server with an archive of downloaded packages and have APT check there first before downloading a file. So APT could get a list of packages from the net and then get the actual files locally if they are available.
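APT doesn’t currently do this for you, but its existing behaviour of skipping downloads for packages that are already sitting in /var/cache/apt/archives with the right checksums gives a rough approximation. A sketch of the workaround (the NFS export path and the package name are purely illustrative):

```
# copy any .deb files already downloaded elsewhere into APT's cache so that
# apt-get only downloads the packages that are genuinely missing
mkdir -p /mnt/debs
mount fileserver:/srv/debian-debs /mnt/debs     # illustrative NFS export
cp /mnt/debs/*.deb /var/cache/apt/archives/
apt-get install postfix                         # example package
```

A caching proxy on the local network achieves a similar effect without the manual copying, but it still doesn’t give the incremental local mirror I describe below.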
The next thing that would be good is the ability to create a CD or DVD image dynamically and to keep all the downloaded files. So I could download files from the repository and create a DVD image with just the packages that I need. Every time I create a DVD image my sub-set of the Debian archives would increase and the number of files actually downloaded in the creation process would be reduced. The effect would be to incrementally create a local mirror of the Debian repository.
Then I would like to see a two-stage DVD install process. I would like to boot from a CD or DVD and start the install and then have it give a list of files needed (which could be stored on a USB device or floppy) to create further CDs or DVDs for installation. One situation where this could have been beneficial was when I was doing an emergency install of CentOS. I did the first part of the install (selecting packages etc) to determine which CDs were needed. It turned out that almost all CDs were needed even though some of the CDs had only a few files that I was installing. If the installer could have written a list of packages to a USB device then I could have downloaded just those packages and got the install working a lot sooner. It seems to me that it’s fairly common to do one test install and then do some dozens of other installs with the same settings. So the ability to create a DVD of exactly the needed files for the other dozens of installs would be a great benefit.
Now this is just random commentary, and unfortunately I don’t have time to do any coding on this. But it seems obvious that something has to be done to improve the situation for developers and IT staff who need some degree of mirroring of the Debian package pool but who can’t do a full mirror. Back in 1996 I was able to mirror Debian over a 28K8 modem link and fit it on what was a reasonable hard drive by the standards of the day (incredibly tiny by today’s standards). Now I can’t practically mirror Debian over an Australian cable broadband connection, and even by the standards of about 4 years ago (the age of most of my hard disks) the space requirements are significant.
I hope this post helps inspire some interest in developing these features. As delays in joining Debian [3] are the topic of the day it should be noted that work on preparing DVD images can easily be done by people who are not DDs. Such software should work from Debian archives without requiring any changes to them, and thus nothing special is needed from the Debian project to start work.
Patrick Winnertz writes about the demotivating effect of unreasonable delays on joining the Debian project [1].
While I agree that things need to be improved in terms of getting people in the project in a timely manner (the suggestion of providing assistants seems good), I don’t think that anyone has a good reason for being demotivated because of this.
I first applied to join Debian in late 1998 or some time in 1999. At the time part of the process of joining was to receive a phone call, but I was living in a hotel and they refused to call me on such a line. I could have easily camped out in the hallway of the hotel (the cheap London hotels often had a pay-phone in the hall and no phones in the rooms) and pretended that it was my own phone with an unlisted number. Unless they refused to allow people with unlisted numbers to join (which seems unlikely), I could have joined then. So it seems that at the time I could only join Debian if I was prepared to lie about my ownership of a phone line.
I wasn’t overly bothered by this – there has never been a shortage of free software projects that need contributions of code. By late 2000 the rules had changed and I joined without needing a phone call. In the mean time I had forked the Bonnie storage benchmark program to form my own project Bonnie++ [2], created Postal – a mail server benchmark suite [3] – and worked on many other things as well.
I have sympathy for the people who apply to become Debian Developers and who have to wait a long time, I’ve been in the same situation myself. But there are plenty of things that you can do in the mean time. Some of the things that you can do are upstream development work, filing bug reports, submitting patches that fix bugs, and writing documentation (all forms including blog posts). Also when projects aren’t yet in Debian it often happens that someone creates unofficial packages, and the person who does this doesn’t need to be a DD. Producing back-ported packages for new versions of programs that are in a stable release can also be done by people who are not DDs. Unofficial and back-ported packages provide less benefit for the project as a whole but considerable benefit for the people who want to use them.
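As a rough illustration of how little project infrastructure this needs, a typical unofficial backport is just a rebuild of the newer source package on the stable release. A sketch, assuming deb-src lines for unstable are configured and with “foo” as a placeholder package name and version:

```
# sketch of building an unofficial backport ("foo" is a placeholder)
apt-get source foo/unstable            # fetch the newer source package
cd foo-1.2.3
dch --local '~bpo' 'Rebuild for the current stable release'
debuild -us -uc                        # build unsigned binary packages
```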
There is a lot of work that can be done to fulfill clause 4 of the Debian Social Contract [4] (Our priorities are our users and free software) which doesn’t require being a Debian developer. It seems to me that if you have the right approach to this and maintain the perspective that Debian is one part of the free software community (and not necessarily the biggest or most significant) then a delay in your application to become a DD won’t be particularly demotivating.
Uwe Hermann has described how to resize a root filesystem after booting from a live-cd or recovery disk [1]. He makes some good points about resizing an LVM PV (which I hadn’t even realised was possible).
The following paragraph is outdated, see the update at the end:
Incidentally it should be noted that if your root filesystem is an LVM logical volume then it can’t be resized without booting from a different device because the way LVM appears to work is that the LV in question is locked, then files under /etc/lvm/ are written, and then the LV is unlocked. If the LV in question contains /etc/lvm then you deadlock your root filesystem and need to press reset. Of course if your root filesystem is on an LV which has been encrypted via cryptsetup (the LV is encrypted not the PV) then a live resize of the root filesystem can work as locking the LV merely means that write-backs from the encryption layer don’t get committed. I’m not sure if this means that data written to an encrypted device is less stable (testing this is on my todo list).
If your root filesystem is on a partition of a hard drive (such as /dev/hda2) then it is possible to extend it without booting from different media. There is nothing stopping you from running fdisk, deleting the partition of your root filesystem, and recreating it with the same starting point but a later end. When you exit fdisk it will call an ioctl() to re-read the partition table; the kernel code counts the number of open file handles related to the device and if the number is greater than 1 (fdisk has one open handle) then it refuses to re-read the table. So you can use fdisk to change the root partition and then reboot to have the change be noticed. After that ext2online can be used to take advantage of the extra space (if the filesystem has a recent enough version of the ext3 disk format).
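A sketch of that sequence (the device names are illustrative, and it obviously assumes there is unallocated space directly after the root partition):

```
# grow /dev/hda2 in place: in fdisk delete partition 2 and recreate it with
# the same starting cylinder and a later end, then write the table and quit
fdisk /dev/hda
# the kernel refuses to re-read the partition table while the disk is in use,
# so the new size is only seen after a reboot
reboot
# after rebooting, grow the mounted ext3 filesystem to fill the partition
ext2online /dev/hda2
```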
One thing he didn’t mention is that if you do need to boot from another device to manipulate your root filesystem (which should be quite rare if you are bold and know the tricks) then you can always use your swap space. To do this you simply run swapoff and then run mkfs on the device that had been used for swap (incidentally there is nothing really special about the swap space, but it often tends to be used for such recovery operations simply because it has no data that persists across a reboot).
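For example (a minimal sketch; the device name is illustrative, and everything in swap is of course discarded):

```
# turn the swap space into a temporary filesystem for recovery work
swapoff /dev/vg0/swap
mkfs -t ext3 /dev/vg0/swap
mkdir /tmp/altroot
mount /dev/vg0/swap /tmp/altroot
```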
The minimum set of files that need to be copied to a temporary filesystem is usually /bin, /sbin, /dev, /lib (excluding all directories under /lib/modules apart from the one related to the kernel you are using), and /etc. Also you should make the directories /proc, /sys, and /selinux (if the machine in question runs SE Linux). The aim is not to copy enough files for the machine to run in a regular manner, merely enough to allow manipulating all filesystems and logical volumes. Often for such recovery I boot with init=/bin/bash as a kernel parameter to skip the regular init and just start with a shell. Note that when you use init=/bin/bash you end up with a shell that has no job control and ^C is not enabled; if you want to run a command that might not terminate of its own accord then the command “openvt /bin/bash” can be used to start another session with reasonable terminal settings.
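Continuing the sketch from above (it assumes the temporary filesystem is mounted at /tmp/altroot as in the previous example; copying all of /lib and then pruning the modules is wasteful but keeps the commands simple):

```
# populate the temporary filesystem with the minimum set of files
cd /tmp/altroot
mkdir proc sys selinux
cp -a /bin /sbin /dev /etc /lib .
# keep only the modules for the running kernel
rm -rf lib/modules/*
cp -a /lib/modules/"$(uname -r)" lib/modules/
```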
I recommend that anyone who wants to work as a sys-admin experiment with such procedures on a test machine. There are lots of interesting things that you can learn and interesting ways that you can break your system when performing such operations.
Update: Wouter points out that the LVM bug of deadlocking the root filesystem has been fixed and that you can also use ext2online to resize the mounted filesystem [2].
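So on a system with a fixed LVM the whole operation can now be done online; a minimal sketch, assuming a volume group named vg0 with free extents:

```
# extend the root LV and then grow the mounted ext3 filesystem
lvextend -L +2G /dev/vg0/root
ext2online /dev/vg0/root
```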
Albert writes about software development and how much teamwork is used [1]. He makes an interesting clash of analogies by suggesting that it’s not a “team sport” because “its not like commercial fishing where many hands are used to pull in the net at the same time“.
I think that software development for any non-trivial project is a team sport. You don’t have the same level of direct coordination as required for pulling in a net (or the rugby scrum [2] to use a sporting analogy), but that doesn’t stop it being a team effort.
Some parts of team development projects are like a relay event; in corporate environments the work is done in parallel simply because everyone is working the same hours, but in free software development projects the work is often serialised. I think that it’s often more effective to serialise some work: if someone is actively working on one section of code it may save time to avoid working in that area until they are finished. There is little benefit in writing code to old interfaces.
Some parts of team projects have specialised skill areas (EG debugging, skills in particular programming languages, and graphical design). Soccer is one sport where different rules apply to different players (the goal keeper can use their hands). In ice-hockey the protective clothing used by the goal keeper is considerably different from that used by other players. In most team sports where the aim is to put a ball through a goal at one end (EG basketball and all versions of football) there seems to be some degree of specialisation, some players are dedicated to scoring goals while others are dedicated to defense. The fielding team in cricket has every player assigned to a different part of the field – with slight differences in the skills required.
Then there is the issue of large projects such as Linux distributions. It seems to me that a Linux distribution will always comprise multiple teams as well as some individual projects. Maybe we could consider Linux distributions (and distributions of the other free OSs) to be similar to countries that compete in the Olympics. The culture of the Free Software (or Open Source if that’s your idea) community can be compared to the Olympic Spirit. Of course the Olympic idea that people should come together in peace for the Olympic Games and that it’s about honor not money is pretty much dead.
Maybe the Free Software development processes should be compared to an ideal of what sporting contests would be if there weren’t unreasonable amounts of money (and therefore corruption) involved.
Of course no analogy is perfect and there are many ways in which this one breaks down. One of which is the cooperation between distributions. There is a lot of private discussion between developers of various distributions and upstream developers about how to plan new features. It’s not uncommon for developers to announce certain development decisions as soon as they are made to help other distributions make decisions – for a developer in a distribution project if there is an issue which doesn’t matter much to you or your users then it’s often good to strive for compatibility with other distributions.
When users advocate new features or changes they sometimes try multiple distributions. It’s not uncommon for a feature request to be rejected by one distribution and then accepted by another. Once a feature is included in a major distribution the upstream developer is more likely to accept it due to its wide testing. Then when the feature is in upstream it’s almost certain to be included in all other distributions. I often recommend that when someone disagrees with one of their bugs being closed as “not a bug” they try reproducing it in another distribution and reporting it there. As a side note, the criterion for reporting a bug in any free software distribution is that you can describe it in a way that allows other people to reproduce it – whether it’s a bug that afflicts you every day or whether you installed the distribution for the sole purpose of reporting the bug in a new forum is not relevant. As a general rule I recommend that you not have the same bug report open in more than one distribution at any time (if you notice a bug reported in multiple distributions then please add a note to each bug report so that work can be coordinated). The only situation where I will open the same bug in multiple forums is if I have been told that the responsible person or people in one forum are unwilling or unable to fix it.
Finally, the people who consider that they don’t need to be a team player because they do their coding alone might want to consider Qmail. Dan Bernstein is a great coder and Qmail is by most metrics a fine piece of software, in terms of security Qmail is as good as it gets! If Dan was more of a team player then I believe that his mail server would have been much more successful (in terms of the number of sites using it). However I do understand his desire to have a great deal of control over his software.