Release Dates for Debian

Mark Shuttleworth has written an interesting post about Ubuntu release dates [1]. He claims that free software distributions are better able to meet release dates than proprietary OSs because they are not doing upstream development. The evidence that free software distributions generally do a reasonable job of meeting release dates (and Ubuntu does an excellent job) is clear.

But the really interesting part of his post is where he offers to have Ubuntu collaborate with other distributions on release dates. He states that if two out of Red Hat (presumably Enterprise Linux), Novell (presumably SLES), and Debian will commit to the same release date (within one month) and (possibly more importantly) to having the same versions of major components then he will make Ubuntu do the same.

This is a very significant statement. From my experience working in the Debian project and when employed by Red Hat I know that decisions about which versions of major components to include are not taken lightly, and therefore if the plan is to include a new release of a major software project and that project misses a release date then it forces a difficult decision about whether to use an older version or delay the release. For Ubuntu to not merely collaborate with other distributions but to instead follow the consensus of two different distributions would be a massive compromise. But I agree with Mark that the benefits to the users are clear.

I believe that the Debian project should align its release cycles with Red Hat Enterprise Linux. I believe that RHEL is being released in a very sensible manner and that the differences of opinion between Debian and Red Hat people about how to manage such things are small. Note that it would still be possible to have some variation in the version numbers of a few components while mostly sticking to the same versions.

If Debian, Ubuntu, and RHEL released at about the same time with the same versions of the kernel, GCC, and major applications and libraries then it would make it much easier for users who want to port software between distributions and run multiple distributions on the same network or the same hardware.

The Debian Social Contract [2] states that “Our priorities are our users and free software”. I believe that by using common versions across distributions we would help end-users in configuring software and maintaining networks of Linux systems running different distributions, and also help free software developers by reducing the difficulty of debugging problems.

It seems to me that the best way of achieving the goal that Mark advocates (in the short term at least) is for Debian to follow Red Hat’s release cycle. I think that after getting one release with common versions out there we could then discuss how to organise cooperation between distributions.

I also believe that a longer support cycle would be a good thing for Debian. I’m prepared to do the necessary work for the packages that I maintain and would also be prepared to do some of the work in other areas that is needed (EG back-porting security fixes).

Miro AKA DemocracyPlayer

www.ted.com is a premier partner for the Miro player [1]. Miro is a free player for free online content. The site www.getmiro.com has the player available for download, with binaries for Mac OS/X, Windows, and Ubuntu as well as the source (GPL licensed), and it is in Debian/Unstable. It supports downloading in a number of ways (including BitTorrent) and can keep the files online indefinitely. A Debian machine connected to the net could be a cheap implementation of my watching while waiting idea for showing interesting and educational TV in waiting areas of hospitals etc [2]. When I first checked out the getmiro.com site it only seemed to have binaries for Mac OS/X and Windows, but now I realise that it has been in Debian since 11 Sep 2007 under the name Miro and since 12 Jun 2006 under the name Democracyplayer. I have only briefly played with Miro (just checked the channel list) and it seems quite neat so far. I wish I had tried this years ago. Good work Uwe Hermann!

I hope that the Miro player will allow me to more easily search the TED archives. Currently I find the TED site painful to use; a large part of this is slow Javascript which imposes an unreasonable delay on each page before it allows me to do anything. I am not planning to upgrade my laptop to a dual-core 64bit machine just to allow Firefox to render badly written web pages.

Biella recently wrote about the Miro player and gave a link to a documentary about Monsanto [3].

One thing I really like about this trend towards publishing documentaries on the net is that they can be cited as references in blog posts. I’ve seen many blog posts that reference documentaries that I can’t reasonably watch (they were shown on TV stations in other countries and even starting to track them down was more trouble than it was worth). Also when writing my own posts I try to restrict myself to using primary sources that are easy to verify, which means only the most popular documentaries.

The Future of Xen

I’m currently in Xen hell. My Thinkpad (which I won’t replace any time soon) has a Pentium-M CPU without PAE support. I think that Debian might re-introduce Xen support for CPUs without PAE in Lenny, but at the moment I have the choice of running without Xen or running an ancient kernel on my laptop. Due to this I’ve removed Xen from my laptop (I’m doing most of my development which needs Xen on servers anyway).
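As an aside, here is a quick way to check whether a CPU advertises the flag in question (a little sketch of my own, not something from the Xen documentation):

    # Check /proc/cpuinfo for the PAE flag that the stock Xen hypervisor
    # packages require. This is just an illustrative check, nothing more.
    with open("/proc/cpuinfo") as f:
        flags = set()
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    print("PAE supported" if "pae" in flags else "no PAE support")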

Now I’ve just replaced my main home server. It was a Pentium-D 2.8GHz machine with 1.5G of RAM and a couple of 300G SATA disks in a RAID-1. Now it’s a Pentium E2160 1.8GHz machine with 3G of RAM and the same disks. Incidentally Intel suck badly: they are producing CPUs with names that have no meaning, and most of their chipsets don’t support more than 4G of physical address space [1]. I wanted 4G of RAM but the machine I was offered only supported addressing 4G, and 700M of that was used for PCI devices. For computation tasks it’s about the same speed as the old Pentium-D, but it has faster RAM access, more RAM, uses less power, and makes less noise. If I was going to a shop to buy something I probably would have chosen something different to get support for more than 4G of RAM, but as I got the replacement machine for free as a favor I’m not complaining!

I expected that I could just install the new server and have things just work. There were some minor issues such as configuring X for the different video hardware and installing the 915resolution package (which is only needed in Etch) to get the desired 1650×1400 resolution. But for the core server tasks I expected that I could just move the hard drives across and have it work.

After the initial install the system crashed whenever I did any serious hard drive access from Dom0; the Dom0 kernel Oopsed and network access was cut off from the DomUs (I’m not sure whether the DomUs died, but without any way of accessing them it doesn’t really matter much). As a test I installed the version of the Xen hypervisor from Unstable and it worked. But the Xen hypervisor from Unstable required the Xen tools from Unstable, which also required the latest libc6, and therefore the entire Dom0 had to be upgraded. Then in an unfortunate accident unrelated to Xen I lost the root filesystem before I finished the upgrade (cryptsetup in Debian/Unstable warns you if you try to use a non-LUKS option on a device which has been used for LUKS, and would have saved me).

So I did a fresh install of Debian/Unstable. This time it didn’t crash on heavy disk IO; instead it would lock up randomly when under no load.

I’ve now booted a non-Xen kernel and it’s working well. But this situation is not acceptable long-term; a large part of the purpose of the machine is to run virtualisation so that I can test various programs under multiple distributions. I think that I will have to try some other virtualisation technologies. The idea of running KVM on real servers (ones that serve data to the Internet) doesn’t thrill me; Tavis Ormandy’s paper about potential ways of exploiting virtual machine technologies [2] is a compelling argument for para-virtualisation. Fortunately however my old Pentium-3 machines running Xen seem quite reliable (replacing both software and hardware is a lot of pain that I don’t want).

In the near future I will rename the Xen category on my blog to Virtualisation. For older machines Xen is still working reasonably well, but for all new machines I expect that I will have to use something else – and I’ll be blogging about the new machines not the old. I expect that an increasing number of people will be moving away from Xen in the near future. It doesn’t seem to have the potential to give systems that are reliable when running on common hardware.

Ulrich Drepper doesn’t have a high opinion of Xen [3]; the more I learn about it the more I agree with him.

Offensive Blog Posts

There has been ongoing debate in the Debian community for a number of years about what standards of behavior should be expected. Matthew Garrett sets a new low by making a joke about Jesus being molested as a child [1]. While I believe that debate and discussion about religion is a good thing, such comments about someone who is widely regarded as a God (part of the Holy Trinity) seem to provide no value and just needlessly offend people. I used to be a Christian, and while I have great disagreements with most Christians about issues of religion I still believe that Jesus (as described in the bible) was a good person and deserves some respect. I don’t believe that blasphemy should be illegal, but some minimum standards should be observed when discussing religion.

Next there is the issue of child molesting; most people agree that there’s nothing amusing in that, so I hope that nothing more needs to be said about violating babies.

Finally there is the issue of rape in general often being treated as a joke in the computer industry (I am not sure how prevalent this is in the wider community). One example is from a Wired article: “We were raped by Microsoft. Bill Gates did it personally. Introducing Gates to the president of a small company is like introducing Mike Tyson to a virgin” [2]. I admit that finding examples of this on the web is not easy; part of this is due to such slang use being more common in spoken communication than in written communication, and another part is the vast number of slang terms that are used.

A Google search for “male rape” [3] turns up some informative articles. One FAQ suggests that 3% of men will be raped as adults [4] – I expect that some guys will decide that it’s not so funny when they realise that it could happen to them.

For people whose knowledge of the English language is not as good as that of Matthew and me, here is the dictionary definition of the word “violated” [5].

The Purpose of Planet Debian

An issue that causes ongoing discussion is the purpose of a Planet installation such as Planet Debian [1]. The discussion usually seems to take the less effective form of arguing about what is “appropriate” content for the Planet or what is considered to be “abuse” of it. Of course it’s impossible to get anything other than a rough idea of what is appropriate if the purpose is not defined, and abuse can only be measured by the most basic technical criteria.

My personal use of Planet Debian and Planet Linux Australia [2] is to learn technical things related to Linux (how to use new programs, tricks and techniques, etc), to learn news related to Linux, and to read personal news about friends and colleagues. I think that most people have some desire to read posts of a similar nature (I have received a complaint that my blog has too many technical posts and not enough personal posts), but some people want to have a Planet with only technical articles.

In a quick search of some planets the nearest I found to a stated purpose of a Planet installation was the Wiki page documenting Planet Ubuntu [3], which says ‘Subscribed feeds ought to be at least occasionally relevant to Ubuntu, although the only hard and fast rule is “don’t annoy people”’. Planet Perl [4] has an interesting approach: they claim to filter on Perl related keywords. I initially interpreted this to mean that if you are on their list of blogs and you write a post which seems to refer to Perl then it will appear, but a quick browse of the Planet shows some posts which don’t appear to match any Perl keywords. Gentoo has implemented a reasonable system: they have a Universe [5] configuration which has all blog posts by all Gentoo bloggers as well as a Planet installation which only has Gentoo related posts.
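To illustrate the sort of keyword filtering that Planet Perl describes, here is a minimal sketch in Python (the keyword list and the use of the feedparser module are my own assumptions for illustration, not how any particular Planet installation is actually implemented):

    # A rough sketch of keyword/tag based filtering of a blog feed.
    import feedparser

    KEYWORDS = ("debian", "dpkg", "apt")

    def is_relevant(entry):
        # Accept an entry if it is tagged "debian" or mentions a keyword.
        tags = {t.get("term", "").lower() for t in entry.get("tags", [])}
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        return "debian" in tags or any(k in text for k in KEYWORDS)

    def filtered_entries(feed_url):
        return [e for e in feedparser.parse(feed_url).entries if is_relevant(e)]

A real Planet installation would apply something like this when aggregating each subscribed feed, and rejected entries would simply not appear on the page.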

It seems to me that a reasonable purpose for Planet Debian would be to have blog feeds which are occasionally specific to Debian and often relevant to Debian. Personal blog posts would be encouraged (but not required). Posts which are incomprehensible or have nothing to say (EG posts which link to another post for the sole purpose of agreeing or disagreeing) would be strongly discouraged, and bloggers would be encouraged to keep links-posts rare.

Having two installations of the Planet software, one for posts which are specific to Debian (or maybe to Debian or Linux) and one for all posts by people who are involved with Debian, would be the best option. Then people who only want to read the technical posts could do so, but other people could read the full list. Most blog servers support feeds based on tag or category (my blog already provides a feed of Debian-specific posts). If we were going to have a separate Planet installation for only technical posts then I expect that many bloggers would have to create a new tag for such posts (for example my posts related to Debian are in the categories Benchmark, Linux, MTA, Security, Unix-tips, and Xen, and the tag Debian is applied to only a small portion of them). But it would be easy to create a new tag for technical posts.

Ubuntu is also the only organisation I’ve found to specify conditions upon which blogs might be removed from the feed. They say: “We reserve the right to remove any feed that is inaccessible, flooding the page, or otherwise interfering with the operation of the Planet. We also have the right to remove clearly offensive content or content that could trigger legal action.”

That is reasonable, although it would be good to have a definition for “flooding the page” (I suggest “having an average of more than two posts per day appear over the period of a week or having posts reappear due to changing timestamps”). Also the “could trigger legal action” part is a minor concern – product reviews are often really useful content on a Planet…

Some time ago my blog was removed from Planet Fedora for some reason. I was disappointed that the person who made that change didn’t have the courtesy to inform me of the reason for their action and by the fact that there is no apparent way of contacting the person who runs the Planet to ask them about it. Needless to say this did not encourage me to write further posts about Fedora.

If a blog has to be removed from a feed due to technical reasons then the correct thing to do is to inform the blogger of why it’s removed and what needs to be fixed before it can be added again.

If a blog is not meeting the content criteria then I expect that in most cases the blogger could be convinced to write more content that matches the criteria and tag it appropriately. Having criteria for some aspects of blog quality and encouraging the bloggers to meet the criteria can only improve the overall quality.

Currently a Planet installation on debian.net is being recommended which is based on Planet Debian but has some blogs removed (with no information available publicly or on debian-private as to what the criteria are for removing the blogs in question). It seems to me that if it’s worth using Debian resources to duplicate Planet Debian then it should be done in a way that benefits readers (EG by going to the Planet vs Universe model that Ubuntu follows), and that if blogs are going to be removed from the feed then there should be criteria for the removal so that anyone who wants their blog to be syndicated can make whatever changes might be necessary.

Planets and Resignations

Recently a Debian Developer resigned from a position of responsibility in the project by writing a blog post. I won’t name the DD or the position he resigned from, as I think that there are general issues which need discussion and specific examples will get in the way (everyone who is seriously involved will know who it is anyway – for those who don’t know, it’s not really exciting).

Also I think that the issue of the scope of a Planet installation is of wider importance than the Debian project, so it would be of benefit for outsiders who stumble upon this to see a discussion of general issues rather than some disagreements within the Debian project.

There has been some mild criticism of the DD in question for announcing his resignation via a blog post. I don’t think that this is appropriate. In the absence of evidence to the contrary I’ll assume that the DD in question announced his resignation to the relevant people (probably the team he worked with and the Debian Project Leader) via private email which was GPG signed (if he indeed intended to formally resign).

The resignation of one DD from one of the many positions of authority and responsibility in the project is not going to have a great effect on the work of most DDs. Therefore I don’t think that it was necessarily a requirement to post to the debian-private mailing list (the main list for communication between all developers regarding issues within the project) about this. It was however an issue that was bound to get discussed on debian-private (given that the circumstances of the resignation might be considered to be controversial), so it seems to me that sending an email of the form “here is a blog post I’ve written about my resignation” would have saved some pointless discussion (allowing us to skip the “why didn’t you send email” and get right on to the main discussion).

A resignation letter from a public position of responsibility is a significant document. Having such documents stored on publicly accessible places is good for the community. Having a record of all such documents that you have written stored on your own server for reference (by yourself and by other people you work with) is a good thing. Therefore it seems to me that a blog is an ideal place for a resignation letter. It used to be regarded that there was a certain formality in such things, and that a letter of resignation was required to be delivered in the most direct way possible (by hand if convenient) to the person who receives it. If such conventions were followed then a blog post would occur after the receipt of the letter of resignation had been confirmed (possibly in this case a confirmation email from the DPL). But in recent times things have become less formal and the free software community is particularly informal. So it seems quite appropriate to me to have the blog post come first and the email notification merely contain the URL.

Now a letter of resignation is expected to contain certain specific details. It should say specifically what duties are being resigned (particularly important when a person performs many tasks), it should have a date from which it will take effect, and it might be appropriate to mention issues related to the hand-over of tasks (whether the person resigning is willing to work with their replacements).

The “resignation” (if we should call it that) in question did not contain any of the specific details that I would expect to see in a formal resignation. This indicates to me that it could be interpreted as not being a formal and official resignation, but instead being a post (possibly written in haste while angry) about a situation which may not end up being an official resignation. Until we get some more information we won’t know for sure either way.

This demonstrates one problem with blogs, people usually have a mixture of serious documents and trivial things on the one blog. It can be difficult to determine how seriously to take blog posts. I’m not sure that there can be a good solution to this.

At the moment some people are suggesting that every DD should read Planet Debian [1]. I disagree with that. If there is an issue which is significant and affects the entire project then it should be announced on one of the mailing lists such as debian-private, debian-announce, or debian-devel-announce (and will be announced on one of them eventually even if not by the person closest to the events). Forcing every DD to read a lot of blog posts is not in the best interests of the project. Now we could create a separate Planet installation for such things; there are already Debian Times [2] and Debian Administration [3] which serve as a proof of concept. If there was a Planet installation for important stuff related to Debian which had its content syndicated in various ways (including an email gateway – Feedburner.com provides a quite useful one) then requesting that everyone read its content in some way (either by web browsing, an RSS feed reader, syndication in another Planet, email, or something else) would not be unreasonable. The volume of posts on such a Planet would be quite small (similar to the current announcement mailing lists), so if received by email it wouldn’t fill anyone’s mailbox, and if people visited the web site they would only need to do so every second month if that suited them.

The issue of what types of posts are suitable for Planet Debian is probably going to get raised again soon as a result of this.

Preparing for a Collapse

Rick Falkvinge (leader of the Swedish Pirate Party) has written his predictions about an economic crash in the US [1]. Predicting that the US economy will crash is no great stretch; its gross failures seem obvious. The Pirate Party [2] is a one-issue political party that is based on reform of intellectual property laws. It derived its name from the term Software Piracy [3], which originally referred to using software without paying for it but in recent times has been broadened in scope to cover doing anything that copyright holders don’t like. The term “Piracy” is deprecated in the free software community based on the fact that it’s unreasonable to compare armed robbery and murder on the high seas (which still happens today and costs between $US13,000,000,000 and $US16,000,000,000 per year [4]) with copying some files without permission. But that battle has been conclusively lost, so it seems that the mis-use of the term “Piracy” will continue.

The majority of the acts which are considered to be “Piracy” are well accepted by the community, and the acts of the music industry in taking legal action against young children have only drawn more public support for the “Pirate” cause. Such support is increasing the chances of the Swedish Pirate Party getting a seat in parliament at the next election, and has caused the major Swedish parties to change their positions on IP legislation.

Now Rick’s background in Intellectual Property issues causes him to analyse the IP aspects of the current US problems. His claim is that the US economy was trashed during the Vietnam war, has been getting worse ever since, and that the US position on IP legislation is either intentionally or accidentally helping to finance the US while its production of useful things is steadily decreasing. He also claims that some multi-national financial customs (such as using the US dollar for the international oil trade) are propping up the US currency and effectively allowing the US government (and the US residents) to borrow money from the rest of the world.

Dmitry Orlov’s presentation titled “Closing the ‘Collapse Gap’: the USSR was better prepared for collapse than the US” [5] provides some interesting information on what happens during an economic collapse. He also has some specific advice on what can be done (by both governments and individuals) to prepare for an impending collapse. However he doesn’t mention some issues which are important to people like us (although not as important as food, water, and shelter).

On my document blog I’ve got a post with some ideas of how to run an Internet Infrastructure after a medium-scale collapse of the economy as we know it [6].

Bugs and User Practice

Wouter points out a mistake in one of my blog posts which was based on old data [1]. My original post was accurate for older distributions of Linux but since then the bug in question was fixed.

Normally when writing blog posts or email I do a quick test before committing the text to avoid making mistakes (it’s easy to mis-remember things). However in this case the bug would dead-lock machines which made me hesitant to test it (I didn’t have a machine that I wanted to dead-lock).

There are two lessons to be learned from this. The most obvious is to test things thoroughly before writing about them (and have test machines available so that tests which cause service interruption or data loss can be performed).

The next lesson is that when implementing software you should try not to have limitations that will affect user habits in a bad way. In the case of LVM, if the tool lvextend had displayed a message such as “Resizing the root LV would dead-lock the system, until locking is fixed such requests will be rejected – please boot from a rescue disk to perform this operation” then I would have performed a test before writing the blog post (as it would be a harmless test to perform). Also on occasions when I really wanted to resize a root device without a reboot I would have attempted the command in the hope that LVM had been fixed.
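To illustrate the kind of pre-flight check I mean, here is a short sketch (my own, with simplified device handling; the real fix obviously belongs in the LVM tools themselves) that refuses to extend the LV backing the root filesystem instead of letting the system dead-lock:

    # Refuse to extend the logical volume that backs the root filesystem and
    # print a useful message instead of letting the system dead-lock.
    import os, subprocess, sys

    def root_device():
        # Find the block device mounted at / according to /proc/mounts.
        with open("/proc/mounts") as f:
            for line in f:
                dev, mountpoint = line.split()[:2]
                if mountpoint == "/":
                    return os.path.realpath(dev)
        return None

    def safe_lvextend(vg, lv, size):
        lv_dev = os.path.realpath("/dev/%s/%s" % (vg, lv))
        if lv_dev == root_device():
            sys.exit("Resizing the root LV would dead-lock the system - "
                     "please boot from a rescue disk to perform this operation.")
        subprocess.check_call(["lvextend", "-L", size, lv_dev])

    # Example: safe_lvextend("vg0", "home", "+1G")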

A bug that deadlocks a system is one that will really have an adverse effect on users, both on their habits in future use and on the probability of them using the software in future. A bug (or missing feature) that displays a warning message will cause much less of a problem.

From now on I will still be hesitant about using lvextend on an LV for a root filesystem on any machines other than the very latest, for fear that they will break. The fact that lvextend will sometimes work on the root filesystem and sometimes put the machine offline is a serious problem that impacts my use of that feature.

Most people won’t be in a position to have a bug or missing feature that deadlocks a system, but there are an infinite number of ways that software can potentially interrupt service or destroy data. Having software fail in a soft way such that data is not lost is a significant benefit for users and an incentive to use such software.

I’ve put this post in the WTF category, because having a dead-lock bug in a very obvious use-case of commonly used software really makes me say WTF.

Solar Powered PC

I’ve just read an interesting post on TomsHardware.com about a solar powered PC [1]. It describes all the steps involved in creating a modern high-performance low-power computer.

They have a lot of interesting information. One surprising fact (from page 3) is that the PSUs tested (both for AC and DC input) were more efficient when idle (I expected the greatest efficiency to be when under load).

An AMD processor was chosen due in large part to the fact that chipsets in suitable motherboards used less power. For the CPU itself Intel had a competitive offering but no matching motherboard was power efficient enough (from page 7).

Page 8 documents how using a cooling fan (instead of passive cooling) reduced the power requirements of the CPU to such a degree that it always saved power use overall. Why do CPUs take less power when they are cooler?

Page 9 mentions that a small passively cooled video card can draw 88.5W when idle! That sucks pretty badly, it seems that having a video controller integrated with the motherboard is the way to go if you want to save power.

It’s interesting to note how much energy can be used by RAM. Page 13 shows that the difference between 2*1G and 2*512M can be as much as 3.4W and that the difference between different brands of RAM for the 2*1G can make as much as 1.2W difference. Their final system drew 61W when idle; my latest 64bit system takes 52W when idle [2] (which compares to the 38W of their system without a monitor). So we are talking about 9% of system power being saved by using less RAM, or 3% being saved by using a different brand of RAM.
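For reference, here is the arithmetic behind those percentages (a quick check against the 38W no-monitor figure mentioned above):

    # Quick check of the percentages quoted above, based on the 38W figure
    # for their system without a monitor.
    base_watts = 38.0
    print(round(3.4 / base_watts * 100, 1))  # ~8.9%, using 2*512M instead of 2*1G
    print(round(1.2 / base_watts * 100, 1))  # ~3.2%, choosing a different brand of RAM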

The summary of hard drive power use on page 14 is interesting. The fact that 2.5 inch laptop disks use less power than 3.5 inch desktop disks is hardly surprising, but the difference when idle is very surprising (apparently one of the 3.5 inch disks spends 8W on turbulence and friction in the bearings). It’s unfortunate that they didn’t compare any of the server-class 2.5 inch disks; it was about 6 months before the article was written that HP announced that in future they would cease shipping 3.5 inch disks and only use 2.5 inch disks (I wonder if this is related to all HP’s recent work on server cooling). Rumor has it that many server class 3.5 inch disks have platters that would fit into a 2.5 inch case because at high rotational speeds a larger diameter platter would not be strong enough.

The information on DVD power use on page 15 is quite shocking. From now on when I install machines as servers which don’t have a need for a CD-ROM drive I’ll remove the drive prior to deployment. Even if it saves only 0.47W it’s still worth doing on a machine which uses less than 40W! An additional benefit of this is that it might speed up the boot process as the system won’t need to check for a bootable CD.

It’s unfortunate that most computer parts don’t have published documentation on how much power they draw. Even if you don’t want to run on solar power there are still significant benefits to saving electricity (including reducing the noise from cooling fans and heat problems in summer). If technical data was published then people could make informed decisions about which parts to buy.

Update: Changed the percentage savings for different types of RAM to be based on the system power use without the monitor. I’m most interested in saving power for servers and for idle desktops (running a desktop machine 24*7 is pretty common) so most of the time the monitor will be turned off.

It’s interesting to note that the power their system uses is about the same as a P3 system, and could be less if they used a different hard drive.

Making Linux DVDs

Anthony Towns writes about using an improved version of jigdo to download CD/DVD images [1]. His improvement is basically to pipeline the operations for better performance.

Jigdo (the Jigsaw download) is a tool to download a set of files and then use them to create a CD or DVD image [2]. The idea is that most web sites that have CD or DVD images also have a collection of files which comprise the DVD image available. This removes the need to store the data twice (wasting disk space on mirrors and in some situations breaking web caching).

I have never used jigdo, and for all Debian installations in recent times (the last few years at least) I download a small net-inst CD image and then have it download the other files from the net. I have Squid set up to cache large objects so this doesn’t waste too much of my precious network bandwidth (which is limited and expensive in Australia).

Now I’m thinking about what the optimum method for doing installs might be. One thing that would be good would be to support multiple repositories; the packages have unique file names and checksums so it should be possible to check one repository and then check another if the first isn’t working. I don’t mean multiple “deb” lines in the APT configuration. What I would like to do is to have an NFS file server or web server with an archive of downloaded packages and have APT check there first before downloading a file. So APT could get a list of packages from the net and then get the actual files locally if they are available.
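A minimal sketch of what I mean follows (the archive directory and mirror URL are just examples, and this is not something APT supports today):

    # Copy a package from a local archive if present, otherwise download it
    # from a Debian mirror and add it to the local archive for next time.
    import os, shutil, urllib.request

    LOCAL_ARCHIVE = "/mnt/packages"            # NFS mount or local cache of .deb files
    MIRROR = "http://ftp.debian.org/debian"    # fall-back Debian mirror

    def fetch(pool_path, dest_dir="."):
        name = os.path.basename(pool_path)
        local = os.path.join(LOCAL_ARCHIVE, name)
        dest = os.path.join(dest_dir, name)
        if os.path.exists(local):
            shutil.copy(local, dest)
        else:
            urllib.request.urlretrieve(MIRROR + "/" + pool_path, dest)
            shutil.copy(dest, local)
        return dest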

The next thing that would be good is the ability to create a CD or DVD image dynamically and to store all temporary files. So I could download files from the repository and create a DVD image with just the packages that I need. Every time I create a DVD image my sub-set of the Debian archives would increase and the number of files actually downloaded in the creation process would be reduced. The effect would be to incrementally create a local mirror of the Debian repository.

Then I would like to see a two-stage DVD install process. I would like to boot from a CD or DVD and start the install and then have it give a list of files needed (which could be stored on a USB device or floppy) to create further CDs or DVDs for installation. One situation where this could have been beneficial was when I was doing an emergency install of CentOS. I did the first part of the install (selecting packages etc) to determine which CDs were needed. It turned out that almost all CDs were needed even though some of the CDs had only a few files that I was installing. If the installer could have written a list of packages to a USB device then I could have downloaded just those packages and got the install working a lot sooner. It seems to me that it’s fairly common to do one test install and then do some dozens of other installs with the same settings. So the ability to create a DVD of exactly the needed files for the other dozens of installs would be a great benefit.
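As a sketch of how the second stage could work, suppose the installer wrote a plain list of package pool paths to a USB device. Something like the following could then fetch exactly those files and build a custom image (the file and directory names are examples, and dpkg-scanpackages and genisoimage do the real work):

    # Turn an installer-produced package list (one pool path per line) into a
    # custom install image.
    import os, subprocess, urllib.request

    MIRROR = "http://ftp.debian.org/debian"

    def build_image(list_file="packages.list", iso="custom-install.iso"):
        os.makedirs("dvd/pool", exist_ok=True)
        with open(list_file) as f:
            for pool_path in (line.strip() for line in f if line.strip()):
                dest = os.path.join("dvd/pool", os.path.basename(pool_path))
                if not os.path.exists(dest):           # skip files we already have
                    urllib.request.urlretrieve(MIRROR + "/" + pool_path, dest)
        # Generate an APT-usable package index for the files we gathered.
        with open("dvd/Packages", "w") as out:
            subprocess.check_call(["dpkg-scanpackages", "pool", "/dev/null"],
                                  stdout=out, cwd="dvd")
        # Build an ISO image with Rock Ridge and Joliet extensions.
        subprocess.check_call(["genisoimage", "-r", "-J", "-o", iso, "dvd"])

Of course the fetching step could use a local archive first (as in the earlier sketch) so that each image built makes the next one cheaper.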

Now this is just random commentary; unfortunately I don’t have time to do any coding on this. But it seems obvious that something has to be done to improve the situation for developers and IT staff who need some degree of mirroring of the Debian package pool but who can’t do a full mirror. Back in 1996 I was able to mirror Debian over a 28K8 modem link and fit it on what was a reasonable hard drive by the standards of the day (incredibly tiny by today’s standards). Now I can’t practically mirror Debian over an Australian cable broadband connection, and even by the standards of about 4 years ago (the age of most of my hard disks) the space requirements are significant.

I hope this post helps inspire some interest in developing these features. As delays in joining Debian [3] are the topic of the day it should be noted that work on preparing DVD images can easily be done by people who are not DDs. Such software should work from Debian archives without requiring any changes to them, and thus nothing special is needed from the Debian project to start work.