Logging in as Root

Martin Meredith wrote a blog post about logging in as root and the people who so strongly advocate against it [1]. The question is whether you should ssh directly to the root account on a remote server or whether you should ssh to a non-root account and use sudo or su to gain administrative privileges.

Does sudo/su make your system more secure?

Some years ago the administrator of a SE Linux Play Machine used the same home directory for play user logins, administrative logins, and his own regular logins – he used newrole to gain administrative access (like su or sudo but for SE Linux).

His machine was owned by one of his friends, who created a shell function named newrole in one of his login scripts that used netcat to send the administrative password out over the net. He didn't realise that anything was wrong until his friend changed the password and locked him out! This is one example of a system being 0wned because of the double authentication – of course if he had logged in directly with administrative privileges while using the same home directory that the attacker could write to then he would still have lost, but the attacker would have had to do a little more work.

When you login, lots of shell scripts are run on your behalf which have the ability to totally control your environment. If someone has taken over those scripts then they can control everything you see, and when you think you are running sudo or something similar they can capture the password. When you ssh in to a server your security relies on the security of the client end-point, the encryption of the ssh protocol (including keeping all keys secure to prevent MITM attacks), and the integrity of all the programs that are executed before you have control of the remote system.

One benefit of using sshd to spawn a session without full privileges is the case where you fear an exploit against sshd and are running SE Linux or some other security system that goes way beyond Unix permissions. It is possible to configure SE Linux in the “strict” configuration to deny administrative rights to any shell that is launched directly by sshd. Then someone who cracks sshd could only wait until an administrator logs in and runs newrole, and they wouldn't be able to immediately take over the system. If a sysadmin suspected that an sshd compromise was possible then they could login through some other method (maybe visit the server and login at the console) to upgrade the sshd. This is however a very unusual scenario and I suspect that most people who advocate using sudo exclusively don't use a SE Linux strict configuration.
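As a minimal sketch of what I mean, assuming a strict or MLS policy that provides the ssh_sysadm_login boolean (the boolean name may vary between policy versions):

setsebool -P ssh_sysadm_login off    # shells spawned directly by sshd can't have administrative rights
newrole -r sysadm_r                  # after logging in unprivileged, an administrator switches role like this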

Does su/sudo improve auditing?

If you have multiple people with root access to one system it can be difficult to determine who did what. If you force everyone to use su or sudo then you will have a record of which Unix account was used to start the root session. Of course if multiple people start root shells via su and leave them running then it can be difficult to determine which of the people who had such shells running made the mistake – but at least that reduces the list of suspects.

If you put “PermitUserEnvironment yes” in /etc/ssh/sshd_config then you have the option of setting environment variables by ssh authorized_keys entries, so you could have an entry such as the following:

environment="ORIG_USER=john@example.com" ssh-rsa AAAAB3Nz[…]/w== john@example.com

Then you could have the .bashrc file (or a similar file for your favorite shell) have code such as the following to log the relevant data to syslogd:
if [ "$SSH_TTY" = "" ]; then
  # no tty was allocated, so this is a non-interactive "ssh host command" session
  logger -p auth.info "user $ORIG_USER ran command \"$BASH_EXECUTION_STRING\" as root"
else
  # an interactive login session
  logger -p auth.info "user $ORIG_USER logged in as root on tty $(tty)"
fi

I think that forcing the use of su or sudo might improve the ability to track other sysadmins if the system is not well configured. But it seems obvious that the same level of tracking can be implemented in other ways with a small amount of effort. It took me about 30 minutes to devise the above shell code and configuration options, and it should take people who read this blog post about 5 minutes to implement it (or maybe 10 minutes if they use a different shell or have some other combination of Bash configuration that results in non-obvious use of initialisation scripts – EG if you have a .bash_profile file then .bashrc may not be executed).

Once you have the above infrastructure for logging root login sessions it wouldn’t be difficult to run a little script that asks the sysadmin “what is the purpose for your root login” and logs what they type. If several sysadmins are logged in at the same time and one of them describes the purpose of their login as “to reconfigure LDAP” then you know who to talk to if your LDAP server stops working!
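A minimal sketch of that idea, assuming the same .bashrc hook and ORIG_USER variable as above:

if [ -n "$SSH_TTY" ]; then
  # ask interactive root logins to state their purpose and record it via syslog
  read -r -p "What is the purpose of this root login? " PURPOSE
  logger -p auth.info "user $ORIG_USER root login purpose: $PURPOSE"
fi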

Should you run commands with minimum privilege?

It's generally agreed that running each command with the minimum necessary privilege is a good idea. But if the only reason you login to a server is to do root tasks (restarting daemons, writing to files that are owned by root, etc) then there really isn't a lot of potential for achieving anything by doing so. If you need to use a client for a particular service (EG a web browser to test the functionality of a web server or proxy server) then you can login to a different account for that purpose – the typical sysadmin desktop has a dozen xterms open at once, so using one for root access to do the work and another for non-root access to do the testing is probably a good option.

Can root be used for local access?

Linux Journal has an article about the distribution that used to be known as Lindows (later Linspire) which used root as the default login for desktop use [2]. It suggests using a non-root account because “If someone can trick you into running a program or if a virus somehow runs while you are logged in, that program then has the ability to do anything at all” – of course someone could also trick you into running a program or virus that attempts to run sudo (to see if you enabled it without password checks) and, if that doesn't work, waits until you run sudo and sniffs the password (using pty interception or X event sniffing). The article does correctly note that you can easily accidentally damage your system as root. Given that the skills of typical Linux desktop users are significantly less than those of typical system administrators, it seems reasonable to assume that certain risks of mistake which are significant for desktop users aren't a big deal for skilled sysadmins.

I think that it was a bad decision by the Lindows people to use root for everything due to the risk of errors. If you make a mistake on a desktop system as non-root then, provided your home directory was backed up recently and you use IMAP or caching IMAP for email access, you probably won't lose much of note. But if you make a serious mistake as root then the minimum damage is being forced to do a complete reinstall, which is time consuming and annoying even if you have the installation media handy and your Internet connection has enough quota for the month to complete the process.

Finally there are some services that seek out people who use the root account for desktop use. Debian has some support channels on IRC [3] and I decided to use the root account from my SE Linux Play Machine [4] to see how they go. #debian has banned strings matching root. #linpeople didn’t like me because “Deopped you on channel #linpeople because it is registered with channel services“. #linuxhelp and #help let me in, but nothing seemed to be happening in those channels. Last time I tried this experiment I had a minor debate with someone who repeated a mantra about not using root and didn’t show any interest in reading my explanation of why root:user_r:user_t is safe for IRC.

I can't imagine what good the #debian people expect to gain from denying people the ability to access that channel with an IRC client that reports itself to be running as root. Doing so precludes the possibility of educating them if you think that they are doing something wrong (such as running a distribution like Lindows/Linspire).

Conclusion

I routinely ssh directly to servers as root. I’ve been doing so for as long as ssh has been around and I used to telnet to systems as root before that. Logging in to a server as root without encryption is in most cases a really bad idea, but before ssh was invented it was the only option that was available.

For the vast majority of server deployments I think that there is no good reason to avoid sshing directly as root.

Brother MFC-9120CN Color LASER Printer

I have just bought a Brother MFC-9120CN Multi-Function Color LED LASER Printer for a relative. It was a replacement for the Lexmark printer which turned out not to support Linux properly [1].

This printer cost about $545. I bought it from OfficeWorks [2] under their price-matching deal. If you find a better price anywhere else they will beat it by 5%. I went to StaticIce.com.au and found the cheapest online store in Australia that sold the printer and then took the URL of the online store to OfficeWorks on a USB stick. After they verified the price they sold me the printer for 5% less than the online cost plus the delivery cost, which saved my relative a little more than $50.

Craig Sanders had convinced me to choose a LASER printer because the toner doesn’t have a short shelf-life unlike the ink for ink-jet printers. My parents have been using a LASER printer for more than 12 years and each toner cartridge lasts at least 4 years which is a much better result than all the ink-jet printers I’ve supported which tend to regularly need more expensive ink. I guess I’ll find out over the next few years whether this printer lives up to the general reputation of LASER printers in this regard.

LED printers use an array of LEDs instead of a LASER as the light source, which apparently makes them more reliable and efficient but means that they tend to have a lower resolution, and often the horizontal and vertical resolutions are not equal. The printer I got is listed as 600*2400dpi resolution but that might end up giving much the same result as a 600*600dpi printer. But 600*600dpi should be good enough for a long time anyway. A4 paper (the standard size for office paper in Australia) is 210*297mm, that is about 8.27*11.69 inches or 4961*7015 pixels at 600dpi. Even if we assume that 10% of the width and height is wasted on margins, it would take a 28 megapixel camera to produce a picture that can actually use 600dpi for the most common case where high quality is needed for home use – printing a single photo on an A4 sheet.
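To check that arithmetic, here is a quick bc sketch (A4 is 210*297mm and there are 25.4mm to the inch):

echo "210 * 600 / 25.4" | bc    # about 4960 pixels wide at 600dpi
echo "297 * 600 / 25.4" | bc    # about 7015 pixels high at 600dpi
echo "(4961 * 9 / 10) * (7015 * 9 / 10) / 1000000" | bc    # about 28 megapixels after 10% margins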

The printer ships with 64M of RAM which was not enough to print some pictures that I sent it. It has a slot for a 144pin SO-DIMM (laptop RAM) for memory expansion and can take one SO-DIMM of up to 512M capacity that is at least PC-100. I've got a spare 256M PC-133 memory module that I will install in it; hopefully that will be enough to print pictures. Buying PC-100/PC-133 RAM nowadays probably isn't going to be easy, particularly not 512M modules as many of the laptops which used PC-100/PC-133 RAM didn't support that capacity (I believe that my ancient Thinkpads which used such memory didn't support 512M modules).

The requirement was for a printer that could print photos in reasonable quality, could make photo-copies, and ideally work as a scanner. I got CUPS to talk to it without much effort, I just installed a PPD file from the Brother Solutions Center web site [3] and it just worked. It occurred to me later that I should have tried configuring it before installing the PPD file – maybe the version of CUPS in Debian/Squeeze supports the Brother printer natively.
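For reference, the CUPS side of that is roughly the following. This is just a sketch with a hypothetical queue name, device URI, and PPD path, the Brother Solutions Center documents the real details for this model:

lpadmin -p MFC9120CN -E -v socket://192.168.1.50:9100 -P /usr/share/ppd/MFC9120CN.ppd
lpstat -p MFC9120CN    # check that CUPS considers the queue ready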

So the current state of the printer is that it prints documents very well, it doesn’t print photos but that should be solved when I add more RAM, and I just have to try and get scanning to work. Everyone is happy!

The only down-side is that the printer is huge. It takes a lot of desk space to run it (they will need a new desk in their computer room), and when it's in its box it's much larger than most things that you will normally transport by car.

Update: I've installed a 256M PC-133 SO-DIMM and can now print full color pictures. Thanks to Rodney Brown for giving me some Thinkpad parts which included RAM.

UBAC and SE Linux in Debian

A recent development in SE Linux policy is the concept of UBAC (User Based Access Control) which prevents SE Linux users (identities) from accessing each other's files.

SE Linux user identities may map 1:1 to Unix users (as was required in the early versions of SE Linux), you might have unique identities for special users and a default identity for all the other users, or you might have an identity per group of users – or some other scheme for assigning identities to accounts.

The UBAC constraints in the upstream reference policy prevent a process with a SE Linux identity other than system_u from accessing files of a different identity unless the file's identity is system_u. So basically any regular user can access system files but not files belonging to other users, while system processes (daemons) can access files from all users. Of course this is just one layer of protection, so while the UBAC constraint doesn't prevent a user from accessing any system files the domain-type access controls may do so.

If you used a unique SE Linux identity for each Unix account then UBAC would prevent any user from accessing a file created by another user.

For my current policy that I am considering uploading to Debian/Unstable I have allowed the identity unconfined_u to access files owned by all identities. This means that unconfined_u is an identity for administrators; if I proceed on this path then I will grant the same rights to sysadm_u.

UBAC was not enabled in Fedora last time I checked, so I’m wondering whether there is any point in including it – I don’t feel obliged to copy everything that Fedora does, but there is some benefit in maintaining compatibility across distributions.

For protecting users from each other it seems that MCS (which is Mandatory in the Debian policy) is adequate. MCS allows a much better level of access control. For example I could assign categories c0 to c10 to a set of different projects and allow the person who manages all the projects to be assigned all those categories when they login. That user could then use the command “runcon -l s0:c1 bash” to start a shell for the purpose of managing files from project 1 and any file or process created by that command would have the category c1 and be prevented from writing to a file with a different category.
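A sketch of how that could be set up, with hypothetical account and file names (use -a rather than -m with semanage if the login mapping doesn't already exist):

semanage login -m -r 's0-s0:c0.c10' projmgr    # let the account projmgr use categories c0 to c10
chcon -l s0:c1 /projects/project1/plan.txt     # label a file as belonging to project 1
runcon -l s0:c1 bash                           # start a shell that is limited to category c1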

Of course the down-side to removing UBAC is that since RBAC was removed there is no other way of separating SE Linux users, while MCS is good for what it does it wasn’t designed for the purpose of isolating different types of user. So I’ll really want to get RBAC reinstalled before Squeeze is released if I remove UBAC.

Regardless of this I will need to get RBAC working on Squeeze eventually anyway. I’ve had a SE Linux Play Machine running with every release of SE Linux for the last 8 years and I don’t plan to stop now.

Links May 2010

AdRevenge is an interesting concept to pay for Google Adsense adverts about how companies suck [1]. If a suitably large group of people pay to warn you about a company then it’s a good signal that the company is actually doing the wrong thing.

A guest post by Mili on Charles Stross' blog has an interesting analysis of the economics of “Intellectual Property” and concludes that content is a public good [2].

New Age Terrorists Develop Homeopathic Bomb [3], an amusing satire of medical fraud and security theatre. The site has a lot of other good satire too.

Mark Shuttleworth wrote an interesting post about new window management changes that will soon go into Ubuntu [4]. He points out that the bottom status bar in applications is a throw-back to Windows 3.1 and notes that a large part of the incentive for removing it (and using the title-bar for the status) is the work on the Netbook version of Ubuntu. This is really ironic given that the resolution of current Netbooks is quite similar to that of desktop systems that were current when Windows 3.1 came out.

Omar Ahmad gave an insightful TED talk about the benefit of using a pen and paper to send a letter to a politician [5].

Sebastian Wernicke gave an amusing and informative TED talk about how to give a good TED talk [6]. His talk gives some useful ideas for public speaking that are worth considering.

Catherine Mohr gave a brief and interesting TED talk about how to build an energy efficient house with low embodied energy [7]. Her blog at www.301monroe.com has the details.

Stephen Wolfram (of Mathematica fame) gave an interesting TED talk [8]. He covers a lot of interesting things that can be done with computers, primarily based on the Wolfram Alpha [9] platform which allows natural language queries of a large data set. He also talks about the search for a Theory of Everything.

Esther Duflo gave an interesting TED talk about using social experiments to fight poverty [10]. She describes how scientific tests have been used to determine the effectiveness of various ways of preventing disease and encouraging education in developing countries. One example of the effectiveness of such research is the DeWormTheWorld.org project which was founded after it was discovered that treating intestinal worms was the most cost effective way of getting African children to spend longer at school.

David L. Rosenhan wrote an interesting research paper “On Being Sane In Insane Places” about pseudo-patients admitted to psychiatric hospitals [11]. It seems that psychiatric staff were totally unable to recognise a sane person who was admitted even though other patients could do so. It also documents how psychiatric patients were treated as sub-human. One would hope that things had improved since 1973, but it seems likely that many modern psychiatric hospitals are as bad as was typical in 1973. It's also worth considering how society treats people who have been diagnosed with a mental illness; it seems likely that the way such people are treated in the community would have similarly bad results to those documented for treatment in psychiatric hospitals – even the sanest people will act strangely if treated in an insane manner! Also it seems to me that there could be potential for using a panel of patients assembled via the Delphi Method as part of the psychiatric assessment process, as it has been demonstrated that patients can sometimes assess other patients more accurately than psychiatrists!

Simon Sinek gave an inspiring TED talk about how great leaders inspire action [12]. Of course the ideas he describes don’t just apply to great leaders, they should apply to ordinary people who just want to convince others to adopt their ideas.

Stephen Collins wrote a good article summarising the main reasons why the proposed great firewall of Australia is a bad idea [13].

Lenore Skenazy, who is famous for letting her 9yo son catch the metro alone in broad daylight on a pre-planned route home, has created a web site about Free Range Kids [14]. She seems to be starting a movement to oppose Helicopter Parenting and has already written a book about her ideas for parenting. The incidence of crime has been steadily decreasing, while the ability of the police to apprehend criminals and recover abducted children has been increasing. There's no reason for children to be prevented from doing most of the things that children did when I was young!

GM Food and Vaccines

Michael Specter gave an interesting TED talk about the dangers of science-denial [1]. Most of his talk is about the people who oppose vaccines, such as the former Playboy model Jenny McCarthy who thinks that she knows more about medicine than people who do medical research. He notes that a doctor who advocates vaccination has been receiving threats from the anti-vaccine lobby, including threats to his children. A good new development is that Andrew Wakefield (the British ex-Doctor behind the discredited research linking Autism and Vaccination) has been barred from practicing by Britain's General Medical Council [2].

Michael also mentions the opposition to GM food which has the potential to save many lives in developing countries that have food shortages. This convinced me to reduce my opposition to GM food; it's really not GM food that I'm opposed to but the poor testing, the bad features (such as the Terminator Gene), and the Intellectual Property controls which allow GM companies to sue farmers who accidentally have GM crops grow on their land due to wind-borne seeds. It's also a pity that there is no work being done on GM versions of any food crop which is only used for feeding poor people. Every GM plant is one that is used to provide food for rich people and is essentially a way for farmers in first-world countries to make more money. But GM versions of Cassava (with less of the toxic chemicals among other things) and Sorghum would improve the situation of many poor people.

One interesting related development is that Craig Venter has just announced the creation of the first synthetic life [3]. This technical development could lead to dramatic changes in the production of basic foods, such as algae that produce proteins that have the ideal mixture of all the essential amino acids needed for humans as well as the semi-essential ones that children need. While feeding pond slime to children isn’t going to be glamorous it would be a lot better than the current situation where a significant number of children in developing countries have their physical and mental development stunted due to malnutrition. Craig mentions the possibility of using his research to develop vaccines much faster, including perhaps the possibility of vaccinating people against fast evolving viruses such as the common cold!

A School IP Project

The music industry seems fairly aggressive in taking legal action against children when they break the licence terms of copyright material. I think it would be good to teach children about how the IP industry really works.

It seems to me that you could have a school project that involves an entire year level (maybe 100 students depending on the size of the school) each of whom can produce copyright material (everything they do in art and English classes would be suitable as a start). Then they could register their work (make digital photographs and then store them in a school database that records the entry date) and sue anyone who infringes their work.

Every student would receive licence fees for their work, but if they are sued for infringement they would have to pay all revenue plus damages. Other students could work as lawyers and take a portion of the proceeds of any successful law suit, and finally some students could run recording companies and spend their time hunting for infringing work for the purpose of launching legal action.

In terms of the licence fees paid, this could be done by just allocating a fixed value per item to each student as a way to get the system running without regard to the fact that some students just aren’t able to create good art. It could however have a large portion of the value coming from what other students choose to spend, every student gets to “spend” $10 per week on art and they can choose from the database what they want to “buy” copies of. The most popular art could then be printed on every notice-board in the school as an incentive for students to vote with their play-money for something that they don’t mind seeing all the time. It’s obvious that popularity would be a significant factor in the success of some artists, but that’s OK, a casual review of the chart topping music reveals that it’s quite obviously not created by the world’s best musicians so it seems that rewarding popularity rather than skill just adds some realism.

One possibility would be to allow the students to elect representatives to create their own IP laws. It would be interesting to see how the IP laws voted on by representatives of the students (who are all in some way involved with the process of creating, buying, selling, and distributing artistic products) differ from those which we have foisted upon us in the real-world. Also an interesting possibility would be to allow corruption in the election process and observe how the results differ from year levels where corruption is not permitted. I expect that teaching children how political corruption works would be a little controversial, but it’s nothing that they can’t learn from reading news reports about what the “entertainment” industry is really doing. Really being a corrupt politician for a school project shouldn’t be as bad as playing a murderer in a school play!

Naturally this couldn’t be done with real money, but giving higher marks at the end of the year to the students who accumulate the most play money would be quite reasonable. I don’t think that there would be a problem with giving higher marks to a student who succeeded through political corruption – as long as they gave a good written report of how they did so and the implications for society.

Please note that I am not suggesting this for a subject that is used for university entrance, I think it would be a good project for years 8-10 which in Australia have no relevance to university entrance. So the marks would just be letters on a bit of paper that might make parents happy or unhappy and otherwise mean nothing.

I anticipate responses from people who believe that educating children about how the world works is not appropriate for a school. Such people are never going to convince me, but if anyone thinks that they can make a good point to convince some of the readers then I encourage them to write it up in the comments section if it’s short or on their own blog if it’s longer.

Google Chrome and SE Linux

[Image: Google Chrome saying “aw snap” when it crashes]

[107108.433300] chrome[12262]: segfault at bbadbeef ip 0000000000fbea18 sp 00007fffcf348100 error 6 in chrome[400000+27ad000]

When I first tried running the Google Chrome web browser [1] on SE Linux it recursively displayed the error message in the above picture, it first displayed the error and then displayed another error while trying to display a web page to describe the error. The kernel message log included messages such as the above message, it seems that some pointers are initialised to the value 0xbbadbeef to make debugging easier and more amusing.

V8 error: V8 is no longer usable (v8::V8::AddMessageListener())

When I ran Chrome from the command-line it gave the above error message (which was presumably somewhere in the 8MB ~/.xsession-errors file generated from a few hours of running a KDE4 session).

type=AVC msg=audit(1274070733.648:145): avc: denied { execmem } for pid=12833 comm="chrome" scontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tclass=process
type=SYSCALL msg=audit(1274070733.648:145): arch=c000003e syscall=9 success=no exit=-131938567594024 a0=7fd863b41000 a1=40000 a2=7 a3=32 items=0 ppid=1 pid=12833 auid=4294967295 uid=1001 gid=1001 euid=1001 suid=1001 fsuid=1001 egid=1001 sgid=1001 fsgid=1001 tty=pts4 ses=4294967295 comm="chrome" exe="/opt/google/chrome/chrome" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)
type=ANOM_ABEND msg=audit(1274070733.648:146): auid=4294967295 uid=1001 gid=1001 ses=4294967295 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 pid=12833 comm="chrome" sig=11

V8 is the Google Javascript system which compiles JavaScript code and thus apparently needs read/write/execute access to memory [2]. In /var/log/audit/audit.log I saw the above messages (which would have been in the kernel message log as displayed by dmesg if I didn’t have auditd running). The most relevant parts are that execmem access was requested and that it was by system call 9. From linux/x86_64/syscallent.h in the strace source I discovered that system call 9 on the AMD64 architecture is sys_mmap. Does anyone know a good way to discover which system call has a given number on a particular architecture without reading strace source code?
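Two ways that I believe work without reading strace source (ausyscall comes with the audit tools, and the header path may vary between distributions and architectures):

ausyscall x86_64 9                        # prints "mmap"
grep -w 9 /usr/include/asm/unistd_64.h    # shows "#define __NR_mmap 9"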

Attempts to strace the Google Chrome process failed, Chrome gave the error “Failed to move to new PID namespace” after clone() failed. Clone was passed the flag 0x20000000 which according to /usr/include/linux/sched.h is CLONE_NEWPID. It seems that programs which create a new PID namespace (as Google Chrome does) can’t be straced as the clone() call fails. It’s a pity that Chrome doesn’t have an option to run without using this feature, losing the ability to strace it really decreases my ability to find and report bugs in the program – I’m sure that the Google developers want people like me to be able to help them find bugs in their code without undue effort.

Anyway the solution to this problem (allowing Chrome to run on the SE Linux Targeted configuration) is to run the command “chcon -t unconfined_execmem_exec_t /opt/google/chrome/chrome” which causes the Chrome browser to run in the domain unconfined_execmem_t which is allowed to do such things. Of course we don't want Chrome processes to run unconfined; I think that the idea I had in 2008 for running Chrome processes in different SE Linux contexts is viable and should be implemented [3].
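Note that a chcon change can be lost when the filesystem is relabeled; a sketch of making it persistent with the standard policycoreutils tools:

semanage fcontext -a -t unconfined_execmem_exec_t /opt/google/chrome/chrome
restorecon -v /opt/google/chrome/chrome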

As a general rule if you are running a program from the command-line on SE Linux with the Targeted configuration (the default and most common configuration) then any time you see an execmem failure logged to the kernel message log or the audit subsystem then you can change the context of the program to unconfined_execmem_exec_t to make the problem go away. Note that this isn’t necessarily a good thing to do, sometimes it’s best to change the program to not require such access. But it seems that in this case the design of V8 requires write/execute memory access to pre-compile JavaScript code.

Timing Processes

One thing that happens periodically is that I start a process from an interactive shell, discover that it takes longer than expected, and then want to know how long it took. Basically it’s a retrospective need to have run “time whatever” that I discover after the process has been running for long enough that I don’t want to restart it. My current procedure in such situations is to run ps from another session to discover when it started and then type date to display when it ends.
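For what it's worth, the ps step can show both pieces of information at once; a sketch with a hypothetical PID:

ps -o lstart,etime -p 12345    # start time and elapsed time of the running process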

A quick test with strace showed that bash uses the wait4() system call to determine when a process ends, but passes NULL as the last parameter. If it passed the pointer to a struct rusage then it would have the necessary data.

I think it would be a really good feature for a shell to allow you to type something like “echo $TIME_LAST_CMD” to see how long the last command took. For the common case where you aren't interested in that data it would only involve an extra parameter to the wait4() system call, a small amount of memory allocated for it, and storing yet another variable in its list.
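Something approximating this can be hacked up with existing Bash features; a rough sketch using a DEBUG trap and PROMPT_COMMAND (only one second resolution via $SECONDS, and the flag stops PROMPT_COMMAND itself from resetting the timer):

trap '[ -z "$_CMD_TIMING" ] && _CMD_TIMING=1 && _CMD_START=$SECONDS' DEBUG
PROMPT_COMMAND='TIME_LAST_CMD=$((SECONDS - _CMD_START)); unset _CMD_TIMING'
# after a command finishes, "echo $TIME_LAST_CMD" shows roughly how many seconds it took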

A quick Google search didn’t show any way of filing wishlist bugs against Bash and I don’t think that this is a real bug as such so I haven’t filed a bug report. If anyone reads my blog and has some contact with the Bash people then please pass this idea along if you think it’s worthy of being included.

Pants To Poverty

On Wednesday I saw a group of people wearing nothing but underwear walking through the Melbourne CBD. They were promoting Pants To Poverty which is a UK based underwear company that sells Organic Fair-Trade underpants [1]. The corporate web site seems equally divided between the purposes of selling underpants and lobbying for a variety of social issues including fair-trade, better working conditions in factories, and the abolition of pesticides that kill farmers.

The web site itself is worth a look in terms of its design; for what it does it works really well. When viewing a picture of a model wearing underpants you can move your mouse over the picture to see an enlarged view of a particular area of interest. This works really well on a Netbook display (everyone should carry a Netbook with 3G net access when shopping), and I would like to see car companies use the same technique for displaying pictures of cars on small screen devices. It's unfortunate that it doesn't just show the full sized images when you use a large display; it would all fit on a 1680*1050 display.

The underwear in the UK costs between 10 pounds ($16.40AU) and 15 pounds ($24.60AU) a pair for adults and 18 pounds ($29.52AU) for a pack of three for children. The Little Green Bag Co [2] sells the adult underpants in Australia for $22AU per pair. These appear to be the cheaper items from the range, so it seems that it would be cheaper to order from the UK if you were buying a few pairs. Also if the items you wanted happened to be the more expensive items from the range then you would have to order from the UK.

A quick check of the Myer web site shows that even $22 per pair isn’t particularly expensive for women’s underwear, but it is really expensive for men’s underwear. The Australian web site for Pants to Poverty lists a bunch of resellers, but a quick scan of the list didn’t turn up anyone cheaper than the Little Green Bag Co.

According to the “sizing information” on the home page of the UK site the largest underpants that they sell are “size 14” (in the Australian size range for women which is apparently 78-83cm). According to a variety of news reports the average size of an Australian woman is either size 14 or 14-16. Apparently women's clothing is often not made to standard sizes due to the practice of vanity sizing, so it's difficult to determine how these things compare. But it does seem that a significant portion of the women in Australia won't be able to buy products from Pants To Poverty in their size.

I don't think that many men will buy from them – $22 is really expensive.

So while I think it’s great to have a bunch of men and women running around the CBD wearing only underwear I don’t think that the future of their Australian business is particularly good.

The difference in price between Pants To Poverty products and other underwear would allow you to buy a significant quantity of Organic Fair Trade chocolate or other food.

Noise Canceling Headphones and People Talking

The Problem

I was asked for advice on buying headphones to protect students who have medical conditions that make them sensitive to noise, such headphones would have to allow them to hear human voices.

Due to the significant differences in hearing issues (including physical damage and sensory issues) it seems unlikely that getting identical headphones for all students will give an ideal result. The person who asked me the question didn’t explain what type of students are being taught. If it’s an adult education class then getting everyone the ideal headset wouldn’t be particularly difficult. If however it’s the special needs class in a high school then students would probably want the most shiny headphones rather than the ones that are a best match to their hearing issues.

Also some combinations of hearing problems and ambient noise can’t be addressed by such headsets. A friend who developed Noise Induced Hearing Loss from shooting tells me that he really can’t stand brass instruments. But the high frequencies from such instruments tend not to be filtered well by noise canceling headphones, so any student who has such a problem would probably need hearing aids that filter out high frequencies – I believe that such hearing aids are available but don’t have any particular knowledge about them.

Test Results

I did a quick test on my Bose QC-15 noise canceling headphones [1] which cost me $320US including tax and my cheap Bauhn headphones from Aldi [2] which cost me $69AU (and which apparently could later be purchased on special for $35AU to clear stock).

I found that when not playing music they seemed to perform about equally well in terms of allowing me to hear people speaking, although I admit that just having a conversation with the nearest people wasn't the most scientific test. When I was playing music I found that the Bose headset made it significantly more difficult to hear people speak than the Bauhn headset. This is an advantage for the Bose for its intended use, and I expect that students who need a headset for medical reasons won't want to listen to music while studying so it's never a disadvantage.

In both cases, if the headphones are used for just canceling unwanted noise the speaker shouldn’t need to raise their voice significantly to be heard. In some situations the noise canceling headphones make it easier for someone with good hearing to hear what people are saying, for example a conversation in a car or plane could probably be held at a lower volume if all people involved were wearing suitable noise canceling headphones. If however the students have damaged hearing then I can’t make any prediction as to whether the teacher could speak at a lower volume or whether they would be required to use a higher volume if the students wore such headphones.

The Brookstone on-ear headphones that I tested [3] seem particularly noteworthy in this regard due to the way they canceled the melody of the store background music and just left the singing. If someone wants to buy headphones for people with physical damage to their ears then the Brookstone product is really worth investigating. If however the target market happens to be people on the Autism Spectrum then they may hate anything that presses on their ears (as I do) in which case the Brookstone product can’t be considered. The Brookstone price of $150US (presumably $160 including tax) was also the best price I saw when shopping in the US – but I presume that I could have found something with a similar quality and price to Bauhn in the US if I looked hard enough.

Conclusion

The big advantage of the Bose for this use is that it blocks a wider range of frequencies than some other noise canceling headsets. They all work really well on regular low frequency noise such as car engine noise (whether you are a car passenger or a pedestrian) but to stop certain higher frequencies such as those from air conditioning systems the Bose wins hands down. I guess this may depend on what noise is to be blocked; if a class was held in the same room every time and noise canceling headsets were purchased specifically for that class then it would probably make sense to ensure that the acoustic capabilities of the headsets match the unwanted background noise and the hearing issue that each student has.

Here’s an Amazon link: Bose® QuietComfort® 15 Acoustic Noise Cancelling® Headphones

I've been reading about Sensory Processing Disorder. I'm sure that some children are doing poorly in the default school system because they either have an undiagnosed case of SPD or don't have enough symptoms to get a diagnosis. I think it would make a good experiment to try noise canceling headphones on some of the difficult children. I wouldn't expect a high success rate, but if it worked in as little as 5% of cases and did no harm to the children who didn't benefit then it would be worth doing.