I am amazed that I had never read the article Caring for Your Introvert [1] before. One of the interesting points concerned acting like an extrovert (I can do it for the duration of a typical job interview). Another was the issue of recovery time after having to deal with people. When living in hotels (which I did for about 18 months straight in 1999 and 2000) I found that some days I would reach my quota for dealing with people before I had dinner; going to bed hungry seemed like a better option than going to a restaurant.
One thing that occurred to me is the lack of apparent introversion among most delegates at computer conferences. It seems that the majority of people who are any good at coding are introverts, and you might expect an environment with a majority of introverts to be somewhat quiet. An interview with the author of the article, published 3 years later [2], explains this (among other things). Here is a quote:
But once an introvert gets on a subject that they know about or care about or that intrigues them intellectually, the opposite often takes hold. They get passionately engaged and turned on by the conversation. But it’s not socializing that’s going on there. It’s learning or teaching or analyzing, which involves, I’m convinced, a whole different part of the brain from the socializing part.
Which describes a lot of the activity at conferences. It’s standard practice for people to walk up and join a conversation that covers an area of technology that interests them and then just walk away when the topic changes.
I wonder if any of the social networking and dating sites have a section for Myers-Briggs [3] test results.
Via Tim Connors’ blog [4].
I have previously written about some of my efforts to counter sploggers [1].
Since then I have had a particularly brazen splogger copy one of my posts entirely and claim to have written it. The only reason I noticed the copyright violation (my blog license is on my About Page [2]) was that the post in question linked to other posts of mine and I saw the links. I was offended by the flagrant violation of all aspects of copyright law (breaking the license and infringing my moral rights by not attributing me as the author) and by the fact that the splog in question was hosted by Dreamhost (who have offended me by refusing a DMCA take-down request). So I decided that merely issuing a DMCA take-down was not enough. I went through the splog and identified content copied from several major journals (including articles by a journalist I regard as a friend) as well as from one multi-national corporation – and I notified all the relevant people.
The splog in question deleted all its old content the next day, and immediately started copying new articles from other blogs. I have informed the people who appear to be copyright holders for some of the new articles…
I recommend that other people who deal with sploggers also go to the extra effort of notifying other victims. It’s usually quite easy to do: you just select a random bit of text from the copied article and paste it into your favourite search engine – usually you get only a single result. Some of the splog posts are edited in small ways so the first search may fail – if so then you merely need to search for a second piece of text. If you only request that your own illegally copied material be taken down then the splogger still has a good business model. They can keep copying content in violation of the license, occasionally take a post down when they get caught, and both the splogger and the ISP continue to make money. If you notify other victims (many of whom won’t have the skills to find the content themselves or the background knowledge to recognise the benefits of having it removed unless you explain it to them) then the splogger loses a lot of content in one go and the ISP will have a more difficult time claiming to be innocent of the process.
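As a rough illustration of the search step (the phrase below is just a placeholder – use a distinctive sentence from your own post, and any search engine that supports exact-phrase queries will do):

PHRASE="a distinctive sentence copied from your post"
# wrap the phrase in %22 (URL-encoded quote marks) so the engine matches it exactly
echo "https://www.google.com/search?q=%22${PHRASE// /+}%22"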
Also when you notify multi-national corporations you can expect that they have some decent lawyers and a budget assigned to such work. While I would be extremely unlikely to sue an ISP that repeatedly hosts unauthorised copies of my copyright material, the same can’t be said for a corporation.
For more information on splogging see the Wikipedia entry [3].
Today was the final day of the AUUG 2007 conference [1].
Yesterday I gave a talk about SE Linux for about an hour (I’m not sure exactly how long as I forgot to record an MP3). AUUG is well known for having conferences with very technical delegates and I wasn’t expecting an easy audience. At the start of my talk I asked for a show of hands as to who had used SE Linux before; about 1/3 of the delegates raised their hand. Someone requested that I poll the audience as to who had used SE Linux involuntarily. It wasn’t what I had planned to ask, but it’s best to get these things out in the open so I asked the question. More people raised their hand as being unwilling users of SE Linux than had initially admitted to using it!
A theme of the AUUG conference was quality, and I had planned to cover some of the ways that SE Linux improves the quality of code by making certain classes of bugs show up (e.g. file handle leaks) and by allowing developers and sys-admins to know exactly what programs are doing. But I ended up explaining why you want to use SE Linux, the concepts of policy analysis tools (as compared to the absence of such tools for Unix permissions), the benefits of MAC, and why SE Linux is worth using.
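As a rough illustration of the file handle leak case (the field values here are invented and the exact message format varies between kernel and policy versions, so treat this as an approximation rather than real output), a leaked descriptor typically shows up in the audit log as an AVC denial on the fd class, something like:

avc: denied { use } for pid=1234 comm="helper" path="/var/log/daemon.log" scontext=system_u:system_r:helper_t tcontext=system_u:system_r:daemon_t tclass=fd

The tclass=fd entry shows a file descriptor that was opened in one domain and inherited by a child running in another domain – usually a sign that the parent program failed to close the handle before executing the child.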
I believe that the talk did some good and conversations with delegates afterwards revealed that some of them had done some positive things with SE Linux.
Today I wore a T-shirt advertising the root password for my new SE Linux Play Machine [2], which will be online shortly (hopefully tomorrow); the shirt got some interest (AFAIK I’m the first person to wear a root password on a T-shirt). When I have my Play Machine online I plan to wear the shirt whenever I visit an electronics store or any other location where geeks are likely to congregate. ;)
After the conference finished about 1/3 of the delegates went to Ginza Teppanyaki [3] for dinner. Some of the guys wanted to photograph me wearing my shirt.
Finally, the conference went pretty well; the delegates and speakers all seemed to enjoy themselves and learn some useful things. Congratulations to the AUUG conference organisers!
I was recently browsing an electronics store and noticed some laptops designed for children advertised at $50AU. These machines were vastly different from what most of us think of when the term laptop is used: they had tiny screens, flimsy keyboards, no IO devices, and a small set of proprietary programs. It was more of a toy that pretends to be a laptop than a real laptop (although I’m sure that it had more compute power than a desktop machine from 1998).
After seeing that I started wondering what we can do to provide cheap serious laptops for children running free software. The One Laptop Per Child (OLPC) [1] program aims to produce laptops for $100US to give to children in developing countries. It’s a great project: the hardware and software are innovative in every way and designed specifically for the needs of children. However they won’t have any serious production capacity in the near future, and even $100US is a little more expensive than desired.
Laptops have significant benefits for teaching children in that they can be used at any time and in any place – including long car journeys (inverters that can be used to power laptops from a car power socket are cheap).
A quick scan of a couple of auction sites suggests that laptops get cheap when they have less than 256M of RAM. A machine with 128M of RAM seems likely to cost just over $200 and a machine with less than 128M is likely to be really cheap if you can find someone selling it.
So I’m wondering: what can you do to set up a machine with 64M of RAM to run an educational environment for a child? KDE and GNOME are moderately user-friendly (nothing like the OLPC system, and even Windows 3.0 was easier in some ways) but too big to run on such a machine (particularly when GIMP is part of a computer education system). This should be a solvable problem: Windows 3.0 ran nicely in 4M of RAM, one of the lighter X window managers ran well for me in 8M of RAM in the Linux 0.99 days, and the OS/2 2.0 Workplace Shell (which in many ways beats current KDE and GNOME systems) ran nicely in 12M. I think that a GUI that vaguely resembles Windows 3.0 should run well on a machine with 64M of RAM – is there such a GUI?
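As a starting point, here is a minimal sketch of the sort of system I have in mind (the package selection is only my guess at suitable light-weight choices on Debian, not anything taken from Debian-Edu):

apt-get install xorg icewm rox-filer dillo
# IceWM gives a Windows-3.x/95 style taskbar and menus, ROX-Filer is a small
# file manager, and Dillo is a very light web browser - all of them should be
# usable in well under 64M of RAM.

Whether that combination is friendly enough for a child is another question, which is why I am asking whether a better-integrated GUI of this kind already exists.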
I have briefly scanned the Debian-Edu [2] site but the only reference to hardware requirements is for running LTSP.
Pia writes about the difficulty in getting young women and young people in general into the computer industry [1].
While I agree that having more women in the computer industry would be a good thing, I have difficulty believing some of the claims that Pia makes. For example the claim that “[girls] are more career focused earlier in their school life”. I chose my career when I was about 11 years old [2] and several of my friends made similar decisions at similar ages. I would be interested to read anecdotal evidence from women in the computer industry about how old they were when they decided on their career and whether their friends did the same; a reference to any research on this topic would also be useful. I tend to believe that boys are more career focussed at all stages of their life, but have little evidence to support this idea. One fact that seems obvious is that the idea that “if you don’t succeed in your career then you can always marry someone who does” is almost non-existent among boys. It seems likely that such ideas have a statistically relevant effect on the career focus of boys vs girls. Also the Australian Bureau of Statistics reports that the MEDIAN income for women is significantly lower than for men [4]; I find it difficult to imagine that girls could be more career focussed from a younger age and yet get significantly lower pay (the fact that it’s the median, not the mean, income is very significant as it removes the “glass ceiling” effect).
Philip Greenspun writes about why there are so few women doing scientific research [6]. He makes some good points about why scientific research is generally not well paid (and therefore why a university student would choose a career in some other area), and suggests that it’s a macho thing for guys to enter such competitive fields for relatively low wages. Maybe some women correctly assess the costs and benefits of a career in scientific research and then make the mistake of equating Computer Science to other branches of science.
But the median income figures suggest that although there may be some valid reasons for avoiding science, they would only cover a small portion of the problem (the difference in median income can not be explained by misplaced attempts to maximise income).
One problem that is significant is the quality of school education for girls. Not only may boys crowd out girls in some subjects that are supposedly traditionally for boys (such as all science), but even girls’ schools aren’t as good as they should be. Some time ago I was talking to a teacher at an all-girls school; the school was moderately expensive and parents were presumably paying the extra money to give their daughters educational opportunities that they might miss in a co-ed school. However the school did not teach hard maths (“Maths B” was the official name at the time) and only taught the easy maths (official name “Maths A”, unofficial name “Vegie Maths”) because they didn’t have many girls demanding it (which is probably difficult to measure if you don’t offer it as a reasonable option) and the girls who wanted to study it could always move to a different school. So the choice facing girls at the exclusive school in question was “skip the subject that is most useful for further studies in most science subjects” or “go to a different school and miss most of your friends”; this sort of decision would surely discourage some potential female computer programmers. Also I think that the difference between boys and girls in regard to studying computer science has a lot to do with the fact that, given a choice between missing most of their friends and missing out on something related to computers, the decision would be a no-brainer for most boys. Paul Graham’s article about Nerds has some interesting points to make in this regard – maybe the problem is that girls aren’t Nerdish enough [5].
Pia also writes about parents and teachers advising children not to study IT because of a perceived lack of jobs. I think that the problem here is not just bad advice, but also a bad tendency to take advice. Someone who wants to study in an established field which changes little over time (law and accounting spring to mind) probably should take careful note of the advice that they are given – things haven’t changed much in the last few decades. But someone who wants to study in a field that changes rapidly and where every year brings new and significant developments (of which the best example is the computer industry) should probably be quite skeptical of all advice – most advice about the computer industry concerns how things used to be, not how things are. Finally when considering whether to accept advice you should consider who is offering it. For example advice from a hiring manager should be carefully noted (as the manager will tell you precisely what factors influence their own decisions on hiring). Advice from people who are successful in the industry should also be noted. Advice from a school career advisor who gets paid about 1/3 of what any 25yo can earn in the computer industry should be entirely ignored. I wonder whether being hesitant to ignore advice is a problem for girls in this regard.
When I was in year 11 I had to take a subject related to career planning. It covered some things that were of minor use (such as writing CVs) and had an assignment of writing a fictional CV for yourself a few years after leaving university. I received bad marks for preparing a CV that involved changing jobs as companies went bankrupt or projects failed due to bad management, I was told that if your employer fails in the market it makes you look bad! However my fictional CV did bear some resemblance to what really happened…
In terms of what industries have jobs available, the best advice I can give students is to actively do some research of their own. It’s not difficult to get the jobs sections of some newspapers and do a quick scan to see how many positions are open in a field, and it’s even easier to do some searches on online jobs sites (which usually tell you how many positions posted in the last X days match your criteria). For example I just visited jobserve.com.au and found 1724 Engineering jobs and 5622 IT jobs advertised. If you compare this to the university intake (I visited the Swinburne university courses list [3] and found 25 IT courses vs 29 Engineering courses) it seems that the ratio of Engineering graduates to jobs is not likely to be as good as that for CS graduates – roughly 59 advertised jobs per Engineering course compared to about 225 per IT course. Of course it may be that all the other universities have hardly any Engineering courses and balance the ratio out (but I doubt it). In any case this would be a good way of injecting some facts into a discussion of the relative merits of different career choices and avoiding it being an issue of parents/teachers not liking computers vs children liking them. Determining the relative pay rates of different industries is a lot more difficult (and requires a significant amount of work); some recruiting agencies publish statistics – but those stats only apply to the positions that they fill (which is a sub-set of the actual positions).
Finally, as a piece of advice for children: try to find a job that you enjoy. If you earn $30K doing something you enjoy then you’ll probably be happier than if you earn $100K doing something you hate. Also if you enjoy your work then you will probably be able to take the extra steps needed to become successful – often it’s not a choice between having fun or making good money, but a choice between having both or having neither. If someone tells you to avoid doing what you love and instead do something boring because of some unsubstantiated belief that there is more money in it, then be a nerd and tell them that their opinion is not relevant (it does tend to make teachers angry though).
Dreamhost have refused my request (under the DMCA) to be correctly identified as the author of content copied from my blog. I am publishing this so that anyone else who deals with them will know what to expect. Also if someone wishes to sue Dreamhost in regard to content that they host this may help demonstrate a pattern of behaviour.
The situation is quite obviously the result of a broken script used by a splogger that doesn’t correctly match author names with articles. The fact that the official Dreamhost policy is to disregard the requirement that the author(s) of copyright material be correctly identified is reprehensible. It also seems likely to open them to the risk of legal action. If you know how to contact a director of Dreamhost then please give them a link to this post and explain the risks to them.
For anyone who wants the details, the messages are below.
A common misconception is that only programmers can contribute to free software. The first significant reference I recall to this was in a presentation by Pia Waugh [1] where she mentioned that she felt that the way words such as “coder” and “hacker” are used in the community as synonyms for “contributor” is denigrating to people such as herself who aren’t coders!
I’m certain that no-one in the Australian Linux community has any doubt about Pia’s contributions, even those who mis-use terms such as “coder”.
Non-coding ways of contributing include writing documentation, arranging meetings and conferences, and serving on the committees of LUGs and other organisations. I’m sure that there are many other ways of contributing that I can’t think of at the moment.
The development of the free software community depends on a wide range of skills, and many of the best coders don’t have great skills in other areas. The meme that you have to be a great coder to contribute causes two problems: one is that there is a lack of contributors to non-coding tasks, and another is that coders end up doing non-coding work that they are often not particularly good at and which takes them away from things that they do well. I recently refused a nomination for the committee of my local LUG because I believe that most members can do the committee work as well as I can and many of them can do it better than me – so it’s best if I spend my time coding and preparing presentations about code that I write instead of joining the LUG committee.
I was reminded of Pia’s presentation (from some years ago) by a comment on my blog post about Ideas for a Home University [2], where the commenter seems to believe that they can’t pursue the university-degree equivalent via free software contributions that I suggested because they are not a programmer or sys-admin. I really doubt that anyone would care whether Pia has a university degree (I don’t know whether she has one); I’m sure that the companies that hire Waugh Partners [3] do so because of the reputation that Pia and Jeff have for getting things done and for their positions in the community, not because of whatever certificates the partners may have.
I suggest to the commenter in question (and anyone else in a similar position) that they become involved in running their local LUG (or starting one if there isn’t one already). You would really be surprised by the number of job opportunities that arise from running such a community organisation.
Eddy writes about problems getting the game oolite to run under SE Linux [1].
Strangely, after I fixed the shared object issue with libffcall1 (as described in my previous post [2]), it appeared to work for me.
Eddy asked how to allow one application to create writable and executable memory regions without allowing such access for all programs. This can be done: in the targeted policy the type unconfined_execmem_exec_t triggers a transition to the domain unconfined_execmem_t, which permits the execstack and execmem operations. For example if the program /usr/bin/foo needs such access then the command chcon -t unconfined_execmem_exec_t /usr/bin/foo will change the type, and the next time the program is launched from an unconfined session (a user login session, cron job, or daemon which is not constrained) the change will apply.
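Note that chcon only changes the label on the file itself, so the change can be lost if the file is replaced by a package upgrade or the filesystem is relabelled. A rough sketch of making the change persistent (assuming the semanage tool from policycoreutils is available, and using the same hypothetical /usr/bin/foo path):

semanage fcontext -a -t unconfined_execmem_exec_t /usr/bin/foo
restorecon -v /usr/bin/foo

The first command records the file context in the local policy configuration and the second applies it to the file.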
Debian has a program called Lintian that is used to search for common bugs in Debian packages. When it encounters a package with a shared object that requests an executable stack (as described in my previous post about executable stacks and shared objects [1]) it gives a warning such as the following:
W: liblzo1: shlib-with-executable-stack usr/lib/liblzo.so.1.0.0
Lintian is run automatically on Debian servers and has a web site at http://lintian.debian.org/. You can search the site for all packages which have such executable stacks [2].
Of all the packages listed I have only two installed on my system, liblzo1 and libsmpeg0, both of which I had already discovered and built new versions of with the correct stack settings (I’ll publish an APT repository shortly). For the rest I am not sure whether they are really bugs. The ones that concern me are xserver-xorg-video-nsc (we don’t want a stack smashing attack on something as important as an X server) and the C libraries libuclibc0 and dietlibc, which may cause many programs to run with an executable stack.
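For anyone who wants to check a library on their own system, here is a quick sketch using readelf (from binutils) and execstack (shipped in the prelink package on some distributions):

readelf -lW /usr/lib/liblzo.so.1.0.0 | grep GNU_STACK
# "RWE" in the flags field means the library requests an executable stack, "RW" means it doesn't
execstack -q /usr/lib/liblzo.so.1.0.0
# prints "X" for an executable stack, "-" for not; "execstack -c FILE" clears the flag

Clearing the flag with execstack is only a work-around; the proper fix is to rebuild the library so that its objects have the correct .note.GNU-stack section, which is what I did for the packages mentioned above.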
The above URL shows that libffcall1 [4] has this problem (as Eddy discovered [5]). Eddy filed Debian bug report 445895 [6] about this problem (I have just updated the bug report with a patch to make it work on i386).
Linda (an alternative to Lintian) does not currently warn about this, so I have filed Debian bug report 445826 [3].
In a comment on my previous post about SE Linux and worms/trojans [1] a user enquired about which methods of gaining local root are prevented by SE Linux.
A local exploit is one that can not be run remotely. An attack via TCP or UDP is generally considered a remote exploit – even though in some cases you might configure a daemon to only bind to localhost (in which case the TCP or UDP attack would only work locally). When compromising a machine it’s not uncommon for a local exploit to be used after a remote exploit or social engineering has been used to gain non-root privileges.
The two most common types of local root exploit seem to be those which attack the kernel and those which attack a SETUID process. For a non-SE Linux system it usually doesn’t matter much how the local exploit is run. But a SE Linux system in a default configuration will be running the Targeted policy, which has almost no restrictions on a user shell session. So an attacker who wants to escalate their privileges from local user to local root on a typical SE Linux system has a significant benefit in starting from a user account instead of starting from a web server or other system process.
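You can see this for yourself on a machine running the Targeted policy by checking the context of your own login shell (the exact user and role fields vary between policy versions, so take the output below as indicative only):

id -Z
# typical output on the Targeted policy: user_u:system_r:unconfined_t

A process in the unconfined_t domain has almost no SE Linux restrictions, which is why a local user account is such a convenient starting point for an attacker.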
In the SE Linux model access is granted to a domain, and the domain which is used for a process is determined by the policy based on the domain of the parent process and the labelling of the executable. Some domains are not permitted to transition to any other domains, such as the domain dhcpd_t (used for a DHCP server). Other domains are only permitted to transition to a small set of domains, for example the domain httpd_t (used for a web server) can only transition to a small number of domains none of which has any significant privileges.
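If you want to check which domains a particular domain may transition to, the policy analysis tools make this easy. A rough sketch using sesearch from the setools package (option syntax varies a little between setools versions):

sesearch --allow -s httpd_t -c process -p transition
# lists the allow rules permitting httpd_t to transition to other domains

Running the same query for dhcpd_t should return nothing, reflecting the fact that the DHCP server domain is not permitted to transition to any other domain.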
On a machine running without SE Linux a compromise of a DHCP server is game-over as the server runs as root. A compromise of a daemon such as Apache on a machine without SE Linux gives unrestricted access to run applications on the system – if a SETUID-root program has a security flaw then you lose. The same bug in a SETUID program on a machine running SE Linux is not fatal because SE Linux will prevent the program from doing anything that its parent could not do – even if an attacker made Apache run a buggy SETUID program, the broken program in question could do nothing other than what Apache is normally permitted to do.
A security flaw in a SETUID-root program on a SE Linux system can still be exploited by a local user (someone who has logged in) when running the Targeted policy. When running the Strict or MLS policies many such vulnerabilities will not be exploitable by local users (for example exploiting PING would only permit network access).
As a rule of thumb you should consider that a kernel security flaw will make it possible to bypass all other security features. However there are some situations where SE Linux can prevent local exploits. One example is a bug discovered in July 2006 which allowed the creation of SETUID files under /proc [2]; the targeted policy of SE Linux prevented this from working. Another example is CVE-2003-0127 [3] (a kernel security flaw that was exploited by triggering a module load and then exploiting a race condition in the kernel module load process); the commonly used exploit for this did not work on a SE Linux system because the user process was not permitted the socket access used to trigger the module load (it is believed that an attacker could have written a version of the exploit to work on a SE Linux system – but AFAIK no-one ever did so).