When I worked for Red Hat I joined AISA [1] (the Australian Information Security Association – formerly known as ISIG). Red Hat marketing paid for my membership, so it was a good deal: I went to meetings (which often had free drinks), said good things about Red Hat security, and it cost me nothing.
I was recently asked why I chose not to renew my membership. I didn’t have time to give a full answer then, so I’ll blog it now.
AISA offers discounts on some conferences, books, and training related to computer security; if you plan to purchase such things then they offer good deals. However I have little time to attend conferences at the moment, not enough time to read all the free Internet resources related to computer security, and no need to pay for training. If at any time I plan to attend a conference where the discount for AISA members is equal to or greater than the AISA membership fee then I can easily re-join.
AISA membership seems largely to consist of managers and consultants, not technical people or people doing R&D type work. This isn’t a bad thing if you are a manager or consultant, but when attending AISA meetings I don’t meet the type of people I meet at events such as SecureCon [2], Linux Conf Au [3], RuxCon [4], and the SE Linux Symposium [5] (which I think is not going to be held again for a while). Meetings of my local LUG [6] typically have more people doing serious technical work related to computer security than the AISA meetings I’ve attended.
The AISA Code of Ethics has as its second criterion “I will comply with all relevant laws”. Some laws cannot be obeyed by decent people (study some German or Russian history, or what is happening in China right now, for examples). Many other laws should not be obeyed. Many countries (including Australia) have enacted laws in the name of the “war on terror” which should not be obeyed.
A final thing that irked me about AISA is their professional membership system (click on this link and download the AISA_Professional_Membership_Requirements_Nov_2006 document for details). It seems that I don’t qualify because I don’t have one of the listed certifications, and a public credit on the NSA web site [7] doesn’t count (yes, I asked about this). I’m not overly worried about this; I figure that any clique that won’t accept me also won’t accept a significant portion of the people I want to associate with – so we can hang out elsewhere. I don’t recall there being any great benefit to professional membership apart from the possibility of adding it to your business card if you are so inclined (I don’t recall ever putting B.Sc [8] on a business card and don’t plan on adding anything less).
There are some real benefits to AISA membership, but not for me.
A common feature in blog software is a blogroll: a list of links to blogs which are associated in some way with the blog in question – most commonly blogs run by friends of the blogger.
Now in the case of friends with very similar interests (IE the same religious and political beliefs, sexual preferences, hobbies, etc) this wouldn’t cause a problem. In the case of commercial blogs it will also work well (EG Google runs a large number of blogs which contain links to all the rest – they may not interest all readers but should be expected not to offend any).
In the case of personal blogs where people don’t always have the same interests there is scope for problems. A link to the main page of a blog is an unconditional recommendation of that blog. There are few friends with whom I share enough in common to give such a recommendation for a frank personal blog, and for those few I am unwilling to do so because it generates social pressure to include others. Some of the friends who didn’t get listed in my blogroll would probably be offended, even though it’s only the occasional post that I strongly disagree with which would make me refrain from listing them.
I appreciate it when people add me to their blogroll, but have no expectation that the listing will remain (I won’t be offended if someone changes their criteria for listing and removes my entry). But I consider it an equal compliment when someone cites one of my blog posts as a reference and recommends that other people read it. Citing an individual post has the advantage (for the person citing it) that they are specifically not recommending my entire blog. For example, if you recommend one of my posts about Linux and one of your readers goes further into my site and is offended by my posts about politics, then it’s not your problem. Tim Berners-Lee made an interesting point [1]: “so readers, when they find something distasteful or unreliable, don’t just hit the back button once, they hit it twice”. I know that some people who would be interested in the technical Linux issues I write about don’t read my blog because they are offended by my political beliefs. I’m not concerned about this – but I don’t expect that everyone who chooses to link to me will want to make the same trade-off.
When citing individual posts it’s possible to strongly agree with one post and strongly disagree with another by the same author.
Probably the best way of acknowledging your friends via blogging (if you choose to do so) is to make an occasional links post containing short positive recommendations of the best posts your friends wrote. If every month or two someone writes a links post (a post with little original content, merely recommendations of other posts) which references your posts then you know that they like you (and you get a Technorati.com boost), so a blogroll entry hardly seems necessary. An additional benefit of giving such direct credit is that the person receiving the links will know what you consider to be their best work, which will help them in their future writing.
I believe that adding this feature to common blogging software was a mistake. The small number of people who actually need it (such as Google) can create it via an HTML widget (IE writing raw HTML for the page – it’s really easy to do) and the rest of the population would be better off without it.
The site http://maxfeed.ath.cx/ is copying the entire Planet Debian feed for the purpose of splogging. I’ve sent one DMCA take-down notice for one of my pages (hopefully they will go through and remove all pages that were illegally copied from my feed). Other people who have non-commercial use licenses for their blog feeds may want to do the same.
Would it be possible to have the entire Debian feed licensed in such a way that one person could request that the Planet Debian feed not be used for such things?
A widely cited unofficial rule on the Internet is known as Godwin’s Law [1]. In its original form this rule states that “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one”. Mike Godwin noted that “overuse of Nazi and Hitler comparisons should be avoided, because it robs the valid comparisons of their impact”. The purpose of noting and publicising this is to reduce such false comparisons. It’s often used as a rule of conduct in various fora, where it’s regarded that if you compare your opponent in a debate to Hitler or unjustly call them a Nazi then the debate is over and you have lost.
In more recent times, due to successful application of Godwin’s Law, the frequency of inappropriate comparisons has dramatically decreased. This, combined with the small number of people on the net who are interested in discussing historical events that happened ~60 years ago (and the even smaller number who are interested in learning about them), means that the possibility of robbing valid comparisons of their impact is small. Now a large part of the use of Godwin’s Law is because it’s regarded as an ad-hominem attack that distracts everyone from serious discussion more than most such attacks. To see examples of this you merely have to do a Google search for “Bill Gates” and “Hitler” (currently 597,000 web pages have both those terms).
Note that the original reason given for avoiding such false comparisons was to sustain the impact of legitimate comparisons. It is quite legitimate to use the term Nazi when describing anyone who wants to implement mass-murder and slavery on a national scale (or any of the other awful things that the Nazis did), or the propaganda and other mechanisms that the NSDAP used to gain power. It is also legitimate to make historical comparisons: comparing Stalin to Hitler is historically valid (both were awful tyrants who committed a similar range of crimes) – I’m not going to compare them in this post, I merely note that such a comparison is valid (and is the subject of much debate by people who are interested in history).
I’ve just had a religious zealot named Elder Dave accuse me of breaching some new version of Godwin’s Law. I believe that my mention of the fact that his religious group (which wants to deny homosexuals the same legal rights as all other people) has some beliefs in common with the Westboro Baptist Church [2] is valid. The Westboro Baptist Church seems to be the leading organisation in the English-speaking world for demanding legislation to discriminate against homosexuals. Anyone else who demands such legislation IS trying to achieve similar aims and should expect this to be noted by everyone who has any knowledge of what is happening in the world.
Religious groups are given a lot of freedom to discriminate internally (EG “Priest” is one of the few jobs that can be denied to a qualified woman without any legal recourse). Also I think that most people will agree that it’s acceptable for them to advocate self-repression (EG the book What Some of You Were [3]). But when they try to get legislation enacted to institutionalise such discrimination I believe that they are going too far.
As for why the 95% of us who aren’t gay should be bothered about this issue, there’s the famous poem by Martin Niemöller which is known by the line “First they came for the Jews” [4] (actually there are a few versions of the poem some of which don’t mention Jews – check the link for details). Here’s another Wikipedia link for the same poem [5].
Today I released new versions of my Bonnie++ [1] benchmark. The main new feature (in both the stable 1.03b version and the experimental 1.93d version) is the ability of zcav to write to devices. The feature in question was originally written at the request of some people who had strange performance results when testing SATA disks [2].
Now I plan to focus entirely on the 1.9x branch. I have uploaded 1.03b to Debian/unstable, and shortly I plan to upload a 1.9x version so that Lenny will include Bonnie++ 2.0x.
One thing to note is that Bonnie++ in the 1.9x branch is multi-threaded, which means that lower performance will be achieved with some combinations of OS and libc. I think that this is valid, as many applications that you will care about (EG MySQL and probably all other modern database servers) only support a threaded mode of operation (at least in the default configuration) and many other applications (EG Apache) have a threaded option which can give performance benefits.
In any case the purpose of a benchmark is not to give a high number that you can boast about, but to identify areas of performance that need improvement. So doing things that your OS might not be best optimised for is a feature!
While on this topic, I will never add support for undocumented APIs to the main Bonnie++ and ZCAV programs. The 1.9x branch of Bonnie++ includes a program named getc_putc which is specifically written to test various ways of writing a byte at a time; among other things it uses getc_unlocked() and putc_unlocked() – both of which were undocumented at the time I started using them. Bonnie++ will continue using the locking versions of those functions; when I last tested, this meant that the per-char IO tests in Bonnie++ on Linux gave significantly less performance than on Solaris (to a degree that obviously wasn’t due to hardware). I think this is fine: everyone knows that IO one character at a time is not optimal anyway, so whether your program sucks a little or a lot because of doing such things probably makes little difference.
At the moment it seems that about half the USB flash devices on sale are listed as “Vista ReadyBoost Ready”. I recently bought an 8G USB device that I returned because it could only sustain 4MB/s writes (not much fun for backing up 4G+ of data). So I’ve been wondering whether I should get a ReadyBoost Ready device.
ReadyBoost Ready only means that the device is faster than other devices in some ways; it implies no inherent feature difference. After some searching I found a FAQ about ReadyBoost [1] which explains what it requires.
It seems that ReadyBoost needs “2.5MB/sec throughput for 4K random reads and 1.75MB/sec throughput for 512K random writes”, which isn’t really a lot. About 5-6 years ago I was running some machines with 4 disk RAID-5 arrays that could sustain 3MB/s writes for significantly smaller block sizes (maybe 12K). Given that random access is something that disks are really bad at (physical movement of the heads is required) and something that flash is good at, this gives an indication of how low the performance bar is for ReadyBoost.
I’m sure that a device which meets the minimum specs would do some good if you have a single disk that’s not overly fast. But if you have a decent RAID array for swap then I doubt that the minimum requirements for ReadyBoost would give a benefit.
As for doing bulk transfers, it seems that while ReadyBoost Ready devices will have consistent good performance across all their storage (apparently some devices perform better for the first blocks so that people who use FAT based filesystems can get good performance from their FAT) they won’t necessarily have particularly good performance for bulk IO (1.75MB/s is pitiful when you want to copy DVD images).
There are some USB flash devices that are marketed as having high performance and are supposed to sustain >20MB/s, but I’m unlikely to find them going cheap at my local electronics store. I had hoped that ReadyBoost would be demanding enough that ReadyBoost Ready devices (which aren’t the cheapest flash devices on sale but often aren’t too expensive) would satisfy my requirements.
Shintaro wrote an interesting post about Rakugo (a form of story-telling) and Mottainai (a particular form of gratitude that is now becoming an English word in reference to environmentalism) [1]. My definitions of the two words are poor, I encourage interested readers to read Shintaro’s post for the links.
Recently I have been considering which jobs are most like a senior programmer position in terms of skills and work environment. It occurred to me that a chess master has a job that bears some similarities to that of a senior programmer. A chess master might play a simul [2], which compares to a programmer or sys-admin fixing lots of small bugs at the same time. In a serious chess tournament up to seven hours may be on the clocks – not an uncommon amount of time to spend finding a subtle bug in a program. For quick chess games as little as three minutes may be on the clock for all your moves – similar to the situation where the network goes down and the CEO is watching you fix it. The intellectual abilities needed for chess and programming have many obvious similarities: memorising significant patterns to avoid re-calculation, having a large mental register set to allow considering complex situations or multiple possibilities at the same time, and being able to spend many hours thinking about a problem.
With that in mind one thing that particularly interested me in Shintaro’s post was the reference to a Japanese apprentice system where “They are trained at his masters house from early years maybe after junior-high, living and caring his boss every day life, like Go or Shogi players“. I wonder how this could be used in other cultures and career paths (such as English-speaking countries and computer programming). Probably the “every day life” part wouldn’t be well accepted.
When compared with the high-school experiences that seem typical of people in the computer industry, an apprentice program has a lot to offer. A small amount of pay (as opposed to school fees) and an environment where laws apply (IE almost no bullying) are both significant benefits.
Anyone who is reasonably intelligent is likely to find that they can’t learn much from the teachers in the final years of high-school, because most teachers don’t know much about the subjects they teach, and in the instances where the teacher does know more the class environment doesn’t permit teaching more advanced material.
Someone of average intelligence who has worked for 5+ years in an industry can always teach a beginner some useful things, so there is obvious potential to learn.
There is a meme that apprentice programs only apply to trades, not intellectual work. This meme is probably correct when applied to many categories of office work but seems obviously wrong when applied to computer work. My observation is that at most places where I have worked, the most productive, skilled, and highly paid people spent a significant amount of time on tasks way below their skill level – and often requiring skills that they don’t have. One example is when I was working for an ISP in a senior sys-admin position: when things really went wrong on the network I was usually the person to fix them – but I spent moderate amounts of time fixing hardware problems with desktop PCs and some time fixing Windows networking issues (of which I know little). Some of the problems I fixed could have been fixed by a 16yo apprentice who would probably have taken less time while also having a much lower hourly rate – and in some cases may have done a better job.
I’m not sure that having an apprentice assigned to an individual programmer would work well, but having one per team should.
I have previously written about how to get the benefits of a university education without attending university [3] so I don’t think that the inability of such apprentices to attend university would necessarily be a problem.
Cory Doctorow wrote an interesting article about social networking [1]. One of his points is “Imagine how creepy it would be to wander into a co-worker’s cubicle and discover the wall covered with tiny photos of everyone in the office, ranked by “friend” and “foe,” with the top eight friends elevated to a small shrine decorated with Post-It roses and hearts”; another concerns forced “friends”, where colleagues and casual acquaintances demand to be added to a friends list.
He speculates that the reason social networking systems are a fad is that once too many people you don’t really like force themselves into your friends list, you feel compelled to join a different service.
I believe that the practice of ranking friends is simply a bad idea and wonder whether anyone who has completed high-school has ever used it seriously. If you publicly rank your friends then you will alienate other friends (particularly any who might have ranked you more highly than you ranked them). Everyone who has used social networking systems has discovered the pressure to avoid alienating people that you don’t actually like. It seems obvious that alienating people who you do like is even more of a problem.
In my previous post about Better Social Networking [2] I suggested having multiple lists on your social networking server that are published to different people. That would allow segregating the lists as a way of dealing with some demands to be listed. People who are associated with work (colleagues, managers, and in an example Cory used, students at a school where a teacher worked) would be on a work list. The work list would point to the work profiles of other people, which would match whatever the standards are for the industry in question (which still allows quite a range – the standards for sys-admins of ISPs differ significantly from those for primary school teachers). I’m sure that someone who works for an ISP in Amsterdam would understand why a work-based friend request to someone who teaches primary school in a religiously conservative part of the world would be declined (though the same person might be added to a personal friends list).
Another way of alleviating such problems is to not require that listing be bi-directional. Current social networking systems involve one party making a friend request to another which is listed as pending in the GUI for both parties. The options are to either leave it in that state (which is an annoyance) or reject it (which may cause offence). With a uni-directional listing one party would add the other and hope for the best. If they aren’t obsessive about such things they may not even notice that the other party didn’t reciprocate. Also it would allow for famous people receiving links from many people to their public profile without any expectation of reciprocation. Of course a distributed social networking system such as I suggest would inherently have uni-directional links as there would be no central repository to force them to be all bi-directional.
The web site www.CheatNeutral.com offers cheaters the possibility of paying single or monogamous people to offset their cheating. It’s an interesting spin on the carbon trading schemes that are on offer.
www.greenmaven.com – a Google search site for Green related information. www.greenerbuildings.com – information on designing buildings to be “Green”.
Binary adding machine using marbles and wood-work [1]. I’ve just been reading Accelerando by Charles Stross [2]; in that book he describes the Pentagon using Babbage machines to avoid the potential of electronic surveillance.
Alan Robertson has just started a blog [3]. He is a lead developer in the Linux-HA (Heartbeat) [4] project (which incidentally lists SGI as a friend due to the work that Anibal and I did [5]).
Here is an interesting article about light pollution [6]. It covers the issues of observing the stars, saving energy, and reducing crime through effective lighting.
I was recently giving away some old P3 and P4 machines and was surprised by the level of interest in P4 machines. As you can see from my page on computer power use [1], the power use of a P4 system is significantly greater than that of a P3. The conventional wisdom is that the P4 takes 1.5 times as many clock cycles as a P3 to perform an instruction; the old SPEC CPU2000 results [2] seem to indicate that a 1.5GHz P4 will be about 20% faster than a 1GHz P3, but as the P4 has significantly higher memory bandwidth the benefit may be greater for memory intensive applications.
But generally, as a rule of thumb, I would not expect a low-end P4 desktop system (EG 1.5GHz) to give much benefit over a high-end P3 desktop system (1GHz), and a 2GHz P4 server system probably won’t give any real benefit over a 1.4GHz P3 server system. So in terms of CPU power a P4 doesn’t really offer much.
One significant limitation of many P3 systems (including most name-brand P3 desktop systems) is that the Intel chipsets limited the system to 512M of RAM. This really causes problems when you want to run Xen or similar technologies. I have a few 1.5GHz P4 systems that have three PC-133 DIMM sockets, allowing up to 768M of RAM (it seems that PC-133 DIMMs only go up to 256M in size – at least the ones that cost less than the value of the machine). Another issue is USB 2.0, which seems to be supported on most of the early P4 systems but none of the P3 systems.
512M of RAM is plenty for light desktop use and small servers, my Thinkpad (my main machine) had only 768M of RAM until very recently and it was only Xen that compelled me to upgrade. The extra power use of a P4 is significant, my 1.5GHz P4 desktop systems use significantly more power than a Celeron 2.4GHz (which is a much faster machine and supports more RAM etc). Low-end P4 systems have little going for them except for 50% more RAM (maybe – depends on how many sockets are on the motherboard) and USB 2.0.
So it seems strange that people want to upgrade from a P3 system to a P4.