I’ve been asked for my opinion of senatoronline.org.au, which claims to be Australia’s only internet-based political party. The claim may be correct depending on what you consider to be “Internet based”. Here is a copy of their platform from their web site:
Senator On-Line is not aligned to any other political party… it is neither Liberal nor Labor.
Senator On-Line (‘SOL’) is a truly democratic party which will allow everyone on the Australian Electoral roll who has access to the internet to vote on every Bill put to Parliament and have its Senators vote in accordance with a clear majority view.
We will be running candidates for the upcoming federal Upper House (Senate) elections.
When a SOL senator is elected a web site will be developed which will provide:
- Accurate information and balanced argument on each Bill and important issues
- The vast majority of those registered on the Australian Electoral roll the chance to have their say by voting on bills and issues facing our country
- A tally of all votes which will then count in Parliament
Each person on the Australian Electoral roll will be entitled to one vote and only be allowed to vote once on each bill or issue.
SOL senators will have committed in writing to voting in line with the clear majority view of the SOL on-line voters.
Senator On-Line will enable broader community involvement in the political process and the shaping of our country.
If you like the concept, please register your details and tell others about SOL.
Now at first glance it sounds like a good idea: the Liberal party (which is similar in all the bad ways to the US Republican party) has demonstrated what happens when a party gets away with entirely ignoring the wishes of the voters.
But there are three significant problems with the Senator Online idea. The first is the issue of providing “Accurate information and balanced argument on each Bill”. We have seen many media outlets claiming that there is a debate about global warming (the debate ended years ago; the vast majority of scientists have agreed for a long time that global warming is real), and now the same media outlets are claiming that there is a debate about whether it will cause any harm to us in the near future (ignoring all the dams that are running low). One of the aims of the democratic process is that representatives who spend all their time working on politics can fully analyse the issues and gain access to resources that are not available to the average citizen, and can therefore make better informed decisions. The next problem is that it can degenerate into mob rule; the idea of a tabloid TV show being able to directly control votes in the Senate is not an exciting one. The final problem is how to ensure that each citizen gets the opportunity to cast exactly one vote; solving this requires good security and identity checks, such as viewing photo ID. The checks used for voting in person (merely asserting your name and residential address) might be considered adequate for an election, but are grossly inadequate for online voting, where one false registration allows many false votes.
I think it would be more interesting to start a populist party that campaigns for the issues that have the most impact on the majority of citizens. Issues such as the fact that a couple on the median income can’t afford the median house price [1], the lack of job security that is caused by recent industrial relations legislation, the price of health-care, and the fact that any car which is affordable on a single median income (after money is spent on rent and other basic living expenses) will lack basic safety features such as air-bags. While the Green party has decent policies on these issues, they have many other policies to consider. A party that solely concentrated on allowing more people to have decent health care, not risk losing their job, own their own home, and drive a safe car would get a lot of interest from people who don’t normally take much notice of politics.
I have just read Career Development for Geeks [1] by Erik de Castro Lopo [2]. It makes some interesting points about a traditional approach to an IT career. The path I followed for most of my career (after I had a few years experience) was to work as a contractor and happily leave jobs without having anything else lined up.
Erik suggests getting Engineers rather than managers to give references, which is an interesting idea. Engineers can give better references about quality of work, while managers can speak on behalf of their employer (in theory). In practice a sane manager won’t give a bad reference for legal reasons, so the value of a reference from a manager is probably limited. Of course one problem with references is that I have never heard of a recruiting agent or employer actually verifying the ID of a referee. I could be listed on a friend’s CV as a senior manager in a multi-national company (which doesn’t have a major Australian presence) and give a good reference, and it seems unlikely that the typical recruiter would work it out.
For someone with plenty of spare time and no significant assets (no risk of being sued) it could be entertaining to apply for a bunch of positions that they are not qualified for, with friends using pre-paid mobile phones to give references. This could be done as a documentary on corporate hiring practices, or simply to try to get the highest paid job possible. Based on observing some former colleagues it seems that little skill is required to get a job, and that when people do really badly they still get paid for at least a few months. I am constantly amazed when reading reports about so-called “con artists” who commit crimes for what are often small amounts of money. Getting an income significantly greater than average without knowing anything about how to do the work is very common and is never treated as fraud (the classic example was a former colleague who wanted to write his own encryption algorithm but who didn’t even know about binary and therefore couldn’t use operations such as XOR).
Erik’s main suggestion for dealing with recruiting agents is to talk about project management issues (recruiters don’t understand technology). My way of dealing with them has been to assure them that I know it all and tell them to forward my CV to the client.
Another suggestion is to learn new skills and diversify your skills. I don’t support this because I believe that the majority of people who read my blog are significantly more skillful than the typical programmer. If an area of technology starts to go out of fashion then it’s the people with the least skills who suffer the most. If you are good at your work and enjoy it then it shouldn’t matter much if people around you are being laid off. Of course to rely on this you have to be working in a reasonably large field. For example if you develop software in a language which has hundreds of programmers then you may be at risk, but if there are tens of thousands of programmers using the language then you only need to be in the most skillful 10% to be assured of employment for the next decade or two.
That said there are benefits to moving around, meeting new people, and working on new contracts. One thing you don’t want is to have 10 years of experience which are the same year repeated 10 times!
Update: Here is a paper Erik submitted to OSDC on the same topic [3]. Mostly the advice is the same but with more detail and in a better format for reading.

Above is a picture of a DVD player I saw on sale in Dick Smith Electronics [1] (a chain store that used to sell mostly electronics hobbyist gear but now mostly sells consumer electronics gear). I asked one of the staff why it said “root”; tests revealed that the DVD caused any player to display “root” once it was inserted. The DVD in question was from the $2 box (the DVDs that didn’t sell well at other stores) and for some reason had the string “root” in its title (or some other field that gets picked up by the player).
I wonder if an ex-employee of a movie company is laughing about millions of DVD players all around the world saying “root”.
Update: I’ve been told by private mail that “It means it’s displaying the “root” menu. (As opposed to the title menu or any submenu’s)” and “most just display ‘menu’ or similar“. So apparently every time my DVD player says “menu” the new ones from Dick Smith will say “root” (I have yet to test this theory).
Reporters Sans Frontiers (AKA RSF AKA Reporters Without Borders) has an interesting document about blogging [1]. They are specifically focussed on blogging as a way of reporting news. Their definition of a blog states that it is “a personal website” (there are many corporate blogs run by teams) and that it contains “mostly news” (most blogs that I read contain mostly technical information about computers and any “news” is mostly about the author). They also describe a blog as being set up with an “interactive tool” – most blogs are written using web-based tools, but some are written with plain text and a compilation system.
Some of their technical information is simply wrong. For example they say that RSS “alerts users whenever their favourite blogs are updated” (this can be done through RPC notification which then requires RSS to pull the data – but a user will almost always have a tool polling their favourite RSS feeds). Their section listing the various options for blogging platforms mentions LiveJournal, Blogger, and MSNSpaces but doesn’t mention WordPress.com, which seems to be a serious omission (although it does mention civiblog.org which is a useful hosting resource for grassroots political campaigning). But these are minor problems; they are primarily reporters, not programmers.
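To illustrate the polling model, something like the following trivial example (the URL is made up and the parsing is deliberately crude) is essentially all that the “alert” amounts to when run from cron every half hour or so:
# fetch the feed and print the most recent item titles - the client has to ask,
# nothing is pushed to it
curl -s http://example.com/feed/ | grep -o '<title>[^<]*</title>' | head -n 5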
Their document gets interesting when it gives links to pages about blogging ethics. One page that was not directly linked (presumably because it mainly concerns non-journalistic blogging) is Rebecca Blood’s document about blogging ethics [2]. She gives brief coverage to conflicts of interest and devotes most space to the topic of maintaining a record of changes. One thing I have been considering is having a separate instance of WordPress for documents that change [3]. This way regular blog posts (such as this one) can be relied on to preserve a list of changes (any change other than correcting a typo will be documented), so if you link to a post you can rely on the original intent being preserved. Posts which have ongoing value as live documents will instead be kept current without a significant change-log. Items that go into the main blog would include commentary on news and predictions, while the document blog would contain technical information about Linux, the science-fiction stories that I will eventually write, etc. When I wrote my previous blog post about this issue I was mainly considering technical issues, but when considering the ethical issues it becomes clear that I need a separate blog (or other CMS) for such things. The site etbe.coker.com.au needs to be a reliable record of what I wrote while another site such as docs.coker.com.au could contain live documents with no such record. The essential factor here is the ability of the user to know what type of document they are seeing.
Another really interesting point that Rebecca makes is in regard to known bad sources. Sometimes a known bad source can produce worthy data and is worth referencing, but you have to note the context (which among other things means that if the worthy data gets any reasonable number of hits it may get replaced by something entirely different). If a blogger cites a reference on a known bad site and explains the reason for the reference (and the fact that the nature of the site is known) then the reader reaction will be quite different to just a reference with no explanation.
As a contrast to Rebecca Blood’s document, the cyberjournalist.net code of ethics [4] covers the traditional issues of journalistic integrity, “public interest”, etc. One interesting issue that they raise (but apparently don’t address) is the definition of “private people”. While it is generally agreed that movie stars and similar people deserve less protection as they have chosen to be in the public eye, the situation regarding bloggers is less clear. Is a blogger with a high ranking on technorati.com in a similar position to a Hollywood star when it comes to privacy?
A claim is sometimes made that blogs are unreliable because they can change (based on the actions of some bloggers who change posts with no notification and turn sections of the blogosphere into a swamp). Magazines and newspapers are held up as an example of unchanging content, and people who publish online magazines sometimes express horror at the idea of changing a past issue. The fact is that newspapers and magazines never changed past issues because it is impossible to recall hundreds of thousands of printed documents that are distributed around a country or the world. When it’s possible to fix old issues (as it is online) then the requirement is to document the changes, not to keep bad versions.
It probably would be a useful feature for a blogging package to have the ability to display (and link to) old versions of a page. This would allow the users to easily see the changes and anyone who references a post can reference a particular version of it. In some situations it may make sense to use a Wiki server instead of a Blog server for some data to preserve the change history. Maybe I should consider a wiki in which only I have write access for my documents repository.
No-one seems to cover the issue of how to deal with fiction. Obviously for a blog such as 365tomorrows.com (a science-fiction blog) it’s assumed that all content is fictional unless stated otherwise. The problem comes when someone includes fiction amongst the non-fiction content of a blog. I recently unsubscribed from a blog feed when the author revealed that what appeared to have been a diary entry was actually a fiction story (and a poor one at that). If your blog is mostly non-fiction then any fiction must be clearly marked, probably by including the word “fiction” in the title and the permalink to avoid all potential for confusion. For my documents CMS/blog I am considering having a category of “fiction” and using the category name in the permalink address of each post. Of course science-fiction is sometimes obvious as fiction, but to avoid problems with fiction that is set in current times I think it’s best to clearly mark all such posts. The old-fashioned media seems to have dealt with this: tabloid women’s magazines (which almost everyone reads occasionally as they are often the only reading material in a waiting room) traditionally have a fiction story in every issue which is clearly marked.
Another type of blogging is corporate blogging. I wonder whether that needs to be covered separately in an ethics code.
One thing that the three documents about ethics have in common (as a side-note in each) is the fact that the personal reputation of the writer depends on their ethics. If you are known for writing truthfully and fairly then people will treat your posts more seriously than those of writers who lie or act unfairly. There are direct personal benefits in acting ethically. The RSF document claims that the sole purpose of ethics standards is to instill trust in the readership, which directly benefits the writer – not for any abstract reasons. Whenever ethics is mentioned in terms of writing there is always someone who claims that it can’t be enforced; well, many people in your audience will figure out whether you are acting ethically and decide whether they want to continue reading your material. I wonder whether Planet installations should have ethics codes and remove blogs that violate them.
In conclusion I think that a complete code of ethics for blogging needs to have some IF clauses to cover the different types of blog. I may have to write my own. Please let me know if you have any suggestions.
I just read an interesting post by Kylie Willison [1] which mentions the restaurant Lentil as Anything [2].
The restaurant chain is noteworthy for charging what people believe that the food is worth (poor people can eat for free). I think that there are cultural similarities with the Linux community, so we should have a Linux meeting at one of those restaurants some time. Comment or email me if you are interested, I’ll probably arrange the details on the LUV-talk mailing list [3].
I have just had a lot of trouble with thumbnails on one of my blogs. It turned out that I had to install the package php5-gd and restart Apache before thumbnails would even be generated. The package php5-gd (or php4-gd) is “suggested” by the Debian WordPress package rather than being a dependency, so the result of apt-get install wordpress will be that thumbnails won’t work.
I’ve filed Debian bug report 447492 [1] requesting that php5-gd be a dependency. Another slightly controversial issue is the fact that the MySQL server is not a dependency. I believe that it’s correct to merely suggest MySQL, as the database server is commonly run on a different host and WordPress will clearly inform you if it can’t access the database.
An alternate way of solving this bug report would be to have WordPress give a warning such as “Thumbnails disabled due to lack of php-gd support” which would allow users to make requests of their sys-admins that can be easily granted.
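For the benefit of anyone hitting the same problem, the following is roughly what fixed it for me (the check assumes the php5-cli package is installed to provide a command-line php binary, and is optional):
# see whether the GD extension is available, install it if not, then restart
# Apache so that mod_php picks it up
php -m | grep -qi '^gd$' || apt-get install php5-gd
/etc/init.d/apache2 restart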
I have just read an interesting post speculating about the possibility of open source hardware [1].
To some extent things have been following a trend in that direction. Back in the bad old days every computer manufacturer wanted to totally control their market segment and prevent anyone else from “stealing their business”. Anti-competitive practices were standard in the computer industry, when you bought a mainframe you were effectively making a commitment to buy all peripherals and parts from the same company. The problems were alleviated by government action, but the real change came from the popularity of PC clones.
White-box clones where every part came from a different company truly opened up hardware development, and it wasn’t all good. When running a simple single-tasking OS such as MS-DOS the problems were largely hidden, but when running a reliable multi-tasking OS such as Linux hardware problems became apparent. The PCI bus (which autoconfigured most things) reduced the scope of the problem but there are still ways that white-box machines can fail you. Now when I get a white-box machine I give it away to members of my local LUG. My time is too valuable to waste on debugging white-box hardware; I would rather stick to machines from IBM and HP which tend to just work.
Nowadays I buy only name-brand machines, where all the parts are designed and tested to work together – this doesn’t guarantee that the machine will be reliable but it does significantly improve the probability. Fortunately modern hardware is much faster than I require for the work I do, so buying second-hand name-brand machines (for less money than a new white-box machine) is a viable option.
The PCI bus [2] standard from Intel can be compared to some of the “Open Source” licenses from companies where anyone can use the software but only one company can really be involved in developing it.
One significant impediment to open hardware development is the proprietary nature of CPU manufacture. Currently there are only a few companies that have the ability to fabricate high-end CPUs, so projects such as OpenRISC [3] which develop free CPU designs will be limited to having their CPUs implemented with older technology (which means lower clock speeds). However this doesn’t mean that they aren’t useful; tailoring factors such as the number of registers, the bus width of the CPU, and the cache size to match the target application has the potential to offset the performance loss from a lower clock speed. But this doesn’t mean that an OpenRISC or similar open core would be ideal for your typical desktop machine.
If companies such as Intel and AMD were compelled to fabricate any competing CPU design at a reasonable cost (legislation in this regard is a possibility as the two companies collectively dominate the world computer industry and it would be easy for them to form a cartel) then designs such as OpenRISC could be used to implement new CPUs for general purpose servers.
Another issue is the quality of support for some optional extra features which are essential for some operations. For example Linux software RAID is quite good for what it does (basic mirroring, striping, RAID-5 and RAID-6), but it doesn’t compare well with some hardware RAID implementations (which are actually implemented in software with a CPU on the RAID controller). For example with an HP hardware RAID device you can start with two disks in a RAID-1 and then add a third disk to make it a RAID-5 (I’ve done it). Adding further disks to make it a larger RAID-5 is possible too. Linux software RAID does not support such things (and I’m not aware of any free software RAID implementation which does). It would certainly be possible to write such code but no-one has done so – and HP seem happy to make heaps of money selling their servers with the RAID features as a selling point.
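For comparison, the operations that Linux software RAID does handle are simple enough, for example (the device names are just examples):
# create a two-disk RAID-1; a third disk added later only becomes a hot spare,
# there is no way to reshape the array into a three-disk RAID-5
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --add /dev/md0 /dev/sdc1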
Finally there’s the issue of demand. When hardware without free software support (such as some video cards which need binary-only drivers for best performance) is discussed there is always a significant group of people who want it. The binary-only drivers in question are of low quality, often don’t support the latest kernels, and have such a history of causing crashes that kernel developers won’t accept bug reports from people who use them, but still people use them. In the short term at least I expect that an open hardware design would deliver less performance, and in spite of its potential to offer better reliability the majority of the market would not accept it. The production volume of electronics gear is the major factor determining the price, so it would also cost more.
I think that both IBM and HP provide hardware that is open enough for my requirements; they both have engineers working on their Linux support and the interfaces are well enough documented that we generally don’t have any problems with them. Intel, AMD, and the major system vendors are all working on making things more open, so I expect some small but significant improvements in the near future.
In my previous post about Advice for Speakers [1] I referred to the common problem of going through presentation material too quickly due to being nervous. In extreme cases (which tend to happen when giving a presentation to an unusually large audience) the material for an hour-long presentation may be covered in 10 minutes or less. This is a problem that most speakers have at least once in their career.
I recently heard an interesting (and in retrospect obvious) way of dealing with this problem. That is to label each note card with the estimated time through the presentation when it should be presented. If you are reading from the 10 minute card at 2 minutes into the presentation then you need to slow down.
Of course this doesn’t work as well if you follow the “strict powerpoint” method of presenting where the only notes are the slides. It would be good if a presentation program supported having windows on two displays so you could have one full-screen window on an external video device for the audience to see and one window that’s not full-screen on the built-in display in the laptop for the speaker. The built-in display could have speaker notes, a time clock, and other useful things.
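Driving two displays with different content is already possible at the X level with recent versions of xrandr; it’s the presentation software that needs to catch up. Something like the following (the output names vary between drivers, these are just examples) puts the projector to the right of the laptop panel so a program could show slides on one and notes on the other:
# extend the desktop: slides go on the external output, notes stay on the panel
xrandr --output VGA --auto --right-of LVDS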
I have just filed Debian bug report 447207 [2] requesting that this feature be added to Open Office. It was closed before this post was even published due to Unstable apparently having some degree of support for this and the rest being already on the planned feature list (see the bug report for details). I found the complaint about a feature request being against Etch interesting as Debian doesn’t have bugs tracked against different releases, so it’s not as if a bug reported against Etch will get any different treatment than a bug reported against Unstable.
Bruce Schneier summarised a series of articles about banking security [1]. He mentioned the fact that banks don’t seem to care about small losses and would rather just deal with the problem (presumably by increasing their fees to account for losses).
There are some other interesting bits in the article, for example banks are planning a strategy of securing transactions with an infected computer [2]! Now there are some possible solutions to this, for example the bank could issue a hardware device that allows the customer to enter the account number, amount to transfer, destination account, and PIN and then produces a cryptographically secure hash (based in part on a rolling code) that the user could type in.
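To give a rough idea of the concept (the key handling, counter, and truncation here are made up for illustration and bear no relation to any real banking protocol), the device would compute something like:
# combine the transaction details and a rolling counter with a shared secret,
# then truncate the HMAC to a short code that the user can type in
echo -n "acct=123456:dest=654321:amount=100.00:counter=42" |
openssl dgst -sha1 -hmac "shared-secret" | awk '{print substr($NF,1,8)}'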
The only way that you are going to do anything securely with an infected host is if everything is offloaded into an external device. In which case why not just do the Internet banking on the external device? It’s not difficult to make a hardware device that is small enough to carry everywhere, has a display, an input device, net access, and which is reasonably difficult to crack externally. Consider for example a typical mobile phone which has more RAM, CPU power, and storage than a low-end machine that was used for web browsing in 1996. Mobile phones have a good history of not being hacked remotely and are difficult for the owner to “unlock”. A locked-down mobile phone would be a good platform for Internet banking, it has wireless net access in most places (and with a Java application on the phone it could do banking by encrypted SMS). Being locked down to prevent the user from reconfiguring the software (or installing new software) will solve most of the security problems that plague Windows.
If, when signing up for a new phone contract, I was offered the possibility of getting a phone with secure banking software installed for a small extra fee then I would be very interested. Of course we would want some external auditing of the software development to make sure that it’s not like some of the stupid ideas that banks have implemented. Here is a classic example of banking stupidity [3]. They display a selected word and picture for the user when they log in, to try to prevent phishing (of course a proxy or a key-logger on the local machine will defeat that). They also ask for an extra password (of the simple challenge phrase variety) if you use a different IP address; of course as the typical broadband user doesn’t know when their IP address changes they wouldn’t know if their data was being proxied, and dial-up users will enter it every time. A Google search for “internet banking” picture password turns up a bunch of banks that implement such ideas.
deb http://www.coker.com.au etch selinux
The above sources.list line has all the i386 packages needed for running SE Linux with strict policy on Etch as well as a couple of packages that are not strictly needed but which are really convenient (to solve the executable stack issue).
gpg --keyserver hkp://subkeys.pgp.net --recv-key F5C75256
gpg -a --export F5C75256 | apt-key add -
To use it without warnings you need to download and install my GPG key, the above two commands do this. You will of course have to verify my key in some way to make sure that it has not been replaced in a MITM attack.
The only thing missing is a change to /etc/init.d/udev to have a new script called /sbin/start_udev used to replace the make_extra_nodes function (so that the make_extra_nodes functionality can run in a different context). Of course a hostile init script could always exploit this to take over the more privileged domain, but I believe that running the init scripts in a confined domain does produce some minor benefits against minor bugs (as opposed to having the init scripts entirely owned).
I back-ported all the SE Linux libraries from unstable because the version in Etch doesn’t support removing roles from a user definition by the “semanage user -m” command (you can grant a user extra roles but not remove any roles). Trying to determine where in the libraries this bug occurred was too difficult.
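For reference, the sort of command I mean is roughly the following (the user and role names are just examples) – it works with the backported libraries but fails on the Etch versions whenever the modification removes a role:
# modify an SE Linux user definition so that it has only the listed roles
semanage user -m -R "staff_r" test_u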
Does anyone know of a good document on how to create repositories with apt-ftparchive? My current attempts are gross hacks but I’ve gone live anyway as the package data is good and the apt configuration basically works.
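For anyone wondering what’s involved, the sort of gross hack I mean amounts to little more than the following (the paths and directory layout are examples, and the Release file still needs to be signed separately):
# scan the pool for packages and generate a compressed index, then a Release file
apt-ftparchive packages pool | gzip -9 > dists/etch/selinux/binary-i386/Packages.gz
apt-ftparchive release dists/etch > Release.tmp && mv Release.tmp dists/etch/Release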