I just read an interesting post by Kylie Willison [1] which mentions the restaurant Lentil as Anything [2].
The restaurant chain is noteworthy for charging whatever people believe the food is worth (poor people can eat for free). I think there are cultural similarities with the Linux community, so we should have a Linux meeting at one of those restaurants some time. Comment or email me if you are interested; I'll probably arrange the details on the LUV-talk mailing list [3].
I have just had a lot of trouble with thumbnails on one of my blogs. It turned out that I had to install the package php5-gd and restart Apache before thumbnails would even be generated. The package php5-gd (or php4-gd) is “suggested” by the Debian WordPress package rather than being a dependency, so the result of apt-get install wordpress is that thumbnails won't work.
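For anyone hitting the same problem, the following commands should fix it on a Debian system running PHP 5 with Apache 2 (adjust the package and init script names if you use php4-gd or Apache 1.3):
apt-get install php5-gd
/etc/init.d/apache2 restart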
I've filed Debian bug report 447492 [1] requesting that php5-gd be a dependency. Another slightly controversial issue is the fact that the MySQL server is not a dependency. I believe that it's correct to merely suggest MySQL, as the database server is commonly run on a different host and WordPress will clearly inform you if it can't access the database.
An alternative way of resolving this bug report would be to have WordPress give a warning such as “Thumbnails disabled due to lack of php-gd support”, which would allow users to make requests of their sys-admins that can easily be granted.
I have just read an interesting post speculating about the possibility of open source hardware [1].
To some extent things have been following a trend in that direction. Back in the bad old days every computer manufacturer wanted to totally control their market segment and prevent anyone else from “stealing their business”. Anti-competitive practices were standard in the computer industry: when you bought a mainframe you were effectively making a commitment to buy all peripherals and parts from the same company. The problems were alleviated by government action, but the real change came from the popularity of PC clones.
White-box clones, where every part came from a different company, truly opened up hardware development, and it wasn't all good. When running a simple single-tasking OS such as MS-DOS the problems were largely hidden, but when running a reliable multi-tasking OS such as Linux hardware problems became apparent. The PCI bus (which autoconfigured most things) reduced the scope of the problem, but there are still ways that white-box machines can fail you. Now when I get a white-box machine I give it away to members of my local LUG. My time is too valuable to waste on debugging white-box hardware; I would rather stick to machines from IBM and HP which tend to just work.
Nowadays I buy only name-brand machines, where all the parts were designed and tested to work together – this doesn't guarantee that the machine will be reliable but it does significantly improve the probability. Fortunately modern hardware is much faster than I require for the work I do, so buying second-hand name-brand machines (for less money than a new white-box machine) is a viable option.
The PCI bus [2] standard from Intel can be compared to some of the “Open Source” licenses from companies where anyone can use the software but only one company can really be involved in developing it.
One significant impediment to open hardware development is the proprietary nature of CPU manufacture. Currently there are only a few companies that have the ability to fabricate high-end CPUs, so projects such as OpenRISC [3] which develop free CPU designs will be limited to having their CPUs implemented with older technology (which means lower clock speeds). However this doesn't mean that they aren't useful: tailoring factors such as the number of registers, the bus width of the CPU, and the cache size to match the target application has the potential to offset the performance loss from a lower clock speed. But that potential doesn't mean that an OpenRISC or similar open core would be ideal for your typical desktop machine.
If companies such as Intel and AMD were compelled to fabricate any competing CPU design at a reasonable cost (legislation in this regard is a possibility as the two companies collectively dominate the world computer industry and it would be easy for them to form a cartel) then designs such as OpenRISC could be used to implement new CPUs for general purpose servers.
Another issue is the quality of support for some optional extra features which are essential for some operations. For example Linux software RAID is quite good for what it does (basic mirroring, striping, RAID-5, and RAID-6), but it doesn't compare well with some hardware RAID implementations (which are actually implemented in software running on a CPU on the RAID controller). For example with an HP hardware RAID device you can start with two disks in a RAID-1 and then add a third disk to make it a RAID-5 (I've done it); adding further disks to make a larger RAID-5 is possible too. Linux software RAID does not support such things (and I'm not aware of any free software RAID implementation which does). It would certainly be possible to write such code but no-one has done so – and HP seem happy to make heaps of money selling their servers with the RAID features as a selling point.
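For comparison, here is the sort of basic mirroring that Linux software RAID handles well – a minimal sketch, where the device names are just examples and should be adjusted to suit:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# watch the initial synchronisation
cat /proc/mdstat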
Finally there's the issue of demand. When hardware without free software support (such as some video cards which need binary-only drivers for best performance) is discussed, there is always a significant group of people who want it. The binary-only drivers in question are of low quality, often don't support the latest kernels, and have such a history of causing crashes that kernel developers won't accept bug reports from people who use them – but still people use them. In the short term at least I expect that an open hardware design would deliver less performance, and in spite of the fact that it would have the potential to offer better reliability, the majority of the market would not accept it. As production volume is the major factor determining the price of electronics gear, it would also cost more.
I think that both IBM and HP provide hardware that is open enough for my requirements: they both have engineers working on their Linux support and the interfaces are well enough documented that we generally don't have any problems with them. Intel, AMD, and the major system vendors are all working on making things more open, so I expect some small but significant improvements in the near future.
In my previous post about Advice for Speakers [1] I referred to the common problem of going through presentation material too quickly due to being nervous. In extreme cases (which tend to happen when giving a presentation to an unusually large audience) the material for an hour-long presentation may be covered in 10 minutes or less. This is a problem that most speakers have at least once in their career.
I recently heard an interesting (and in retrospect obvious) way of dealing with this problem: label each note card with the estimated elapsed time at which you should reach it. If you are reading from the 10 minute card at 2 minutes into the presentation then you need to slow down.
Of course this doesn't work as well if you follow the “strict powerpoint” method of presenting where the only notes are the slides. It would be good if a presentation program supported windows on two displays, so you could have one full-screen window on an external video device for the audience to see and one non-full-screen window on the laptop's built-in display for the speaker. The built-in display could show speaker notes, a clock, and other useful things.
I have just filed Debian bug report 447207 [2] requesting that this feature be added to Open Office. It was closed before this post was even published due to Unstable apparently having some degree of support for this and the rest being already on the planned feature list (see the bug report for details). I found the complaint about a feature request being against Etch interesting as Debian doesn’t have bugs tracked against different releases, so it’s not as if a bug reported against Etch will get any different treatment than a bug reported against Unstable.
Bruce Schneier summarised a series of articles about banking security [1]. He mentioned the fact that banks don’t seem to care about small losses and would rather just deal with the problem (presumably by increasing their fees to account for losses).
There are some other interesting bits in the article; for example banks are planning a strategy of securing transactions made from an infected computer [2]! There are some possible solutions to this, for example the bank could issue a hardware device that allows the customer to enter their account number, the amount to transfer, the destination account, and their PIN, and then produces a cryptographically secure hash (based in part on a rolling code) that the user types in to authorise the transaction.
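As a rough sketch of how such a device might compute the code – the use of HMAC-SHA1 and a simple counter as the rolling code are my assumptions, not any real banking protocol:
# secret key embedded in the device and known to the bank
KEY=0123456789abcdef
# rolling code: a counter that increments on every transaction
COUNTER=42
# hash the counter, PIN, amount, and destination account together
echo -n "$COUNTER:1234:100.00:987654321" | openssl dgst -sha1 -hmac "$KEY"
# the user types (part of) the result into the web form and the bank,
# which knows the key and the expected counter, recomputes it to verify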
The only way that you are going to do anything securely with an infected host is if everything is offloaded into an external device – in which case, why not just do the Internet banking on the external device? It's not difficult to make a hardware device that is small enough to carry everywhere, has a display, an input device, and net access, and which is reasonably difficult to crack externally. Consider for example a typical mobile phone, which has more RAM, CPU power, and storage than a low-end machine that was used for web browsing in 1996. Mobile phones have a good history of not being hacked remotely and are difficult for the owner to “unlock”. A locked-down mobile phone would be a good platform for Internet banking: it has wireless net access in most places (and with a Java application on the phone it could do banking by encrypted SMS). Being locked down to prevent the user from reconfiguring the software (or installing new software) would solve most of the security problems that plague Windows.
If, when signing up for a new phone contract, I was offered the possibility of getting a phone with secure banking software installed for a small extra fee then I would be very interested. Of course we would want some external auditing of the software development to make sure that it's not like some of the stupid ideas that banks have implemented. Here is a classic example of banking stupidity [3]: they display a selected word and picture for the user on login to try and prevent phishing (of course a proxy or a key-logger on the local machine will defeat that). They also ask for an extra password (of the simple challenge-phrase variety) if you use a different IP address; as the typical broadband user doesn't know when their IP address changes they wouldn't notice if their data was being proxied, and dial-up users will be asked for it every time. A google search for “internet banking” picture password turns up a bunch of banks that implement such ideas.
deb http://www.coker.com.au etch selinux
The above sources.list line has all the i386 packages needed for running SE Linux with strict policy on Etch as well as a couple of packages that are not strictly needed but which are really convenient (to solve the executable stack issue).
gpg --keyserver hkp://subkeys.pgp.net --recv-key F5C75256
gpg -a --export F5C75256 | apt-key add -
To use the repository without warnings you need to download and install my GPG key; the above two commands do this. You will of course have to verify my key in some way to make sure that it has not been replaced in a MITM attack.
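After that, with the sources.list line from the start of this post in place, updating the package lists should proceed without warnings:
apt-get update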
The only thing missing is a change to /etc/init.d/udev to have a new script called /sbin/start_udev used to replace the make_extra_nodes function (so that the make_extra_nodes functionality can run in a different context). Of course a hostile init script could always exploit this to take over the more privileged domain, but I believe that running the init scripts in a confined domain does produce some minor benefits against minor bugs (as opposed to having the init scripts entirely owned).
I back-ported all the SE Linux libraries from Unstable because the version in Etch doesn't support removing roles from a user definition via the “semanage user -m” command (you can grant a user extra roles but not remove any). Trying to determine where in the libraries this bug occurred was too difficult.
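For example, with the back-ported libraries a command like the following should leave the identity with only the listed role – a sketch, with the identity and role names purely illustrative:
semanage user -m -R "staff_r" staff_u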
Does anyone know of a good document on how to create repositories with apt-ftparchive? My current attempts are gross hacks but I’ve gone live anyway as the package data is good and the apt configuration basically works.
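For reference, the sort of minimal invocation I mean looks something like this (the paths are illustrative):
cd /var/www/debian
apt-ftparchive packages pool | gzip > dists/etch/selinux/binary-i386/Packages.gz
apt-ftparchive release dists/etch > dists/etch/Release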
There are currently some adverts on Trams in Melbourne for some sort of community organisation. They have an amusing picture of an “Amazon” character from an RPG with statistics such as Self Esteem at zero.
They have a web site at www.reachoutcentral.com.au which first tries to launch a popup window (did the web designers not notice that almost everyone blocks popups?), then uses a refresh for a redirect (with a message to click on an icon if it doesn't work), and finally demands that Flash be installed.
So someone who is in need of whatever counselling the Inspire Foundation (the organisation that registered the domain) offers is likely to be told that their computer is not adequate; I'm sure that they'll appreciate that, as will the visually impaired people who will get even less out of the site.
Is the web site offering a service that might tend to be needed by people who don't have the latest computer gear and can't run the latest version of Flash? It's not only Linux users who are unwilling or unable to use Flash; older Windows installations also apparently have problems.
Are people who can’t afford a broadband net connection likely to need the service in question? Dial-up network access is cheaper (and many people will be hesitant to visit a web site related to personal problems in a public place such as an Internet Cafe).
A common practice in the blog space is to write posts that ask a question in the hope that someone else will answer it via a comment or a post. This is known as a “Lazyweb Post”.
It seems to me that the way of managing such posts could be improved with a little informal cooperation. From now on I plan to tag each Lazyweb post with a Lazyweb tag; then any reader of my blog can, with a single click, see all the unanswered lazyweb posts that I have written (I will remove the tag once an adequate answer has been provided or I have discovered and documented the solution myself).
Almost all bloggers want to get more traffic to their blogs; the question is how to get traffic of the nature that you desire. Links from blogs that you like are a preferred source of traffic. If a blogger that you would like to receive a link from has a lazyweb tag or category then it provides a good list of ideas for post topics that will get you the links you desire. Such lists would also be good for determining what information is not generally available and which therefore can be used as the topics of original posts.
Such tags or categories should also be good for getting answers to lazyweb posts. I’ll start doing this and see how well it takes off.
Danny Angus writes about the potential threat posed by small storage devices with large capacity [1]. His post was prompted by a BBC article about Hitachi's plans for new hard drives [2]; they are aiming for 4TB of data on a single drive by 2011 and a 1TB laptop drive. One thing I noticed about the article is the false claim that current drives are limited to 1TB. Storage capacity is determined by the total platter surface area, which is proportional to the square of the platter radius multiplied by the number of platters (and AFAIK there are no practical limits to the number of platters apart from the height of the drive). So if a 5.25 inch hard drive was manufactured with today's technology it should get at least three times the capacity of the largest 3.5 inch drive.
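To put rough numbers on that (assuming platter diameter scales with the drive form factor, which is only an approximation): a 5.25 inch platter would have (5.25/3.5)^2 = 2.25 times the area of a 3.5 inch platter, and a full-height 5.25 inch drive has room for more platters as well, so a total factor of three or more seems quite plausible.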
The reason that 5.25 inch drives are not manufactured is that for best performance you want multiple spindles so that multiple operations can be performed concurrently. Using 3.5 inch drives in servers allows the use of more disks for the same amount of space in the rack and the same amount of power. The latest trend is towards 2.5 inch (Small Form Factor AKA SFF) disks for servers to allow more drives for better performance. With 3.5 inch disks a 1U system was limited to 3 disks and a 2U system was often limited to 4 or 5 disks. But with 2.5 inch drives a 2U server can have 10 drives or more. I know of one hardware vendor that plans to entirely cease using 3.5 inch drives and claims that 2.5 inch disks will give better performance, capacity, and power use!
Danny's claim about the threat posed by insiders is entirely correct, but I don't believe that a laptop with 1TB of capacity is the threat. In a server room people notice where laptops get connected and there are often strictly enforced policies about connecting machines that don't belong to the company. I believe that the greatest threat is posed by USB flash devices. For example, consider a database with customer name (~20B), birth-date (10B), address (~80B), phone number (~12B), card type (1B), card number (16B), card expiry (5B), and card CVV code (3B). That's ~147 bytes of data, or ~155 bytes per record with separators in CSV or TSV format. If you have data for a million customers that's 155M uncompressed and probably about 50M when compressed with gzip or WinZip (depending on which platform is being ripped). No-one even sells a USB flash device smaller than 50M; I recently bought a 2G flash device that was physically very small and cheap (it was in the bargain bin).
The next issue is: what data might be worth stealing that is too large to fit on a USB device? I guess that if you wanted to copy entire network file shares from a corporation then you would need more than the 16G that seems to be the maximum capacity of a USB device at the moment. Another theoretical possibility would be to copy the entire mail spool of a medium to large ISP. For the case of a corporate file server you could probably get the data at reasonable speed: if you max out a gigabit Ethernet link at a realistic ~100MB/s then 1TB of data takes 10,000 seconds or about 2.8 hours (it could be as much as five times that if the network is congested or the server is slow). It's doable, but it would be a rather tense three or more hours waiting by an illegally connected laptop. For the mail server of a large ISP there is often no chance of getting anywhere near line speed: it's lots of small reads, seek performance is the bottleneck, and such servers are usually running close to capacity (trying to copy data fast would hurt performance and draw unwanted attention).
Another possibility might be to copy the storage of an Intranet search device. If a company has a Google appliance or similar device indexing much of their secret data then copying the indexes would be very useful. It would allow offline searches of the corporate data to prepare a list of files to retrieve later.
It would probably be more useful to get online access to the data from a remote site. I expect that an unethical person could sell remote access to someone who is out of range of extradition. All that would be required would be to intentionally leave a flaw in the security of the system. In most large corporations this could be done in a way that is impossible to prove. For example if management decrees that the Internet servers run some software that is known to be of low quality then a hostile insider could make configuration changes to increase the risk – it would look like an innocent mistake if the problem was ever discovered (the blame would entirely go to the buggy software and the person who recommended it).
A large part of the solution to this problem is to hire good employees. The common checks performed grudgingly by financial companies are grossly inadequate for this. Checking whether a potential employee has a criminal record does not prevent hiring criminals, it merely prevents hiring unsuccessful criminals and people who have not yet been tempted enough! The best way to assess whether HR people are being smart about this is to ask them for an estimate of how many criminals are employed by the company. If you have a company that’s not incredibly small then it’s inevitable that some criminals will be employed. Anyone who thinks that it is possible to avoid hiring criminals simply isn’t thinking about the issues. I may write more about this issue in a future post.
Another significant part of the solution to the problem is to grant minimum privileges to access data. Everyone should only be granted access to data that they need for their work so that the only people who can really compromise the company are senior managers and sys-admins, and for best security different departments or groups should have different sys-admin teams and separate server rooms. Of course this does increase the cost of doing business, and probably most managers would rather have it be cheap than secure.
I just read a nutty post claiming that Neo-Conservatism is good for the environment [1].
The first bogus claim is that Saddam had WMD and war was required because he was a despot. The fact is that the Iraqi government was always repressive; there are many factions in Iraq that don't like each other and a repressive government is the only way to keep such groups in a united country. The current civil war in Iraq and the effective secession of Kurdistan (which currently seems to be involved in an undeclared border war with Turkey) demonstrate this. Saddam was always a despot, but he did improve the living conditions of most Iraqis – the best way to avoid a revolution is to convince the majority of the population that things will get worse if there is change. I suggest reading the Wikipedia page about Saddam Hussein [2].
The best information on Fourth Generation Warfare (4GW) seems to be on the conservative military analysis site Defense and the National Interest [7]. It covers all the issues related to invading other countries from a conservative point of view. Note that Neo-Conservatives are not Conservatives; the real Conservatives hate the Neo-Cons more than anyone else does.
The amusing statement is made that “apologists claim it was one of the most advanced Arab nations” and then a link is provided to information on Saudi Arabian censorship. It's worth reading the Wikipedia page about the history of Saudi Arabia [3]; among other interesting facts, “the U.S. Army Corps of Engineers built the country's television and broadcast facilities and oversaw the development of its defense industry” (does the US army share responsibility for the censorship?). It's widely believed that if US military support was removed then the Saudi government would be overthrown. Referring to Saudi Arabia hardly seems like something you want to do when trying to justify occupying other middle-eastern states.
An unsubstantiated claim is made that under-developed countries produce excess pollution due to inefficient technology. Unlike some people I try to get some facts before posting, so I looked up the Wikipedia page on CO2 emissions per capita [4]. It seems that the highest ranking first-world country is Luxembourg at #4, and the next is the US at #10. The countries on the list that rank higher than the US have a combined population of about 11,000,000 while the US population is 302,000,000 – so although those countries emit more per person, the US has roughly 27 times as many people, and some quick mental arithmetic suggests that the US produces about 20 times more CO2 than the top 9 countries on the list combined! It doesn't seem that having the highest technology is helping the US protect the environment; I guess that they just use it to build bigger cars. The next thing I noticed is the countries that are at the bottom of the list – they are the world's poorest countries. It seems that countries without much money just can't afford to burn lots of oil, while countries with lots of money can. No real surprises there.
The lowest-ranked country on the list that is unlikely to be regarded as being in abject poverty is India at position #133. The next lowest is Turkey at position #98, followed by China at #91.
As a final point of reference, Switzerland is at position #69 and produces just under 27% of the CO2 that the US does (on a per-capita basis). According to the CIA World Fact Book Switzerland has an infant mortality rate of 4.28/1000 and a life expectancy of 80.62 years [5], while the US has an infant mortality rate of 6.37/1000 and a life expectancy of 78 years [6]. I believe that infant mortality and life expectancy are the two factors most representative of quality of life, as they are the easiest ways of measuring the overall health of the population, and being healthy is one of the most important factors in quality of life. It seems to me that by all objective measures the Swiss are doing better than the people of the US, yet they produce less pollution and never invade other countries.
Probably the most ridiculous statement in the post is “see rapidly dwindling resources wasted on jihad and revolution”. A revolution (locals using force to create a new government) requires few resources, and most actions that a more simple-minded analysis might call “jihad” take almost none. Sending an invasion force to the other side of the world and supporting an occupying army for years does however use significant resources. Consider that the Humvee is the least fuel-efficient vehicle on American roads in terms of work done (trucks and buses use more fuel but carry large amounts of cargo or many people), yet it's also the most fuel-efficient vehicle used by the US army in Iraq.
There is the possibility that Jaldhar was attempting satire. If so then I suggest that satire be kept separate from serious web content to avoid confusion about where the satire ends. But if you want some satire about oil then I suggest consulting theonion.com.
Before someone accuses me of being impolite, over a year ago the best estimate for the death toll from the occupation of Iraq was 655,000 [8]. Current extrapolations from the previous medical research suggest that the death toll has now exceeded 1,000,000. Regardless of whether the original post was intended as satire or not, I’m not laughing and I don’t feel the need to be polite to someone who makes excuses for such loss of life.
Finally as a positive suggestion towards the environment (and any other issue that you may want to discuss), I suggest analysing the issues before writing about them and not blindly trusting other people. When you write a post make objective claims with references to back them up. When you read a post consider the points that are made and the references that are cited. Do the references support the claims? Are there other interpretations of the evidence? Are the reference sites reputable?