Finally I found the URL of a device I’ve been hearing rumours about: the HotPlug, a device that allows you to move a computer without turning it off [1]. It is described as being created for “Government/Forensic customers” but is also advertised for moving servers without powering them down.
The primary way that it works is by slightly withdrawing the power plug and attaching wires to the exposed active and neutral pins. When mains power is disconnected it supplies power from a UPS, and when mains power is re-connected the UPS is cut off.

Modern electrical safety standards in most countries require that the exposed pins of a power plug (other than the earth pin) be shielded to prevent metal objects or the fingers of young children from touching live conductors. The image above shows a recent Australian power plug with the active and neutral pins protected by plastic, so that if the plug is slightly withdrawn there is no access to live conductors. I photographed it resting on a keyboard so that people who aren’t familiar with Australian plugs can see the approximate scale.
I’m not sure exactly when the new safer plugs were introduced; a mobile phone I bought just over three years ago has the old-style plug (no shielding) while most things I have bought since then have the shielding. In any case I expect that a good number of PCs in use by Australian companies have the old style, as some machines with the older plugs won’t have reached their three year tax write-down period yet.
For a device which has a plug with such shielding they sell kits for disassembling the power lead or removing the power point from the wall. I spoke to an electrician who assured me that he could attach to the wires within a power cord with a 100% success rate and without any special tools (saving the $149 of equipment that the HotPlug people offer). Any of these things would need to be done by a qualified electrician to be legal, and any electrician who has been doing the job for a while probably has a lot of experience from before the recent safety concerns about “working live”.
The part of the web site which concerns moving servers seems a little weak. It seems to be based on the idea that someone might have servers without redundant PSUs (IE really cheap machines – maybe re-purposed desktop machines) which have to be moved without any down-time, and for which spending $500US on a device to cut the power (plus extra money to pay an electrician to use it) is considered a good investment. The only customers I can imagine for such a device are criminals and cops.
I also wonder whether you could get the same result with a simple switch that cuts over from one power source to another. I find that it’s not uncommon for brief power fluctuations to cause the lights to flicker without most desktop machines rebooting, so the capacitors in the PSU and on the motherboard can evidently keep things running for a short time without mains power. That should be enough for the power to be switched across to another source. It probably wouldn’t be as reliable, but a “non-government” organisation which desires such a device probably doesn’t want any evidence that they ever purchased one…
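As a rough sanity check on that idea, here is a back of the envelope calculation of how long the bulk capacitor in a PSU might keep a machine running. All the values below are assumptions for illustration, not measurements of any particular PSU:

    # hold-up time estimate: energy stored between two capacitor voltages
    # divided by the power drawn; all values are assumed for the example
    C = 470e-6          # bulk capacitor in Farads (plausible for a ~300W PSU)
    v_full = 340.0      # Volts on the capacitor from rectified 240V mains
    v_min = 250.0       # Volts below which the PSU can no longer regulate
    power = 150.0       # Watts drawn by the machine

    # E = 0.5 * C * (V1^2 - V2^2)
    energy = 0.5 * C * (v_full ** 2 - v_min ** 2)
    print("hold-up time: %.0f ms" % (1000 * energy / power))

That gives a figure in the region of 80ms, which is consistent with machines riding through a brief flicker and would be ample time for a fast transfer switch to operate.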
Now given that such devices are out there, the question is how to work around them. One thing that they advertise is “mouse jigglers” to prevent screen-lock programs from activating. So an obvious first step is to not allow jiggling to prevent the screen-saver. Forcing people to re-authenticate periodically during their work is not going to impact productivity much (of course the down-side is that it offers more opportunities for shoulder-surfing authentication methods).
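A sketch of the simplest countermeasure, assuming an X session running xscreensaver and users who will tolerate a forced lock (the 30 minute schedule is an arbitrary choice for the example, and depending on the system the job may also need XAUTHORITY set):

    # crontab entry: lock the screen every 30 minutes regardless of
    # whether a "mouse jiggler" is generating input activity
    0,30 * * * *  DISPLAY=:0 xscreensaver-command -lock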
Once a machine is taken, the next step is to delay or prevent the attacker from reading the data. An attacker with the resources of a major government behind them could read the buses of the machine to extract data, or perhaps isolate the CPU and send read commands to system memory to extract everything (including the keys for decrypting the hard drive). The only possible defence against that would be to have multiple machines exchanging encrypted heart-beat packets, each configured to immediately shut itself down if all the other machines stop sending packets to it. When defending against an attacker with more modest resources the shutdown period could be a lot longer (maybe a week without a successful login).
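A minimal sketch of that heart-beat scheme, with hypothetical names and parameters, and using HMAC authentication of the packets rather than full encryption for brevity:

    #!/usr/bin/python
    # Watchdog sketch: power off if no peer has sent an authenticated
    # heart-beat recently. The key file, port, timeout and shutdown
    # command are all illustrative assumptions, not a finished design.
    import hashlib, hmac, os, socket, time

    KEY = open("/etc/heartbeat.key", "rb").read()   # shared secret
    PORT = 9999
    TIMEOUT = 600    # seconds without a valid heart-beat before shutdown

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    sock.settimeout(5)
    last_seen = time.time()

    while True:
        try:
            data, peer = sock.recvfrom(1024)
            # packet format: 8 byte timestamp plus HMAC-SHA256 of it
            stamp, mac = data[:8], data[8:]
            expected = hmac.new(KEY, stamp, hashlib.sha256).digest()
            if hmac.compare_digest(mac, expected):
                last_seen = time.time()
        except socket.timeout:
            pass
        if time.time() - last_seen > TIMEOUT:
            # cut power immediately so the disk encryption keys leave RAM
            os.system("poweroff -f")

A real implementation would also need to check the timestamps so that an attacker can’t keep the machine alive by replaying captured packets.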
Obviously an attacker who gets physical possession of a running machine will try to crack it. This is where all the OS security features we know can be used to delay them long enough for an automated shut-down to remove the encryption keys from memory.
I was watching the British police drama The Bill [1] and was impressed by their use of computers.
They were analysing the evidence in a homicide case, and one of the tasks was to assemble a time-line of the related events. They had a projector connected to a computer which displayed the data, and used what was apparently an infra-red pen (presumably similar in technology to Rusty’s infra-red pen for Wii-Mote Pong [2]) to write on the wall; the computer then performed OCR to convert the writing to printed text. When new evidence was discovered about the timing of events they used drag and drop to move the events around to the correct time slots.
I’ve seen many horribly faked renditions of computer stuff on TV, but this seemed quite realistic. It was all technically possible, in many cases what was displayed would be quite inconvenient to fake if you didn’t have software to do it, and little touches like closing the session when the case was resolved are things that you wouldn’t bother with if faking it. The down-side to my analysis is that I had a couple of glasses of red wine so I might be more gullible than usual. ;)
Some years ago I swore off watching The Bill after a horribly stupid episode about computer crime (teenage haxor sells copies of cracked software to a computer store owner for resale to the public, then the computer store owner murders him for disrespect and price-gouging). I refrained from watching for almost a decade after that. But it seems that they have redeemed themselves to some degree.
I wonder whether the software they were showing is really used by law enforcement, or whether it’s designed for corporations to use in tele-conferences etc.
In the same episode they showed the recordings of two security cameras on projectors side by side for comparison; they could fast-forward them independently but they couldn’t zoom in or do other silly and impossible things.
It is becoming increasingly apparent that this post is not going to do any good, so I have deleted the content.
Sorry to the people who were offended.
I won’t be writing about such topics again.
Currently in the US the main political parties are deciding who will contest the next presidential election. Naturally this gets some commentary from all sides.
Planet Debian has syndicated two blog posts commenting on these issues; it’s interesting to compare them:
First John Goerzen writes a post about an issue he (and almost everyone in the Debian community) considers important – copyright law [1]. He quotes Randall Munroe, the author of www.xkcd.com (described by the author as “A webcomic of romance, sarcasm, math, and language”) which is wildly popular in the free software community (I believe that there is a fan-club which has meetings). Randall’s commentary is quite interesting and I recommend reading it [2]. The most noteworthy fact is that Barack Obama has sought advice from Lawrence Lessig – who has done more for the free software community than any other lawyer.
John’s post doesn’t contain much original content, but citing a lengthy post from a highly regarded source and quoting a particularly relevant paragraph makes it very noteworthy.
Next Jaldhar Vyas writes about “the war against Islamic terrorism” [3] and claims that “we are finally gaining the upper hand in Iraq”. To consider that claim it’s best to read some expert commentary. William S. Lind [4] is a widely respected conservative who is an expert on military strategy, and his comments about stabilising Iraq are always interesting [5]. It’s sufficient to note that William has been predicting failure for the US occupation of Iraq right from the start and that events over the last five years have proven him correct.
William S. Lind’s thoughts on Democracy are interesting too [8]. I don’t agree with him in this regard (I generally disagree with him on most things other than military matters) but it’s an interesting thought.
Jaldhar also claims that “Democrats on the other hand have a closer association with the media who are behind many of the sillier IP laws”; this seems to be a reference to the “liberal media” conspiracy theories. There seems to be no clear party association with bad IP legislation (both the Democrats and the Republicans do silly things in this regard), and debunking the liberal media claims can be done simply by watching some TV (choose a couple of random news shows on different channels). Fox is not the only right-wing news outlet.
Jaldhar also has a conspiracy theory about the Supreme Court and the “need to ensure that the court does not veer leftward again as it will inevitably do during a Democratic administration”, not that the Democratic party is particularly “left” by whatever definition you might apply to the word. I prefer the www.politicalcompass.org analysis, which shows that all the serious contenders in the Democratic primary are Authoritarian Right [6]. So the result of the next US presidential election will determine how authoritarian and right-wing the government will be; the fact that it will be authoritarian and right-wing is beyond doubt. That said, it would be good if the authoritarian right-wing government that we (*) get at least has some decent policies in regard to IP law (which is the hope for Barack Obama).
John Nichols has an interesting analysis of Ann Coulter’s support of Hillary Clinton over John McCain [7]. It would be interesting to see Jaldhar’s comments on this issue.
(*) I use the term “we” when talking about US politics to acknowledge the fact that the Australian government will take orders from Washington. Some pundits predict that the Greens will be the second major party in Australian politics in a couple of elections’ time, so maybe in 8 years we will have an Australian government that represents Australians instead of Americans. But for the entire term of the next US president (be it 4 years or 8) they will be able to get the Australian government to do almost anything that they desire.
Using the “ulimit” controls over process resource use it is possible to limit the RAM used by processes and the number of processes per UID. The problem is that this is often only good for dealing with accidental problems, not malicious acts.
For a multi-user machine each user needs to be allowed at least two processes to be able to do anything (IE the shell and a command that it executes). A more practical limit is five processes for a single shell session (one or two background jobs, a foreground job where one process pipes data to another, and the shell), but even five is rather small (a single Unix pipeline can have more processes than that). A shell server probably needs a limit of 20 processes per user if each user may have multiple logins. For running the occasional memory intensive process such as GCC the per-process memory limit needs to be at least 20M, and if users compile big C++ programs then 100M may be needed (I’ve seen a G++ process use more than 90M of memory when compiling a KDE source file). This means that a single user who can launch 20 processes which each use 20M of memory could use 400M of memory, and if they have each process write to its pages in a random order then 400M of RAM would be essentially occupied by that user.
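For reference, a sketch of how such limits can be set system-wide in /etc/security/limits.conf (this assumes the pam_limits module is enabled for login; the “as” item is the address space limit in kilobytes):

    # limit each user to 20 processes
    *       hard    nproc   20
    # limit each process to 100M of address space
    *       hard    as      102400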
If a shell server had 512M of RAM (which until quite recently was considered a lot of memory – the first multi-user Linux machine I ran on the net had 4M of RAM) then 400M of that could be consumed by a single hostile user. Leaving 100M for the real users might make the machine unusable. Note that the “hostile user” category also encompasses someone who gets fooled by the “here’s a cool program you should run” trick (which is common in universities).
I put my first SE Linux Play Machine [1] on the net in the middle of 2002 and immediately faced problems with DOS attacks. I think that the machine had 128M of RAM, and because the concept was new (and SE Linux itself was new and mysterious) many people wanted to log in. Having 20 shell users logged in at one time was not uncommon, so a limit of 50 processes for users was minimal. Given that GCC was a necessary part of the service (users wanted to compile their own programs to test various aspects of SE Linux) the memory limit per process had to be high. The point of the Play Machine was to demonstrate that “root” was restricted by SE Linux such that even if all Unix access control methods failed then SE Linux would still control access (with the caveat that a kernel bug still makes you lose). So as all users logged into the same account (root), the process limit had to be adequate to handle all their needs; 50 processes was effectively the bare minimum. 50 processes with 5M of memory each is more than enough to cause a machine with 128M of RAM to swap to death.
One thing to note is that root-owned system processes count towards the ulimit for user processes, as SE Linux does not have any resource usage controls (the aim of the SE Linux project is access control, not protection against covert channels [2]). This makes things a little harder to restrict: the number of processes run by daemons such as Postfix varies a little over time, so the limits have to be a little higher to compensate. While Postfix itself runs with no limits, the processes it creates count towards the per-UID total when determining whether user processes can call fork().
So it was essentially impossible to implement any resource limits on my Play Machine that would prevent a DOS. I changed the MOTD (message of the day – displayed at login time) to inform people that a DOS attack is the wrong thing to do. I implemented some resource limits but didn’t seriously expect them to help much (the machine was DOSed daily).
Recently a user of my Play Machine accidentally DOSed it and asked whether I should install some resource limits. After considering the issue I realised that I can actually do so in a useful manner nowadays. My latest Play Machine is a Xen DomU to which I have assigned 300M of RAM, and I have configured the limit for root processes to be 45; as the system and my login comprise about 30 processes, that leaves 15 for unprivileged (user_r) logins. Of late my Play Machine hasn’t been getting a lot of interest (having two people logged in at the same time is unusual), so 15 processes should be plenty. Each process is limited to 20M of memory, so overflowing the 300M of RAM should take a moderate amount of effort.
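Assuming those limits are also applied via /etc/security/limits.conf (the machine could equally set them with ulimit in a login script, and this relies on the SE Linux policy denying the confined root users the ability to raise the limits again), the configuration described would look something like:

    # all Play Machine users log in as root, so these limits cover everyone:
    # 45 processes for the UID and 20M of address space per process
    root    hard    nproc   45
    root    hard    as      20480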
Until recently I intentionally had no swap space on that machine, to save on noise during a DOS attack (on the assumption that the attack would succeed regardless of the amount of swap). Now that I have put resource limits in place I have installed 400M of swap space. A hostile user can easily prevent other unprivileged users from logging in by keeping enough long-running processes active, but they could achieve the same goal by having a program kill users’ shells as soon as they log in (which a few people did in the early days). But it should not be trivial for them to prevent me from logging in via a simple memory or process DOS attack.
Update: It was an email discussion with Cam McKenzie that prompted this blog post.
On the 20th of January (8 days before the start of linux.conf.au) I advertised a contest to write blog posts related to computer security for the conference Planet [1].
The aim of the contest was to encourage (with money prizes) people who had no prior experience in computer security to get involved by writing blog posts. The rules permitted security professionals to enter, but only for an honourable mention; the money was reserved for people without prior experience.
The money that I initially advertised was a fraction of what was reserved for prizes, the idea being that if the contest went well the prize pool could easily be increased, but that if it didn’t go well there would only be one small prize for someone to win by default. At the time I considered a single entry winning by default to be the worst case scenario.
The eventual result was that there was only one entry, from Martin Krafft on the topic of keysigning [2]. Martin has prior experience in the computer security field, which excludes him from a money prize, but he gets the only honourable mention. From a quick conversation with him it seems that his aim in entering the contest was to get his ideas about weaknesses in the keysigning process spread more widely, so this seems like a fairly ideal result for him. I agree with Martin that there are significant issues related to the keysigning process, but my ideas about them are a little different (I’ll blog about it later). His point about people merely checking that the picture matches the ID, without verifying what the ID means, is significant; the fact is that the vast majority of people are not capable of recognising ID from other countries. Other than requiring passports (which differ little between countries) I can’t think of a good solution to this problem.
Congratulations Martin! It is a good post and a worthy entry.
Now as to why the contest failed: I spoke to some people at the end of the conference about this. One delegate (who I know has the skills needed to produce a winning entry) said that I advertised it too soon before the conference and didn’t give delegates time to write entries. While I can’t dispute his own reasons for not entering, I find it difficult to believe that more than a small proportion of delegates had that motivation. The LCA Planet had some lengthy posts by other delegates, and the guy who won second prize in the hack-fest spent something like 20 hours coding on his entry during the conference (I suspect that my contest had the potential for a better ratio of prize money to work). Also the 8 days before the conference started were a good time to write entries for the contest.
One suggestion was that I propose that the conference organisers run such a contest next year. The problem with this is that it’s difficult to say “I tried this and failed, could you please try it too”. If nothing else I would need some significant reasons to believe that the contest has the potential to be more successful before attempting it on a larger scale. If the contest had been backed by the LCA organisers it might have been more successful, but that seems unlikely (and there is scope for an event to be more successful than mine while still being a failure). The reason I consider it unlikely that official support would have made it more successful is that I first advertised the event on my blog (syndicated to the conference Planet), so everyone who has a blog and attends the conference can be expected to have read about it. I then advertised it on the conference mailing list, which I believe had as subscribers a large portion of the people who have enough spare time to create a blog specifically for the purpose of entering such a contest.
A blogging contest related to a conference but which had a wider scope (IE not limited to one field but instead covering anything related to the conference) might be successful. If someone wants to run such a contest next year then it’s probably worth doing.
Of course I have not given up on the plan of getting more people involved in computer security; expect to see some blog posts from me in the near future with other approaches to this. Suggestions would be appreciated.
This year at Linux.Conf.Au there was a student party sponsored by Google. The party was held in a bar and lots of free drinks were provided. This was fine for the university students, but for school kids it was obviously lacking.
Some people point out that it’s “quite legal” to run a party that excludes children; the point however is whether it’s desirable to exclude them, not whether it’s legal. Also the concept of a state where laws dictate every aspect of your life, such that obeying the law is the only consideration, is fortunately restricted to science fiction.
Another common fallacy is the claim that we should be grateful for Google’s sponsorship. As far as I am aware Google doesn’t insist on any particular conditions when sponsoring a party; if the conference organisers were to request a party at a more child-friendly venue (for example a restaurant, which if licensed could serve alcohol to adults who desire it) then I doubt that Google would refuse. Being grateful for Google’s sponsorship is entirely unrelated to the issue of whether their sponsorship money was spent in the best possible manner.
My interest in this topic started at LCA 2007 when I heard complaints from young delegates who were excluded from the Google party. This year the Google party (a different event from the “Google student party”) allowed everyone to attend and issued coloured wrist-bands to indicate whether the person had shown suitable ID to be served alcohol. The Google party was welcoming to all and I believe that it was a significant improvement over last year (more attention was paid to serving food than alcohol). I have suggested that at future events some tables be reserved for people who aren’t drinking; as a general rule people enjoy being around people who have consumed a similar type and quantity of mind-altering substances (something I learned from my friends in Amsterdam).
There is of course demand for serious drinking, and it seems impossible to satisfy people who want to do serious drinking at the same party as people who won’t or can’t drink at all.
If there is not going to be an official party that is suitable for U18s then I’ll arrange one and pay for it myself. The consensus seems to be that fewer than six U18s are not worth catering for (one of the common objections to my suggestions is that there may be only four or five U18s). I can pay for a party for that many people which (in terms of food and non-alcoholic drinks) compares well with whatever Google might offer the drinkers.
The rough idea is that U18s will have free food and non-alcoholic drinks. The venue will be some suitable restaurant (is there a Pizza Hut or similar near the next LCA?). The party will be open to parents and delegates who are >18 and don’t plan to drink (but I’m sorry, I can’t afford to shout you). Opportunities to learn about cool Linux stuff will abound (I expect that a number of knowledgeable Linux people who can teach some interesting things will want to attend).
If I’m paying then children who aren’t delegates will also be welcome to attend, but their parents would have to pay for their food and drink. This is merely a matter of budget; if it were an official event or there were other sponsors then this might not be the case.
What I would like right now is expressions of interest from young people who plan to attend the conference and from parents (who plan to attend the conference or whose children will attend). If it looks like there will be a suitably large number of people interested in this then the conference organisers may decide to make it an official event.
Also comments from adults who would prefer an alcohol-free event (whether it be due to medical reasons, religion, or choice) would be of interest. It’s all about the number of people who will attend.
It seems that my blogging contest idea is a failure. Could the interested people please meet me near the LCA registration desk at the start of the lunch break today for a post-mortem.
Any last-minute entries can be submitted by telling me the URL then.
I am going to make some suggestions to a company that might possibly sponsor development of free electronic text books for schools (suitable for running on an OLPC machine).
I would appreciate any suggestions for things I should include. I will make my suggestions as a blog post summarising all input I receive and send the URL to the company in question.
I previously wrote about how I gave a talk about SE Linux in a conference slot where a talk about AppArmor was scheduled. It turned out that the Suse people had notified the LCA people some time in advance that John would not be attending the conference. The LCA people had removed the entries from their databases, and when the conference schedule was printed it had no reference to such a talk.
The problem occurred when another tutorial (which had occupied the slot that was previously assigned to John) was moved to a different part of the schedule. For some reason the CMS that they use did not leave the slot in question empty but instead restored its earlier contents (the Suse tutorial). No-one at LCA noticed this error, and from then on the web page generated by the CMS was used by delegates and most of the LCA team as the authoritative source of information on the issue.