Debian/Unstable Development
deb http://www.coker.com.au wheezy selinux
The above APT sources.list line points to my repository of SE Linux packages that have been uploaded to Unstable and which will eventually go to Testing and then the Wheezy release (if they aren't obsoleted first). I have created that repository for people who want to track SE Linux development without waiting for an Unstable mirror to update.
In that repository I’ve included a new version of policycoreutils that now includes mcstrans and also has support for newer policy such that the latest selinux-policy-default package can be installed. The version that is currently in Testing supports upgrading policy on a running system but doesn’t support installing the policy on a system that previously didn’t run SE Linux.
I have also uploaded SE Linux policy packages based on upstream release 20110726; the previous packages were based on upstream release 20100524. As the numbers imply there are 14 months of upstream policy development in between, which changes many things. Many of the patches from my Squeeze policy packages are not yet incorporated in the policy I have uploaded to Unstable. I won't guarantee that an Unstable system in Enforcing mode will do anything other than boot up and allow you to login via ssh. It's definitely not ready for production but it's very suitable for development (10 years ago I did a lot of development on SE Linux systems that often denied login access – it wasn't fun).
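For anyone who wants to try it on a system that hasn't run SE Linux before, the process is roughly the following (a sketch only – selinux-activate comes from the selinux-basics package, and given the state of the policy you will want to stay in Permissive mode for now):

echo "deb http://www.coker.com.au wheezy selinux" >> /etc/apt/sources.list
apt-get update
apt-get install policycoreutils selinux-policy-default selinux-basics
selinux-activate   # adds the kernel boot parameters and schedules a filesystem relabel
reboot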
Kyle Moffett submitted a patch for libselinux which dramatically changed the build process. As Manoj (who wrote the previous build scripts) was not contactable I accepted Kyle’s patch as provided. Thanks for the patch Kyle, and thanks for all your work over the years Manoj. Anyway the result of these changes should mean that it’s easier to bootstrap Debian on a new architecture and easier to support multi-arch – but I haven’t tested either of these.
Squeeze
The policy packages from Squeeze can't be compiled on Unstable. The newer policy compilation toolchain is stricter about how some things can be declared and used, so some policy which was fairly dubious but usable is now invalid. While it wouldn't be difficult to fix those problems I don't plan to do so. There is no good reason for compiling Squeeze policy on Unstable now that I've uploaded a new upstream release.
deb http://www.coker.com.au squeeze selinux
I am still developing Squeeze policy and releasing it in the above APT repository. I will also get another policy release into a Squeeze update if possible to smooth the transition to Wheezy – the goal is that Squeeze policy will be usable on Wheezy even if it can't be compiled. Also note that the compilation failures only affect the Debian package; it should still be possible to make modules for local use on a Wheezy system with Squeeze policy.
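For local modules the usual build commands should keep working, something like the following (a sketch, local.te is a hypothetical file containing your extra rules):

checkmodule -M -m -o local.mod local.te     # compile the type enforcement rules
semodule_package -o local.pp -m local.mod   # wrap the module into a policy package
semodule -i local.pp                        # install it into the running policy store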
MLS
On Wednesday I'm giving a lecture at my local LUG about MLS on SE Linux. I hope to have an MLS demonstration system available to LUG members by then. Ideally I will have an MLS system running on a virtual server somewhere that's accessible, as well as a Xen/KVM image on a USB stick that can be copied by anyone at the meeting.
I don’t expect to spend much time on any aspect of SE Linux unrelated to MLS for the rest of the week.
Version Control
I need to change the way that I develop SE Linux packages, particularly the refpolicy source package (the source of selinux-policy-default among others). A 20,000 line single patch is difficult to work with! I will have to switch to using quilt; once I get it working well it should save me time on my own development as well as making it easier to send patches upstream. I also need to set up a version control system that I can access from my workstation, laptop, and netbook, and while doing that I might as well make it public so any interested people can help out. Suggestions on what type of VCS to use are welcome.
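The quilt workflow I have in mind is the standard Debian one, roughly like this (a sketch only – the patch name and file path are just examples):

export QUILT_PATCHES=debian/patches
quilt new ssh-policy-fixes.patch            # start a new patch at the top of the series
quilt add policy/modules/services/ssh.te    # record the file before editing it
# edit policy/modules/services/ssh.te, then:
quilt refresh                               # write the diff into debian/patches/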
How You Can Help
Sorting out the mess that is the refpolicy package, sending patches upstream, and migrating to a VCS is a fair bit of work, but it breaks down into lots of small parts. Sending patches upstream in particular is a job that could be done in small pieces.
Writing new policy is not something to do yet. There’s not much point in doing that while I still haven’t merged all the patches from Squeeze – maybe next week. However I can provide the missing patches to anyone who wants to review them and assist with the merging.
I have a virtual server that has some spare capacity. One thing I would like to do is to have some virtual machines running Unstable with various configurations of server software. Then we could track Unstable on those images and use automated testing to ensure that nothing breaks. If anyone wants root access on a virtual server to install their favorite software then let me know. But such software needs to be maintained and tested!
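The sort of automated testing I have in mind is fairly simple, something like a daily cron job on each test VM that upgrades and then checks that SE Linux still looks sane (just a sketch – check-selinux-installation comes from selinux-basics, audit2why from policycoreutils, and it assumes auditd is logging to /var/log/audit/audit.log):

#!/bin/sh
# hypothetical daily test for a throw-away Unstable VM
apt-get update && apt-get -y dist-upgrade   # track Unstable
check-selinux-installation                  # basic sanity checks of the SE Linux setup
audit2why < /var/log/audit/audit.log        # report any AVC denials in a readable form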
Ron has written an interesting blog post about the US as a “lottery economy” [1]. Most people won’t win the lottery (literally or metaphorically) so they remain destined for poverty.
Tim Connors wrote an informative summary of the issues relating to traffic light timing and pedestrians/cyclists [2]. I have walked between Southgate and the Crown Casino area many times and have often experienced the problem he describes.
Scientific American has an interesting article about a new global marketplace for scientific research [3]. The concept is that instead of buying a wide range of research equipment (and hiring people to run it) you can outsource non-core research for a lower cost.
Svante Pääbo gave an interesting TED talk about his work analysing human DNA to determine prehistoric human migration patterns [4]. Among other things he determined that 2.5% of the DNA from modern people outside Africa came from the Neandertals.
Lisa wrote an informative article about Emotional Support Animals (as opposed to Service Animals such as guide dogs) for disabled people [5]. It seems that the US law is quite similar to Australian law in that “reasonable accommodations” have to be made for disabled people which includes allowing pets in rental properties – even if such pets aren’t officially ESAs.
Beyond Zero Emissions has an interesting article about electricity prices which explains how wind power forces prices down [6]. This should offset the new “carbon tax”.
Problogger has an article listing some of the ways that infographics can be used on the web [7]. This can be for blog posts or just for your personal understanding.
Petter Reinholdtsen wrote a handy post about ripping DVDs which also explains how to do it when the DVD has errors [8]. I haven't yet ripped a DVD but this post is worth noting for when I do.
Miriam has written about the “Fantastic Park” ICT training for 8-12yo kids [9]. It's run in Spain (and all the links are in Spanish – but Google Translate works well) and is a camp to teach children about computers and robotics using Lego WeDo among other things. We need more of these programs in other countries.
The Atlantic Cities has an interesting article comparing grid and cul-de-sac based urban designs [10]. Apparently the cul-de-sac design forces an increase in car use and therefore an increase in fatal accidents while also decreasing the health benefits of walking. Having lived in both grid and cul-de-sac based urban areas I have personally experienced the benefits of the grid based layout.
Sarah Chayes wrote an interesting LA Times article about governments being taken over by corruption [11]. She argues that arbitrary criminal government leads to an increase in religious fundamentalism.
Michael Lewis has an insightful article in Vanity Fair about the bankruptcy of US states and cities [12].
Ben Goldacre gave an interesting TED talk about bad medical science [13]. He starts with the quackery that is published in tabloid newspapers and then moves on to deliberate scientific fraud by medical companies.
Geoff Mulgan gave an interesting TED talk about the Studio Schools in the UK which are based around group project work [14]. The main thing I took from this is that the best method of teaching varies by subject and by student. So instead of having a monolithic education department controlling everything we should have schools aimed at particular career paths and learning methods.
Sophos has an interesting article about the motion sensors of smart phones being used to transcribe keyboard input based on vibration [15]. This attack could be launched by convincing a target to install a trojan application on their phone. It’s probably best to regard your phone with suspicion nowadays.
Simon Josefsson wrote a good article explaining how to use a GPG smart-card to authenticate ssh sessions with particular reference to running backups over ssh [16].
Cùran wrote a good article explaining how to use all the screen space when playing DVDs on a wide screen display with mplayer [17].
Charles Stross has an informative blog post about Wall St Journal circulation fraud [18]. Apparently the WSJ was faking readership numbers to get more money from advertisers, which should lead to law suits and more problems for Rupert Murdoch. Is everything associated with Wall St corrupt?
Augmented reality is available on all relatively modern smart phones. I've played with it on my Android phone but it hasn't delivered the benefits that I hoped for: there is a game where you can walk through a virtual maze which didn't work for me, and a bunch of programs which show me the position of stars, pizza restaurants, and other things which are cool but not really useful.
It has been shown that a larger monitor can make a surprising difference to productivity. The general concept seems to be that ideally everything you are thinking about at one time should be on the screen at once. I'm not aware of any research comparing phones to desktop monitors but it is obvious that some tasks become extremely difficult or nearly impossible when attempted on the tiny screen of a phone. One significant example is coding. One noteworthy thing about coding is that the amount of typing is often quite small compared to the amount of time spent looking at code, so the lack of good keyboard options on phones isn't always a serious problem.
The iPhone 4 has a resolution of 640*960 which seems to be the best available phone resolution (with 480*854 being the highest resolution that is available in many phones). The Dell Streak at 5 inches seemed to have the largest screen in a phone, but they have stopped selling them, so it seems that the largest screen available in a phone is now about 4.2 inches. Probably the minimum that would be considered usable for development is a resolution of about 1280*1024 and a screen size of about 14 inches. While opinion will vary a lot about this, I think that the vast majority of programmers will agree that the bigger tablet computers and Netbooks (at about 10 inches and something like 1366*768 resolution) are well below the minimum size.
It seems to me that a possible solution to this problem involves using augmented reality to provide a virtual desktop that is significantly larger and has a significantly higher resolution. The advantage of augmented reality over merely scrolling is that it should allow faster and more reliable seeking to the section of virtual desktop that is of interest, and seek speed is probably the bottleneck with small monitors. One problem would be turning corners when on public transport, but the camera button could be used to reset the current phone position to the middle of the viewing area; if the process of resetting the angle is fast enough it wouldn't be a great distraction.
I don’t think that a mobile phone will ever be a great device for software development and I don’t think that the places where a serious computer isn’t available are good places to work. But sometimes I get inspiration for tracking down a difficult bug when on the move and it would be really good to be able to read the code immediately.
I won’t have any time to work on such things myself. I’m just publishing the idea in case someone who likes it happens to have a lot of spare time…
In December 2010 a paper was published by Robert N.M. Watson and Jonathan Anderson from the University of Cambridge and Ben Laurie and Kris Kennaway of Google about the Capsicum capabilities system [1]. It seems that the aim of the project is to give programs that briefly need privileges when they start (such as tcpdump) a safe method of dropping them. The main project page is here [2].
The focus of the paper is on the Chromium web browser, and six different ways of constraining the Chromium sandbox are compared. For the SE Linux comparison they claim 200 lines of code changes as of Fedora 15. In Fedora 16 I couldn't find a Chromium package, so I presume that they mean 200 lines of SE Linux policy (I am not aware of anyone modifying the Chromium source for SE Linux). They note that SE Linux doesn't support separating different sandboxes; while it would be possible to assign each sandbox a different MCS sensitivity label to separate them, that option would be unwieldy enough that they are essentially correct in this regard. For SE Linux systems running the MLS policy the correct thing to do would be to run multiple copies of Chromium at different levels to access different sensitivity levels of data; this would normally be done by polyinstantiating the home directory.
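To illustrate what the unwieldy MCS option would involve, the separation would come down to launching each browser instance with a different category set, something like the following (a sketch only – it assumes a policy where the Chromium domain is actually constrained by MCS, and it ignores the problem of the shared home directory):

runcon -l s0:c1 chromium &   # first sandbox instance confined to category c1
runcon -l s0:c2 chromium &   # second instance in c2, unable to access c1 files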
One thing to note however is that there is no requirement that only one security method be implemented. I can’t think of any technical reason why it would be impossible to run SE Linux and Capsicum on the same system. SE Linux could constrain daemons and restrict the access to Capsicum services while Capsicum could be used to give minimum privileges to parts of Chromium. I’m not sure that such a combination would offer anything that the MLS users would desire, but it seems that everyone else (the vast majority of computer users) would be served well by a combination of SE Linux and Capsicum.
It's disappointing that the paper didn't mention POSIX 1003.1e capabilities, but given the lack of use that POSIX capabilities get that's understandable.
It’s also disappointing when someone develops something new and different nowadays and doesn’t provide a virtual machine image for it. Installing and configuring something that requires application and kernel changes is a lot of work and most people who are idly curious about the technology won’t go to the effort. By today’s standards it’s not that difficult to share a 1GB filesystem image via Bittorrent.
Currently Dick Smith is offering two dual-SIM mobile phones for sale in Australia. One is the LG T510 for $99, but it only supports GSM on each SIM. This might be a good phone for someone who needs to receive both work and personal calls and doesn’t want to carry two phones, but the lack of 3G support is a major limit on what can be done with the phone.
The other phone is the Huawei U8520 which supports 3G on one SIM and GSM on the other. It costs $249, runs Android 2.2, has a 320*480 display, and a 3.2 megapixel camera. For comparison the LG Optimus One is a single-SIM phone with similar specs that only costs $179 from TeleChoice, so there is a 40% price premium to pay for a dual-SIM phone.
When I first heard about dual-SIM phones (before they were commonly and cheaply available in Australia) I had thought that it would be a good option for using a cheap 3G broadband SIM along with a SIM for voice calls from one of the cheaper pre-paid mobile companies. But the helpful guy at Dick Smith informed me that Amaysim offers good pre-paid deals for voice and data [1]. With 10G of data quota to be used in one year for $100 and reasonable rates on voice calls it should be easy to keep under $200 per annum if you don't make many calls.
Rene Cunningham has described how to use a pre-paid data-only plan on the Optus network with VOIP for most outbound calls [2]. To do that he is paying $30 every 6 months to keep his old number for inbound calls, for which he gets $30 of credit; with Amaysim you can pay $10 every 3 months to get the same result, with a $40 per annum cost instead of $60. As Amaysim are on the Optus network the result should be the same as long as Amaysim have enough capacity for IP data transfer. Rene uses an iPhone but the same result can be achieved with an Android phone.
If using VOIP reduced the cost of running a phone on Amaysim to something like $160 per annum (with a possibly optimistic aim of $20 per annum for outbound VOIP calls) then over two years that could save $376 over a $29 per month contract ($29 * 24 = $696 versus about $320 for two years of Amaysim). A Virgin $29 contract includes a Sony Ericsson Xperia X10 which is a fairly nice phone if you can deal with the short battery life and the fact that it's locked to Android 2.1. An Xperia X10 can be bought on Ebay for less than $376, but the hassle of setting up VOIP and Amaysim will be more effort than it's worth to save $100 or so over two years.
A couple of my relatives have phone contracts that are about to expire. I'm not going to set them up on VOIP as it's too much effort for too little benefit, and a dual-SIM phone really isn't an option. I will recommend Virgin contracts with Xperia X10 phones or Amaysim with their existing phones (2 year old smart phones that are still quite usable).
On a recent visit to my local e-waste disposal place I noticed an open PC on the top of the pile with a pair of DIMMs that were begging to be removed. I also noticed three PCI Ethernet cards that were stacked in a manner that made them convenient to grab – possibly some nice person deliberately placed them so someone like me could take them. The DIMMs turned out to be 3G of DDR2-800 RAM and were regarded as good by Memtest86+ – a nice upgrade for one of my test systems that previously only had 1G of RAM.
If you have old hardware to dispose of then please try to take the RAM to your local computer users' group meeting. In any such gathering there's always someone who wants old RAM; anything better than PC-133 will find a good home unless it's very small (128M sticks of DDR-266 and 256M sticks of anything faster probably won't get any interest). RAM is small and light so you can carry it in your pocket without inconvenience. Ethernet cards of all vintages are in demand due to people reusing old desktop systems as routers, and PCIe video cards are in great demand; PCI and PCIe cards are small enough that it's usually not a great inconvenience to transport them.
Hard drives larger than about 100G are in demand, as are ATX power supplies, but these are really inconvenient to transport unless you travel by car.
For computer systems, anything that can use DDR2-800 RAM will probably be of use to some member of a computer users' group; if you offer it on the mailing list then you can expect that someone will want to collect it from you at your home or a meeting.
There are organisations such as Computerbank that take donations of old hardware and make systems for disadvantaged people [1]; they are worth considering if you have hardware to dispose of. But the hardware I use every day is quite close to the minimum specs for donations that Computerbank will accept, so there's no possibility of me discarding systems that are useful to them.
I’ve created a page listing hardware that I need, if anyone in my area has such hardware that they don’t need then please let me know [2].

Today I attended the Stop HRL demonstration [1]. The government plans to spend $100,000,000 of federal money and $50,000,000 of Victorian state money to build a new coal power station. The state government has imposed some unreasonable restrictions on renewable energy, which include allowing a single person who objects within 2km of a wind turbine to block the project, while not allowing anyone to object to coal power plants or fracking (both of which have proven health implications). Today VCAT had a hearing about this issue; HRL wants to double the size of the proposed coal power plant, while Environment Victoria, Doctors for the Environment Australia, local climate action group Locals Into Victoria's Environment, and Martin Shield from Climate Action Moreland are opposing the plan.
What the government should be doing is permitting (if not encouraging) renewable energy production, particularly wind power which will drive down energy prices by undercutting the energy auctions due to its almost zero marginal cost. The government should not be spending taxpayer money on new coal projects due to climate change and air pollution – in addition to the fact that coal is simply more expensive in the long term.
As usual it's disappointing that the National party, which supposedly represents farmers, is supporting the government in such things. The pollution caused by coal power and fracking ends up damaging farmland. The Liberal party doesn't have a majority on its own in either house of the state government, so this proposed coal power plant could be stopped immediately if the National party supported renewable power, or even if they merely opposed giving $50,000,000 of state tax money to HRL.
The demonstration went reasonably well; there were about 80 people there when I arrived and by the time it ended there were more than 150 people. I took the above picture at the start and it missed many people who were off to the right. I think that's a good result for a demonstration that had little publicity and was held in business hours with really bad weather.
Update:
On Sunday the 13th of November at 13:00-14:30 there is a “Going Backwards Under Baillieu” demonstration at Parliament House. This is to mark the negative effects that the Liberal state government has had on the environment of Victoria.
Lindsay Holmwood has written about the benefits of a standing desk and how to buy one [1]. The case for avoiding sitting is strong, but I couldn’t stand up all day.
One thing that’s been on my list of things to do if I had an unreasonably large amount of spare time or money is to make a reclining computer station. The idea would be to take a bed and mount a TFT display above it so that it can be viewed while lying down. Then a split keyboard would be required so that each hand can be used for half the keys (this would be difficult or expensive). If the keyboard halves were aligned correctly then it would reduce the carpal tunnel problems associated with computer use (which has been a big problem for me in the past and is something that I will probably never fully recover from [2]). As far as I am aware the risk of back problems is eliminated when lying down, so two of the major problems with regular computer desks would be avoided.
I don’t think that lying down all day would be that great and it wouldn’t work for collaborative projects. But as monitors are so cheap nowadays it would be viable to have a second monitor at a desk connected to the same computer. Then I could spend about half my computer time lying down.
The Occupy Wall St blog has an informative summary of attempts to reclaim the American political process, which has been pwned badly by financiers in recent times [1]. The basic concept is that people who represent the 99% of the population who aren't super rich have protests in Wall St and now other business areas. Care2 has an interesting article about US marines opposing the brutal actions of police against Occupy Wall St protesters [2]; apparently they treated Iraqis better than US police are treating Americans.
The movement has spread to other locations; the OccupyTogether.org site has information on related events all around the world [3].
We have an ongoing event in Melbourne, Australia. It had been going for a week when yesterday Robert Doyle (the Mayor of Melbourne) ordered police to disperse the protest, so riot police and mounted police forced the protesters out of the City Square [4]. According to the news reports there were only 100 people there at the time. Here is a Google Maps link to the location; as you can see, 100 people would not take up much of that space, not even with banners etc. The smart move would have been for the government to ignore it all until the protesters got bored.
Now of course we will have more and bigger protests. The use of riot police will probably be considered a good thing by some of the more aggressive protesters, but anyone who doesn't want to make the government (and the corporations that control it) look bad would consider it a gross error. Robert Doyle needs to be replaced. The liberal reason for replacing him is that we just don't want unnecessary force used against peaceful protesters. The “conservative” reason for replacing him is that he's grossly incompetent: he transformed a small protest that wasn't getting much media attention and appeared to be losing interest into a large protest with a lot of media attention.
It will be interesting to see what happens next.
A common question about hosting is whether to use a dedicated server or a virtual server.
Dedicated Servers
If you use a dedicated server then you face the risk of problems which interrupt the boot process. It seems that all the affordable dedicated server offerings lack any good remote management, so when the server doesn't boot you either have to raise a trouble ticket with the company running the Data-Center (DC) or use some sort of hack. Hetzner is a dedicated server company that I have had good experiences with [1]; when a server in their DC fails to boot you can use their web based interface (at no extra charge or delay) to boot a Linux recovery environment which can then be used to fix whatever the problem may be. They also charge extra for hands-on support, which could be used if the Linux recovery environment revealed no errors but the system just didn't work. This isn't nearly as good as using something like IPMI, which permits remote console access to see error messages and more direct control of rebooting.
The up-side of a dedicated server is performance. Some people think that avoiding virtualisation improves performance, but in practice most virtual servers use virtualisation technologies that have little overhead. A bigger performance issue than the virtualisation overhead is the fact that most companies running DCs have a range of hardware in their DC, and your system (whether a virtual server or a dedicated server) will be on a random system from that range. I have seen hosting companies provide different speed CPUs, and for dedicated servers different amounts of RAM, for the same price. I expect that the disk IO performance also varies a lot but I have no evidence. As long as the hosting company provides everything that they offered before you signed the contract you can't complain. It's worth noting that CPU performance is either poorly specified or absent in most offers and disk IO performance is almost never specified. One advantage of dedicated servers in this regard is that you get to know the details of the hardware and can therefore refuse certain low spec hardware.
The real performance benefit of a dedicated server is that disk IO performance won’t be hurt by other users of the same system. Disk IO is the real issue as CPU and RAM are easy to share fairly but disk performance is difficult to share and is also a significant bottleneck on many servers.
Dedicated servers also have a higher minimum price due to the fact that a real server is being used which involves hardware purchase and rack space. Hetzner's offers which start at €29 per month are about as cheap as it's possible to get. But it appears that the €29 offer is for an old server – new hardware starts at €49 per month which is still quite cheap. But no dedicated server compares to the virtual servers which can be rented for prices less than $10 per month.
Virtual Servers
A virtual server will typically have an effective management interface. You should expect to get web based access to the system console as well as ssh console access. If console access is not sufficient to recover the system then there is an option to boot from a recovery device. This allows you to avoid many situations that could potentially result in down-time and when things go wrong it allows you to recover faster. Linode is an example of a company that provides virtual servers and provides a great management interface [2]. It would take a lot of work with performance monitoring and graphing tools to give the performance overview that comes for free with the Linode interface.
Disk IO performance can suck badly on virtual servers and it can happen suddenly and semi-randomly. If someone else who is using a virtual server on the same hardware is the target of a DoS attack then your performance can disappear. Performance for CPU is generally fairly reliable though. So a CPU bound server would be a better fit on the typical virtual server options than a disk IO bound server.
Virtual servers are a lot cheaper at the low end so if you don’t need the hardware capabilities of a minimal dedicated server (with 1G of RAM for Hetzner and a minimum of 8G of RAM for some other providers) then you can save a lot of money by getting a virtual server.
Finally, the options for running a virtual machine under a virtual machine aren't good. AFAIK the only options that would work on a commercial VPS offering are QEMU (an x86 CPU instruction emulator), Hercules (an S/370, S/390, and zSeries IBM mainframe emulator), and similar CPU emulators. Please let me know if there are any other good options for running a virtual machine on a VPS. While these emulators are apparently good for OS development and debugging, they aren't generally useful for running a virtual machine. I knew someone who ran his important servers under Hercules so that x86 exploits couldn't be used for attacking them, but apart from that CPU emulation isn't generally useful for servers.
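For completeness, running a guest under QEMU's pure emulation inside a VPS looks something like the following (a sketch – guest.img is a hypothetical disk image, and the lack of hardware virtualisation is exactly why it's so slow):

qemu-system-x86_64 -m 512 -hda guest.img -nographic   # software emulation only, no KVM acceleration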
Summary
If you want to have entire control of the hardware, or if you want to run your own virtual machines that suit your needs (e.g. one with lots of RAM and another with lots of disk space), then a dedicated server is required. If you want to have minimal expense or the greatest ease of sysadmin use then a virtual server is a better option.
But the cheapest option for virtual hosting is to rent a server from Hetzner, run Xen on it, and then rent out DomUs to other people. Apart from the inevitable pain that you experience if anything goes wrong with the Dom0 this is a great option.
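If you go down that path then xen-tools makes creating the DomUs fairly painless, roughly like this (a sketch with a hypothetical hostname – the memory, disk size, and distribution are just examples):

xen-create-image --hostname=customer1 --memory=512Mb --size=20Gb --dist=squeeze --dhcp   # build the DomU filesystem and config
xm create -c /etc/xen/customer1.cfg   # start the DomU with a console attached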
As an aside, if anyone knows of a reliable company that offers some benefits over Hetzner then please let me know.
What I Would Like to See
There is no technical reason why a company like Linode couldn't make an offer of a single DomU on a server taking up all available RAM, CPU, and disk space. Such an offer would be really compelling if it wasn't excessively expensive. That would give you the ease of management that Linode provides and also a guarantee that no-one else could disrupt your system by doing a lot of disk IO. This would be really easy for Linode (or any virtual server provider) to implement.
There is also no technical reason why a company like Linode couldn't allow their customers to rent all the capacity of a physical system and then subdivide it among DomUs as they wish. I have a few clients who would be better suited by Linode DomUs that are configured for their needs rather than stock Linode offerings (which never seem to offer the exact amounts of RAM, disk, or CPU that are required). Also if I had a Linode physical server that only had DomUs for my clients then I could make sure that none of them had excessive disk IO that affected the others. This would require many extra features in the Linode management web pages, so it seems unlikely that they will do it. Please let me know if there is someone doing this; it's obvious enough that someone must be doing it.
Update:
A Rimuhosting employee pointed out that they offer virtual servers on dedicated hardware, which meets these criteria [3]. Rimuhosting allows you to use their VPS management system for running all the resources in a single server (so no-one else can slow your VM down) and also allows custom partitioning of a server into as many VMs as you desire.