When installing Xen servers one issue that arises is how to assign MAC addresses. The Wikipedia page about MAC addresses [1] shows that all addresses with the second least significant bit of the most significant byte set to 1 are “locally administered”. In practice people just use addresses starting with 02: for this purpose, although any first octet congruent to two mod four would give the same result. I prefer 02: because it’s the best known prefix, so casual observers are more likely to realise what is happening.
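To make the bit arithmetic concrete, here is a minimal shell check (the octet value is an arbitrary illustration): an address is locally administered unicast when the low two bits of the first octet are binary 10, i.e. the octet is congruent to two mod four.
first_octet=0x06
if [ $(( first_octet & 0x03 )) -eq 2 ]; then
  echo "locally administered unicast"
fi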
Now if you have a Xen bridge that is private to one Dom0 (for communication between Xen DomUs on the same host) or on a private network (a switch that connects servers owned by one organisation and not connected to machines owned by others) then it’s easy to just pick MAC addresses starting with 02: or 00:16:3e: (the range assigned to the Xen project). But if Xen servers run by other people are likely to be on the same network then there is a problem.
Currently I’m setting up some Xen servers that have public and private networks. The private network will either be a local bridge (one that doesn’t permit sending data out any Ethernet ports) or a bridge to an Ethernet port that is connected to a private switch; for that I am using MAC addresses starting with 02:. As far as I am aware there is no issue with machine A having a particular MAC address on one VLAN while machine B has the same MAC address on another VLAN.
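For reference, assigning such an address to a DomU is one line in its Xen configuration file; something like the following (the bridge name and MAC address here are hypothetical):
vif = [ 'mac=02:00:0a:01:02:03, bridge=xenbr1' ]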
My strategy for dealing with the MAC addresses for the public network at the moment is to copy MAC addresses from machines that will never be on the same network. For example if I use the MAC addresses from Ethernet cards in a P3 desktop system running as a router in a small company in Australia then I can safely use them in a Xen server in a co-location center in the US (there’s no chance of someone taking the PCI Ethernet cards from the machine in Australia and sending them to the US – and no-one sells servers that can use such cards anyway). Note that I only do this when I have root on the machine in question and where there is no doubt about who runs the machine, so there should not be any risk.
Of course if someone from the ISP analyses the MAC addresses on their network it will look like they have some very old machines in their server room. ;)
I wonder if there are any protocols that do anything nasty with MAC addresses. I know that IPv6 addresses can be based on the MAC address, but as long as the separate networks have separate IPv6 ranges that shouldn’t be a problem. I’m certainly not going to try bridging networks between Australia and the US!
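For example, with stateless address autoconfiguration the MAC address 02:00:0a:01:02:03 would give the modified EUI-64 interface ID 0000:0aff:fe01:0203 (the universal/local bit is flipped and ff:fe is inserted in the middle), so two machines with the same MAC address only generate the same IPv6 address if they also share a network prefix.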
Another possible way of solving this issue would be to have the people who run a server room assign and manage MAC addresses. One way of doing this would be to specify a mapping of IP addresses to MAC addresses, e.g. the first two bytes could be 02:00: and the next four the same as the IPv4 address assigned to the DomU in question. In the vast majority of server rooms I’ve encountered, the number of public IP addresses has been greater than or equal to the number of MAC addresses, with the only exception being corporate server rooms where everything runs on private IP address space (but there’s nothing wrong with 02:00:0a: as the prefix for a MAC address).
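A minimal sketch of that mapping in shell (the function name is mine, not from any existing package):
ip_to_mac() {
  # split the dotted quad into four decimal numbers and print them in hex
  printf '02:00:%02x:%02x:%02x:%02x\n' $(echo "$1" | tr '.' ' ')
}
ip_to_mac 10.1.2.3   # prints 02:00:0a:01:02:03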
I also wonder if anyone else is thinking about the potential for MAC collisions. I’ve got Xen servers in a couple of server rooms, and I told the relevant people in writing of my precise plans (and was assigned extra IP addresses for all the DomUs), but no-one has ever mentioned any scheme for assigning MAC addresses.
I’ve just started work on a new HP server running RHEL5 AS (it needs to be AS to support more than 4 DomUs). I still have the Xen issues that made me give up using it on Debian [1] (the killer one being that an AMD64 Xen Dom0 would kernel panic on any serious disk IO), but the Xen implementation in RHEL is quite solid.
The first thing I did was run zcav (part of my Bonnie++ benchmark suite) [2] to see how the array performs (as well as ensuring that the entire array actually appears to work). The result is below. For a single disk, performance is expected to decrease as you read along the disk (from outer to inner tracks). I don’t know why the performance decreases until the half-way point and then jumps back to good performance before decreasing again.
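For anyone who wants to repeat this, the invocation is something like the following (the device name matches this machine’s CCISS controller, and the gnuplot step is just one way to graph the position vs throughput output – treat the details as assumptions rather than a recipe):
zcav /dev/cciss/c0d0 > c0d0.zcav
echo 'plot "c0d0.zcav" with lines' | gnuplot -persist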

The next thing was to ensure that the machine had RAID-6 (I have been convinced that using only RAID-5 verges on professional malpractice). As the machine is rented from a hosting company there was no guarantee that they would follow my clear written instructions to use RAID-6.
The machine is a HP rack-mounted server with a CCISS RAID controller, so to manage the array the command /usr/sbin/hpacucli is used.
The command hpacucli controller all show reveals that there is a “Smart Array P400 in Slot 1”.
The command hpacucli controller slot=1 show gives the following (amongst a lot of other output):
RAID 6 (ADG) Status: Enabled
Cache Board Present: True
Cache Status: OK
Accelerator Ratio: 25% Read / 75% Write
Drive Write Cache: Disabled
Total Cache Size: 512 MB
Battery Pack Count: 1
Battery Status: OK
SATA NCQ Supported: True
So the write-back cache is enabled; of the 512M cache, 384M is for writes and 128M is for reads (hopefully all for read-ahead – the OS should do all the real caching for reads).
The command hpacucli controller slot=1 array all show reveals that there is one array: “array A (SAS, Unused Space: 0 MB)”.
The command hpacucli controller slot=1 array a show status tells me that the status is “array A OK”.
Finally the command hpacucli controller slot=1 show config gives me the data that I really want at this time and says:
Smart Array P400 in Slot 1 (sn: *****)
array A (SAS, Unused Space: 0 MB)
logicaldrive 1 (820.2 GB, RAID 6 (ADG), OK)
Then it gives all the data on the disks. It would be nice if there was a command to just dump all of that – I would like to be able to show the configuration of all controllers with a single command.
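It may be that something along these lines does what I want – I’m assuming here that the all keyword combines with show config the way it does with show, which I haven’t verified on this controller:
hpacucli controller all show config detail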
It would also be nice if it were more widely documented that hpacucli is the tool for managing CCISS RAID arrays when running Linux on HP servers. It took me an unreasonable amount of effort to discover which tool to use.
I have been asked about the current status of Lenny SE Linux on the Desktop.
The first thing to consider is the combinations of policies and configurations. I will number them, if only for the purposes of this post; if the numbering is considered generally helpful it could be more widely adopted to describe configurations.
1. Default configuration. This has the default policy and is configured with all users having the domain unconfined_t, and daemons such as POP servers are allowed to access home directories of type unconfined_home_dir_t. This allows such daemons to attack privileged user accounts.
2. Some restricted users. This is the same as above but with some users restricted. Daemons such as POP servers are only allowed to access the home directories of restricted users. This means that if a user is to have an unconfined account and receive email they must have two Unix accounts or receive their mail under /var/spool/mail. This is one setsebool command and one (or maybe a few) “semanage login -m” commands away from the default configuration (see the sketch after this list).
3. All users restricted. The system administrator has the domain sysadm_t and users have domains such as user_t. This requires a few more semanage commands. It is equivalent to the old strict policy.
4. MLS. This is anything that is based around the MLS policy.
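A minimal sketch of the move from configuration 1 to configuration 2 (the account name is hypothetical, and I haven’t shown the setsebool command because the boolean name varies between policy versions):
# map an existing Unix account to the restricted user_u identity
semanage login -m -s user_u jsmith
# one setsebool -P command is also needed so that daemons such as POP
# servers lose access to unconfined_home_dir_t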
Currently I have two desktop machines running Lenny (a test machine and my EeePC) and one server. I have only just switched my test machine to enforcing mode so I have no good data on it (apart from the fact that I can boot it up and login – which is always a good start). The server is running in permissive mode because I have not yet written the policy to allow the POP server to read from unconfined_home_dir_t. I could get it working by switching from level 1 to level 2 or 3, but I want to get the level 1 server policy working for the benefit of others first.
My EeePC however is fully functional, and I have been doing some work on it – that mostly means running an ssh client under GNOME, but that’s OK (desktop environments such as GNOME and KDE are quite complex and demanding; getting a machine to boot and run such a desktop environment tests out many parts of the system). It’s only at level 1 for the moment because I want to get level 1 working everywhere before moving to the higher levels – I want to get things ready for real users ASAP. With the way the policy is managed now it will be possible to move from level 1 to 2 or 3 without rebooting or interrupting running services. So once users have systems running well at level 1 they can easily increase the security at a later date.
The problems that I have had are due to text relocations in libraries (see my previous post about execmod permission [1]). I’ve filed bug report #493678 against libtheora0 [2] in regard to this issue and included a patch from Fedora (which disables the non-relocatable assembly code in question). It seems that upstream have some new assembler code to try and fix this issue, so hopefully we’ll have something that can make it into Lenny!
I’ve filed bug report #493705 against libswscale0 for the same issue [3]. I included a patch to turn off the assembler code in question but that was not well received. If anyone has some i386 assembler skill and some spare time I would appreciate it if you could try and find a way to make the code position independent while losing little or no performance.
One thing to note is that I am now using an Opteron 1212 (2.0GHz dual-core) system for compiling, I run the i386 DomU with a 64bit kernel (I expect that 32bit user-space runs faster with a 64bit kernel than a 32bit kernel), and the disks are reasonably fast. Even so it takes about 15 minutes to build libswscale0 and the other packages from the same source tree. Previously I was using a 1.0GHz Pentium-3 for my Lenny i386 development until I had the libswscale0 build process go for more than 90 minutes before running out of disk space! If your build machine is old enough to only be 32bit then you should probably plan on watching a movie or going to bed while the build is in progress.
I have built packages that work around the above bugs and included them in my Lenny repository [4]. If you take the packages from that repository plus the Lenny packages then you should have a functional desktop system at level 1. I would appreciate it if people would start testing that and providing feedback. One important issue is the discovery of libraries that want executable stacks, text relocations, and executable memory. The deadline for fixing them properly is even more of a problem due to the number of people who have to be involved in a solution (as compared to the policy, which I can fix on my own).
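If you want to check a library yourself, something like the following should identify the problem cases (readelf is in binutils and execstack is in the prelink package; the library path is just an example):
# a TEXTREL entry in the dynamic section indicates text relocations
readelf -d /usr/lib/libswscale.so.0 | grep TEXTREL
# an X in the output means the library requests an executable stack
execstack -q /usr/lib/libswscale.so.0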
One final problem is a bug in xdm which causes it to give the wrong context for login sessions due to having an old version of the SE Linux related code [5]. Due to a combination of this and some policy bugs you cannot log in with xdm. This is not a hugely important issue as most people will use gdm (which has the newer patch) or kdm (which has no SE Linux patch but can use pam_selinux.so). Another option is wdm, which works with pam_selinux.so. I’ve had a response to my bug report suggesting that there’s a bug in the patch (which was taken from gdm, so maybe there’s a bug in the gdm code too). I haven’t responded to that yet as I’ve been concentrating on the things that will make the most impact for Lenny.
At this stage I’m still unsure of when the release team will cut me off and prevent further SE Linux related fixes from going in Lenny. I need at least one more update to the policy packages before Lenny is released. I could release one right now with some useful improvements over what is currently in unstable, but am waiting until I get some other things fixed.
If I get everything fully working at level 1 (both client and server) before Lenny then I will provide a similar status report for users and testers of levels 2 and 3. I don’t expect that I will even get a chance to test level 4 (MLS) properly before Lenny releases.
There is an interesting article in The Age about the effect of petrol prices on the poorer people in Melbourne [1].
The article claims that people are unable to sell large old cars and buy smaller cars. To investigate that claim I did a price search on Ford Falcons and Holden Commodores on the web site www.drive.com.au. The Ford Falcon and Holden Commodore were for a long time the two leading marques of cars sold in Australia – both of them large cars. It seems that if I wanted to buy a Falcon that is less than 20 years old with an engine of 4.0L (or bigger) then I would have many choices available with list prices under $2,500, including some cars in that price range that are as little as 10 years old (the average age of a car registered for use in Australia). For Commodores there seems to be less choice: there are a few of them with 4L engines that are just over 10 years old being advertised for just under $5,000, and a significant number being advertised in the $5,000 to $7,500 range. I don’t know whether the higher asking prices for Commodores are due to greater optimism by the owners or greater demand. One thing to keep in mind is that due to the low price of advertising on the web site and the duration of the advert (which permits changing the price at any time), the sensible strategy is to start the advert with an optimistic price and then gradually drop it if there is little interest from buyers.
There are also some Falcons being auctioned on eBay that are going fairly cheaply; one example is less than $6,000 for a 2000 Falcon with only a few minutes to go.
The HSV (Holden Special Vehicles) cars are listed on Drive as a different make (HSV is just Holden’s range of faster vehicles), and surprisingly their prices are quite strong. There is only one vehicle on offer for less than $5,000, and only a few for less than $10,000.
Now when it comes to buying a small car, on eBay there are a number of Toyota Corollas on sale; two 1997 models are on sale for just under $9,000 and just under $10,000. It seems that you could sell a 2000 model Ford Falcon on eBay and not receive enough money to buy a 1997 Toyota Corolla!
For the Corollas advertised on Drive the majority seem to be advertised for around $15,000, but the volume on sale is great enough that there is a significant minority advertised for lower prices. There are 173 Corollas advertised for between $2,500 and $5,000, and it might be possible to find one of those that has no significant problems. So it seems that eBay is not the place to buy a Corolla!
So it seems that the main premise of the article (that you can’t sell a second-hand large car and buy a small car) is correct. If you were to sell a 1990s Falcon or Commodore and buy a Corolla of the same age (a car which cost about half as much as a high-end Falcon or Commodore when new) then you would be lucky to get more than half the Corolla purchase price. Then of course there’s a 4% “stamp duty” tax to pay and the risk that a second-hand car you buy might have some hidden problem (the cheaper cars are unlikely to have been well serviced).
The lecture by Professor James Duane about why you should not talk to the police (in the US at least) is doing the rounds at the moment. The Google video site doesn’t work for me, so I downloaded it from youtube with the following references:
part 1 [rVq6N0xAEEM]
part 2 [-Z0bpj3EEHI]
part 3 [44-GSZofXIE]
part 4 [zSvxiaO-TG8]
part 5 [gzYHnWrqfWg]
part 6 [BErNzdOnWGY]
The first thing that struck me about this is that it’s the first time I’ve ever seen someone clearly state the problem with the excessive number and complexity of laws: it’s impossible to know whether you are breaking the law. This is obviously wrong, and it should be a legislative priority to reduce both the number of laws and their complexity. If for every law they sought evidence regarding whether it helps people or merely helps preserve the state then that would be a good start.
The excessive number of laws is not just a problem due to the risk of accidentally breaking a law, but also due to the fact that a malicious cop could arrest anyone that they wished – it would merely be a matter of following them for long enough. In the second half of the lecture Officer George Bruch from the Virginia Beach Police Department gives his side of the story. At one point he cites an example of the road laws where he can follow a car and find a legitimate reason for booking the driver if he wishes.
Virginian law allows the police to destroy primary evidence: if they tape an interview then they can transcribe it to a written record and destroy the original tape. If the transcription removes some of the content that has meaning then the defendant is probably going to suffer for it. It’s often stated that on the net you have to make extra effort to make your meaning clear, as jokes can’t be recognised by tone of voice. It seems that the same effort would be needed in a police interview (but without the possibility of going back and saying “sorry, I was misunderstood” that you have in email).
It’s also legal for police to lie to suspects. While I can imagine some situations where this could be reasonable, it seems that at a minimum there should be some protection for child suspects against being lied to by police (there have been some travesties of justice stemming from children trusting police in an interview room).
It seems that someone who doesn’t get a good lawyer and refuse to talk to the police is likely to be convicted regardless of whether they are guilty or innocent. Officer Bruch says that he tries to avoid interviewing innocent people, so I guess whether someone ends up in jail will depend to some extent on the opinion of a cop who may or may not interview them.
While I guess a high rate of success at securing convictions is in most cases a good thing, the fact that convicting an innocent person is the absolute best way of letting a criminal escape justice seems to be forgotten.
Another issue is the fact that a witness may have reason to believe that they could be a suspect. The advice that a suspect should never talk to a police officer seems to logically imply that intelligent people will be hesitant about providing witness statements in situations where they may be a suspect.
I wonder how the legal system in Australia compares to the US in regard to these issues. I know that we have far too many laws and too many complex laws which can not be understood and obeyed by a reasonable person.
When I first packaged the SE Linux policy for Debian the only way to adjust the policy was to edit the source files and recompile. The changes you might desire often involved changing macros, so while it would have been theoretically possible to just tack a few type definitions and allow rules on the end, you often wanted to change a macro to have a change apply all through the policy. To deal with that I had the policy source installed on all machines, and the policy package update process would compile it into a binary form and load it into the kernel.
Then there was the issue of merging user changes with changes from a new policy package. For most configuration files on a Unix system you can just leave any files that are modified by the user; not taking the new default configuration might cause the user to miss out on some new features, but presumably they were happy with the way it worked in the past. Due to inter-dependencies this wasn’t possible for SE Linux: if one file was not upgraded due to user changes and other files related to it were, the result could be a compile failure.
Another issue was the fact that a newer version of the policy might permit operations that the sys-admin did not desire and therefore not meet their security goals, or it might not permit operations that are essential to the operation of the system and interrupt service.
To solve this I wrote a script that prompted for upgrades to policy source files and allowed the sys-admin to choose which files to upgrade. This worked reasonably well in the early days when the number of files was small, but as the policy increased in size it became increasingly painful to upgrade, with as many as 100 questions being asked.
The solution to this (as implemented in Fedora Core 5, Debian/Etch, and newer distributions) was to have binary policy modules that maintain their dependencies. Now there are binary policy modules which can be loaded at will (the default install for Debian only installs modules that match the installed programs) and the modules can have optional sections with dependencies. So if you remove a module that defines a domain, and there are other modules which have rules to allow communication with that domain, then the optional sections of policy in the other modules are disabled when the domain becomes undefined. This solves the technical issues related to module inter-dependencies, but the issue of intent and interaction with the rest of the system remains.
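For reference, this module management is done with semodule; a minimal sketch (the module name is just an example, and the .pp path is Debian’s layout):
# list the currently loaded policy modules
semodule -l
# remove a module; optional sections in other modules that depend on its
# domains are disabled automatically
semodule -r postgrey
# load it again from the packaged binary module
semodule -i /usr/share/selinux/default/postgrey.pp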
With Red Hat distributions the solution has been to upgrade the running policy every time the policy package is upgraded and be cautious when changing policy. They do a good job of the upgrade process (including relabeling files when the file contexts change) and in terms of policy changes I have not heard complaints from users about that. Users who don’t want a newer version of the policy can always put the package on hold.
For the Debian distribution after Lenny I plan to have a policy upgrade process that relabels files and a debconf question as to whether package upgrades should upgrade the policy. But for Lenny the freeze is already in progress so it seems too late to make such changes. Instead I’m going to upload a new version of the selinux-basics package with a program named selinux-policy-upgrade that will upgrade all the policy modules that are in use. This is not the ideal solution, but I think it will keep Lenny users reasonably happy.
I have written a script named postfix-nochroot to disable the chroot functionality of Postfix. I plan to initially include this in the selinux-basics package in Debian, but if the script was adopted by the Postfix package or some other package that seems more appropriate then I would remove it from selinux-basics.
The reason for disabling chroot is that when running SE Linux the actions of the various Postfix processes are restricted greatly, such that granting chroot access would increase the privileges. Another issue is the creation of the chroot environment: the Postfix package in Debian will recreate the files needed for the chroot under /var/spool/postfix when it is started. The first problem with this is that when a package is upgraded the chroot environment won’t be upgraded (with the exception of some packages that have special code to restart Postfix), and when the sys-admin edits files under /etc those changes won’t be mirrored in the chroot environment either.
The real problem when running SE Linux is that the chroot requires extra privileges to be granted to the Postfix processes (to be able to call chroot()), while the SE Linux policy already places much greater restrictions on the actions of daemons than a chroot would. For example, a non-chrooted daemon process running with SE Linux will not be able to see most processes in ps output (it will be able to see that processes exist through entries under /proc, but without the ability to search the subdirectories of /proc related to other processes it won’t be able to see what they are).
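For anyone curious, the core of what such a script has to do is flip the chroot field in /etc/postfix/master.cf. A minimal sketch of the idea (my illustration, not the actual postfix-nochroot code – and it loses the column alignment):
# the 5th field of each service line in master.cf is the chroot flag;
# "-" means the default, which is chroot enabled for this Postfix version
awk '/^[a-z]/ && NF >= 8 { $5 = "n" } { print }' /etc/postfix/master.cf > master.cf.new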
It would be possible for my script to be used as the first step towards making a Postfix installation automatically use a chroot when SE Linux is disabled or in permissive mode, and not use a chroot when SE Linux is in enforcing mode. I’ve probably done about half the work that is needed if this was the end goal, but I have no great interest in such configuration and no time to work on it. I would be prepared to accept patches from other people who want to go in this direction.
I have written a script for Debian named selinux-activate which is included in selinux-basics version 0.3.3+nmu1 (which I have uploaded to Debian/Unstable). The script, when run with no parameters, will change the GRUB configuration to include selinux=1 on the kernel command-line and enable SE Linux support in the PAM modules for login, gdm, and kdm. One issue with this is that if you run the command before installing kdm or gdm then you won’t have the PAM configuration changed – but as it’s OK to run the script multiple times this shouldn’t be a problem.
The new selinux-basics package will also force a reboot after relabelling all filesystems. I had tested “umount -a ; reboot -f” but discovered that “reboot -f” causes filesystem corruption in some situations (my EeePC running an encrypted LVM volume on an SD card had this problem). So I now use a regular “reboot”.
If no-one points out any serious flaws I plan to ask the release team to include this version of selinux-basics in Lenny. I believe that it will make it significantly easier to install SE Linux while also reducing the incidence of systems being damaged due to mistakes. If you edit the GRUB configuration file by hand then there is a risk of a typo making a system unbootable.
The package in question is already in my Lenny repository, see my previous post about Lenny SE Linux for details [1].
One thing that seems overlooked by most people who discuss productive work habits is the varying mental capacity for performing different types of work. It’s well known that alcohol and other substances decrease mental ability, and slightly less well known that sleep deprivation has an effect similar to being drunk [1]; the reference cites the case of medical residents – the effects of sleep deprivation can affect people with medical training, who presumably are better qualified than most to recognise and mitigate such problems.
Previously I wrote about increasing efficiency through less work [2], which was mainly based on the total number of hours worked in a week – with some mention of extreme fatigue.
However it seems obvious that things aren’t that simple. The ability to perform work can be reduced by temporary issues: poor sleep the previous night (not the same as sleep deprivation, but enough to decrease performance), general health (I find that having a cold really reduces the quality and quantity of my work), and physical state (skipping a meal due to being busy can lead to lower quality work).
For the work I do (system programming and system/network administration) there are a range of tasks that I need to perform, which require different levels of skill. When I have several tasks that need to be done it makes sense to do the most demanding task that I can do well. The problem is in assessing my own ability to perform such tasks: while I can have a general idea of how alert I feel, it seems likely that self-assessment by subjective criteria will decrease in accuracy at least as fast as my ability to perform the work in question.
So having an ability to assess my mental capacity at any time seems useful in determining when to work on the hard tasks, when to work on easy tasks, and when to just give up and go to bed! ;)
Fortunately I have found a benchmark that seems to give reasonable results. I have found that I can reliably solve the game Codebreaker (based on the board game Mastermind [3]) on the Familiar distribution of Linux running on my iPaQ in under 30 seconds when I’m fully alert. So far my tests have indicated that when I seem less alert (due to finding tasks difficult which should be easy or making mistakes) the amount of time taken to complete the game increases (my worst day was when I couldn’t complete it in under a minute).
It seems likely that someone who is doing intellectual work could take one day off each week without a great decrease in productivity, provided it was their least productive day. If there was a factor of three difference in productivity between the best and worst days then skipping the worst day might decrease weekly output by only about 10% (for example, with daily outputs of 3, 2.5, 2, 2, 1.5, and 1 units over a six day week, the worst day contributes 1 of 12 units – about 8%). If what would have been the least productive day were instead spent relaxing and going to sleep earlier than usual, it doesn’t seem impossible for productivity on the next day to increase enough to provide a net benefit.
One thing I plan to do from now on is to use Codebreaker to help me determine when I should cease work in the evening. If I take more than a minute to complete it (or have difficulty in solving it) then it’s probably best to finish work for the day.
Currently Debian/Lenny contains all packages needed to run SE Linux apart from the policy. The policy package is missing because it needs to sit in unstable for a while before migrating to testing (Lenny), and I keep fixing bugs and uploading new versions.
I have set up my own APT repository for SE Linux packages (as I did for Etch [1]). The difference is that it’s working now (for i386 and AMD64) while I released my Etch repository some time after the release of Etch.
gpg --keyserver hkp://subkeys.pgp.net --recv-key F5C75256
gpg -a --export F5C75256 | apt-key add -
To enable the use of my repository you must first run the above two commands to retrieve and install my GPG key (take appropriate measures to verify that you have the correct key).
deb http://www.coker.com.au lenny selinux
Then add the above line to /etc/apt/sources.list and run “apt-get update” to download the list of packages.
Next run the command “apt-get install selinux-policy-default selinux-basics” to install all the necessary packages and then “touch /.autorelabel” to cause the filesystems to be labeled on the next boot. Edit the file /boot/grub/menu.lst and add “selinux=1” to the end of the line which starts with “# kopt=” and then run the command update-grub to apply this change.
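Putting those steps together (the sed invocation is my shorthand for the manual edit described above – check the result before rebooting, and don’t run it twice):
apt-get install selinux-policy-default selinux-basics
touch /.autorelabel
# append selinux=1 to the kernel options template in menu.lst
sed -i 's/^# kopt=\(.*\)/# kopt=\1 selinux=1/' /boot/grub/menu.lst
update-grub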
Then reboot and the filesystems will be relabeled. Init will be running in the wrong context so you have to reboot again before everything is running correctly (I am thinking of having the autorelabel process automatically do the second reboot).
For future reference please use the page on my documents blog – I will update it regularly as needed [2]. This post will not be changed when it becomes outdated in a few days.