I have just configured IPVS on a Xen server for load balancing between multiple virtual hosts. The benefit is not load balancing but management: with two virtual machines providing a service I can gracefully shut one down for maintenance and have the other take the load. When there are two machines providing a service a load balancing configuration is much better than a hot-spare. One reason is that there may be application scaling issues that prevent one machine with twice the resources from giving as much performance as two smaller machines. Another is that if you have a machine configured but never used there will always be some doubt as to whether it would work…
The first thing to do is to assign the IP address of the service to the front-end machine so that other machines on the segment (i.e. routers) will be able to send data to it. If the address for the service is 10.0.0.5 then the command “ip addr add dev eth0 10.0.0.5/24 broadcast +” will make it a secondary address on the eth0 interface. On a Debian system you would add the line “up ip addr add dev eth0 10.0.0.5/24 broadcast + || true” to the appropriate section of /etc/network/interfaces; for a Red Hat system it seems that /etc/rc.local is the best place for it. I expect that it would be possible to merely advertise the IP address via ARP without adding it to the interface, but the ability to ping the IPVS server on the service address seems useful and there seems to be no benefit in not assigning the address.
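For reference, a complete /etc/network/interfaces stanza might look like the following (the primary address 10.0.0.10 is a hypothetical example, adjust it for your network):
auto eth0
iface eth0 inet static
        # primary address of the IPVS server (hypothetical example)
        address 10.0.0.10
        netmask 255.255.255.0
        # add the service address as a secondary address when the interface comes up
        up ip addr add dev eth0 10.0.0.5/24 broadcast + || true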
There are three methods used by IPVS for forwarding packets: gatewaying/routing (the default), IPIP encapsulation (tunneling), and masquerading. The gatewaying/routing method requires the back-end server to respond to requests on the service address. That would mean assigning the address to the back-end server without advertising it via ARP (which seems likely to have some issues for managing the system). The IPIP encapsulation method requires setting up IPIP, which seemed like it would be excessively difficult (although maybe not more than required to set up masquerading). The masquerading option (which I initially chose) rewrites the packets to have the IP address of the real server. So for example if the service address is 10.0.0.5 and the back-end server has the address 10.0.1.5 then it will see packets addressed to 10.0.1.5. A benefit of masquerading is that it allows you to use different ports, so for example you could have a non-virtualised mail server listening on port 25 and a back-end server for a virtual service listening on port 26. While there is no practical limit to the number of private IP addresses that you might use, it seems easier to manage servers listening on different ports with the same IP address – and there is the issue of server programs that are not written to support binding to a specific IP address.
ipvsadm -A -t 10.0.0.5:25 -s lblc -p
ipvsadm -a -t 10.0.0.5:25 -r 10.0.1.5 -m
The above two commands create an IPVS configuration that listens on port 25 of IP address 10.0.0.5 and then masquerades connections to 10.0.1.5 on port 25 (the default is to use the same port).
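You can check that the resulting table is what you intended by listing it with numeric addresses:
ipvsadm -L -n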
Now the problem is in getting the packets to return via the IPVS server. If the IPVS server happens to be your default gateway then it’s not a problem and it will already be working after the above two commands (if a service is listening on 10.0.1.5 port 25).
If the IPVS server is not the default gateway and you have only one IP address on the back-end server then this will require using netfilter to mark the packets and then route based on the packet matching. Marking via netfilter also seems to be the only well documented way of doing similar things. I spent some time working on this and didn’t get it working. However having multiple IP addresses per server is a recommended practice anyway (a back-end interface for communication between servers as well as a front-end interface for public data).
ip rule add from 10.0.1.5 table 1
ip route add default via 10.0.0.1 table 1
I use the above two commands to set up a new routing table for the data for the virtual service. The first line causes any packets from 10.0.1.5 to be sent to routing table 1 (I currently have a rough plan to have table numbers match ethernet device numbers; the data in question is going out device eth1). The second line adds a default route to table 1 which sends all packets to 10.0.0.1 (the private IP address of the IPVS server).
Then it SHOULD all be working, but in the network that I’m using (RHEL4 DomU and RHEL5 Dom0 and IPVS) it doesn’t. For some reason the data packets from the DomU are not seen as part of the same TCP stream (both in Netfilter connection tracking and by the TCP code in the kernel). So I get an established connection (3 way handshake completed) but no data transfer. The server sends the SMTP greeting repeatedly but nothing is received. At this stage I’m not sure whether there is something missing in my configuration or whether there’s a bug in IPVS. I would be happy to send tcpdump output to anyone who wants to try and figure it out.
My next attempt at this was via routing. I removed the “-m” option from the ipvsadm command and added the service IP address to the back-end with the command “ifconfig lo:0 10.0.0.5 netmask 255.255.255.255” and configured the mail server to bind to port 25 on address 10.0.0.5. Success at last!
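To summarise the configuration that finally worked (using the same example addresses as above):
# on the IPVS server, which already has 10.0.0.5 as a secondary address on eth0
ipvsadm -A -t 10.0.0.5:25 -s lblc -p
ipvsadm -a -t 10.0.0.5:25 -r 10.0.1.5
# on the back-end server - the /32 netmask stops the address being advertised via ARP
ifconfig lo:0 10.0.0.5 netmask 255.255.255.255
Then the mail server on the back-end is configured to bind to port 25 on 10.0.0.5. With direct routing the reply packets go straight from the back-end to the client, so the extra routing table described above is not needed.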
Now I just have to get Piranha working to remove back-end servers from the list when they fail.
Update: It’s quite important that when adding a single IP address to device lo:0 you use a netmask of 255.255.255.255. If you use the same netmask as the front-end device (which would seem like a reasonable thing to do) then (with RHEL4 kernels at least) you get proxy ARPs by default. For example if you used netmask 255.255.255.0 to add address 10.0.0.5 to device lo:0 then on device eth0 the machine will start answering ARP requests for 10.0.0.6 etc. Havoc then ensues.
It’s widely regarded that the best practice is to set the time zone of a server to UTC if people are going to be doing sys-admin work from various countries. I’m currently running some RHEL4 servers that are set to Los Angeles time. So I have to convert the time from Melbourne time to UTC and then from UTC to LA time when tracking down log entries. This isn’t really difficult for recent times (within the last few minutes) as my KDE clock applet allows me to select various time zones to display on a pop-up. For other times I can use the GNU date command to convert from other time zones to the local zone of the machine, for example the command date -d "2008-08-06 10:57 +1000" will take a Melbourne time (which is in the +1000 time zone) and display it converted to the local time zone. But it is still painful.
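GNU date can also display a given time in an arbitrary zone by setting the TZ environment variable, for example to see that same Melbourne time in UTC and in LA time:
TZ=UTC date -d "2008-08-06 10:57 +1000"
TZ=America/Los_Angeles date -d "2008-08-06 10:57 +1000"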
In RHEL5, CentOS 5, and apparently all versions of Fedora newer than Fedora Core 4 (including Fedora Core 4 updates) the command system-config-date allows you to select Etc/GMT as the time zone to get GMT. For reference selecting London is not a good option, particularly at the moment as it’s apparently daylight saving time there.
For RHEL4 and CentOS 4 the solution is to edit /etc/sysconfig/clock and change the first line to ZONE="Etc/GMT" (the quotes are important), and then run the command ln -sf /usr/share/zoneinfo/Etc/GMT /etc/localtime. Thanks to the Red Hat support guy who found the solution to this, it took a while but it worked in the end! Hopefully this blog post will allow others to fix this without needing to call Red Hat.
In Debian the command tzconfig allows you to select 12 (none of the above) and then GMT or UTC to set the zone. This works in Etch; I’m not sure about earlier versions (tzconfig always worked but I never tried setting UTC). In Lenny the tzconfig command seems to have disappeared, now to configure the time zone you use the command dpkg-reconfigure tzdata which has an option of Etc at the end of the list.
Updated to describe how to do this in Lenny, thanks for the comments.
When installing Xen servers one issue that arises is how to assign MAC addresses. The Wikipedia page about MAC addresses [1] shows that all addresses that have the second least significant bit of the most significant byte set to 1 are “locally administered”. In practice people just use addresses starting with 02: for this purpose although any number congruent to two mod four used in the first octet would give the same result. I prefer to use 02: because it’s best known and therefore casual observers will be more likely to realise what is happening.
Now if you have a Xen bridge that is private to one Dom0 (for communication between Xen DomU’s on the same host) or on a private network (a switch that connects servers owned by one organisation and not connected to machines owned by others) then it’s easy to just pick MAC addresses starting with 02: or 00:16:3e: (the range assigned to the Xen project). But if Xen servers run by other people are likely to be on the same network then there is a problem.
Currently I’m setting up some Xen servers that have public and private networks. The private network will either be a local bridge (that doesn’t permit sending data out any Ethernet ports) or a bridge to an Ethernet port that is connected to a private switch, for that I am using MAC addresses starting with 02:. As far as I am aware there is no issue with machine A having a particular MAC address on one VLAN while machine B has the same MAC address on another VLAN.
My strategy for dealing with the MAC addresses for the public network at the moment is to copy MAC addresses from machines that will never be in the same network. For example if I use the MAC addresses from Ethernet cards in a P3 desktop system running as a router in a small company in Australia then I can safely use them in a Xen server in a co-location center in the US (there’s no chance of someone taking the PCI ethernet cards from the machine in Australia and sending them to the US – and no-one sells servers that can use such cards anyway). Note that I only do this when I have root on the machine in question and where there is no doubt about who runs the machine, so there should not be any risk.
Of course if someone from the ISP analyses the MAC addresses on their network it will look like they have some very old machines in their server room. ;)
I wonder if there are any protocols that do anything nasty with MAC addresses. I know that IPv6 addresses can be based on the MAC address, but as long as the separate networks have separate IPv6 ranges that shouldn’t be a problem. I’m certainly not going to try bridging networks between Australia and the US!
Another possible way of solving this issue would be to have the people who run a server room assign and manage MAC addresses. One way of doing this would be to specify a mapping of IP addresses to MAC addresses, e.g. you could have the first two bytes be 02:00: and the next four be the same as the IPv4 address assigned to the DomU in question. In the vast majority of server rooms I’ve encountered the number of public IP addresses has been greater than or equal to the number of MAC addresses, with the only exception being corporate server rooms where everything runs on private IP address space (but there’s nothing wrong with 02:00:0a: as the prefix for a MAC address).
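As an illustration of that scheme, here is a small shell function (purely hypothetical, not a standard tool) that maps an IPv4 address to such a MAC address:
# e.g. "ip_to_mac 10.0.0.5" prints 02:00:0a:00:00:05
ip_to_mac() {
        printf '02:00:%02x:%02x:%02x:%02x\n' $(echo "$1" | tr '.' ' ')
}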
I also wonder if anyone else is thinking about the potential for MAC collisions. I’ve got Xen servers in a couple of server rooms, I told the relevant people in writing of my precise plans (and was assigned extra IP addresses for all the DomUs) but never had anyone mention any scheme for assigning MAC addresses.
I’ve just started work on a new HP server running RHEL5 AS (it needs to be AS to support more than 4 DomU’s). While I still have the Xen issues that made me give up using it on Debian [1] (the killer one being that an AMD64 Xen Dom0 would kernel panic on any serious disk IO), the Xen implementation in RHEL is quite solid.
The first thing I did was run zcav (part of my Bonnie++ benchmark suite) [2] to see how the array performs (as well as ensuring that the entire array actually appears to work). The result is below. For a single disk, performance is expected to decrease as you read along the disk (from outer to inner tracks). I don’t know why the performance decreases until the half-way point, then returns to good performance, and then decreases again.
[zcav throughput graph]
The next thing was to ensure that the machine had RAID-6 (I have been convinced that using only RAID-5 verges on professional malpractice). As the machine is rented from a hosting company there was no guarantee that they would follow my clear written instructions about running RAID-6.
The machine is a HP rack-mounted server with a CCISS RAID controller, so to manage the array the command /usr/sbin/hpacucli is used.
The command hpacucli controller all show reveals that there is a “Smart Array P400 in Slot 1”.
The command hpacucli controller slot=1 show gives the following (amongst a lot of other output):
RAID 6 (ADG) Status: Enabled
Cache Board Present: True
Cache Status: OK
Accelerator Ratio: 25% Read / 75% Write
Drive Write Cache: Disabled
Total Cache Size: 512 MB
Battery Pack Count: 1
Battery Status: OK
SATA NCQ Supported: True
So the write-back cache is enabled; 384M of the cache is for writes and 128M is for reads (hopefully all used for read-ahead – the OS should do all the real caching for reads).
The command hpacucli controller slot=1 array all show reveals that there is one array: “array A (SAS, Unused Space: 0 MB)”.
The command hpacucli controller slot=1 array a show status tells me that the status is “array A OK”.
Finally the command hpacucli controller slot=1 show config gives me the data that I really want at this time and says:
Smart Array P400 in Slot 1 (sn: *****)
array A (SAS, Unused Space: 0 MB)
logicaldrive 1 (820.2 GB, RAID 6 (ADG), OK)
Then it gives all the data on the disks. It would be nice if there was a command to just dump all that. I would like to be able to show the configuration of all controllers with a single command.
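A small shell loop can approximate this (a sketch, assuming the slot numbers can be parsed from the “in Slot 1” part of the “controller all show” output as above):
for slot in $(hpacucli controller all show | sed -n 's/.*in Slot \([0-9]*\).*/\1/p'); do
        hpacucli controller slot=$slot show config
done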
Also it would be nice if the fact that hpacucli is the tool to use for managing CCISS RAID arrays when using Linux on HP servers was more widely documented. It took me an unreasonable amount of effort to discover what tool to use for CCISS RAID management.
I have been asked about the current status of Lenny SE Linux on the Desktop.
The first thing to consider is the combinations of policies and configurations. I will number them if only for the purpose of this post, if the numbering is considered generally helpful it could be more widely adopted to describe configurations.
1. Default configuration. This has the default policy and is configured with all users having the domain unconfined_t, and daemons such as POP servers are allowed to access home directories of type unconfined_home_dir_t. This allows such daemons to attack privileged user accounts.
2. Some restricted users. This is the same as above but with some users restricted. Daemons such as POP servers are only allowed to access the home directories of restricted users. This means that if a user is to have an unconfined account and receive email they must have two Unix accounts or receive their mail under /var/spool/mail. This is one setsebool command and one (or maybe a few) “semanage login -m” commands away from the default configuration (see the example commands after this list).
3. All users restricted. The system administrator has the domain sysadm_t and users have domains such as user_t. This requires a few more semanage commands. It is equivalent to the old strict policy.
4. MLS. This is anything that is based around the MLS policy.
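As a rough illustration, moving from level 1 to level 2 is something like the following (the account name is a hypothetical example, and the boolean name is a placeholder – check the output of getsebool -a for the real name in your policy version):
# map one Unix account to the restricted user_u identity (use -m instead of -a
# if the account already has an explicit login mapping)
semanage login -a -s user_u jane
# then one setsebool command stops daemons such as POP servers from accessing
# the home directories of unconfined users (placeholder boolean name)
setsebool -P allow_daemons_unconfined_home_dirs 0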
Currently I have two Desktop machines running Lenny (a test machine and my EeePC) and one server. I have only just switched my test machine to enforcing mode so have no good data on it (apart from the fact that I can boot it up and login – which is always a good start). The server is running in permissive mode because I have not yet written the policy to allow the POP server to read from unconfined_home_dir_t. I could get it working by switching from level 1 to level 2 or 3, but I want to get level 1 server policy working for the benefit of others first.
My EeePC however is fully functional, I have been doing some work on it – that mostly means running an ssh client under GNOME but that’s OK (desktop environments such as GNOME and KDE are quite complex and demanding, getting a machine to boot and run such a desktop environment tests out many parts of the system). It’s only at level 1 for the moment because I want to get level 1 working everywhere before moving to the higher levels. I want to get things ready for real users ASAP. With the way the policy is managed now it will be possible to move from level 1 to 2 or 3 without rebooting or interrupting running services. So once users have systems running well at level 1 they can easily increase the security at a later date.
The problems that I have had are due to text relocations in libraries (see my previous post about execmod permission [1]). I’ve filed bug report #493678 against libtheora0 [2] in regard to this issue and included a patch from Fedora (which disables the non-relocatable assembly code in question). It seems that upstream have some new assembler code to try and fix this issue, so hopefully we’ll have something that can make it into Lenny!
I’ve filed bug report #493705 against libswscale0 for the same issue [3]. I included a patch to turn off the assembler code in question but that was not well received. If anyone has some i386 assembler skill and some spare time I would appreciate it if you could try and find a way to make the code position independent while losing little or no performance.
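For anyone who wants to have a go at this, checking whether a particular build still has text relocations is easy – a library that needs no text relocations will produce no output from the following (adjust the path for the library in question):
readelf -d /usr/lib/libswscale.so.0 | grep TEXTREL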
One thing to note is that I am now using an Opteron 1212 (2.0GHz dual-core) system for compiling, I run the i386 DomU with a 64bit kernel (I expect that 32bit user-space runs faster with a 64bit kernel than a 32bit kernel), and the disks are reasonably fast. Even so it takes about 15 minutes to build libswscale0 and the other packages from the same source tree. Previously I was using a 1.0GHz Pentium-3 for my Lenny i386 development until I had the libswscale0 build process go for more than 90 minutes before running out of disk space! If your build machine is old enough to only be 32bit then you should probably plan on watching a movie or going to bed while the build is in progress.
I have built packages that work around the above bugs and included them in my Lenny repository [4]. If you take the packages from that repository plus the Lenny packages then you should have a functional desktop system at level 1. I would appreciate it if people would start testing that and providing feedback. One important issue is the discovery of libraries that want executable stacks, text relocations, and executable memory. The deadline for fixing them properly is even more of a problem due to the number of people who have to be involved in a solution (as compared to the policy where I can do it on my own).
One final problem is a bug in xdm which causes it to give the wrong context for login sessions due to having an old version of the SE Linux related code [5]. Due to a combination of this and some policy bugs you can not login with xdm. This is not a hugely important issue as most people will use gdm (which has the newer patch) or kdm (which has no SE Linux patch but can use pam_selinux.so). Also another option is wdm which works with pam_selinux.so. I’ve had a response to my bug report suggesting that there’s a bug in the patch (which was taken from gdm so maybe there’s a bug in the gdm code too). I haven’t responded to that yet as I’ve been concentrating on the things that will make the most impact for Lenny.
At this stage I’m still unsure of when the release team will cut me off and prevent further SE Linux related fixes from going in Lenny. I need at least one more update to the policy packages before Lenny is released. I could release one right now with some useful improvements over what is currently in unstable, but am waiting until I get some other things fixed.
If I get everything fully working at level 1 (both client and server) before Lenny then I will provide a similar status report for users and testers of levels 2 and 3. I don’t expect that I will even get a chance to test level 4 (MLS) properly before Lenny releases.
There is an interesting article in The Age about the effect of petrol prices on the poorer people in Melbourne [1].
The article claims that people are unable to sell large old cars and buy smaller cars. To investigate that claim I did a price search on Ford Falcons and Holden Commodores on the web site www.drive.com.au . The Ford Falcon and Holden Commodore were for a long time the two leading marques of cars sold in Australia – both of them large cars. It seems that if I wanted to buy a Falcon that is less than 20 years old with an engine of 4.0L (or bigger) then I would have many choices available with list prices under $2500, including some cars in that price range which are as little as 10 years old (the average age of a car that’s registered for use in Australia). For Commodores there seems to be less choice, there are a few of them with 4L engines that are just over 10 years old being advertised for just under $5000 and a significant number being advertised in the $5,000 to $7,500 range. I don’t know whether the increased asking price for Commodores is due to greater optimism by the owners or a greater demand. One thing that we have to keep in mind is that due to the low price of advertising on the web site and the duration of the advert (which permits changing the price at any time) the sensible strategy is to start the advert with an optimistic price, and then gradually drop the price if there is little interest by the buyers.
There are also some Falcons at auction on eBay that are going fairly cheaply, one example is less than $6000 for a 2000 Falcon with only a few minutes to go.
The HSV (Holden Special Vehicles) cars are listed as a different make on Drive (it’s just Holden’s range of faster vehicles), and surprisingly their prices are quite strong. There is only one vehicle on offer for less than $5000, and only a few for less than $10,000.
Now when it comes to buying a small car, on eBay there are a number of Toyota Corollas on sale, two 1997 models are on sale for just under $9,000 and just under $10,000. It seems that you could sell a 2000 model Ford Falcon on eBay and not receive enough money to buy a 1997 Toyota Corolla!
For the Corollas advertised on Drive the majority of them seem to be advertised for around $15,000, but the volume on sale is great enough that there is a significant minority advertised for lower prices. There are 173 Corollas advertised for between $2,500 and $5,000, so it might be possible to find one of those that has no significant problems. It seems that eBay is not the place to buy a Corolla!
So it seems that the main premise of the article (that you can’t sell a second-hand large car and buy a small car) is correct. If you were to sell a 1990s Falcon or Commodore and buy a Corolla of the same age (which cost about half as much as a high-end Falcon or Commodore when new) then you would be lucky to get more than half the Corolla purchase price. Then of course there’s a 4% “stamp duty” tax to pay and the risk that a second-hand car you buy might have some hidden problem (the cheaper cars are unlikely to have been well serviced).
The lecture by Professor James Duane about why you should not talk to the police (in the US at least) is doing the rounds at the moment. The Google video site doesn’t work for me, so I downloaded it from youtube with the following references:
part 1 [rVq6N0xAEEM]
part 2 [-Z0bpj3EEHI]
part 3 [44-GSZofXIE]
part 4 [zSvxiaO-TG8]
part 5 [gzYHnWrqfWg]
part 6 [BErNzdOnWGY]
The first thing that struck me about this is that it’s the first time I’ve ever seen someone clearly state the problem with the excessive number and complexity of the laws. It’s impossible to know whether you are breaking the law. This is obviously wrong and it should be a legislative priority to reduce the number of laws and the complexity of the laws. If for every law they sought evidence regarding whether it helped people or helped preserve the state then that would be a good start.
The excessive number of laws is not just a problem due to the risk of accidentally breaking a law, but also due to the fact that a malicious cop could arrest anyone that they wished – it would merely be a matter of following them for long enough. In the second half of the lecture Officer George Bruch from the Virginia Beach Police Department gives his side of the story. At one point he cites an example of the road laws where he can follow a car and find a legitimate reason for booking the driver if he wishes.
Virginian law allows the police to destroy primary evidence. If they tape an interview then they can transcribe it to a written record and then destroy the original tape. If the transcription removes some of the content that has meaning then the defendant is probably going to suffer for it. It’s often stated that on the net you have to make extra effort to make your meaning clear as jokes can’t be recognised by the tone of voice. It seems that the same effort would need to be made in a police interview (but without the possibility of going back and saying “sorry I was misunderstood” that you have in email).
It’s also legal for police to lie to suspects. While I can imagine some situations where this could be reasonable, it seems that at a minimum there should be some protection for child suspects against being lied to by police (there have been some travesties of justice stemming from children trusting police in an interview room).
It seems that someone who doesn’t get a good lawyer and refuse to talk to the police is likely to be convicted regardless of whether they are guilty or innocent. Officer Bruch says that he tries to avoid interviewing innocent people. So I guess whether someone ends up in jail or not will depend to some extent on the opinion of a cop who may or may not interview them.
While I guess a high rate of success at securing convictions is in most cases a good thing, the fact that convicting an innocent person is the absolute best way of letting a criminal escape justice seems to be forgotten.
Another issue is the fact that a witness may have reason to believe that they could be a suspect. The advice that a suspect should never talk to a police officer seems to logically imply that intelligent people will be hesitant about providing witness statements in situations where they may be a suspect.
I wonder how the legal system in Australia compares to the US in regard to these issues. I know that we have far too many laws and too many complex laws which can not be understood and obeyed by a reasonable person.
When I first packaged the SE Linux policy for Debian the only way to adjust the policy was to edit the source files and recompile. Often the changes that you might desire involved changing macros, so while it would have been theoretically possible to just tack a few type definitions and allow rules on at the end, you often wanted to change a macro to have a change apply all through the policy. To deal with that I had the policy source installed on all machines and the package update process would compile it into a binary form and load it into the kernel.
Now there was the issue of merging user changes with changes from a new policy package. For most configuration files on a Unix system you can just leave any files that are modified by the user; not taking the new default configuration might cause the user to miss out on some new features – but presumably they were happy with the way it worked in the past. However due to inter-dependencies this wasn’t possible for SE Linux: if one file was not upgraded due to user changes and other files related to it were, then the result could be a compile failure.
Another issue was the fact that a newer version of the policy might permit operations that the sys-admin did not desire and therefore not meet their security goals, or it might not permit operations that are essential to the operation of the system and interrupt service.
To solve this I wrote a script that prompted for upgrades to policy source files and allowed the sys-admin to choose which files to upgrade. This worked reasonably well in the early days when the number of files was small. But as the policy increased in size it became increasingly painful to upgrade the policy with as many as 100 questions being asked.
The solution to this (as implemented in Fedora Core 5, Debian/Etch, and newer distributions) was to have binary policy modules that maintain their dependencies. Now there are binary policy modules which can be loaded at will (the default install for Debian only installs modules that match the installed programs) and the modules can have optional sections with dependencies. So if you remove a module that defines a domain and there are other modules which have rules to allow communication with that domain then the optional sections of policy in the other modules are disabled when the domain becomes undefined. This solves the technical issues related to module inter-dependencies, but the issue of intent and interaction with the rest of the system remains.
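For reference the binary modules are managed with the semodule command, for example (postfix.pp is just an example module name):
semodule -l               # list the currently loaded modules
semodule -i postfix.pp    # install/load a binary policy module
semodule -r postfix       # remove it, disabling dependent optional sections elsewhere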
With Red Hat distributions the solution has been to upgrade the running policy every time the policy package is upgraded and be cautious when changing policy. They do a good job of the upgrade process (including relabeling files when the file contexts change) and in terms of policy changes I have not heard complaints from users about that. Users who don’t want a newer version of the policy can always put the package on hold.
For the Debian distribution after Lenny I plan to have a policy upgrade process that relabels files and a debconf question as to whether package upgrades should upgrade the policy. But for Lenny the freeze is already in progress so it seems too late to make such changes. Instead I’m going to upload a new version of the selinux-basics package with a program named selinux-policy-upgrade that will upgrade all the policy modules that are in use. This is not the ideal solution, but I think it will keep Lenny users reasonably happy.
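The core of such a program is not complex, something along these lines (a sketch of what it might do, not the actual implementation, and assuming the Debian policy modules live under /usr/share/selinux/default):
# for every loaded module that the policy package ships, load the new version
for mod in $(semodule -l | awk '{print $1}'); do
        [ -f /usr/share/selinux/default/$mod.pp ] && \
                semodule -u /usr/share/selinux/default/$mod.pp
done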
I have written a script named postfix-nochroot to disable the chroot functionality of Postfix. I plan to initially include this in the selinux-basics package in Debian, but if the script was adopted by the Postfix package or some other package that seems more appropriate then I would remove it from selinux-basics.
The reason for disabling chroot is that when running SE Linux the actions of the various Postfix processes are restricted greatly, such that granting chroot access would increase the privileges. Another issue is the creation of the chroot environment: the Postfix package in Debian will recreate the files needed for the chroot under /var/spool/postfix when it is started. The first problem with this is that when a package is upgraded the chroot environment won’t be upgraded (with the exception of some packages that have special code to restart Postfix), and when the sys-admin edits files under /etc those changes won’t be mirrored in the chroot environment either.
The real problem when running SE Linux is that the chroot requires extra privileges to be granted to the Postfix processes (to be able to call chroot()), while the SE Linux policy places much greater restrictions on the actions of daemons than a chroot would. For example a non-chrooted daemon process running with SE Linux will not be able to see most processes in ps output (it will be able to see that processes exist through entries under /proc, but without the ability to search the subdirectories of /proc related to other processes it won’t be able to see what they are).
It would be possible for my script to be used as the first step towards making a Postfix installation automatically use a chroot when SE Linux is disabled or in permissive mode, and not use a chroot when SE Linux is in enforcing mode. I’ve probably done about half the work that is needed if this was the end goal, but I have no great interest in such configuration and no time to work on it. I would be prepared to accept patches from other people who want to go in this direction.
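For the curious, the core of the change is just clearing the chroot column (the fifth field) of each service entry in /etc/postfix/master.cf. A minimal sketch of that step (not the actual script, which should keep a backup and preserve the column alignment):
awk '/^[#[:space:]]/ || NF < 8 { print; next } { $5 = "n"; print }' /etc/postfix/master.cf > /etc/postfix/master.cf.new
mv /etc/postfix/master.cf.new /etc/postfix/master.cf
postfix reload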
I have written a script for Debian named selinux-activate which is included in selinux-basics version 0.3.3+nmu1 (which I have uploaded to Debian/Unstable). The script when run with no parameters will change the GRUB configuration to include selinux=1 on the kernel command-line and enable SE Linux support in the PAM modules for login, gdm, and kdm. One issue with this is that if you run the command before installing kdm or gdm then you won’t have the pam configuration changed – but as it’s OK to run the script multiple times this shouldn’t be a problem.
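The GRUB part of this amounts to something like the following on a system with GRUB legacy (a sketch which assumes that /boot/grub/menu.lst uses the Debian “# kopt=” convention; a real implementation should also check that the option isn’t already present):
sed -i '/^# kopt=/ s/$/ selinux=1/' /boot/grub/menu.lst
update-grub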
The new selinux-basics package will also force a reboot after relabelling all filesystems. I had tested “umount -a ; reboot -f” but discovered that “reboot -f” causes filesystem corruption in some situations (my EeePC running an encrypted LVM volume on an SD card had this problem). So I now use a regular “reboot“.
If no-one points out any serious flaws I plan to ask the release team to include this version of selinux-basics in Lenny. I believe that it will make it significantly easier to install SE Linux while also reducing the incidence of systems being damaged due to mistakes. If you edit the GRUB configuration file by hand then there is a risk of a typo making a system unbootable.
The package in question is already in my Lenny repository, see my previous post about Lenny SE Linux for details [1].