presentations about SE Linux

I have just read the Presentation Zen blog post about PowerPoint.

One of the interesting suggestions was that it’s not effective to present the same information twice, so your slides shouldn’t just repeat what you say. Having a diagram that gives the same information is effective though, because it gives a different way of analyzing the data. I looked at a couple of sets of slides that I have written and noticed that the ratios of text slides to diagram slides were 6:1 and 3:1 in favor of text, and that wasn’t counting the first and last slides, which have the title of the talk and a set of URLs respectively.

So it seems that I need more and better diagrams. I’ll include most of the diagrams I use in my current SE Linux talks in this post with some ideas on how to improve them. I would appreciate any suggestions that may be offered (either through blog comments or email).

The above diagram shows how the SE Linux identity limits the roles that may be selected, and how the role limits the domains that may be entered. Therefore the identity controls what the user may do and in this example the identity “root” means that the user has little access to the machine (a Play Machine configuration). I think that the above is reasonably effective and have been using it for a few years. I have considered a more complex diagram with the “staff_r” role included as well and possibly including the way that “newrole” can be used to change between roles. So I could have the above as slide #1 about identities and roles with a more detailed diagram following to replace a page of text about role transition.
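
For reference, the relationship in the diagram maps to policy statements along these lines (a sketch only – the syntax is from the older example policy, and a real Play Machine configuration differs in detail):

```
# Illustrative sketch: the identity declaration limits which roles may
# be selected, and the role declaration limits which domains may be entered.
user root roles { user_r };
role user_r types user_t;
```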

The above diagram shows the domain transitions used in a typical system boot and login process. It includes the names of the types and a summary of the relevant policy rules used to implement the transitions. I also have another diagram that I have used which is the same but without the file types and policy. In the past I have never used both in the one talk – just used one of the two and had text to describe the information content of the other. To make greater use of diagrams I could start with the simple diagram and then have the following slide have all the detail.
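
The transitions in the diagram correspond to policy rules roughly like the following (a sketch using the domain_auto_trans() macro; the exact type names vary between policy versions):

```
# Illustrative: each rule says "when the first domain executes a file of
# the given type, the new process runs in the third domain".
domain_auto_trans(init_t, getty_exec_t, getty_t)
domain_auto_trans(getty_t, login_exec_t, local_login_t)
domain_auto_trans(local_login_t, shell_exec_t, user_t)
```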

The above diagram simply displays the MCS security model with ellipses representing processes and rectangles representing files.

The above diagram shows a simplified version of the MMCS policy. With MMCS each process has a range, with the low level representing the minimum category set of files to which it is permitted to write and the high level representing the files that it may read and write. So to write to a file with the “HR” category the process must have a low level that’s no higher than “HR” and a high level that is equal to or greater than “HR”. The full set of combinations of two categories with low and high levels means 10 different levels of access for processes, which makes for a complex diagram. I need something other than plain text for this, but the above diagram is overly complex and a full set is even more so. Maybe a table with process contexts on one axis, file contexts on another, and the access granted being one of “R”, “RW”, or nothing?
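
As a rough sketch of what such a table could look like (derived from the access rules described above, with categories “HR” and “Fin” – the process ranges shown are illustrative):

```
Process range       File {}   File {HR}   File {HR,Fin}
s0 - s0             RW        -           -
s0 - s0:HR          RW        RW          -
s0:HR - s0:HR       R         RW          -
s0 - s0:HR,Fin      RW        RW          RW
```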

I also have a MLS diagram in the same manner, but I now think it’s too awful to put on my blog. Any suggestions on how to effectively design a diagram for MLS? For those of you who don’t know how MLS works, the basic concept is that every process has an “Effective Clearance” (AKA low level) which determines what it can write: it can’t write to anything below that level because it might have read data from a file at its own level, and it can’t read from a level higher than its own level. MLS also uses a high level for ranged processes and filesystem objects (but that’s when it gets really complex).

This last one is what I consider my most effective diagram. It shows the benefits of SE Linux in confining daemons in a clear and effective manner. Any suggestions for improvement (apart from fixing the varying text size which is due to a bug in Dia) would be appreciated.

The above diagrams are all on my SE Linux talks page, along with the Dia files that were used to create them. They may be used freely for non-commercial purposes.

If anyone has some SE Linux diagrams that they would like to share then please let me know, either through a blog comment, email, or a blog post syndicated on Planet SE Linux.

what is a BOF?

BOF stands for Birds Of a Feather. It’s an informal session run at a conference, usually without any formal approval by the people who run the conference.

Often conferences have a white-board, wiki, or other place where conference delegates can leave notes for any reason, and one common use is arranging BOFs. To arrange a BOF you will usually write the title for the BOF and the name of the convener (usually yourself if it’s your idea) and leave a space for interested people to sign their names. Even though there is usually no formal involvement of the conference organizers, they will generally reserve some time for BOFs. Depending on the expected interest they will usually offer one or two slots of either 45 minutes or one hour, and they will also often assist in allocating BOFs to rooms. But none of this is needed. All that you need to do is find a notice-board, state your intention to have a BOF at a time when not much else is happening, and play it by ear!

My observation is that about half the ideas for BOFs actually happen; the rest don’t get enough interest. This is OK, as one of the reasons for a BOF is to have a discussion about an area of technology that has an unknown level of interest. If no-one is interested then you offer the same thing the next year. If only a few people are interested then you discuss it over dinner. But sometimes you get 30+ people. You never know what to expect, as many people don’t sign up – or have their first choice canceled and attend the next on the list!

To run a BOF you firstly need some level of expert knowledge in the field. I believe that the best plan is for a BOF to be a panel discussion where you have a significant portion of the people in the audience (between 5 and 15 people) speaking their opinions on the topic and the convener moderating the discussion. If things work in an ideal manner then the convener will merely be one member of the panel. However it’s generally expected that the person running the BOF can give an improvised lecture on the topic in case things don’t happen in an ideal manner. It’s also expected that the convener will have an agenda for a discussion drawn up so that if the panel method occurs they can ask a series of questions for members of the BOF to answer. My experience is that 8 simple questions will cover most of an hour.

One requirement for convening a BOF is that you be confident in speaking to an audience of unknown size, knowledge, and temperament. Although I haven’t seen it done, it would be possible to have two people acting as joint conveners of a BOF: one person with the confidence to handle the audience and manage the agenda, and another with the technical skills needed to speak authoritatively on the topic.

Some of the BOFs I have attended have had casual discussions, some have had heated arguments, and some ended up as lectures with the convener just talking about the topic. Each of these outcomes can work in terms of entertaining and educating the delegates.

But don’t feel afraid: one of the advantages of a BOF is that it’s a very casual affair, not only because of the nature of the event but also because it usually happens at the end of a long conference day. People will want to relax, not have a high-intensity lecture. One problem that you can have when giving a formal lecture to an audience is nervous problems such as hyper-ventilating. This has happened to me before and it was really difficult to recover while continuing the lecture. If it happens during a BOF then you can just throw a question to the audience such as “could everyone in the room please give their opinion on X”, which will give you time for your nerves to recover while also allowing the audience to get to know each other a bit – it’s probably best to have at least one such question on your agenda in case it’s needed.

Note that the above is my personal opinion based on my own experience. I’m sure that lots of other people will disagree with me and write blog posts saying so. ;)

The facts which I expect no-one to dispute are:

  • BOFs are informal
  • Anyone can run one
  • You need an agenda
  • You need some level of expert knowledge of the topic

heartbeat – what defines a cluster?

In Debian bug 418210 there is discussion of what constitutes a cluster.

I believe that the node configuration lines in the config file /etc/ha.d/ha.cf should authoritatively define what is in the cluster and any broadcast packets from other nodes should be ignored.
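
For reference, the relevant part of the configuration is just the node lines (a minimal sketch, using the host names from my test setup):

```
# /etc/ha.d/ha.cf (fragment) - in my opinion these node lines should be
# the authoritative definition of cluster membership.
node ha1
node ha2
bcast eth0
```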

Currently if you have two clusters sharing the same VLAN and they both use the same auth code then they will get confused about which node belongs to each cluster.

I set up a couple of clusters for testing (one Debian/Etch and the other Debian/unstable) under Xen using the same bridge device – naturally I could set up separate bridges – but why should I have to?

I gave each of them the same auth code (one was created by copying the block devices from the other – they have the same root password so there shouldn’t be a need for changing any other passwords). Then things all fell apart. They would correctly determine that they should each have two nodes in the cluster (mapping to the two node lines), but cluster 1 would get nodes ha1 and ha2-unstable even though it had node lines for ha1 and ha2.

I have been told that this is the way it’s supposed to be and I should just use different ports or different physical media.

I wonder how many companies have multiple Heartbeat installations on different VLANs such that a single mis-connected cable will make all hell break loose on their network…

questions regarding SE Linux

I just received a question about SE Linux via email. As I don’t want to post private messages containing material that’s globally useful I’ll answer through my blog:

> other than strict and targeted policies……other policies like
> RBAC, MCS, Type Enforcement are also there….how are these policies
> implemented

The two main policies are the strict policy and the targeted policy. The strict policy is the earliest and was originally known as the sample policy (but was given the name “strict” after targeted was developed).

The strict policy aims to give minimal privileges to all daemons. The targeted policy aims to restrict the programs that are most vulnerable (network facing daemons) and not restrict other programs (for ease of use). There is currently work in progress on combining those policies so the person who compiles the policy can determine which features of strict they desire.

RBAC means Role Based Access Control. The strict policy assigns users to roles and the role then limits the set of domains that can be entered. For example the user_r role does not permit the sysadm_t domain so a user who is only permitted to enter the user_r role can not perform sys-admin tasks. Like many terms RBAC is used in different manners, some people consider that it means direct control by role (EG role user_r can not write to /dev/hda), while SE Linux has a more indirect use of roles (role user_r can not run programs in domain sysadm_t or any other domain that allows writing to type fixed_disk_device_t – the type for /dev/hda). You may consider that the strict policy supports RBAC depending on which definition of the term you use.

Generally the targeted policy is not considered to support RBAC, although if you consider a role to merely be a container for a set of accesses that are permitted then a SE Linux domain could be considered a role in the RBAC sense. I don’t think of the targeted policy as being a RBAC implementation because all user sessions run in the domain unconfined_t, which has no restriction. I think that to be considered RBAC a system must confine user logins.

Type enforcement is the primary access control mechanism for SE Linux. Every object that a process may access (including other processes) has a type assigned to it. The type of a process is known as a domain. The system has a policy database which for every combination of domain, type, and object class (which is one of dir, file, blk_file, etc – all the different types of object that a process may access) specifies whether the action is permitted or denied (default deny) and whether it is audited (default is to audit all denied operations and not audit permitted operations).
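
A single type enforcement rule looks something like the following (the domain and type names here are chosen for illustration, not taken from any particular policy):

```
# Illustrative: permit the httpd_t domain to read files of type
# httpd_config_t; anything not explicitly allowed is denied.
allow httpd_t httpd_config_t:file { read getattr };
```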

MCS is a confidentiality protection mechanism where each file has a set of categories assigned to it. The set may be empty, may contain all 1024 categories, or any sub-set. Each process has a set of categories that determines which files it may access. File access is granted if Unix permissions allow it, if the domain-type model allows it, and if MCS allows it (on an MCS system). I have just had an article on MCS published in Linux Journal.
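
On an MCS system the category set appears as the last field of the security context, e.g. (an illustrative context, not output from a real system):

```
# A file with categories c2 and c5 might have a context like:
#   root:object_r:user_home_t:s0:c2,c5
# A process whose category set includes both c2 and c5 may access it
# (assuming Unix permissions and the domain-type policy also allow it).
```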

MCS is an optional feature for people compiling Linux from source or for distribution vendors. For Red Hat Enterprise Linux, Fedora, and Debian the decision was made to include it, so the strict and targeted policies for those distributions include MCS.

There is another policy known as MLS. This is a policy build that comprises the strict policy plus Multi-Level Security. Multi-Level Security aims to give the highest confidentiality protection and comply with the LSPP (Labeled Security Protection Profile – roughly comparable to B1) Common Criteria certification. It would be possible to build a targeted policy with MLS but that wouldn’t make sense – why have the highest protection of confidentiality with anything less than the highest protection of integrity?

As for how the policies are implemented, I’m not about to write a tutorial on policy writing for a blog post, I’m sure that someone will post a link to a Tresys or Fedora web page in the comments. ;)

> there r some packages of linux in which some changes has been made
> to support linux……for eg:- coreutils, findutils

That is correct. Every program that launches a process on behalf of a user at a different privilege level (EG /bin/login, sshd, and crond) and every program that creates files for processes running in different domains (EG logrotate creating new log files for multiple daemons) needs to be modified to support SE Linux. Also ls and ps were modified to show SE Linux contexts, as well as the obvious programs in coreutils.

> ‘Z’ is the new thing that have been added to most of the
> utilities……wherever I search I get the changes made only in few
> utilities like ps, mv, cp, ls
>
> Can u help me by giving all the changes made in each of the utilities…..

Unfortunately I can’t. This has been identified as an issue and there is currently work in progress to determine the best way of managing this.

death threats against Kathy Sierra

The prominent blogger and author Kathy Sierra has recently cancelled a tutorial at a conference after receiving death threats.

Obviously this is a matter for the police to investigate – and the matter has been reported to them.

It’s also an issue that is causing a lot of discussion on the net. The strange thing is that a large portion of the discussion seems based on the idea that what happened to Kathy is somehow unusual. The sexual aspect of the attacks on Kathy is bizarre but campaigns of death threats are far from unusual in our society. The first post I saw to nail this is the I had death threats in high school blog entry. Death threats and campaigns of intimidation are standard practice in most high schools. After children are taught that such things are OK for six years straight it’s hardly a surprise that some of them act in the same manner outside school!

But I don’t expect anything to change. Columbine apparently didn’t convince anyone who matters that there is a serious problem in high-schools, so I don’t expect anything else to.

I can clearly remember when I first heard about the Columbine massacre, a colleague told me about it and explained that he barracked for the killers due to his own experiences at high-school. While my former colleague probably had not given his statements much consideration, any level of support for serial-killers is something to be concerned about.

This is not to trivialise Kathy’s experience. But I think that discussion should be directed at more fundamental problems in society instead of one of the symptoms. If the causes are not addressed then such things will keep happening.

google-bank

Currently many people have Google advertising on their web sites; it may even be that a majority of serious Internet users host Google advertising. Given that Google is already writing a cheque every month to many people, it wouldn’t be difficult for them to change the amount in response to a funds transfer request. Depositing a cheque in a foreign currency can incur $25 in bank fees (that’s what the Commonwealth Bank of Australia charges me), which is a great impediment to international trade in small values. When Google already has an office in a country and writes cheques in the local currency it would be very easy to have that cheque include funds transfers too.

One significant advantage of Google payments relates to the fact that Google doesn’t write cheques for less than $100, so someone who earns $2 per month through Google adverts will be waiting a long time before they get a cheque – but if they could spend that accumulated balance on items that cost a small amount of money (EG an online service that costs a few dollars a month) then they would be enticed to use such services.

Currently many people don’t place Google adverts because they believe that it would take them an unreasonably large amount of time to reach $100US. But if they could spend the money in small increments on other online services then it would be more enticing.

It seems to me that Google is the only organization that is both capable of running an International online small-payments system and which would be trusted by most people.

If you like this idea please post a comment.

images for a web site

When I first started putting pictures on my web site I used to delete the originals (at the time I only had a 3.2G hard drive in my main machine and used CDs for backup so I didn’t feel inclined to waste too much space). The problem is that I optimised the images for viewing on displays of the day (when 1024×768 was high resolution and I tried to get pictures down to 800×600 or less whenever possible). Also the program I was using at the time for scaling the images didn’t do it nearly as well as the Gimp does now.

Now when putting pictures on my web site I keep the original JPEGs in a safe place so that if there are future changes to common display technology, net connection speed (particularly the speed of my server), or technology for scaling and compressing images then I can re-do them to get a better result.

When saving images with Gimp I enable “Advanced Options”, which allows me to set the floating-point DCT method. This saves about 400 bytes on disk and apparently gives better image quality too – it’s not noticeably slow on a Pentium-M 1.7GHz so they should probably make it the default. The next “Advanced” options to change are to turn off “Save EXIF data” (saves 1.9K) and “Save thumbnail” (can save almost 5K depending on the image).

The next thing to do when saving a JPEG is to enable the “Show Preview in image window” setting. This allows you to adjust the image quality while seeing the resulting image as well as its size, so you can determine which combination of file size and image quality is best for you. This is much easier than saving a file and then running an image viewing program to inspect it!

As an aside, it would be convenient if the Gimp would reposition its “Save as” dialogue so as not to occlude the image window, and would enable the preview option by default on machines with reasonably fast CPUs.

power saving

Adrian von Bidder made an interesting post in response to my post about Spanish wind power. He correctly points out that power sources that have seasonal variations and which may vary during the course of a day can not be used as the sole power source.

The ideal design would be to have wind power stations that are designed to have a peak power that is greater than the expected use for the country. Then when wind power is slightly below peak the entire use for the country could still be satisfied.

There are a number of power sources that can quickly ramp up, this includes hydro-electric and gas-fired power stations. Such forms of power generation could be used as backup for when wind and solar power are limited. Incidentally one thing to note about Solar power is that it is most effective during the day in summer – which is when there is the highest demand for electricity to run cooling systems. There is also an option for having the sun heat up rocks which can be used for generating electricity at night or at periods of peak demand. So eventually we could have all our energy needs supplied by solar and wind power.

If wind power was designed to exceed the demand at windy times there are a number of ways that it could be used. The first thing to do is to implement billing systems that vary the cost according to the supply. This information could be provided to customers via X10 (or a similar technology). Home appliances could take note of this information and perform power-hungry operations when it’s cheap. Your freezer could cool itself to -30C when electricity is cheap and allow the temperature to rise to -5C when it’s expensive. You could program your washing machine to start when electricity becomes cheap – usually a few hours delay before starting the washing is no inconvenience.

Ideally home power generation from solar and wind sources would be used. There is significant loss in the power lines that lead from power plants to the consumer, so there are efficiency benefits in generating power locally. A wind turbine for a home will give highly variable amounts of power, and the electricity use of a home also varies a lot. So batteries to store the power are required. When you have local battery storage you could use your batteries to power your home when electricity is expensive and use mains power when it’s cheap. Also if it was possible to feed power back to the main grid then home battery systems could be used to help power the main grid at expensive times (if the electricity company reimburses you for putting power back in the grid then you want such reimbursement to be done at the highest rate).

Adrian also mentioned turning devices off when leaving home. It is common practice in hotels that when entering your room you will insert your key in a holder by the door which acts as a master switch for all lights and some other electrical devices (such as the TV).

This same idea could be adopted for home use, not based on key storage (although that would be an option) but instead on a switch near the front door. Push a button and all lights turn off, as do human-focussed appliances such as the TV and DVD player (but not the VCR), etc. There could also be a night option which would turn off the TV, DVD player, and most lights. Obviously at night you want bedroom and bathroom lights to still work, but many things can be turned off.

This is all possible with today’s technology, small changes to usage patterns, and spending a little more money on technology. Currently you can get a basic solar power system for your house for about $10,000. That isn’t much when you spend $300,000 or more buying the house!

SE Linux on /.

The book SE Linux by Example has been reviewed on Slashdot.

The issue of Perl scripts was raised for discussion. It is of course true that a domain which is permitted to run the Perl interpreter can perform arbitrary system calls – it can therefore do anything that SE Linux permits that domain to do. This is in fact a demonstration of how SE Linux does the right thing! If you want to restrict what can be done when executing the Perl interpreter then you can have a domain_auto_trans() rule to have Perl run in a different domain.
Restricting Perl (as used by one particular program) is actually easier than restricting a complex application run by users such as Firefox. Users want to use Firefox for web browsing, local HTML file browsing, saving files that are downloaded from the web, running plugins, and more. Granting Firefox access to perform all those tasks means that it is not restricted from doing anything that the user can do.
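
Such a transition rule would look something like this (the domain and type names are invented for illustration):

```
# Illustrative: when mydaemon_t executes the Perl interpreter, run it in
# a separate restricted domain instead of inheriting mydaemon_t's access.
domain_auto_trans(mydaemon_t, perl_exec_t, mydaemon_script_t)
```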

A claim was made that novice users would not understand how to use SE Linux. The fact is that they don’t need to. I know many novice computer users who are running SE Linux systems, and it just works! It’s the more advanced users that have to learn about SE Linux, because they configure their machines more heavily.

The essential difference between path-based access control and Inode based access control is that the standard Unix commands to control file access (chmod, chown, and chgrp) all operate on Inodes. If a file has 1000 hard links then I can restrict access to all of them via a single chmod or chcon (the SE Linux command that is comparable to chmod) command. AppArmor does things differently and implements an access control model that is vastly different to the Unix traditions. SE Linux extends the Unix traditions with Mandatory Access Control.
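
A quick way to demonstrate the Inode-based behavior with standard tools (a sketch using plain chmod – chcon works the same way on an SE Linux system):

```shell
# Permissions live on the inode, not the name, so a change made through
# one hard link is visible through every other link to the same file:
tmpdir=$(mktemp -d)
touch "$tmpdir/report"
ln "$tmpdir/report" "$tmpdir/report-link"   # second name, same inode
chmod 600 "$tmpdir/report"                  # change the mode via one name
stat -c %a "$tmpdir/report-link"            # prints 600 for the other name
```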

Granting different levels of access to a file based on the name of the link which is used is a horror not a feature.

I wrote this as a blog entry rather than a /. comment because my lack of Karma means that fewer people will read my /. comments than my blog.

creating a new SE Linux policy module

Creating a simple SE Linux policy module is not difficult.

audit(1173571340.836:12855): avc: denied { execute } for pid=5678 comm=”spf-policy.pl” name=”hostname” dev=hda ino=1234 scontext=root:system_r:postfix_master_t:s0 tcontext=system_u:object_r:hostname_exec_t:s0 tclass=file

For example I had a server with the above message in the kernel message log, from the spf-policy program (run from Postfix) trying to run the “hostname” program. So I ran the following command to generate a .te file (SE Linux policy source):

dmesg|grep spf.policy|audit2allow -m local > local.te

The -m option to audit2allow instructs it to create a policy module. The local.te file is below:

module local 1.0;

require {
      class file execute;
      type hostname_exec_t;
      type postfix_master_t;
      role system_r;
};

allow postfix_master_t hostname_exec_t:file execute;

Then I used the following commands to create a policy module and package it:

checkmodule -M -m -o local.mod local.te
semodule_package -o local.pp -m local.mod

The result was the object file local.pp and an intermediate file local.mod (which incidentally can be removed once the build is finished).

After creating the module I used the following command to link it with the running policy and load it into the kernel:

semodule -i ./local.pp