
multiple ethernet devices in Xen

It seems that no one has documented what needs to be done to correctly run multiple Ethernet devices (with one always being eth0 and the other always being eth1) in a Linux Xen configuration (or if it is documented then Google wouldn’t find it for me).

vif = [ 'mac=00:16:3e:00:01:01', 'mac=00:16:3e:00:02:01, bridge=xenbr1' ]

Firstly, I use a vif line such as the above in the Xen configuration. This means that there is one Ethernet device with the hardware address 00:16:3e:00:01:01 and another with the address 00:16:3e:00:02:01. I just updated this section: the 00:16:3e prefix has officially been allocated to the Xen project for virtual machines. Therefore on your Xen installation you can do whatever you like with MAC addresses in that range without risk of competing with real hardware. The Xen code uses random MAC addresses in that range if you let it.
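If you want to generate such addresses yourself rather than letting Xen pick them, a quick sketch (assuming a POSIX shell with od available) is:

```shell
# Generate a vif entry with a random MAC in the 00:16:3e range that is
# allocated to the Xen project; the last three octets are random.
printf "mac=00:16:3e:%02x:%02x:%02x\n" $(od -An -tu1 -N3 /dev/urandom)
```
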

I have two bridge devices, xenbr0 and xenbr1. I only need to specify one as Xen can figure the other out.

Now when my domUs boot they assign Ethernet device names from the range eth0 to eth8. If there is only one virtual Ethernet device then it is always eth0 and things are easy. But for multiple devices I need to rename the interfaces.

eth0 mac 00:16:3e:00:01:01
eth1 mac 00:16:3e:00:02:01

This is done through the ifrename program (package name ifrename in Debian). I create a file named /etc/iftab with the above contents and then early in the boot process (before the interfaces are brought up) the devices will be renamed.

In the Red Hat model you edit the files such as /etc/sysconfig/networking/devices/ifcfg-eth0 and change the line that starts with HWADDR to cause a device rename on boot.
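As a sketch (the option names are from memory of the Red Hat scheme; check an existing ifcfg file on your system), the relevant part of such a file might look like:

```shell
# Sketch of /etc/sysconfig/networking/devices/ifcfg-eth0 - the HWADDR
# line ties the eth0 name to the first virtual NIC's MAC address.
DEVICE=eth0
HWADDR=00:16:3e:00:01:01
ONBOOT=yes
```
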

Update: the original version of this post used MAC addresses with a prefix of 00:00:00; the officially allocated prefix for Xen is 00:16:3e, which I now use. Thanks to the person who commented about this.


installing Xen domU on Debian Etch

I have just been installing a Xen domU on Debian Etch. I’ll blog about installing dom0 later when I have a test system that I can re-install on (my production Xen machines have the dom0 set up already). The following documents a basic Xen domU (virtual machine) installation that has an IP address in the 10.0.0.0/8 private network address space and masquerades outbound network data. It is as general as possible.

lvcreate -n xen1 -L 2G /dev/vg

Firstly, use the above command to create a block device for the domU; this can be a regular file, but an LVM block device gives better performance. The above command creates an LV named xen1 in an LVM Volume Group named vg.

mke2fs -j /dev/vg/xen1

Then create the filesystem with the above command.

mount /dev/vg/xen1 /mnt/tmp
mount -o loop /tmp/debian-testing-i386-netinst.iso /mnt/cd
cd /mnt/tmp
debootstrap etch . file:///mnt/cd/
chroot . /bin/bash
vi /etc/apt/sources.list /etc/hosts /etc/hostname
apt-get update
apt-get install libc6-xen linux-image-xen-686 openssh-server
apt-get dist-upgrade

Then perform the basic Debian install with the above commands. Make sure that you change to the correct directory before running the debootstrap command. The /etc/hosts and /etc/hostname files need to be edited to have the correct contents for the Xen image (by default /etc/hosts is empty and /etc/hostname has the name of the parent machine). The file /etc/apt/sources.list needs the appropriate configuration for the version of Debian you use and for your preferred mirror. libc6-xen is needed to stop a large number of kernel warning messages on boot. Getting the virtual machine working on the network takes a little work, so it’s best to run these commands (and any other package installs) before the following steps. After the above, type exit to leave the chroot and run umount /mnt/tmp.
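For example (a sketch only; substitute your preferred mirror), a minimal /etc/apt/sources.list for etch could be:

```
deb http://ftp.debian.org/debian/ etch main
deb http://security.debian.org/ etch/updates main
```
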

lvcreate -n xen1-swap -L 128M /dev/vg
mkswap /dev/vg/xen1-swap

Create a swap device with the above commands.

auto xenbr0
iface xenbr0 inet static
        address 10.1.0.1
        netmask 255.255.255.0
        pre-up brctl addbr xenbr0
        post-down brctl delbr xenbr0
        post-up iptables -t nat -F
        post-up iptables -t nat -A POSTROUTING -o eth0 -s 10.1.0.0/24 -j MASQUERADE
        bridge_fd 0
        bridge_hello 0
        bridge_stp off

Add the above to /etc/network/interfaces and use the command ifup xenbr0 to enable it. Note that this masquerades all outbound data from the machine that has a source address in the 10.1.0.0/24 range.

net.ipv4.conf.default.forwarding=1

Put the above in /etc/sysctl.conf, run sysctl -p, and run echo 1 > /proc/sys/net/ipv4/conf/all/forwarding to enable it immediately.

cp /boot/initrd.img-2.6.18-5-xen-686 /boot/xen-initrd-18-5.gz

Set up an initial initrd (actually an initramfs) for the domU with a command such as the above. Once the Xen domU is working you can create the initrd from within it, which gives a smaller image.

kernel = "/boot/vmlinuz-2.6.18-5-xen-686"
ramdisk = "/boot/xen-initrd-18-5.gz"
memory = 64
name = "xen1"
vif = [ "" ]
disk = [ "phy:/dev/vg/xen1,hda,w", "phy:/dev/vg/xen1-swap,hdb,w" ]
root = "/dev/hda ro"
extra = "2 selinux=1 enforcing=0"

The above is a sample Xen config file that can go in /etc/xen/xen1. Note that this will discover an appropriate bridge device by default; if you only plan to have one bridge then it’s quite safe, but if you want multiple bridges then things will be a little more complex. Also note that there are two block devices created as /dev/hda and /dev/hdb. Obviously if we wanted to have a dozen block devices then we would want to make them separate partitions with a virtual partition table, but in most cases a domU will be a simple install that won’t need more than two block devices.

xm create -c xen1

Now start the Xen domU with the above command. The -c option means to take the Xen console (use ^] to detach). After that you can login as root at the Xen console with no password; now is a good time to set the password.

Run the command apt-get install udev; this could not be done in the chroot before as it might mess up the dom0 environment. Edit /etc/inittab and disable the gettys on tty2 to tty6. I don’t know if it’s possible to use them (the default and only option for the Xen console is tty1), and in any case you would not want six of them; saving a few getty processes will save some memory.
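For illustration (these are the stock Debian getty lines of the time; the IDs and runlevels may differ on your system), the relevant part of /etc/inittab would end up looking like this with only tty1 left enabled:

```
1:2345:respawn:/sbin/getty 38400 tty1
#2:23:respawn:/sbin/getty 38400 tty2
#3:23:respawn:/sbin/getty 38400 tty3
#4:23:respawn:/sbin/getty 38400 tty4
#5:23:respawn:/sbin/getty 38400 tty5
#6:23:respawn:/sbin/getty 38400 tty6
```
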

Now you should have a basically functional Xen domU. Of course a pre-requisite for this is having a machine with a working dom0 installation. But the dom0 part is easier (and I will document it in a future blog post).

free software liaison?

In my previous work as a sys-admin I have worked for a number of companies that depend heavily on free software. If you use a commercially supported distribution such as Red Hat Enterprise Linux then you get high quality technical support (much higher than you expect from closed-source companies), but this still doesn’t provide as much as you might desire as it is reactive support (once you notice a problem you report it). Red Hat has a Technical Account Manager offering that provides a higher level of support and there is also a Professional Services organization that can provide customised versions of the software. But the TAM and GPS offerings are mostly aimed at the larger customers (they are quite expensive).

It seems to me that a viable option for companies with smaller budgets is to have an employee dedicated to enhancing free software and getting changes accepted upstream. For a company that has a team of 5+ sys-admins the cost of a developer dedicated to such software development tasks should be saved many times over by the greater productivity of the sys-admins and the greater reliability of the servers.

This is not to criticise commercial offerings such as Red Hat’s TAM and GPS services; a dedicated free software developer could work with the Red Hat TAM and GPS people, thus allowing the company to get the most value for money from the Red Hat consultants.

If using a free distribution such as Debian the case for a dedicated liaison with the free software community is even stronger, as there is no formal support organization that compares to Red Hat’s (there are a variety of small companies that provide commercial support, but I am not aware of a 24*7 help desk or anything similar). If you have someone employed full-time as a free software developer then they can provide most of your support. It would probably make sense for a company that has mission critical servers running Debian to employ a Debian developer; a large number of Debian developers already work as sys-admins, and finding one who is looking for a new job should not be difficult. There are more companies that would benefit from having DDs as employees than there are DDs, but this isn’t an obstacle to hiring them as most hiring managers don’t realise the technical issues involved.

This is not to say that a company which can’t hire a DD should use a different distribution, merely that their operations will not be as efficient as they might be.

memes that damage debian

The Debian project is afflicted with several damaging memes. One that is causing problems at the moment is the idea that life should be fair. Unfortunately life is inherently unfair, it’s not fair that those of us who were born in first-world countries (the majority of Debian developers) have so many more opportunities than most people who are born in “developing” countries, and things just continue to be unfair as you go through life. Unfair things will happen to you, deal with it and do what is necessary to have a productive life!

When one developer has regular long-term disputes with many other developers the conclusion is that the one person who can’t get along is at fault. We can debate whether one or two significant disputes meet the criteria for this or whether having a dozen developers disagreeing with them is the deciding point. But the fact is that if there is a large group of people who work together well and an individual who can’t work with any of them then there is only one realistic option: the individual needs to go and find some people that they can work with – they can resign or be expelled. The fact that something slightly unfair might have happened a year ago is no reason for pandering to an anti-social developer, and the fact that expelling a developer for being anti-social is unfair to them is no reason for damaging the productivity of all Debian developers.

Another problematic meme is the idea that we have to tolerate everyone – even those who are intolerant (known as the Limp Liberal meme). When someone has no tolerance for others (EG being racist or practicing sexual discrimination) then they have no place in a community such as Debian. They need to be removed sooner rather than later. All Debian developers know the problems caused by deferring such expulsion.

The final damaging meme that I have observed is you can’t force a volunteer to do any work. On its own that statement is OK, but the interpretation commonly used in Debian is you can’t take their job away from them either. The most common example of this is when a developer is not maintaining a package and someone else does an NMU (non-maintainer upload) to fix a bug (usually a severe one), and the developer then flames the person who did it. It seems to be believed that a Debian developer owns their packages and has a right to prevent other people from working on them. This attitude also extends to all other aspects of Debian; there are many positions of responsibility in Debian that are not being adequately performed and for which volunteers are offering to help out but being refused.

The idea of the GPL is that when a program is not being developed adequately it can be taken over by another person. However when that program is in a Debian package the developer who owns it can refuse to allow this.

music for children

Adam Rosi-Kessel made an interesting post about They Might Be Giants producing children’s music because their original fan base are now old enough to have children.

From casual inspection of the crowds at events such as Linux Conf AU it seems to me that many serious Linux people are also at the right age to have young children, and several blogs that are syndicated on Linux Planets provide evidence of this. Therefore it seems that there is a market for Linux related children’s music.

Many aspiring artists complain about the difficulty of establishing a reputation. I think that if someone was to release OGG and FLAC recordings of a children’s version of the Free Software Song under a Creative Commons license then they would get some immediate publicity through the blog space and Linux conferences which could then be used to drive commercial sales of children’s music.

While on the topic, it would be good to have a set of children’s songs and nursery rhymes to teach children from a young age about the community standards that we share in the Free Software community. There is no shortage of propaganda that opposes our community standards, the idea that sharing all music and software is a crime is being widely promoted to children.


SE Linux on Debian in 5 minutes

Following from my 5 minute OSDC talk yesterday on 5 security improvements needed in Linux distributions I gave a 5 minute talk on installing SE Linux on Debian etch. To display the notes I formatted them such that they were in 24 line pages and used less at a virtual console to display them. The ultra-light laptop I was using has only 64M of RAM which isn’t enough for a modern X environment and I couldn’t be bothered getting something like Familiar going on it.

After base install you install the policy and the selinux-basics package:

# apt-get install selinux-basics selinux-policy-refpolicy-targeted
The following extra packages will be installed:
checkpolicy libsemanage1 mime-support policycoreutils python python-minimal
python-selinux python-semanage python-support python2.4 python2.4-minimal
selinux-utils
Suggested packages:
python-doc python-tk python-profiler python2.4-doc logcheck syslog-summary
The following NEW packages will be installed:
checkpolicy libsemanage1 mime-support policycoreutils python python-minimal
python-selinux python-semanage python-support python2.4 python2.4-minimal
selinux-basics selinux-policy-refpolicy-targeted selinux-utils
0 upgraded, 14 newly installed, 0 to remove and 0 not upgraded.
Need to get 6362kB of archives.
After unpacking 41.5MB of additional disk space will be used.
Do you want to continue [Y/n]?

The package install process also configures the policy for the machine. The next step is to label the filesystems; this took 26 seconds on my Celeron 500MHz laptop with 20,000 files on an old IDE disk. The time is proportional to the number of files and is often bottlenecked on CPU. A more common install might have five times as many files on a machine with a five times faster CPU, so 30 seconds is probably a common time for labelling. See the following:

# fixfiles relabel

Files in the /tmp directory may be labeled incorrectly, this command
can remove all files in /tmp.  If you choose to remove files from /tmp,
a reboot will be required after completion.

Do you wish to clean out the /tmp directory [N]? y
Cleaning out /tmp
/sbin/setfiles:  labeling files under /
matchpathcon_filespec_eval:  hash table stats: 14599 elements, 14245/65536 buckets used, longest chain length 2
/sbin/setfiles:  labeling files under /boot
matchpathcon_filespec_eval:  hash table stats: 19 elements, 19/65536 buckets used, longest chain length 1
/sbin/setfiles:  Done.

The next step is to edit /boot/grub/menu.lst to enable SE Linux, enable auditing, and put it in enforcing mode:

title   Debian GNU/Linux, kernel 2.6.17-2-686
root    (hd0,1)
kernel  /vmlinuz-2.6.17-2-686 root=/dev/x selinux=1 audit=1 ro enforcing=1
initrd  /initrd.img-2.6.17-2-686

Then reboot.

After rebooting view the context of your shell, note that the login shell will have a domain of unconfined_t when the targeted policy is used:

# id -Z
system_u:system_r:unconfined_t

Now let’s view all processes that are confined:

# ps axZ |grep -v unconfined_t|grep -v kernel_t|grep -v initrc_t
LABEL                             PID TTY   STAT   TIME COMMAND
system_u:system_r:init_t            1 ?     Ss     0:02 init [2]
system_u:system_r:udev_t         1999 ?     S&lt;s    0:01 udevd --daemon
system_u:system_r:syslogd_t      3306 ?     Ss     0:00 /sbin/syslogd
system_u:system_r:klogd_t        3312 ?     Ss     0:00 /sbin/klogd -x
system_u:system_r:apmd_t         3372 ?     Ss     0:00 /usr/sbin/acpid -c /etc
system_u:system_r:gpm_t          3376 ?     Ss     0:00 /usr/sbin/gpm -m /dev/i
system_u:system_r:crond_t        3402 ?     Ss     0:00 /usr/sbin/cron
system_u:system_r:local_login_t  3423 tty1  Ss     0:00 /bin/login --
system_u:system_r:local_login_t  3424 tty2  Ss     0:00 /bin/login --
system_u:system_r:getty_t        3425 tty3  Ss+    0:00 /sbin/getty 38400 tty3
system_u:system_r:getty_t        3426 tty4  Ss+    0:00 /sbin/getty 38400 tty4
system_u:system_r:getty_t        3429 tty5  Ss+    0:00 /sbin/getty 38400 tty5
system_u:system_r:getty_t        3430 tty6  Ss+    0:00 /sbin/getty 38400 tty6
system_u:system_r:dhcpc_t        3672 ?     S&lt;s    0:00 dhclient3 -pf /var/run/

The initial install of policy inserts modules to match installed software, if you install new software then you need to add new modules with the semodule command:

# semodule -i /usr/share/selinux/refpolicy-targeted/apache.pp
security:  3 users, 7 roles, 824 types, 67 bools
security:  58 classes, 11813 rules
audit(1165532434.664:21): policy loaded auid=4294967295
# semodule -i /usr/share/selinux/refpolicy-targeted/bind.pp
security:  3 users, 7 roles, 836 types, 68 bools
security:  58 classes, 12240 rules
audit(1165532467.874:22): policy loaded auid=4294967295

Note that the security and audit messages come from the kernel via printk; they are displayed on a console login, but you need to view the system log if you are logged in via ssh or running an xterm. Now you have to relabel the files that are related to the new policy:

# restorecon -R -v /etc /usr/sbin /var/run /var/log
restorecon reset /etc/bind context system_u:object_r:etc_t->system_u:object_r:named_zone_t
restorecon reset /etc/bind/named.conf context system_u:object_r:etc_t->system_u:object_r:named_conf_t
[...]
restorecon reset /etc/apache2 context system_u:object_r:etc_t->system_u:object_r:httpd_config_t
restorecon reset /etc/apache2/httpd.conf context system_u:object_r:etc_runtime_t->system_u:object_r:httpd_config_t
[...]
restorecon reset /usr/sbin/named context system_u:object_r:sbin_t->system_u:object_r:named_exec_t
restorecon reset /usr/sbin/apache2 context system_u:object_r:sbin_t->system_u:object_r:httpd_exec_t
restorecon reset /usr/sbin/rndc context system_u:object_r:sbin_t->system_u:object_r:ndc_exec_t
restorecon reset /usr/sbin/named-checkconf context system_u:object_r:sbin_t->system_u:object_r:named_checkconf_exec_t
[...]
restorecon reset /var/run/bind context system_u:object_r:var_run_t->system_u:object_r:named_var_run_t
restorecon reset /var/run/bind/run context system_u:object_r:var_run_t->system_u:object_r:named_var_run_t
restorecon reset /var/run/bind/run/named.pid context system_u:object_r:initrc_var_run_t->system_u:object_r:named_var_run_t
restorecon reset /var/run/motd context system_u:object_r:initrc_var_run_t->system_u:object_r:var_run_t
restorecon reset /var/run/apache2 context system_u:object_r:var_run_t->system_u:object_r:httpd_var_run_t
restorecon reset /var/run/apache2/cgisock.3558 context system_u:object_r:var_run_t->system_u:object_r:httpd_var_run_t
restorecon reset /var/run/apache2.pid context system_u:object_r:initrc_var_run_t->system_u:object_r:httpd_var_run_t
restorecon reset /var/log/apache2 context system_u:object_r:var_log_t->system_u:object_r:httpd_log_t
restorecon reset /var/log/apache2/error.log context system_u:object_r:var_log_t->system_u:object_r:httpd_log_t
restorecon reset /var/log/apache2/access.log context system_u:object_r:var_log_t->system_u:object_r:httpd_log_t

The -v option to restorecon causes it to give verbose output concerning its operations. You often won’t use it in real use, but it’s good for illustrating what is being done.

Now you have to restart the daemons:

# killall -9 apache2
# /etc/init.d/apache2 start
Starting web server (apache2)....
# /etc/init.d/bind9 restart
Stopping domain name service...: bind.
Starting domain name service...: bind.

Apache and BIND now run in confined domains, see the following ps output:

system_u:system_r:httpd_t   3833 ?     Ss     0:00 /usr/sbin/apache2 -k start
system_u:system_r:httpd_t   3834 ?     S      0:00 /usr/sbin/apache2 -k start
system_u:system_r:httpd_t   3839 ?     Sl     0:00 /usr/sbin/apache2 -k start
system_u:system_r:httpd_t   3841 ?     Sl     0:00 /usr/sbin/apache2 -k start
system_u:system_r:named_t   3917 ?     Ssl    0:00 /usr/sbin/named -u bind

It’s not particularly difficult. I covered the actual install of SE Linux in about 1.5 minutes. I had considered just ending my talk there on a note of “it’s so easy I don’t need 5 minutes to talk about it” but decided that it was best to cover something that you need to do once it’s installed.

If you want to know more about SE Linux then ask on the mailing list (see http://www.nsa.gov/selinux for subscription details), or ask on #selinux on freenode.

some advice for job seekers

A member of the free software community recently sent me their CV and asked for assistance in getting a job. Some of my suggestions are globally applicable so I’m blogging them.

Firstly I recommend that a job seeker doesn’t publish their CV on the net in an obvious place. Often you want to give different versions to different people, and you don’t necessarily want everyone to know about the work you do. I can’t imagine any situation in which a potential employer might view a CV on the net if it’s available but not ask for one if it isn’t there. If you are intensively looking for work (IE you are currently between jobs) then I recommend having a copy of your CV in a hidden URL on your site. This means that if you happen to meet a potential employer you can give them a URL so that they can get your CV quickly, but the general public can’t view it. A final problem with publishing your CV is that it may cause disputes with former colleagues (EG if you describe yourself as the most skilled programmer in the team then a former colleague who believes themself to be more skillful might disagree).

Next, don’t put your picture on your CV. In some jurisdictions it’s apparently illegal for a hiring officer to consider your appearance. If there are many CVs put forward for the position then it may be easier to just discard yours because of this. There is absolutely no benefit to having the picture, unless of course you are applying for a job as an actor. Incidentally I’ve considered applying for work as a movie extra. The amount of effort involved is often minimal (EG pretend to drink beer in the back of a bar scene) and the pay is reasonable. It seems like a good thing to do when between computer contracts.

I write my CV in HTML and refuse to convert it. If a recruiting agent can’t manage to use IE to print my CV then they are not competent enough to represent me. If a hiring manager can’t manage to view my CV with IE then I don’t want to report to them. However I recommend against using HTML features that make a document act in any way unlike a word-processor file. There should be no frames or CSS files so there is only one file to email, and the text should be all on one page so the PGDN and PGUP keys can scroll through all the content. Tables, bold, and italic are good, fonts are a minor risk. Colors are bad.

Recruiting agents will often demand that your CV be targeted for the position that you are applying for. I often had complaints such as “I see only sys-admin skills not programming”. To solve this I wrote my CV in M4 and used a Makefile to generate multiple versions at the same time. If a recruiter wants a version of my CV emphasising C programming and using Linux then I’ve already got one ready!

These are just a few thoughts on the topic based on a CV that I just saw. I may write more articles about getting jobs in the computer industry if there is interest.

when you can’t get along with other developers

Many years ago I was involved in a free software development project with write access to the source tree. For reasons that are not relevant to this post (and which I hope all the participants would regard as trivial after so much time has passed) I had a disagreement with one of the more senior developers. This disagreement continued to the stage where I was threatened with expulsion from the project.

At that time I was faced with a decision: I could have tried to fight the process, and I might have succeeded and kept my position in that project. But doing so would have wasted a lot of time for many people, and might have caused enough productivity loss to outweigh my contributions to the project for the immediate future. That didn’t seem very productive.

So I requested that my write access to the source tree be removed as I was going to leave the project and unused accounts are a security risk.

I never looked back, I worked on a number of other projects after that time (of which SE Linux is one) and the results of those projects were good for everyone. If I had stayed in the project where things weren’t working out then it would have involved many flames, distraction from productive work for everyone, and generally not much good.

The reason I mention this now (after many years have passed) is because in another project I see someone else faced with the same choice I made who is making the wrong decision. The people who are on the same private mailing list as me will all know who I am referring to. The individual in question is apparently suffering health problems as a result of stress caused by their inability to deal with the situation where they can’t get along with other people.

My advice to that person was to leave gracefully and find something else to work on. If you don’t get along with people and make a big fuss about it then they will only be more glad when they finally get rid of you. Running flame-wars over a period of 6 months to try and get accepted by a team that you don’t get along with will not do any good, but it will convince observers that removing you is a good idea.


economics of a computer store (why they don’t stock what you want)

day 39 of the beard

In some mailing list discussions recently some people demonstrated a lack of knowledge of the economics of a shop. Having run a shop for a few years (an Internet Cafe) I have some practical knowledge of this. I will focus on small businesses in this article, but the same economic principles apply to large corporations too.

When running a shop the main problem you have is managing stock. There are two ways of getting stock. One is to have wholesalers give it to you for a period in which you can try to sell it, and you pay for it when it’s sold; this is probably quite rare (I don’t know of an example of it being done – and probably no retailer wants to talk about it in case they lose it). Often retailers consider themselves to be privileged if they are permitted to pay for hardware one month after they receive it! The more common way of getting stock is simply to buy it and hope you can sell it in a reasonable period of time (often the wholesaler will offer to buy the stock back at a 10% discount if you can’t sell it).

To buy stock you need money. This can come from money that has accrued in the business account (if things are going really well) or from a mortgage taken out by the business owner if things aren’t going so well. For small businesses things usually don’t go so well, so the money used to buy stock is borrowed at an interest rate of about 7% or 8% (I’m using numbers based on the current economic conditions in Australia; different numbers apply to different countries and different times but the same principles apply). The ideal situation is when there is money in the company bank account to cover the purchase of all stock; in that case the cost of owning stock is that you miss out on the 5.5% interest that the money would get in a term deposit.

Almost all stock has a use-by date of some form. Some items have a very short expiry (EG milk used to make hot chocolate in an Internet cafe), some have a moderate expiry date (computer systems become almost unsellable in about 18 months and lose value steadily month after month), but in the computer industry nothing has a long expiry date.

Let’s assume for the sake of discussion that you want to run a small computer store that is open to passing trade (this means that you must have stock for an immediate sale). Let’s assume that all items of computer hardware lose half their value over the period of 20 months at a steady rate of 2.5% of the original price per month (I think that most computer hardware loses value faster than that, but it’s just an assumption to illustrate the point).

The next major issue is the profit margin on each sale. If you can make a 20% profit on a sale then an item that has lost 10% of its value while gathering dust in your store will still be profitable. However the profit margins on computer sales are very small, due to a small number of major manufacturers (Intel, AMD, nVidia, ATI, Seagate, and WD) that have almost cartel positions in their markets and there being little to differentiate the stores apart from price. I have been told that 3% profit is typical for retail computer hardware sales by the small companies nowadays! Now if the stock loses 2.5% of its value per month, you pay 0.5% interest per month, and you make a 3% profit, then an item that remains in stock for a month loses you money. So on average (by value) you need to have stock spending significantly less than a month in your store. Cheap items such as low-quality cases and PSUs can stay in stock for a while. More expensive items such as new CPUs and the motherboards to house them must be moved quickly.
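The break-even arithmetic above can be sketched quickly (using the assumed figures: 3% margin, 2.5% per month depreciation, 0.5% per month interest):

```shell
# Months an item can sit in stock before the sale margin is consumed
# by holding costs: margin / (depreciation + interest) per month.
awk 'BEGIN {
    margin = 3.0        # percent profit on the sale
    depreciation = 2.5  # percent of value lost per month
    interest = 0.5      # percent per month cost of the borrowed money
    printf "break-even after %.1f months\n", margin / (depreciation + interest)
}'
```
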

What’s the first thing that you do to reduce stock? You can keep stocks low, but there is a limit to how low you can go without losing sales. The next thing to do is to not stock items that customers won’t often buy, or items for which there is a similar item that you can stock as a substitute. The classic example of this is hard drives: a customer will want a certain capacity for a certain price, and if their preferred brand is not in stock they will almost always take a different brand if it has the same capacity at the same price. Stores often advertise prices on multiple brands of hard drive in each capacity, but often only try to keep one brand in stock.

Of course this is a problem for the more fussy buyer. If you want to buy two identical parts from the same store on different days you might discover that they don’t have the stock on the second day and that they instead offer you something equivalent. Not only do retailers have issues with managing their investment in stock but wholesalers have the same problem. So if a retailer runs out of WD drives and discovers that their preferred wholesaler has also run out of WD drives then they just buy a different brand – most customers don’t care anyway.

There are some companies I deal with that have a business model based on services. One of them sells hardware to customers at cost, but charges them for the time spent assembling it, transporting it, etc. The potential for a 3% profit on the hardware isn’t worth pursuing; they prefer to just charge for work and also save themselves the sales effort. Another company I know operates almost exclusively on the basis of ordering parts when customers request them (but still makes a small profit margin on the sales), which means that the customer can be invoiced as soon as the hardware arrives. The down-side to this is that wholesalers have the same stock issues, and there are sometimes excessive delays before the wholesaler can deliver the hardware.

Dell is the real winner out of this. As they operate by mail-order they don’t need to have the stock immediately available, they have a few days to deliver it, which gives them time to arrange the supply. They can also have a central warehouse per region, which reduces the stock requirements again. A 3% profit on items that rapidly decrease in value makes it almost impossible to sustain a small business, but an organization such as Dell can sustain a successful business at that level.

Of course the down-side for the end-user is that Dell doesn’t want to have too many models as that just makes it more complex for the sales channel. Also they have deals with major suppliers which presumably give them deep discounts in exchange for not selling rival products (this is why some brands of parts are conspicuously absent from Dell systems).

10 years ago there used to be a small computer store in every shopping area. Now in Australia there are a few large stores (which often only have a small section devoted to computers) and mail-order. There seems to be much less choice in computer hardware than there was, but it is much cheaper.

PS I’ve attached a picture of day 39 of the beard.

more about securing an office

My post about securing an office received many comments, so many that I had to write another blog entry to respond to them and also add some other things I didn’t think of before.

One suggestion was to use pam_usb to store passwords on a USB device. It sounds like it’s worth considering, but really we need public key encryption. I don’t want to have a USB device full of keys, I want a USB device that runs GPG and can decrypt data on demand – the data it decrypts could be a key to unlock an entire filesystem. One thing to note is that USB 2.0 has a practical bandwidth of 30MB/s while the IDE hard drive in my Thinkpad can sustain 38MB/s reads (at the start of the disk – it would be slower near the end). This means that I would approximately halve the throughput on large IOs by sending all the data to a USB device for encryption or decryption. Given that such bulk IO is rare, this is an acceptable trade-off. There are a number of devices on the market that support public-key encryption, but I would be surprised if any of them can deliver the performance required to encrypt all the data on a hard drive. This will happen eventually though.
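A rough sketch of that bandwidth argument (my own back-of-envelope reading, using the figures quoted above and assuming each block has to cross the USB bus twice, once to the device and once back):

```python
# Back-of-envelope: throughput when all disk data is round-tripped
# through a USB 2.0 crypto device (figures in MB/s from the text).
usb_bandwidth = 30.0  # practical USB 2.0 throughput
disk_read = 38.0      # sustained read speed of the laptop IDE drive

# Each block is sent to the device and received back, so the USB bus
# carries the data twice and the effective rate is at best halved.
effective = min(disk_read, usb_bandwidth / 2)
print(effective)      # well under half the raw disk speed
```

So a USB crypto device would be the bottleneck for bulk IO, but would be fine for the occasional key-decryption operation.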

Bill made a really good point about firewire. I had considered mentioning it in my post but refrained due to a lack of knowledge of the technology (it’s something that I would disable on my own machines but in the past I couldn’t recommend that others disable without more information). Could someone please describe precisely which 1394 (AKA Firewire) modules should be disabled for a secure system? If you don’t need Firewire then it’s probably best to just disable it entirely.
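In the absence of a definitive answer, here is a hedged sketch of what disabling Firewire might look like on a 2.6-series kernel. The module names below are my assumption and should be checked against the output of lsmod on the machine in question:

```
# /etc/modprobe.d/blacklist-firewire (module names are assumptions,
# verify with "lsmod | grep 1394" before relying on this)
blacklist ieee1394
blacklist ohci1394
blacklist sbp2
blacklist raw1394
blacklist video1394
blacklist eth1394
```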

To enable encryption in Fedora Core 6 you need something like the following in /etc/crypttab:

home_crypt /dev/hdaX /path/to/key
swap /dev/hdaX /dev/random swap

Debian uses the same format for /etc/crypttab.

The Peregrine blog entry in response to my entry made some really good points. I wasn’t aware of what SUSE had done as I haven’t done much with SUSE in the past. I’m currently being paid to do some SUSE work so I will learn more about what SUSE offers, but given the SUSE/MS deal I’m unlikely to use it when I don’t have to. Before anyone asks, I don’t work for SUSE and given what they have just done I will have to reject any offer of employment that might come from them.

I had forgotten about rsh and telnet. Surely those protocols are dead now? I use telnet as a convenient TCP server test tool (netcat isn’t installed on all machines) and never use rsh. But Lamont was correct to mention them as there may be some people still doing such things.

The Peregrine blog made an interesting point about Kerberised NFS being encrypted, I wasn’t aware of this and I will have to investigate it. I haven’t used Kerberos in the past because most networks I work on don’t have a central list of accounts, and often there is no one trusted host.

I strongly disagree with the comment about iSCSI and AoE: “Neither protocol provides security mechanisms, which is a good thing. If they did, the additional overhead would affect their performance”. Lack of security mechanisms allows replay attacks. For example, if an attacker compromises a non-root account on a machine that uses such a protocol for its root filesystem, the victim might change their password but the attacker could change the data back to its original values even if it was encrypted. Encryption needs to have sequence numbers embedded to be effective, this is well known – the current dm-crypt code (used by cryptsetup) encrypts each block with the block ID number so that blocks cannot be re-arranged by someone who can’t decrypt them (a weakness of some earlier disk encryption systems). When block encryption is extended to a network storage system I believe that the block ID number needs to be used as well as a sequence ID number to prevent reordering of requests.

CPU performance has been increasing more rapidly than hard drive performance for a long time. Some fairly expensive SAN hardware is limited to 40MB/s (I won’t name the vendor here, but please note that it’s not a company that I have worked for); while there is faster SAN hardware out there I think it’s reasonable to consider 40MB/s adequate IO performance. A quick test indicates that the 1.7GHz Pentium-M CPU in my Thinkpad can decrypt data at a rate of 23MB/s. So to get reasonable speed with encryption from a SAN you might require a CPU twice as fast as the one in my Thinkpad for every client (which covers most desktop machines sold in the last two years and probably all new laptops other than the OLPC machine). You would also require a significant amount of CPU power at the server if multiple clients were to sustain such speeds. This might be justification for making encryption optional, or for offering faster (and therefore less effective) algorithms as an option.
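The CPU arithmetic above can be sketched as follows (the throughput figures are the ones quoted in the text; treating crypto throughput as roughly linear in clock speed is my simplifying assumption):

```python
# How much CPU is needed to keep up with a 40MB/s SAN when all data is
# encrypted? Figures from the text; linear clock scaling is assumed.
laptop_ghz = 1.7     # Pentium-M clock speed
laptop_crypt = 23.0  # measured decryption rate in MB/s
san_rate = 40.0      # target SAN throughput in MB/s

scale = san_rate / laptop_crypt    # how much more crypto throughput is needed
required_ghz = laptop_ghz * scale  # roughly twice the laptop's clock speed
print(round(scale, 2), round(required_ghz, 2))
```

The required scale factor comes out at about 1.74, i.e. a client CPU around 3GHz, which is why recent desktop machines could handle it while the server side remains the harder problem.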

I believe that the lack of built-in security in the AoE and iSCSI protocols is a significant weakness in the security of the system, and one which can’t be fully addressed by other means. The CPU requirements for such encryption can be met with current hardware even when using a strong algorithm such as AES. There are iSCSI accelerator cards being developed; such cards could also have built-in encryption if there was a standard algorithm. This would allow good performance on both the client and the server without requiring the main CPU.

Finally the Peregrine blog entry recommended Counterpane. Bruce Schneier is possibly the most widely respected computer security expert. Everything he does is good. I didn’t mention his company in my previous post because it was aimed at people who are on a strict budget. I didn’t bother mentioning any item that requires much money, and I don’t expect Counterpane to be cheap.

Simon noted that developing a clear threat model is the first step. This is absolutely correct; however most organizations don’t have any real idea of theirs. When advising such organizations I usually just invent a few ways that someone with my resources and knowledge might attack them and ask whether such threats seem reasonable; generally they agree that such things should be prevented and I give further advice based on that. It’s not ideal, but advising clients who don’t know what they want will never give an ideal result.

One thing that I forgot to mention is the fact that good security relies on something you have as well as something you know. For logging in it’s ideal to use a hardware security token. RSA sells tokens that display a pseudo-random number every minute, the server knows the algorithm used to generate the numbers and can verify that the number entered was generated in the last minute or two. Such tokens are sold at low prices to large corporations (I can’t quote prices, but one of my clients had prices that made them affordable for securing home networks), I will have to discover what their prices are to small companies and individuals (I have applied to evaluate the RSA hardware). Another option is a GPG smart-card, I already have a GPG card and just need to get a reader (this has been on my to-do list for a while). The GPG card has the advantage of being based on free software.

One thing I have believed for some time is that Debian should issue such tokens to all developers; I’m sure that purchasing ~1200 tokens would get a good price for Debian, and the security benefits are worth it. The use of such tokens might have prevented the Debian server crack of 2003 or the Debian server crack of 2006. The Free Software Foundation Fellowship of Europe issues GPG cards to its members; incidentally the FSFE is a worthy organisation that I am considering joining.