cooling

Recently there has been some really hot weather in Melbourne that made me search for alternative methods of cooling.

The first and easiest method I discovered is to keep a 2L bottle of water in my car. After it's been parked in the sun on a hot day I pour the water over the windows. The energy required to evaporate water is about 2,500 Joules per gram, which means that the 500ml that probably evaporates from my car (I guess that the other 1.5L is spilt on the ground) removes about 1.25MJ of energy from my car – this makes a significant difference to the effectiveness of the air-conditioning (the glass windows being the largest hot mass that can easily conduct heat into the cabin).

It would be good if car designers could incorporate this feature. Every car has a system to spray water on the windscreen to wash it; if that could be activated without the wipers then it would cool the car significantly. Hatchbacks have the same on the rear window, and it would not be difficult at the design stage to implement the same for the side windows too.

The next thing I have experimented with is storing some ice in a room that can't be reached by my home air-conditioning system. Melting ice absorbs 333 Joules per gram. An adult who is not doing any physical activity will produce about 100W of heat, that is 360kJ per hour. Melting a kilo of ice absorbs 333kJ, and if the energy absorbed as the melt-water warms towards room temperature is factored in then a kilo of ice comes close to absorbing an hour of the heat output of an adult at rest. Therefore 10kg of ice stored in your bedroom will roughly offset your body heat during the course of a night.

In some quick testing I found that 10kg of ice in three medium sized containers would make a small room up to two degrees cooler than the rest of the house. The ice buckets also have water condense on them. In a future experiment I will measure the amount of condensation and try to estimate the decrease in humidity. Lower humidity makes a room feel cooler as sweat will evaporate more easily. Ice costs me $3 per 5kg bag, so for $6 I can make a hot night significantly more bearable. In a typical year there are about 20 unbearably hot nights in Melbourne, so for $120 I can make one room cooler on the worst days of summer without the annoying noise of an air-conditioner (the choice of not sleeping due to heat or not sleeping due to noise sucks).

The density of dry air at 0°C and a pressure of 101.325 kPa is 1.293 g/L.

A small bedroom might have a floor area of 3m × 3m and be 2.5m high, giving a volume of 22.5m^3 = 22,500L. 22,500 × 1.293 ≈ 29,092g of air.

Approximately one Joule is needed to raise the temperature of one gram of cool dry air by 1°C.

Therefore when a kilo of ice melts it should be able to cool the air in such a room by more than 10°C! The results I observe are much smaller than that; obviously the walls, floor, ceiling, and furnishings in the room also have thermal mass, and as the insulation is not perfect some heat will get in from other rooms and from outside the house.
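
A quick back-of-envelope check of that figure, using bc and only the numbers from above:

# energy absorbed by melting 1kg of ice divided by the heat capacity of the room's air
echo "scale=1; 333000 / 29092" | bc
# prints 11.4, IE about 11 degrees C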

If you have something important to do the next day then spending $6 or $12 on ice the night before is probably a good investment. It might even be possible to get your employer to pay for it; I'm sure that paying for ice would provide a better return in employee productivity than many things that companies spend money on.

Xephyr

As part of my work on Xen I've been playing with Xephyr (a replacement for Xnest). My plan is to use Xen instances for running different versions of desktop environments. You can't just ssh -X to a Xen image and run things. One problem is that some programs such as Firefox do strange things to try to ensure that you only have one instance running. Another problem is security: the X11 security extensions don't seem to do much good. A quick test indicates that an ssh -X session can't copy the window contents of an ssh -Y session, but can copy the contents of all windows run in the KDE environment. So this extension to X (and the matching ssh support) seems to do little good.

One thing I want to do is to have a Xen image for running Firefox with risky extensions such as Flash and keep it separate from my main desktop for security and manageability.

Xephyr :1 -auth ~/.Xauth-Xephyr -reset -terminate -screen 1280x1024

My plan is to use a command such as the above to run the virtual screen. That means to have a screen resolution of 1280×1024, to terminate the X server when the last client exits (both the -reset and the -terminate options are required for this), to be display :1 and listen with TCP (the default), and to use an authority file named ~/.Xauth-Xephyr.

xauth generate :1 .

The first problem is how to generate the auth file; the xauth utility is documented as doing this via the above command, but that actually connects to a running X server and copies the auth data from it.

The solution (as pointed out to me by Dr. Brian May) is to be found in the startx script. The way to do it is to use the add :1 . $COOKIE command in xauth to create the auth file used by the X server, and to generate the cookie with the mcookie program.

In ~/.ssh/config:
Host server
SendEnv DISPLAY

In /etc/ssh/sshd_config:
AcceptEnv DISPLAY

The next requirement is to tell the remote machine (which incidentally doesn't need to be a Xen virtual machine, it can be any untrusted host that contains X applications you want to run) which display to use. The first thing to do is to ssh to the machine in question and run the xauth program to add the same cookie as is used for the X server. Then the DISPLAY environment variable can be sent across the link by setting the ~/.ssh/config file at the client end to have the above settings (where server is the name of the host we will connect to via SSH) and by having the line AcceptEnv DISPLAY in the sshd_config file on the server to accept the DISPLAY environment variable. It would have been a little easier to configure if I had added the auth entry to the main ~/.Xauthority file and used the command DISPLAY=:1 ssh -X server; this would be the desired configuration when operating over an untrusted network. But when talking to a local Xen instance it gives better performance to not encrypt the X data.

The following script will generate an xauth entry, run a 1280×1024 resolution Xephyr session, and connect to the root account on machine server and run the twm window manager. Xephyr will exit when all X applications end. Note that you probably want to use passwordless authentication on the server as typing a password twice to start the session would be a drag.

#!/bin/sh

COOKIE=`mcookie`
FILE=~/.Xauth-Xephyr
rm -f $FILE
#echo "add 10.1.0.1:1 . $COOKIE" | xauth
ssh root@server "echo \"add 10.1.0.1:1 . $COOKIE\" | xauth"
echo "add :1 . $COOKIE" | xauth -f $FILE
Xephyr :1 -auth $FILE -reset -terminate -screen 1280x1024 $* &
DISPLAY=10.1.0.1:1 ssh root@server twm
wait

document storage

I have been asked for advice about long-term storage of documents. I decided to blog about it because my thoughts may be useful to others, and because if I get something wrong then surely people will correct me. ;)

Many organizations are looking at using computers for storing all documents. This gives savings on the costs of storing paper – the promise of the paperless office is being fulfilled. There are some potential issues about whether a signature in a PDF file that’s scanned from paper is valid – but I guess it’s the same as a signature on a FAX and everyone seems to accept FAXed contracts.

The technical problem is how to reliably store data long-term. The problem is that all modern methods of storing data will degrade over time. Anything less than engraving a message in stone, gold, or platinum and burying it will have some data loss eventually.

If the documents that need to be archived have no special requirements and if you have a good backup system in place (testing backup media, off-site storage in case of disaster, multiple sets of hardware that can read the backup media in case of hardware failure, etc) then you might be able to just store the documents on a server and include them in the backup plan. The regular backups should cater for replacing media over the long term. If however there is a significant amount of data or the data has confidentiality requirements that preclude having it all online all the time then you need a separate infrastructure for such storage.

Regular backup systems have to deal with files being deleted from storage and files that have their contents changed. For a document archiving system no file will ever be changed once it has been created and no file will be deleted. This allows some simplifications to the backup strategy. For example if you have multiple terabytes of documents backed up by tape and stored off-site you could use CD-ROMs or other media for storing recent changes. It would be very easy for an employee to grab a couple of CDs before rushing out of a burning building, but grabbing a set of tapes (or the correct tape from a large set) may not be possible.

It would be possible to use a tape library system as the primary storage for documents. If a large organization had been implementing this a few years ago that might have been a good option. Nowadays storage is getting increasingly large and cheap: terabytes are available in desktop PCs and hundreds of terabytes are available for server storage. So having the primary document store on a server with a decent amount of space and then making tape backups for storage in secure locations seems viable.

One thing to note about such document storage is that having everything on a server allows a much larger amount of data to be accessed and copied more easily than on paper. Sorting through a billion paper documents and copying the thousand most useful ones would be a difficult task for someone who was involved in industrial espionage. Finding the most useful files when they are indexed on a server should be quite easy and copying a few thousand is also easy (one thousand scanned documents of medium size should fit on a USB memory stick – much smaller than a few reams of copied documents).

Finally, documents have to be archived in publicly documented file formats that can be easily read in the future. The PDF specification is well known and there are multiple programs that can display data in such files; another good option for scanned documents is JPEG. Proprietary formats such as MS-Word should never be used – you never know whether you will be able to read them in four years, let alone the seven years for which many documents must be retained or the 20-30 years that some documents must be retained.

core files

The issue of core file management has come up for discussion again in the SE Linux list.

I believe that there are two essential security requirements for managing core files: one is that the complete security context of the crashing process is stored (to the greatest possible extent), and the other is that processes with different security contexts be prevented from discovering that a process dumped core (when attacking a daemon it would be helpful to know when you made one of its processes dump core).

The core file will have the same UID and GID as the process that crashed. It's impossible to fully preserve the security context of the crashing process in this manner, as a Unix process can have multiple supplementary groups while a file on a Unix filesystem has only one GID. So the supplementary groups are lost.

There is also a sysctl kernel.core_pattern which specifies the name of the core file. This supports a number of modifiers, EG the value “core.%p.%u.%g” would give a file named “core.PID.UID.GID“. It would be good to have a modification to the kernel code in question to allow the SE Linux context to be included in this (maybe %z).
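
As an illustration, the pattern from above could be set like this (the %p, %u, and %g modifiers are documented; %z is only a suggestion and does not exist yet):

sysctl -w kernel.core_pattern=core.%p.%u.%g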

To preserve the SE Linux context of the crashing process with current kernel code we need to have a unique type for each process that dumps core; this merely requires that each domain have an automatic transition rule for creating files in the directory chosen for core dumps. In the default configuration core files are dumped in the current directory of the process. This may be /tmp or some other common location, which allows an attacker to discover which process is dumping core (due to the directory being world readable), and in the case of SE Linux there may be multiple domains that are permitted to create files in /tmp with the same context, which gets in the way of using such a common directory for core files.

The traditional Unix functionality is to have core files dumped in the current directory. Obviously we can't break this by default. But for systems where security is desired I believe that the correct thing to do is to use a directory such as /var/core for core files. This can be easily achieved by creating the directory as mode 1733 (so that any user can create core files but no-one but the sys-admin can read them) and then setting the core_pattern sysctl to specify that all core files go in that directory. The next improvement is to have a poly-instantiated directory for /var/core such that each login user has their own version. That way the user in question could see the core files created by their own processes while system core files and core files for other users would be in different directories. Poly-instantiation is easier to implement for core files than it is for /tmp (and the other directories for which it is desirable) because there is much less access to such a directory. When things operate correctly core files are not generated, and users never need to access each other's core files directly (they are mode 0600 so this isn't possible anyway).
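
A minimal sketch of that setup (poly-instantiation not covered here):

# directory that any user can write to but only the sys-admin can list
mkdir -p /var/core
chmod 1733 /var/core
# send all core files there, named by PID, UID and GID
sysctl -w kernel.core_pattern=/var/core/core.%p.%u.%g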

This area will require a moderate amount of coding before it works in the ideal manner. In this post I only aim to briefly describe the issues.

monitors for developers

Michael Davies recently blogged that all developers should have big screens. This news has been around for a while; the most publicity for the idea came from Microsoft Research, where a study showed that for certain tasks a 50% performance increase could be gained from a larger monitor.

If you consider that a good software developer will get paid about $100K, and that it's widely regarded that in a corporate environment the entire cost of a worker (including management, office space, etc) is double their base salary, then each developer represents at least $200K per annum. Therefore larger and more monitors could potentially give a benefit in excess of $100K per annum (we assume that the value provided by the developer is greater than their salary – apart from the dot-com boom that's usually the case).

It’s quite obvious that you can get a really great monitor configuration for significantly less than $100K and that it will remain quite current for significantly more than a year (monitor technology advances comparatively slowly so a good monitor should last at least four years).

Some time ago I researched this matter for a client. I convinced all the managers in my area, and I convinced a bunch of colleagues to buy bigger monitors for their homes (and bought one myself), but unfortunately senior management saw it as a waste of money. I was trying to convince them that people who were being paid >$100,000 should each be assigned a $400 monitor. Sadly they believed that spending the equivalent of less than a day's wages per employee was not justified.

If I was in a management position I would allocate an amount of money for each developer to spend on hardware or conferences at their own discretion. I would make that amount of money a percentage of the salary of each employee, and I would also allow them to assign some of their share to a colleague if they had a good reason for it (EG if a new hire needed something expensive that would exceed their budget for the first year). I think that people who are good programmers are in the best position to judge what can best be done to improve their own productivity, and that allowing them to make their own choices is good for morale.

On a more technical level I have a problem with getting a big monitor. I do most of my work on a laptop because I travel a lot. I don’t travel as much as I did while living in Europe but still do a lot of coding in many strange places. I started writing my Postal benchmark in the hotel restaurant of a Bastion hotel in Utrecht during one of the worst recorded storms in Europe (the restaurant had huge windows and it was inspirational for coding). I wrote the first version of my ZCAV benchmark in Denver airport while waiting for a friend.

What I need is a good way of moving open windows from my laptop to a big external display and then back again. I don't want to logout before moving my machine. With a Macintosh this is quite possible (I'm using an OS X machine while working for a client and the only thing that has impressed me is the monitor support). With Linux things aren't so easy; it's supposed to be possible but I haven't heard any unqualified success stories yet.

I guess I could try setting up XDMCP between my laptop and a desktop machine with some big displays and logout before moving.

Any suggestions?

Update: here are some comments from the original version of this post at Blogspot:

  • Anonymous said:
    You want xrandr. The newest version, 1.2 (needed on client, server, and drivers), will allow you to extend your X session onto a new display and back off of it without reconfiguring anything. However, even the current version can switch you onto and off of an external monitor if you set it up in advance. Just set up the appropriate Virtual area, and the right set of modes to include and exclude the external monitor. When you have it set up right, you will see in xrandr’s list of modes both the mode to use the internal display only, with its resolution, and the mode to use both displays, with the combined resolution. For instance, I have a 1400×1050 internal LCD and a 1680×1050 external LCD, so I see modes for 1400×1050, 1680×1050, and 3080×1050.
  • Praveen Kumar said:
    What you are saying is absolutely true. I have experienced the improved productivity myself. I would recommend people running dual screen setup on their laptop. I am running dual screen setup where my laptop LCD runs a screen at 1024×768 and an external monitor (21″) runs other screen at 1600×1200 using xinerama. It paid well so far.
  • Anonymous said:
    I’m using “synergy” at home. I’ve 2 PCs and use both with just one keyboard/mouse, and I usually put my laptop between both screens, and an exit-hook of dhclient automatically starts synergy on my laptop if I’m at home so this third screen is just inserted between the two others..
  • Anonymous said:
    Because the DPI varies so greatly between displays (little high-res laptop vs. external so large that pixels are sizable), I think it’s necessary to configure each monitor with its own DPI in X. This requires a multiple X screen setup, xinerama won’t do.
  • Berge Schwebs Bjørlo said:
    You could take a look at Xdmx (http://dmx.sourceforge.net). It’s a multihead Xinerama-like X-server. Which basically means you can connect X-servers on different machines together and have xinerama-over-network. Pretty neat. People have been using it for making seriously large screens (http://www.evl.uic.edu/cavern/lambdavision/).
    Please give a shout if you get it working (-:
  • Lionel Porcheron said:
    I have experienced xrandr (which integrates quite well with Gnome) recentely and it works well. The only drawback is when you want to switch from “extend screen” with an external monitor to “mirror screen” for a presentation: you need to edit xorg.conf actually (If I am correct). Otherwise, it works quite well.
  • Søren Hansen said:
    You might want to take a peek at xmove. It’s kind of like screen, but for X.

Windows Vista

There's a blog about Windows Vista at the Free Software Foundation site. Not much content yet apart from RSS links but it should have some potential in future.

I am not planning on tracking Vista in detail (not enough time), but if you want to track such things then the FSF site should be useful.

Xen shared storage

disk = [ 'phy:/dev/vg/xen1,hda,w', 'phy:/dev/vg/xen1-swap,hdb,w', 'phy:/dev/vg/xen1-drbd,hdc,w', 'phy:/dev/vg/san,hdd,w!' ]

For some work that I am doing I am trying to simulate a cluster that uses fibre channel SAN storage (among other things). The above is the disk line I'm using for one of my cluster nodes: hda and hdb are the root and swap disks for a cluster node, hdc is a DRBD store (DRBD allows a RAID-1 to be run across the cluster nodes via TCP), and hdd is a SAN volume. The important thing to note is the "w!" mode for the device, which means that write access is granted even in situations where Xen thinks it's unwise (IE it's being used by another Xen node or is mounted on the dom0). I've briefly tested this by making a filesystem on /dev/hdd on one node, copying data to it, then umounting it and mounting it on another node to read the data.
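
A minimal sketch of that test, assuming the shared volume appears as /dev/hdd in both domUs as per the disk line above:

# on the first node: make a filesystem, write a file, unmount
mkfs.ext3 /dev/hdd
mount /dev/hdd /mnt
echo "written from node 1" > /mnt/testfile
umount /mnt
# on the second node: mount the same volume and read the file back
mount /dev/hdd /mnt
cat /mnt/testfile
umount /mnt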

There are some filesystems that support having multiple nodes mounting the same device at the same time, these include CXFS, GFS, and probably some others. It would be possible to run one of those filesystems across nodes of a Xen cluster. However that isn’t my aim at this time. I merely want to have one active node mount the filesystem while the others are on standby.

One thing that needs to be solved for Xen clusters is fencing. When a node of a cluster is misbehaving it needs to be denied access to the hardware in case it recovers some hours later and starts writing to a device that is now being used by another node. AFAIK the only way of doing this is via the xm destroy command, and probably the only way for a cluster node to trigger that is to ssh to the dom0 and then run a setuid program that calls xm destroy.
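
One possible sketch, using sudo rather than a custom setuid program (the fence account and the domU name xen2 are hypothetical):

# on the dom0, a sudoers entry such as:
#   fence ALL = NOPASSWD: /usr/sbin/xm destroy *
# would let a surviving cluster node fence the failed domU with:
ssh fence@dom0 sudo xm destroy xen2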

multiple ethernet devices in Xen

It seems that no-one has documented what needs to be done to correctly run multiple Ethernet devices (with one always being eth0 and the other always being eth1) in a Linux Xen configuration (or if it is documented then google wouldn’t find it for me).

vif = [ 'mac=00:16:3e:00:01:01', 'mac=00:16:3e:00:02:01, bridge=xenbr1' ]

Firstly I use a vif line such as the above in the Xen configuration. This means that there is one Ethernet device with the hardware address 00:16:3e:00:01:01 and another with the address 00:16:3e:00:02:01. I just updated this section: the 00:16:3e prefix has officially been allocated to the Xen project for virtual machines, therefore on your Xen installation you can do whatever you like with MAC addresses in that range without risk of conflicting with real hardware. The Xen code uses random MAC addresses in that range if you let it.

I have two bridge devices, xenbr0 and xenbr1. I only need to specify one as Xen can figure the other out.

Now when my domUs boot they assign Ethernet device names from the range eth0 to eth8. If there is only one virtual Ethernet device then it is always eth0 and things are easy. But for multiple devices I need to rename the interfaces.

eth0 mac 00:16:3e:00:01:01
eth1 mac 00:16:3e:00:02:01

This is done through the ifrename program (package name ifrename in Debian). I create a file named /etc/iftab with the above contents and then early in the boot process (before the interfaces are brought up) the devices will be renamed.

In the Red Hat model you edit the files such as /etc/sysconfig/networking/devices/ifcfg-eth0 and change the line that starts with HWADDR to cause a device rename on boot.
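
A hypothetical sketch of such a file; the HWADDR must match the MAC address from the vif line above, and the rest is just an example static configuration:

DEVICE=eth0
HWADDR=00:16:3e:00:01:01
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.1.0.2
NETMASK=255.255.255.0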

Update: the original version of this post used MAC addresses with a prefix of 00:00:00, the officially allocated prefix for Xen is 00:16:3e which I now use. Thanks to the person who commented about this.

installing Xen domU on Debian Etch

I have just been installing a Xen domU on Debian Etch. I’ll blog about installing dom0 later when I have a test system that I can re-install on (my production Xen machines have the dom0 set up already). The following documents a basic Xen domU (virtual machine) installation that has an IP address in the 10.0.0.0/8 private network address space and masquerades outbound network data. It is as general as possible.

lvcreate -n xen1 -L 2G /dev/vg

Firstly use the above command to create a block device for the domU; this can be a regular file but an LVM block device gives better performance. The above command is for an LV named xen1 on an LVM Volume Group named vg.

mke2fs -j /dev/vg/xen1

Then create the filesystem with the above command.

mount /dev/vg/xen1 /mnt/tmp
mount -o loop /tmp/debian-testing-i386-netinst.iso /mnt/cd
cd /mnt/tmp
debootstrap etch . file:///mnt/cd/
chroot . bin/bash
vi /etc/apt/sources.list /etc/hosts /etc/hostname
apt-get update
apt-get install libc6-xen linux-image-xen-686 openssh-server
apt-get dist-upgrade

Then perform the basic Debian install with the above commands. Make sure that you change to the correct directory before running the debootstrap command. The /etc/hosts and /etc/hostname files need to be edited to have the correct contents for the Xen image (the default is an empty /etc/hosts and a /etc/hostname that has the name of the parent machine). The file /etc/apt/sources.list needs to have the appropriate configuration for the version of Debian you use and for your preferred mirror. libc6-xen is needed to stop a large number of kernel warning messages on boot. It's a little bit of work before you get the virtual machine working on the network, so it's best to do these commands (and other package installs) before the following steps. After the above, type exit to leave the chroot and run umount /mnt/tmp.

lvcreate -n xen1-swap -L 128M /dev/vg
mkswap /dev/vg/xen1-swap

Create a swap device with the above commands.

auto xenbr0
iface xenbr0 inet static
pre-up brctl addbr xenbr0
post-down brctl delbr xenbr0
post-up iptables -t nat -F
post-up iptables -t nat -A POSTROUTING -o eth0 -s 10.1.0.0/24 -j MASQUERADE
address 10.1.0.1
netmask 255.255.255.0
bridge_fd 0
bridge_hello 0
bridge_stp off

Add the above to /etc/network/interfaces and use the command ifup xenbr0 to enable it. Note that this masquerades all outbound data from the machine that has a source address in the 10.1.0.0/24 range.

net.ipv4.conf.default.forwarding=1

Put the above in /etc/sysctl.conf, run sysctl -p and echo 1 > /proc/sys/net/ipv4/conf/all/forwarding to enable it.

cp /boot/initrd.img-2.6.18-5-xen-686 /boot/xen-initrd-18-5.gz

Set up an initial initrd (actually initramfs) for the domU with a command such as the above. Once the Xen domU is working you can create the initrd from within it which gives a smaller image.
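
A minimal sketch of doing that later (the kernel version is assumed to match the one installed above, and the dom0 address 10.1.0.1 is the bridge address from the network configuration):

# inside the running domU, regenerate the initramfs for the running kernel
update-initramfs -u -k $(uname -r)
# then copy the smaller image back to the dom0's /boot, EG:
scp /boot/initrd.img-$(uname -r) root@10.1.0.1:/boot/xen-initrd-18-5.gz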

kernel = "/boot/vmlinuz-2.6.18-5-xen-686"
ramdisk = "/boot/xen-initrd-18-5.gz"
memory = 64
name = "xen1"
vif = [ "" ]
disk = [ "phy:/dev/vg/xen1,hda,w", "phy:/dev/vg/xen1-swap,hdb,w" ]
root = "/dev/hda ro"
extra = "2 selinux=1 enforcing=0"

The above is a sample Xen config file that can go in /etc/xen/xen1. Note that this will discover an appropriate bridge device by default; if you only plan to have one bridge then it's quite safe, but if you want multiple bridges then things will be a little more complex. Also note that there are two block devices created as /dev/hda and /dev/hdb; obviously if we wanted to have a dozen block devices then we would want to make them separate partitions with a virtual partition table. But in most cases a domU will be a simple install and won't need more than two block devices.

xm create -c xen1

Now start the Xen domU with the above command. The -c option means to take the Xen console (use ^] to detach). After that you can login as root at the Xen console with no password – now is a good time to set one.

Run the command apt-get install udev; this could not be done in the chroot before as it might mess up the dom0 environment. Edit /etc/inittab and disable the gettys on tty2 to tty6. I don't know if it's possible to use them (the default and only option for the Xen console is tty1), and in any case you would not want six of them – saving a few getty processes will save some memory.
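
A minimal sketch of one way to do that, assuming the stock Etch inittab entries (of the form 2:23:respawn:/sbin/getty 38400 tty2):

# comment out the getty entries for tty2 through tty6
sed -i 's/^\([2-6]:.*getty\)/#\1/' /etc/inittab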

Now you should have a basically functional Xen domU. Of course a pre-requisite for this is having a machine with a working dom0 installation. But the dom0 part is easier (and I will document it in a future blog post).

free software liaison?

In my previous work as a sys-admin I have worked for a number of companies that depend heavily on free software. If you use a commercially supported distribution such as Red Hat Enterprise Linux then you get high quality technical support (much higher than you expect from closed-source companies), but this still doesn't provide as much as you might desire as it is reactive support (once you notice a problem you report it). Red Hat has a Technical Account Manager (TAM) offering that provides a higher level of support and there is also a Professional Services (GPS) organization that can provide customised versions of the software. But the TAM and GPS offerings are mostly aimed at the larger customers (they are quite expensive).

It seems to me that a viable option for companies with smaller budgets is to have an employee dedicated to enhancing free software and getting changes accepted upstream. For a company that has a team of 5+ sys-admins the cost of a developer dedicated to such software development tasks should be saved many times over by the greater productivity of the sys-admins and the greater reliability of the servers.

This is not to criticise commercial offerings such as Red Hat's TAM and GPS services; a dedicated free software developer could work with the Red Hat TAM and GPS people, thus allowing the company to get the most value for money from the Red Hat consultants.

If using a free distribution such as Debian the case for a dedicated liaison with the free software community is even stronger, as there is no formal support organization that compares to the Red Hat support (there are a variety of small companies that provide commercial support, but I am not aware of a 24*7 help desk or anything similar). If you have someone employed full-time as a free software developer then they can provide most of your support. It would probably make sense for a company that has mission critical servers running Debian to employ a Debian developer; a large number of Debian developers already work as sys-admins and finding one who is looking for a new job should not be difficult. There are more companies that would benefit from having DDs as employees than there are DDs, but this isn't an obstacle to hiring them as most hiring managers don't realise the technical issues involved.

This is not to say that a company which can’t hire a DD should use a different distribution, merely that their operations will not be as efficient as they might be.