DOSing Windows Vista

Chris Samuel writes a good summary of Peter Gutmann’s analysis of the cost of Vista (in terms of DRM).

The following paragraph in the article, however, seemed the most interesting to me:
Once a weakness is found in a particular driver or device, that driver will have its signature revoked by Microsoft, which means that it will cease to function (details on this are a bit vague here, presumably some minimum functionality like generic 640×480 VGA support will still be available in order for the system to boot). This means that a report of a compromise of a particular driver or device will cause all support for that device worldwide to be turned off until a fix can be found.

Now just imagine that you want to cause widespread disruption – a DOS (Denial Of Service) attack against Windows users. What better option than to cause most of them to have hardware that’s not acceptable to the OS? I expect that there will be many instances of security holes in drivers and hardware being concealed by MS because they can’t afford the PR problems resulting from making millions of machines cease functioning. But just imagine that someone finds hardware vulnerabilities in a couple of common graphics drivers or pieces of hardware and publicly releases exploits shortly before a major holiday. If it’s public enough (EG posted to a few mailing lists and faxed to some major newspapers) then MS would be forced to invoke the DRM measures or lose face in a significant way. Just imagine the results of stopping 1/3 of machines from working just before Christmas!

Of course after the first few incidents people will learn. It shouldn’t be difficult to configure a firewall to prevent all access to MS servers so that they can’t revoke access, after all our PCs are important enough to us that we don’t want some jerk in Redmond just turning them off. Of course disabling connections to MS would also disable security updates – but we all know that usability is more important than security to the vast majority of users (witness the number of people who happily keep using a machine that they know to be infected with a virus or trojan).

If this happens and firewalling MS servers becomes a common action, I wonder if MS will attempt the typical malware techniques of using servers in other countries with random port numbers to get past firewalls. Maybe Windows updates could be spread virally between PCs; that method would even allow infecting machines that aren’t connected to the net, via laptops.

Finally, I recommend that people who are interested in such things read Eastern Standard Tribe by Cory Doctorow; he has some interesting ideas about people DOSing the corporations that they work for which seem surprisingly similar to what MS is doing to itself. I’ll post about my own observations of corporations DOSing themselves in the near future.

installing Debian Etch

A few days ago I installed Debian/Etch on my Thinkpad. One of the reasons for converting from Fedora to Debian is that I need to run Xen, and Fedora doesn’t support non-PAE machines with Xen. Ironically it’s hardware supplied to me by Red Hat (a Thinkpad T41p) that lacks PAE support and forces me to switch to Debian. I thought about just buying a new dual-core 64bit laptop, but that seems a bit extravagant as my current machine works well for everything else.

Feeling adventurous, I decided to use the graphical mode of the installer. I found it a little confusing: at each stage you can either double-click on an item or click on the continue button to cause the action to be performed. The partitioning section was a little unclear too, but given that it has more features than any other partitioning system I’ve seen I wasn’t too worried (the options of creating a degraded RAID array and of inserting a LUKS encryption layer at any level are really nice). The option to take a screen-shot at any time was also a handy feature (I haven’t yet inspected the PNG files to see what they look like).

Another nice feature was the way that the GUI restarts after a crash. While it was annoying that the GUI started crashing on me (and would have prevented a less experienced user from completing the install) the fact that it didn’t entirely abort meant that I could work around the problem.

I have not yet filed any bug reports against the installer because I have not done a repeatable install (there is a limit to how much testing I will do on my most important machine). In the next few days I plan to do a few tests of the graphical installer on test hardware for the operations that are important to me and file appropriate bug reports. I encourage others to do the same; the graphical mode of the installer and the new encryption and RAID features are significant improvements to Debian and we want them to work well.

I have realised that it won’t be possible to get SE Linux as good as I desire before the Etch release, even if the release is delayed again. I’m not sure how many fixes can go in after the release (I hope that we could move to a model similar to RHEL – but doubt that it will happen). So I now plan to maintain my own repository of Etch SE Linux packages and of other packages which need changes to work in the best possible manner with SE Linux. I will append something like “.se1” to the version of the packages in question; this means that they will be replaced if a security update is released for the official package. Apart from the SE Linux policy packages (for which any security updates will surely involve me) the changes I am going to make will not be major and will be of less importance than a security update.

I will also add other modified and new packages to my repository that increase the general security of Etch. Apart from SE Linux all the changes I intend to host will be minimal cost issues (IE they won’t break things or increase the difficulty of sys-admin tasks), and the SE Linux related changes will not break anything on non-SE systems. So someone who wants general security improvements without using SE Linux might still find my repository useful.

encryption speed – Debian vs Fedora

I’m in the process of converting my Fedora/rawhide laptop to Debian.

On Fedora the AES encrypted filesystems deliver about 38MB/s read speed according to dd. On Debian the speed is 2.4MB/s when running Xen and 2.7MB/s when not running Xen. The tests were done on the same block device.

Debian uses an SMP kernel (there are no non-SMP kernels in Debian), but I don’t expect this to give an order of magnitude performance drop. Both systems use i686 optimised kernels.

Update: As suggested I replaced the aes module with the aes_586 module. Unfortunately it made no apparent difference.

Update2: As suggested by a comment I checked the drive settings with hdparm and discovered that my hard drive was not using DMA. After I configured the initramfs to load the piix driver first it all started working correctly. Thanks for all the suggestions, I’ll post some benchmarks of encryption performance in a future blog entry.
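For anyone hitting the same problem, the fix amounted to listing the IDE driver in the initramfs module list and regenerating the initramfs. A sketch, assuming Debian’s initramfs-tools and an Intel PIIX IDE chipset (the driver name will differ on other hardware):

```
# /etc/initramfs-tools/modules
# load the PIIX IDE driver early so the disk comes up with DMA enabled
piix
```

Then run update-initramfs -u and check the result with hdparm -d /dev/hda (substitute your device name).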

some questions about disk encryption

On a mailing list some questions were asked about disk encryption; I decided to blog the answers for the benefit of others:

What type of encryption would be the strongest? the uncrackable if you will? im not interested in DES as this is a US govt recommendation – IDEA seems good but what kernel module implements this?

The US government (which incidentally employs some of the best cryptologists in the world) recommends encryption methods for data that is important to US interests (US military and banking operations for starters). Why wouldn’t you want to follow those recommendations? Do you think that they are putting back-doors in their own systems?

If they were putting in back-doors do you think that they would use them (and potentially reveal their methods) for something as unimportant as your data?

I think that if the US military wanted to apply a serious effort to breaking the encryption on your data then you would have an assortment of other things to worry about, most of which would be more important to you than the integrity of your data.

I’ve read some good things about keeping a usb key for system boot so that anything on the computer itself is unreadable without the key – but thats simply just a physical object – I’d like both the system to ask for the passphrase for the key as well as needing the usb key

I believe that can be done with LUKS, however it seemed broken last time I experimented with it so I’ve stuck with the older plain mode of cryptsetup.

What kind of overheads does something like this entangle? – will my system crawl because of the constant IO load of the disk?

My laptop has a Pentium-M 1.7GHz and a typical laptop drive. The ratio of CPU power to hard drive speed is reasonable. For most operations I don’t notice the overhead of encryption, the only problem is when performing CPU intensive IO operations (such as bzip compression of large files). When an application and the kernel both want to use a lot of CPU time then things can get slow.

More recent machines have a much higher ratio of CPU power to disk IO, as CPU technology has been advancing much faster than disk technology. A high-end desktop system might have 2-3x the IO capacity of my machine, but a single core would have 2-3x the compute power of the CPU in my laptop, and for any system you might desire nowadays 2 cores is the minimum. Single-core machines are still on sale and still work well for many people – I am still deploying Pentium-3 machines in new installations – but for machines that make people drool it’s all dual-core in laptops and one or two dual-core CPUs in desktop systems (with quad-core CPUs on sale soon).

If you want to encrypt data on a P3 system with a RAID array (EG a P3 server) then you should expect some performance loss. But for a typical modern desktop system you shouldn’t expect to notice any overhead.
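To estimate the CPU side of that ratio on a particular machine, it’s enough to time a cipher-like transform over a buffer in memory. A minimal sketch (Python, standard library only; SHA-256 stands in for AES since the stdlib has no block cipher, so treat the number as a rough proxy rather than a real AES rate):

```python
import hashlib
import time

def cpu_throughput_mb_s(total_mb: int = 64) -> float:
    # Time a cipher-like transform over an in-memory buffer.  SHA-256
    # stands in for AES here, so the result is a rough proxy for the
    # per-core rate of cipher work, not a real AES benchmark.
    buf = b"\0" * (1024 * 1024)
    digest = hashlib.sha256()
    start = time.monotonic()
    for _ in range(total_mb):
        digest.update(buf)
    elapsed = time.monotonic() - start
    return total_mb / elapsed

mb_s = cpu_throughput_mb_s()
print(f"~{mb_s:.0f} MB/s of cipher-like work per core")
```

Compare the printed figure against the drive’s sequential read speed (EG from dd or hdparm -t); if the CPU figure is several times higher, encryption overhead shouldn’t be noticeable in normal use.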

Debian SE Linux policy bug

Save the following policy as local.te:

module local 1.0;

require {
        class chr_file { read write };
        class fd use;
        type restorecon_t;
        type tmpfs_t;
        type initrc_t;
        type semanage_t;
        role system_r;
};

# allow restorecon to use the console (a chr_file on tmpfs) at boot
allow restorecon_t tmpfs_t:chr_file { read write };
# allow semanage_t (semodule) to use file descriptors inherited from initrc_t
allow semanage_t initrc_t:fd use;

Then run the following commands to build and load the module, which makes semodule work correctly and also allows restorecon to access the console on boot:

checkmodule -m -o local.mod local.te
semodule_package -o local.pp -m local.mod
semodule -u local.pp

SE Linux on Debian in 5 minutes

Following on from my 5 minute OSDC talk yesterday on 5 security improvements needed in Linux distributions, I gave a 5 minute talk on installing SE Linux on Debian Etch. To display the notes I formatted them into 24-line pages and used less at a virtual console to display them. The ultra-light laptop I was using has only 64M of RAM, which isn’t enough for a modern X environment, and I couldn’t be bothered getting something like Familiar going on it.

After the base install, you install selinux-basics and the policy package:

# apt-get install selinux-basics selinux-policy-refpolicy-targeted
The following extra packages will be installed:
checkpolicy libsemanage1 mime-support policycoreutils python python-minimal
python-selinux python-semanage python-support python2.4 python2.4-minimal
selinux-utils
Suggested packages:
python-doc python-tk python-profiler python2.4-doc logcheck syslog-summary
The following NEW packages will be installed:
checkpolicy libsemanage1 mime-support policycoreutils python python-minimal
python-selinux python-semanage python-support python2.4 python2.4-minimal
selinux-basics selinux-policy-refpolicy-targeted selinux-utils
0 upgraded, 14 newly installed, 0 to remove and 0 not upgraded.
Need to get 6362kB of archives.
After unpacking 41.5MB of additional disk space will be used.
Do you want to continue [Y/n]?

The package install process also configures the policy for the machine. The next step is to label the filesystems; this took 26 seconds on my Celeron 500MHz laptop with 20,000 files on an old IDE disk. The time is proportional to the number of files and is often bottlenecked on the CPU. A more common install might have five times as many files with a five times faster CPU, so 30 seconds is probably a typical labelling time. See the following:

# fixfiles relabel

Files in the /tmp directory may be labeled incorrectly, this command
can remove all files in /tmp.  If you choose to remove files from /tmp,
a reboot will be required after completion.

Do you wish to clean out the /tmp directory [N]? y
Cleaning out /tmp
/sbin/setfiles:  labeling files under /
matchpathcon_filespec_eval:  hash table stats: 14599 elements, 14245/65536 buckets used, longest chain length 2
/sbin/setfiles:  labeling files under /boot
matchpathcon_filespec_eval:  hash table stats: 19 elements, 19/65536 buckets used, longest chain length 1
/sbin/setfiles:  Done.

The next step is to edit /boot/grub/menu.lst to enable SE Linux and auditing and to put it in enforcing mode:

title   Debian GNU/Linux, kernel 2.6.17-2-686
root    (hd0,1)
kernel  /vmlinuz-2.6.17-2-686 root=/dev/x selinux=1 audit=1 ro enforcing=1
initrd  /initrd.img-2.6.17-2-686

Then reboot.

After rebooting, view the context of your shell; note that the login shell will have a domain of unconfined_t when the targeted policy is used:

# id -Z
system_u:system_r:unconfined_t

Now let’s view all processes that are confined:

# ps axZ |grep -v unconfined_t|grep -v kernel_t|grep -v initrc_t
LABEL                             PID TTY   STAT   TIME COMMAND
system_u:system_r:init_t            1 ?     Ss     0:02 init [2]
system_u:system_r:udev_t         1999 ?     S.s    0:01 udevd --daemon
system_u:system_r:syslogd_t      3306 ?     Ss     0:00 /sbin/syslogd
system_u:system_r:klogd_t        3312 ?     Ss     0:00 /sbin/klogd -x
system_u:system_r:apmd_t         3372 ?     Ss     0:00 /usr/sbin/acpid -c /etc
system_u:system_r:gpm_t          3376 ?     Ss     0:00 /usr/sbin/gpm -m /dev/i
system_u:system_r:crond_t        3402 ?     Ss     0:00 /usr/sbin/cron
system_u:system_r:local_login_t  3423 tty1  Ss     0:00 /bin/login --
system_u:system_r:local_login_t  3424 tty2  Ss     0:00 /bin/login --
system_u:system_r:getty_t        3425 tty3  Ss+    0:00 /sbin/getty 38400 tty3
system_u:system_r:getty_t        3426 tty4  Ss+    0:00 /sbin/getty 38400 tty4
system_u:system_r:getty_t        3429 tty5  Ss+    0:00 /sbin/getty 38400 tty5
system_u:system_r:getty_t        3430 tty6  Ss+    0:00 /sbin/getty 38400 tty6
system_u:system_r:dhcpc_t        3672 ?     S.s    0:00 dhclient3 -pf /var/run/

The initial install of policy inserts modules to match the installed software; if you install new software then you need to add new modules with the semodule command:

# semodule -i /usr/share/selinux/refpolicy-targeted/apache.pp
security:  3 users, 7 roles, 824 types, 67 bools
security:  58 classes, 11813 rules
audit(1165532434.664:21): policy loaded auid=4294967295
# semodule -i /usr/share/selinux/refpolicy-targeted/bind.pp
security:  3 users, 7 roles, 836 types, 68 bools
security:  58 classes, 12240 rules
audit(1165532467.874:22): policy loaded auid=4294967295

Note that the security and audit messages come from the kernel via printk; they are displayed on a console login, but you need to view the system log if you are logged in via ssh or running an xterm. Now you have to relabel the files that are related to the new policy:

# restorecon -R -v /etc /usr/sbin /var/run /var/log
restorecon reset /etc/bind context system_u:object_r:etc_t->system_u:object_r:named_zone_t
restorecon reset /etc/bind/named.conf context system_u:object_r:etc_t->system_u:object_r:named_conf_t
[...]
restorecon reset /etc/apache2 context system_u:object_r:etc_t->system_u:object_r:httpd_config_t
restorecon reset /etc/apache2/httpd.conf context system_u:object_r:etc_runtime_t->system_u:object_r:httpd_config_t
[...]
restorecon reset /usr/sbin/named context system_u:object_r:sbin_t->system_u:object_r:named_exec_t
restorecon reset /usr/sbin/apache2 context system_u:object_r:sbin_t->system_u:object_r:httpd_exec_t
restorecon reset /usr/sbin/rndc context system_u:object_r:sbin_t->system_u:object_r:ndc_exec_t
restorecon reset /usr/sbin/named-checkconf context system_u:object_r:sbin_t->system_u:object_r:named_checkconf_exec_t
[...]
restorecon reset /var/run/bind context system_u:object_r:var_run_t->system_u:object_r:named_var_run_t
restorecon reset /var/run/bind/run context system_u:object_r:var_run_t->system_u:object_r:named_var_run_t
restorecon reset /var/run/bind/run/named.pid context system_u:object_r:initrc_var_run_t->system_u:object_r:named_var_run_t
restorecon reset /var/run/motd context system_u:object_r:initrc_var_run_t->system_u:object_r:var_run_t
restorecon reset /var/run/apache2 context system_u:object_r:var_run_t->system_u:object_r:httpd_var_run_t
restorecon reset /var/run/apache2/cgisock.3558 context system_u:object_r:var_run_t->system_u:object_r:httpd_var_run_t
restorecon reset /var/run/apache2.pid context system_u:object_r:initrc_var_run_t->system_u:object_r:httpd_var_run_t
restorecon reset /var/log/apache2 context system_u:object_r:var_log_t->system_u:object_r:httpd_log_t
restorecon reset /var/log/apache2/error.log context system_u:object_r:var_log_t->system_u:object_r:httpd_log_t
restorecon reset /var/log/apache2/access.log context system_u:object_r:var_log_t->system_u:object_r:httpd_log_t

The -v option causes restorecon to give verbose output concerning its operations. Often you won’t use it in real work, but it’s good for illustrating what the command does.

Now you have to restart the daemons:

# killall -9 apache2
# /etc/init.d/apache2 start
Starting web server (apache2)....
# /etc/init.d/bind9 restart
Stopping domain name service...: bind.
Starting domain name service...: bind.

Apache and BIND now run in confined domains, see the following ps output:

system_u:system_r:httpd_t   3833 ?     Ss     0:00 /usr/sbin/apache2 -k start
system_u:system_r:httpd_t   3834 ?     S      0:00 /usr/sbin/apache2 -k start
system_u:system_r:httpd_t   3839 ?     Sl     0:00 /usr/sbin/apache2 -k start
system_u:system_r:httpd_t   3841 ?     Sl     0:00 /usr/sbin/apache2 -k start
system_u:system_r:named_t   3917 ?     Ssl    0:00 /usr/sbin/named -u bind

It’s not particularly difficult. I covered the actual install of SE Linux in about 1.5 minutes. I had considered just ending my talk there on a note of “it’s so easy I don’t need 5 minutes to talk about it” but decided that it was best to cover something that you need to do once it’s installed.

If you want to know more about SE Linux then ask on the mailing list (see http://www.nsa.gov/selinux for subscription details), or ask on #selinux on freenode.

OSDC

Yesterday I gave a presentation at OSDC in Melbourne about my Postal mail server benchmark suite. The paper was about my new benchmark program BHM for testing the performance of mail relay systems and some of the things I learned by running it. I will put the paper on my Postal site in the near future and also I’ll release a new version of Postal with the code in question very soon.

Today at OSDC I gave a 5 minute talk on 5 things that need to be improved in the security of Linux distributions.

  1. The fact that unprivileged programs often inherit the controlling tty of privileged programs which permits them to use the TIOCSTI ioctl to insert characters in the keyboard buffer. I noted that with runuser and a subtle change to su things have been significantly improved in this regard in Fedora, but other distributions need work (and Fedora can go further in this regard).
  2. A polyinstantiated /tmp should be an option that is easy for a novice sys-admin to configure. There have been too many attacks on data confidentiality and system integrity based on file name race conditions in /tmp; this needs to be fixed, and must be fixable by novice sys-admins.
  3. The capability system needs to be extended. 31 capabilities is not enough and the significant number of operations that are permitted by CAP_SYS_ADMIN leads to granting excessive privilege to programs.
  4. The use of Xen on servers such that a domU is always used for applications should become common. Then if a compromise is suspected there will be better options for investigation.
  5. SE Linux needs to be used more, particularly the strict policy and MCS. Use of the strict policy often reveals security flaws in other programs.
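For item 1, the TIOCSTI problem is easy to demonstrate: any process that shares a controlling tty can push characters into its input queue as if the user had typed them. A minimal sketch (Python, stdlib only; run it from an interactive shell to see the injected text appear on your command line):

```python
import fcntl
import os
import termios

def push_input(text: str) -> None:
    # TIOCSTI pushes each byte into the terminal's input queue as if the
    # user had typed it -- this is how a program that inherits a
    # privileged user's tty can make that user's shell run commands.
    for ch in text:
        fcntl.ioctl(0, termios.TIOCSTI, ch.encode())

if os.isatty(0):
    # on a real terminal this makes "echo injected" appear as typed input
    push_input("echo injected\n")
else:
    print("stdin is not a tty; run this from an interactive shell")
```

The fix discussed above is not to restrict the ioctl itself but to stop unprivileged programs inheriting a privileged tty in the first place (as runuser does).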

I’ll blog about each of these in detail at some future time.

biometrics and passwords

In a comment on my post more about securing an office someone suggested using biometrics. The positive aspect of biometrics is that they can’t be lost: no-one is accidentally going to leave a finger or an eye in their car while they go to a party, whereas other authentication devices are regularly lost in exactly that manner.

The down-side is that having your finger or eye stolen would be a lot less pleasant than having a USB device, swipe-card, key, or other security device stolen. I think that it’s good to have an option of surrendering your key when under threat (for the person who might be attacked at least).

Rumor has it that some biometric sensors look for signs of life (EG temperature and pulse), but I believe that these could be faked with a suitable amount of effort. A finger attached to a mini heart/lung machine should make it possible to pass the temperature and pulse checks (although I don’t think that I have access to any data that is important enough to justify such effort on the part of an attacker).

One thing that biometrics could be useful for is screen-blankers. It would be good to have a screen-blanker for your computer that operates when you go to get a coffee: for a period of 10 minutes after leaving, a biometric method could be used to re-enable access, and after that time a different method would have to be used. This gives the convenience of biometrics when you need it most (the many short trips away from your computer that you make during the day) but removes the benefit for an attacker who might consider removing part of your body.

Also I am not convinced of the general security of biometrics. There are claims that you can make a fake finger from a fingerprint which can fool a biometric sensor. If those claims are correct then a biometric sensor would still work for a coffee break – presumably you are not far away, will be back soon, and other people are in the area. Coffee-break security is usually about preventing casual snooping, such as colleagues who want to see what is on your screen but won’t actually do anything invasive to get it.

Another benefit of biometrics for a screen saver is that although I trust people in the same office as me (whichever office that may be) not to try anything when they might get caught, I still don’t want them shoulder-surfing my password. Replacing the trivial authentication cases with a fingerprint reader would prevent that.

In the KDE 1.x days I had a shell script launched when the lid of my laptop closed which would lock the screen (the screen-saver ran in the background and a signal could make it lock). This meant that I could merely close the lid to lock the screen, which is fast and easy and also not immediately recognised as locking the screen. Some people get offended if you lock your laptop screen in their presence, as they think that you should trust them enough to leave your most secret data open to them (generally people who aren’t serious about computers – I’m sure that the same people would happily lock their diary if I was ever in the same room as it). Being able to lock the screen in a non-obvious way is a security benefit.

Regarding the comment about using a USB device to store passwords, there are two problems with this. One is that all passwords will be available all the time, which means that a program permitted to access password A would also be given access to password B. The other is that the passwords can be accessed easily. The ideal solution is to have an encryption device that uses public key cryptography and stores the private keys on the device with no way of removing them. It would also permit the user to authorise each transaction.

I would like to see a USB device that stores multiple GPG keys and implements the GPG algorithm (with no way for anyone with less resources than the NSA to extract the keys). The device would have a display and a couple of buttons. When it is accessed it would display messages such as “signing attempt on key 1” and allow me to press a button to authorise or reject that operation.

This means that if I insert the key to sign an email I won’t have a background trojan start issuing sign and decrypt commands. The only viable attack that would be permitted is the case where I want to sign a message, my message is sent to /dev/null, and a message from an attacker is signed instead. The non-arrival of my original message would hopefully alert me to this problem. I am not aware of any hardware which supports these functions.

Also I have just received a couple of RSA SecurID tokens as a sample. An RSA representative phoned me to ask about my use of the tokens, I said that I am an independent consultant and I have been having trouble getting my clients to accept my recommendations to use such devices and that I want to implement them on a test network so that I can give more detailed advice to my clients and hopefully get them to improve their security. For some reason the RSA rep found that funny, but I got my sample hardware so it’s fine.

more about securing an office

My post about securing an office received many comments, so many that I had to write another blog entry to respond to them and also add some other things I didn’t think of before.

One suggestion was to use pam_usb to store passwords on a USB device. It sounds like it’s worth considering, but really we need public key encryption. I don’t want to have a USB device full of keys, I want a USB device that runs GPG and can decrypt data on demand – the data it decrypts could be a key to unlock an entire filesystem. One thing to note is that USB 2.0 has a bandwidth of 30MB/s while the IDE hard drive in my Thinkpad can sustain 38MB/s reads (at the start – it would be slower near the end). This means that I would approximately halve the throughput on large IOs by sending all the data to a USB device for encryption or decryption. Given that such bulk IO is rare this is feasible. There are a number of devices on the market that support public-key encryption, I would be surprised if any of them can deliver the performance required to encrypt all the data on a hard drive. But this will happen eventually.

Bill made a really good point about firewire. I had considered mentioning it in my post but refrained due to a lack of knowledge of the technology (it’s something that I would disable on my own machines but in the past I couldn’t recommend that others disable without more information). Could someone please describe precisely which 1394 (AKA Firewire) modules should be disabled for a secure system? If you don’t need Firewire then it’s probably best to just disable it entirely.

To enable encryption in Fedora Core 6 you need something like the following in /etc/crypttab:

home_crypt /dev/hdaX /path/to/key
swap /dev/hdaX /dev/random swap

Debian uses the same format for /etc/crypttab.

The Peregrine blog entry in response to my entry made some really good points. I wasn’t aware of what SUSE had done as I haven’t done much with SUSE in the past. I’m currently being paid to do some SUSE work so I will learn more about what SUSE offers, but given the SUSE/MS deal I’m unlikely to use it when I don’t have to. Before anyone asks, I don’t work for SUSE and given what they have just done I will have to reject any offer of employment that might come from them.

I had forgotten about rsh and telnet. Surely those protocols are dead now? I use telnet as a convenient TCP server test tool (netcat isn’t installed on all machines) and never use rsh. But Lamont was correct to mention them as there may be some people still doing such things.

The Peregrine blog made an interesting point about Kerberised NFS being encrypted, I wasn’t aware of this and I will have to investigate it. I haven’t used Kerberos in the past because most networks I work on don’t have a central list of accounts, and often there is no one trusted host.

I strongly disagree with the comment about iSCSI and AoE: “Neither protocol provides security mechanisms, which is a good thing. If they did, the additional overhead would affect their performance“. Lack of security mechanisms allows replay attacks. For example, if an attacker compromises a non-root account on a machine that uses such a protocol for its root filesystem, the victim might change their password but the attacker could change the data back to its original values even if it was encrypted.

Encryption needs to have sequence numbers embedded to be effective; this is well known. The current dmcrypt code (used by cryptsetup) encrypts each block with the block ID number so that blocks can not be re-arranged by someone who can’t decrypt them (a weakness of some earlier disk encryption systems). When block encryption is extended to a network storage system I believe that the block ID number needs to be used as well as a sequence ID number to prevent reordering of requests.

CPU performance has been increasing more rapidly than hard drive performance for a long time. Some fairly expensive SAN hardware is limited to 40MB/s (I won’t name the vendor here, but please note that it’s not a company that I have worked for); while there is faster SAN hardware out there, I think it’s reasonable to consider 40MB/s as adequate IO performance. A quick test indicates that the 1.7GHz Pentium-M CPU in my Thinkpad can decrypt data at a rate of 23MB/s. So to get reasonable speed with encryption from a SAN you might require a CPU twice as fast as the one in my Thinkpad for every client (which covers most desktop machines sold in the last two years and probably all new laptops other than the OLPC machine). You would also require a significant amount of CPU power at the server if multiple clients were to sustain such speeds. This might be justification for making encryption optional or for offering faster (and therefore less effective) algorithms as an option.
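The point about dmcrypt mixing the block number into the cipher can be shown with a toy model. This is not the real dmcrypt construction; it just derives a per-block keystream from a hypothetical key and the block ID using stdlib SHA-256, which is enough to show why ciphertext blocks can’t be re-arranged:

```python
import hashlib

KEY = b"example-key"  # hypothetical key, for illustration only

def keystream(key: bytes, block_id: int, length: int) -> bytes:
    # derive a per-block keystream from the key and the block number
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + block_id.to_bytes(8, "big") + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return out[:length]

def crypt_block(key: bytes, block_id: int, data: bytes) -> bytes:
    # XOR is its own inverse, so this both encrypts and decrypts
    return bytes(a ^ b for a, b in zip(data, keystream(key, block_id, len(data))))

blocks = [b"block-0-data....", b"block-1-data...."]
enc = [crypt_block(KEY, i, b) for i, b in enumerate(blocks)]

# an attacker swaps the two ciphertext blocks on disk
swapped = [enc[1], enc[0]]
dec = [crypt_block(KEY, i, b) for i, b in enumerate(swapped)]
print(dec[0] == blocks[1])  # False: the moved block decrypts to garbage
```

Because the block number is part of the cipher state, re-ordered blocks decrypt to garbage; a network storage protocol would additionally need a sequence number so that whole requests can’t be replayed or reordered.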

I believe that the lack of built-in security in the AoE and iSCSI protocols is a significant weakness in the security of the whole system which can’t be fully addressed. The CPU requirements for such encryption can be met with current hardware even when using a strong algorithm such as AES. There are iSCSI accelerator cards being developed; such cards could also have built-in encryption if there was a standard algorithm. This would allow good performance on both the client and the server without requiring the main CPU.

Finally the Peregrine blog entry recommended Counterpane. Bruce Schneier is possibly the most widely respected computer security expert. Everything he does is good. I didn’t mention his company in my previous post because it was aimed at people who are on a strict budget. I didn’t bother mentioning any item that requires much money, and I don’t expect Counterpane to be cheap.

Simon noted that developing a clear threat model is the first step. This is absolutely correct, however most organizations don’t have any real idea. When advising such organizations I usually just invent a few possible ways that someone with the same resources and knowledge as I might attack them and ask whether such threats seem reasonable, generally they agree that such things should be prevented and I give further advice based on that. It’s not ideal, but advising clients who don’t know what they want will never give an ideal result.

One thing that I forgot to mention is the fact that good security relies on something you have as well as something you know. For logging in it’s ideal to use a hardware security token. RSA sells tokens that display a pseudo-random number every minute; the server knows the algorithm used to generate the numbers and can verify that the number entered was generated in the last minute or two. Such tokens are sold at low prices to large corporations (I can’t quote prices, but one of my clients had prices that made them affordable for securing home networks); I will have to discover what their prices are for small companies and individuals (I have applied to evaluate the RSA hardware). Another option is a GPG smart-card; I already have a GPG card and just need to get a reader (this has been on my to-do list for a while). The GPG card has the advantage of being based on free software.

One thing I have believed for some time is that Debian should issue such tokens to all developers; I’m sure that purchasing ~1200 tokens would get a good price for Debian, and the security benefits are worth it. The use of such tokens might have prevented the Debian server crack of 2003 or the Debian server crack of 2006. The Fellowship of the Free Software Foundation Europe issues GPG cards to its members; incidentally the FSFE is a worthy organisation that I am considering joining.

a good security design for an office

day 32 of the beard

One issue that is rarely considered is how to deal with office break-ins aimed at espionage. I believe that this issue has been solved reasonably well for military systems, but many of the military solutions do not apply well to civilian systems – particularly the use of scary dudes with guns. Also most office environments don’t have the budget for serious security, so we want to improve things without extra cost. Finally the police aren’t interested in crimes where an office is burgled for small amounts of cash and items of minor value – it gets lost in the noise of junkie burglaries – so prevention is the only option.

Having heard more information about such break-ins than I can report, I’ll note a few things that can be done to improve the situation – some of which I’ve implemented in production.

The most obvious threat model is theft of hard drives. The solution is to encrypt all data on the drives. The first level of this is simply encrypting the partitions used for data; support for this is available in Fedora Core 6 and has been in Debian for some time. The more difficult feature is encrypting the root filesystem, which means that important system files such as /etc/shadow are encrypted and that an attacker can’t trivially subvert the system by replacing binaries. An unencrypted root filesystem on a machine that is left turned off overnight (or for which an unexpected reboot won’t be treated seriously) allows an attacker to remove the drive, replace important system files, and re-install it. If the machine is booted from removable media (EG a USB key) containing the kernel and the key for decrypting the root filesystem then such attacks are not possible. Debian/unstable supports an encrypted root filesystem, but last time I tried the installer there did not appear to be any good support for booting from USB (though given the flexibility of the installer I think it’s within the range of the available configuration options). I have run Fedora systems with an encrypted root filesystem for a few years, but I had to use some gross hacks that were not of a quality that would be accepted upstream. With the recent addition of support for encrypted filesystems in Fedora it seems likely that such patches could now be accepted – I would be happy to share my work with anyone who wants to do the extra work of making it acceptable for Fedora.

Once the data is encrypted on disk, the next thing you want to do is make the machines as secure as possible. This means keeping up to date with security patches even on internal networks. I think that a viable attack method is to install a small VIA-based system in the switch cabinet (no-one looks for new equipment appearing without explanation) that sniffs an internal (and therefore trusted) network and proxies it to a public network. This isn’t just an issue of securing applications; it also means avoiding insecure protocols such as NFS and AoE for data that matters to your secrecy or system integrity.

One option for using NFS securely is to encrypt the traffic with IPsec or a similar technology. AoE can be encrypted with cryptsetup in the same way you encrypt a hard drive partition; it doesn’t use IP so IPsec won’t work, but it presents a regular block device, so anything that encrypts block devices will work. I have been wondering how well replay attacks might work against an encrypted AoE or iSCSI device.

Security technologies such as SE Linux are good to have as well. An attacker who knows that a server has encrypted hard drives might try cracking it over the network instead. A thief who has stolen a laptop and knows that it has an encrypted drive can keep it running until future vulnerabilities are discovered in any daemons that accept data from the network (of course, with enough technology you could sniff the necessary data from the system bus and from RAM while it’s running – but most attackers won’t have such resources). I have considered running a program on my laptop that would shut it down if I didn’t login or un-blank the screen for a period of 48 hours; that would mean that if it was stolen then the thief would have 48 hours to try and crack it.
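The 48-hour dead-man switch could be sketched as below; the activity-file path, the idle limit, and the use of the `shutdown` command are my own illustrative assumptions (in practice the login and screen-unblank hooks would touch the activity file):

```python
# Sketch of the dead-man switch described above: if nobody has logged in or
# un-blanked the screen for 48 hours, power the machine off so a thief loses
# access to the running (decrypted) system.  Paths and commands are
# illustrative assumptions, not a tested implementation.
import os
import subprocess

IDLE_LIMIT = 48 * 3600  # seconds of inactivity before shutdown
ACTIVITY_FILE = "/var/run/last-user-activity"  # touched by login/unblank hooks

def seconds_idle(now: float, activity_file: str = ACTIVITY_FILE) -> float:
    """Time since the activity file was last touched."""
    return now - os.stat(activity_file).st_mtime

def check_and_shutdown(now: float, activity_file: str = ACTIVITY_FILE) -> bool:
    """Shut down if the idle limit has been exceeded; return True if triggered."""
    if seconds_idle(now, activity_file) > IDLE_LIMIT:
        subprocess.call(["shutdown", "-h", "now"])
        return True
    return False
```

A cron job running `check_and_shutdown(time.time())` every few minutes would be enough; the 48-hour limit trades convenience (surviving a long weekend) against how long a thief gets with a live system.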

Prevent access to hardware that you don’t need. If you allow the system to load all USB drivers then a bug in one of those drivers could be exploited to crack the machine. Remember that in a default configuration USB drivers are loaded when a device is inserted (which is under the control of an attacker), and the driver will process data from the attacker’s hardware (data of low integrity being handled by code that has ultimate privilege). Turning off all USB access is an option that I have implemented in the past. I have not found a convenient way of disabling all USB modules other than the few that I need; I have considered writing a shell script to delete the unwanted modules, to be run after upgrading my kernel package.
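The module-pruning idea above (the post imagines a shell script; this is the same thing sketched in Python) would walk the USB driver directory and delete everything outside a whitelist, so the kernel has nothing to auto-load for an attacker’s device. The whitelist and directory layout are illustrative assumptions:

```python
# Sketch: delete every USB driver module except a small whitelist, to be run
# after each kernel upgrade.  The whitelist here is an example only.
import os

KEEP = {"usb-storage.ko", "usbhid.ko"}  # the few modules actually needed

def prune_usb_modules(moddir: str, keep=KEEP, dry_run: bool = False) -> list:
    """Remove unwanted .ko files under moddir; return the paths removed."""
    removed = []
    for root, _dirs, files in os.walk(moddir):
        for name in files:
            if name.endswith(".ko") and name not in keep:
                path = os.path.join(root, name)
                removed.append(path)
                if not dry_run:
                    os.unlink(path)
    return sorted(removed)
```

Running it with `dry_run=True` against something like `/lib/modules/$(uname -r)/kernel/drivers/usb` first would show what it is about to delete before committing.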

Once these things have been done, the next issue is securing hardware. Devices that monitor key presses have been used to steal passwords. The only solution I can imagine for this is to use laptops on people’s desks and then store them in a safe overnight; unfortunately laptops are still quite a bit more expensive than desktop machines, and consequently they are mostly used as status symbols in offices. Please let me know if you have a better idea for solving the key-logging problem.

For servers there is also a problem with keyboard sniffing. Maybe storing the server’s keyboard in a safe would be a good idea.

Security monitoring systems are a good idea; unfortunately they can be extremely expensive. There has already been at least one recorded case of a webcam being used to catch a burglar, and I believe that this approach has a lot of potential. Set up a webcam server with some USB hubs and cameras and you can monitor a small office from all angles. When the office is empty you can have it GPG-encrypt pictures and send them off-site for review in the case of a burglary. You could also brick the server into a wall (or make it extremely physically secure in other ways) so that the full photo record would be available even if the phone lines were cut, and to give more pictures than the upload bandwidth of an ADSL link would allow (512Kb/s doesn’t allow uploading many pictures – nowhere near the capacity of a few high-resolution web-cams).
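The encrypt-and-ship spool could look like the sketch below: each snapshot is encrypted to a GPG key whose private half is held off-site, so even a burglar who finds the server can’t read or quietly alter the record. The recipient key ID, directory names, and `.jpg` convention are illustrative assumptions:

```python
# Sketch of the off-site picture spool described above.  Each new snapshot in
# the spool directory is encrypted with gpg to an off-site key, ready for a
# separate uploader to send.  Names here are examples, not a real setup.
import os
import subprocess

RECIPIENT = "offsite-review@example.com"  # hypothetical GPG key ID

def encrypt_cmd(src: str, dst: str, recipient: str = RECIPIENT) -> list:
    """Build the gpg command line to encrypt one snapshot."""
    return ["gpg", "--batch", "--yes", "--recipient", recipient,
            "--output", dst, "--encrypt", src]

def spool_snapshots(spool_dir: str, out_dir: str, run=subprocess.check_call) -> list:
    """Encrypt every .jpg in spool_dir into out_dir; return encrypted names."""
    done = []
    for name in sorted(os.listdir(spool_dir)):
        if not name.endswith(".jpg"):
            continue
        dst = os.path.join(out_dir, name + ".gpg")
        run(encrypt_cmd(os.path.join(spool_dir, name), dst))
        done.append(dst)
    return done
```

Because only the public key lives on the webcam server, compromising the server gains an attacker nothing about pictures already shipped.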

These are just a few random thoughts: some things I’ve done, some things I plan to do, and some that just sound like fun. I expect comments telling me that I have missed some things. I may end up writing a series of articles on this topic.

PS I’ve uploaded day 32 of the beard (which was taken yesterday). Last night at a LUV meeting I was asked to stand in front of the audience to show them my beard. I had imagined that they might have seen it enough through my blog, but apparently not.