Storing a GPG key

Chris Lamb has suggested storing a GPG key on a RAID-5 device [1]. The idea is that it can be stored on several physical block devices such that losing just one will not give the key to an attacker.

A default GPG secret key will be about 1.2K in size (3 sectors of a hard drive). A minimal key (with a 1024-bit DSA keypair) will be about 0.9K (2 sectors). I expect that very few people have secret keys greater than 4K in size.

To create a software RAID-5 device under Linux you use the mdadm tool. The default chunk size is 64K, so a 1.2K file will almost certainly reside on a single device. The -c option of mdadm lets you specify a smaller chunk size, but the smallest value accepted is 4K, which still allows a default GPG secret key to fit within a single chunk on a single device. The Ext2 and Ext3 filesystems align file data to a 4K boundary unless the filesystem uses 1K blocks, which happens by default only on devices below a certain size (or when a special mkfs option is used). So only with an Ext2 or Ext3 filesystem using 1K blocks might a 1.2K file end up split across multiple 4K RAID chunks.
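For reference, creating such an array with the smallest accepted chunk size looks roughly like this (the device names and partition layout are illustrative assumptions, not from the original post):

```shell
# Create a 3-disk RAID-5 array with the smallest chunk size that
# mdadm accepts (4K). Device names are examples only.
mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1

# The special mkfs option mentioned above: force a 1K filesystem
# block size on an Ext3 filesystem.
mke2fs -j -b 1024 /dev/md0
```

Both commands require root and real block devices, so treat this as a sketch of the invocation rather than something to paste in blindly.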

So storing a GPG key on RAID-5 won’t prevent an attacker who steals one member device from getting the most valuable data. It will make things more inconvenient for them (if you are lucky it will prevent them getting all the data), and it will also make it difficult for the owner of the GPG key to determine which of the devices actually contains the secret data (probably all of them will end up holding copies if you edit the secret key).

Now even if RAID-5 did allow chunk sizes smaller than the secret key, or if you have Ext2/3 with 1K blocks and get lucky with file fragmentation, the problem still isn’t solved. The reason is that you don’t need N-1 of the N disks to get some useful data out of a RAID-5 array (run strings on one member of a RAID-5 array to verify this). A single disk on its own will have some usable data, and as file(1) can recognise GPG secret keys you could simply copy 1K chunks of data into separate files and use file to determine which (if any) contains the data in question.
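The single-disk scan described above can be sketched in a few lines of Python. The test for a 0x95 byte is based on the fact that an old-format OpenPGP secret-key packet (tag 5) begins with that octet, which is essentially what file(1) keys on; the function name and chunk size are illustrative.

```python
# Sketch: scan a raw image of one RAID member in 1K chunks, looking
# for chunks that begin with an OpenPGP secret-key packet. 0x95 is
# the first byte of an old-format packet with tag 5 (secret key).
CHUNK = 1024  # 1K chunks, as in the attack described above

def find_secret_key_chunks(data):
    """Return offsets of 1K chunks that start with a secret-key packet."""
    hits = []
    for offset in range(0, len(data), CHUNK):
        if data[offset:offset + 1] == b"\x95":
            hits.append(offset)
    return hits
```

In practice you would read the raw member device rather than a bytes object, and run file(1) on each candidate chunk to confirm.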

The really exciting question is: what do you get if you have the first 1K of a 1.2K GPG secret key? If it could be proved that the first 1K gives an attacker no advantage then this approach might provide some benefit. But I think that is a very dubious assumption; when dealing with such things it’s best to assume the worst. Assume that an attacker who has 1K out of 1.2K of secret data can reconstruct the rest. In that case Linux kernel RAID-5 provides no benefit for storing a GPG secret key.

Just try not to get the devices that contain your secret data stolen. Maybe a watch with a built-in USB device is a good idea. Thieves seem to be targeting mobile phones instead of watches nowadays, and something that’s strapped to your wrist is difficult to lose.

12 comments to Storing a GPG key

  • Simon McVittie

    libgfshare gets this splitting behaviour right (on a per-byte level, and accompanied by a proof that it works), and is in Debian unstable (I maintain it).

  • James Utter

    libgfshare is designed specifically for the purpose of splitting a secret key into redundant pieces.

  • Jan Hudec

    What would work, but requires some extra management is this:

    – from /dev/random, read a file of the same length as the key and store it in one file on one device
    – xor the file with the key and store the result on another device
    – when you need the key, xor the two files and store the decoded key on tmpfs so it is not written to disk anywhere
    – this way if the adversary gets either of the two files, it’s completely useless to them
    – such solution obviously requires some extra tool to reconstruct the key for GPG
    – can be extended to more than 2 files and getting any subset of them would still be completely useless
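Jan’s two-file scheme is essentially a one-time pad and can be sketched in Python as follows (the function names are my own). Each share on its own is uniformly random, so it reveals nothing about the key.

```python
import os

def split_key(key):
    """Split a secret into two shares: a random pad (device 1)
    and key XOR pad (device 2)."""
    pad = os.urandom(len(key))
    masked = bytes(a ^ b for a, b in zip(key, pad))
    return pad, masked

def join_key(pad, masked):
    """XOR the two shares back together; as the comment says, write
    the result only to tmpfs so it never touches a disk."""
    return bytes(a ^ b for a, b in zip(pad, masked))
```

Extending this to N shares just means XORing in N-1 random pads; any proper subset remains uniformly random.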

  • Steven

    The problems you describe can easily be worked around by padding every byte of the key with 15 bytes of random data: basically the opposite of a compression algorithm, a “blowup” algorithm. I do not know if there are standard tools for this purpose in Linux, but it can of course be easily scripted in YFSL (your favorite scripting language)….

    Still, it is of course all proof-of-concept thinking here (-:
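Steven’s “blowup” idea can be sketched like this, assuming the padding is interleaved per byte; the 16:1 ratio comes from his comment and the function names are illustrative.

```python
import os

RATIO = 16  # each real byte is followed by 15 random bytes

def blow_up(key):
    """Pad every key byte with 15 random bytes, so any 1K fragment
    of the output holds at most 64 real bytes of the key."""
    return b"".join(bytes([b]) + os.urandom(RATIO - 1) for b in key)

def deflate(blob):
    """Recover the key by taking every 16th byte."""
    return blob[::RATIO]
```

Note that unlike the XOR scheme above, a stolen fragment here still contains real key bytes in the clear, just fewer of them.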

  • I think putting GPG keys on a secure USB crypto token is the way to go.

  • etbe

    libgfshare sounds like a really good thing!

    Jan: I had the same idea, it seems that it’s effectively a 2 of 2 implementation of what libgfshare does.

    cstamas: Most “secure” USB devices aren’t. There are, however, smart-card implementations on USB, but that means you can only use the USB device for GPG stuff.

  • etbe: By saying crypto token I meant smart-card.

  • etbe

    cstamas: A smart-card by design is a single object that will only contain the key. Therefore by design it can not be used to solve the problem that Chris was trying to solve.

  • etbe: I generated my keys on a disconnected machine (no net, no disk, booted from a live CD), made a backup to a CD, and then put the key on a smartcard.

    If I lose my smartcard it is useless without my PIN, and I can make a new one from the backup. Am I missing a point that Chris’s idea solves?

  • etbe

    cstamas: Does your smart-card have a rate-limit on PIN attempts, or does it wipe its data after N bad attempts?

    In theory, if you have a good GPG pass-phrase the same might be said for losing a USB device which contains your key: you merely restore from backup and keep working.

    I think that a better way of attacking is probably to crack the machine which is used to sign things, that way you get the PIN too.

  • etbe: Yes: after 3 bad attempts you are locked out.

    If the machine with the smartcard gets cracked and the token is unlocked (PIN entered), or the attacker knows the PIN, then he can use your key for as long as your token is inserted. But he will never get the key itself, as it can never leave the card.

    When you do crypto, your card will do the crypto operations.

  • etbe

    Steinar H. Gunderson suggests “Shamir secret sharing”.
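Shamir secret sharing generalises the XOR idea above to k-of-n: any k shares reconstruct the key, and any fewer reveal nothing. A minimal per-byte sketch over the prime field GF(257) follows (libgfshare works over GF(2^8) instead; the names here are illustrative, not from any library):

```python
import random

P = 257  # smallest prime above 255, so every key byte fits in the field

def split_byte(secret, k, n):
    """Make n shares of one byte; any k of them recover it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover_byte(shares):
    """Lagrange interpolation at x=0 over GF(257)."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total
```

A full key would be split byte by byte; share values can reach 256, which is one reason real implementations such as libgfshare prefer GF(2^8), where every share fits in a byte.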