Linux, politics, and other interesting things
Danny Angus writes about the potential threat posed by small storage devices with large capacities. His post was prompted by a BBC article about Hitachi’s plans for new hard drives; they are aiming for 4TB of data on a single drive by 2011 and a 1TB laptop drive. One thing I noticed about the article is the false claim that current drives are limited to 1TB. Storage capacity is determined by the total platter surface area, which is proportional to the square of the radius multiplied by the number of platters, and the number of platters is limited only by the height of the drive (AFAIK there are no other practical limits). So if a 5.25 inch hard drive were manufactured with today’s technology it should have at least three times the capacity of the largest 3.5 inch drive.
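As a rough sketch of that scaling claim (assuming capacity scales with the square of the nominal form-factor diameter and linearly with platter count, and ignoring the fact that actual platter diameters are smaller than the form-factor names suggest; the platter counts are illustrative assumptions, not specs):

```python
# Relative capacity assuming it is proportional to platter area
# (square of diameter) times the number of platters.

def relative_capacity(diameter_inches, platters):
    return diameter_inches ** 2 * platters

drive_35 = relative_capacity(3.5, 4)    # typical 3.5" drive (assumed 4 platters)
drive_525 = relative_capacity(5.25, 6)  # taller 5.25" drive (assumed 6 platters)

print(drive_525 / drive_35)  # => 3.375, roughly the "at least three times" claim
```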
The reason that 5.25 inch drives are not manufactured is that for best performance you want multiple spindles so that multiple operations can be performed concurrently. Using 3.5 inch drives in servers allows the use of more disks in the same amount of rack space with the same amount of power. The latest trend is towards 2.5 inch (Small Form Factor, AKA SFF) disks for servers to allow more drives for better performance. With 3.5 inch disks a 1U system was limited to 3 disks and a 2U system was often limited to 4 or 5 disks, but with 2.5 inch drives a 2U server can have 10 drives or more. I know of one hardware vendor that plans to entirely cease using 3.5 inch drives and claims that 2.5 inch disks will give better performance, capacity, and power use!
In regard to Danny’s claim (which is entirely correct) about the threat posed by insiders, I don’t believe that a laptop with 1TB of capacity is the real threat. In a server room people notice where laptops get connected, and there are often strictly enforced policies against connecting machines that don’t belong to the company. I believe that the greatest threat is posed by USB flash devices. For example, let’s consider a database with customer name (~20B), birth-date (10B), address (~80B), phone number (~12B), card type (1B), card number (16B), card expiry (5B), and card CVV code (3B). That’s ~155 bytes per record in CSV or TSV format once you include the field separators. If you have data for a million customers that’s 155M uncompressed and probably about 50M when compressed with gzip or WinZip (depending on which platform is being ripped). No-one even sells a USB flash device that small; I recently bought a 2G flash device that was physically very small and cheap (it was in the bargain bin).
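The arithmetic behind those figures is straightforward; the ~155 bytes per record is the sum of the field sizes above plus 7 field separators and a newline:

```python
# Field sizes in bytes, taken from the example in the text.
fields = {"name": 20, "birth_date": 10, "address": 80, "phone": 12,
          "card_type": 1, "card_number": 16, "card_expiry": 5, "card_cvv": 3}

# 7 separators between 8 fields, plus a trailing newline per record.
record_size = sum(fields.values()) + (len(fields) - 1) + 1
print(record_size)               # => 155 bytes per record
print(record_size * 1_000_000)   # => 155,000,000 bytes for a million customers
```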
The next issue is: what data might be worth stealing that is too large to fit on a USB device? I guess that if you want to copy entire network file shares from a corporation then you would need more than the 16G that seems to be the maximum capacity of a USB device at the moment. Another theoretical possibility would be to copy the entire mail spool of a medium to large ISP. For the case of a corporate file server you could probably get the data at reasonable speed: 1TB of data would take 10,000 seconds (about 2.8 hours) to transfer if you max out a GigE link, and it could take as much as five times that if the network is congested or the server is slow. It’s doable, but it would be a rather tense three or more hours waiting by an illegally connected laptop. For the mail server of a large ISP there is often no chance of getting anywhere near line speed; the workload is lots of small reads, seek performance is the bottleneck, and such servers are usually running close to capacity (so trying to copy data fast would hurt performance and draw unwanted attention).
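The transfer-time estimate can be sketched as follows, assuming an effective throughput of about 100MB/s for a maxed-out GigE link (the nominal gigabit rate minus protocol overhead):

```python
# Time to copy 1TB over gigabit Ethernet at an assumed
# effective throughput of ~100MB/s.
data_bytes = 10 ** 12        # 1TB
throughput = 100 * 10 ** 6   # ~100MB/s effective

seconds = data_bytes / throughput
print(seconds)               # => 10000.0 seconds
print(seconds / 3600)        # => ~2.8 hours
# Five times slower (congested network or slow server):
print(5 * seconds / 3600)    # => ~13.9 hours
```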
Another possibility might be to copy the storage of an Intranet search device. If a company has a Google appliance or similar device indexing much of their secret data then copying the indexes would be very useful. It would allow offline searches of the corporate data to prepare a list of files to retrieve later.
It would probably be more useful to get online access to the data from a remote site. I expect that an unethical person could sell remote access to someone who is out of reach of extradition. All that would be required would be to intentionally leave a flaw in the security of the system. In most large corporations this could be done in a way that is impossible to prove. For example, if management decrees that the Internet servers run some software that is known to be of low quality, then a hostile insider could make configuration changes to increase the risk – it would look like an innocent mistake if the problem were ever discovered (the blame would go entirely to the buggy software and the person who recommended it).
A large part of the solution to this problem is to hire good employees. The common checks performed grudgingly by financial companies are grossly inadequate for this. Checking whether a potential employee has a criminal record does not prevent hiring criminals; it merely prevents hiring unsuccessful criminals and people who have not yet been tempted enough! The best way to assess whether HR people are being smart about this is to ask them for an estimate of how many criminals are employed by the company. If your company is not incredibly small then it’s inevitable that some criminals will be employed. Anyone who thinks that it is possible to avoid hiring criminals simply isn’t thinking about the issues. I may write more about this in a future post.
Another significant part of the solution is to grant minimum privileges for access to data. Everyone should be granted access only to the data they need for their work, so that the only people who can really compromise the company are senior managers and sys-admins; for best security, different departments or groups should have different sys-admin teams and separate server rooms. Of course this does increase the cost of doing business, and probably most managers would rather have it cheap than secure.
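The minimum-privilege idea can be sketched as a toy access check, where each dataset is readable only by the admin group responsible for it (the group and dataset names here are hypothetical examples, not a real ACL system):

```python
# Map each dataset to the set of groups allowed to read it.
# A user with no matching group gets nothing, which is the default.
access = {
    "finance_db": {"finance_admins"},
    "customer_db": {"support_admins"},
}

def can_read(user_groups, dataset):
    """True only if the user belongs to a group granted this dataset."""
    return bool(user_groups & access.get(dataset, set()))

print(can_read({"finance_admins"}, "finance_db"))   # => True
print(can_read({"finance_admins"}, "customer_db"))  # => False
```

The point is that compromising one sys-admin account (or one sys-admin) exposes only that team's data, not the whole company's.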