Security is Impossible

The Scope of the Problem

Security is inherently complex because of the large number of ways of circumventing it. For example, Internet-facing servers have been successfully attacked via vulnerabilities in the OS, the server application, public key generation, DNS, SSL certificates (and many other programs and algorithms in use), as well as via the infrastructure and employees of every company in the chain. Even when all those layers work reasonably well (not perfectly, but well enough not to be the obvious weakest link), there are attacks on the end-user systems that access the servers (such as the trojan horse programs used to attack PCs used for online banking).

My Area of Interest

The area of security that interests me is Linux software development. There are many related areas, such as documentation and default configurations that make it easier for people to secure their systems (instead of insecure systems being the default option), which are all important.

There are also many related fields, such as ensuring that everyone with relevant access is trustworthy. There are many interesting problems to solve in such areas, most of which aren't a good match for my skills or simply require more time than I have available.

I sometimes write blog posts commenting on random security problems in other areas. Sometimes I hope to inspire new research; sometimes I just hope to inform users who can consider the issues when implementing solutions to security problems.


On the software development side there are ongoing problems of bugs in code that weaken security. The fact that fixing bugs is the main concern for people interested in securing systems is an indication that the problem of software quality currently needs a lot of work.

The other area that gets a reasonable amount of obvious work is access control. Again, it's an area that needs a lot of work, and the fact that we're not done with it is an indication of how far there is to go in improving computer security generally.

Authenticating Software Releases

There have been cases where source code repositories were compromised to introduce trojan horse code. The ones I've read about were discovered reasonably quickly with little harm done, but there could be some which weren't discovered. Of course it's likely that such attacks will eventually be discovered, because someone will have the original source and the copies can be compared.

Repositories of binaries are a bigger problem: it's not always possible to recompile a program and get a binary which checks out as identical (larger programs often include the build time in the binary). Even for build processes which don't include such data it can be very difficult to verify the integrity of a build. For example, programs compiled with different versions of libraries, header files, or compilers will usually differ slightly.
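The non-determinism described above can be illustrated with a toy sketch: a pretend "compiler" that embeds the build time in its output, the way real toolchains do via macros such as __DATE__ and __TIME__. The function and file contents here are made up for illustration, not taken from any real build system.

```python
import hashlib
import time

def toy_compile(source: bytes) -> bytes:
    # A stand-in for a compiler that stamps the build time into the
    # binary, as many real build processes do.
    timestamp = time.strftime("%Y-%m-%d %H:%M:%S").encode()
    return source + b"\n# built at " + timestamp

src = b"int main(void) { return 0; }"
first = toy_compile(src)
time.sleep(1)          # a later rebuild of the same source
second = toy_compile(src)

# Identical source, different "binaries": byte-for-byte verification
# of a rebuild fails because of the embedded timestamp.
print(hashlib.sha256(first).hexdigest() == hashlib.sha256(second).hexdigest())  # False
```

This is why checksum comparison alone can't prove a binary was built from a given source tree unless the build process is deliberately made reproducible.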

As most developers frequently change the versions of such software, they will often be unable to verify their own binaries, and automated verification of those binaries will be impossible for anyone else. So if a developer's workstation were compromised without their knowledge, it might be impossible for them to later check whether they released trojaned binaries – short of just running the binaries in question and looking for undesired behavior.

The problem of verifying past binaries is solvable for large software companies, Linux distributions, and any other organisation that has the resources to keep old versions of all binaries and libraries used to build software. For proprietary software companies the verification process would have to start with faith that the vendor of their OS and compiler did the right thing. For Linux distributions and other organisations based on free software it would start by having the source to everything, which can then be verified in theory – although in practice verifying all the source for compilers, the OS, and libraries would be a huge undertaking.
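For an organisation that does archive everything, the verification step amounts to recording checksums at release time and comparing them later. A minimal sketch, with hypothetical file names and placeholder contents standing in for real build inputs:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A hypothetical archive of everything used for a past build: the
# source tarball, the exact compiler, and the libraries linked against.
archive = {
    "app-1.0.tar.gz": b"source tarball contents",
    "gcc-4.7.2.bin": b"compiler binary contents",
    "libc-2.13.so": b"library binary contents",
}

# Manifest of checksums recorded when the release was made.
manifest = {name: sha256(data) for name, data in archive.items()}

def verify(archive: dict, manifest: dict) -> bool:
    # Later verification: every archived build input must still match
    # the checksum recorded at release time.
    return all(sha256(data) == manifest.get(name)
               for name, data in archive.items())

print(verify(archive, manifest))   # True: the archive is intact
archive["gcc-4.7.2.bin"] = b"tampered compiler"
print(verify(archive, manifest))   # False: a build input was altered
```

This only proves the build inputs are unchanged; as noted above, trusting what those inputs do is a separate and much harder problem.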


There is a well-documented history of military espionage: people sworn to secrecy have been subverted by money, by blackmail, and by holding political beliefs that conflict with those of their government. The history of corporate espionage is less well documented, but as corporations perform less stringent background checks than military organisations, I think it's safe to assume that corporate espionage is much more common.

Presumably any government organisation that has any success at subverting employees of a foreign government can be much more successful at subverting programmers (whether in companies such as Microsoft or in the FOSS community). One factor that makes such attacks easier is the global nature of software development. Government jobs that involve access to secret data have requirements about where the applicant was born and has lived; corporate jobs and volunteer positions in free software development have no such requirements.

The effort involved in subverting an existing employee of a software company or a contributor to free software, or in getting an agent accepted into such a project, would be quite small compared to a nuclear weapons program. Therefore I think we should assume that every country capable of developing nuclear weapons (even North Korea) can do such things if it wishes.

Would the government of such a country want to subvert a major software project that is used by hundreds of millions of people? I can imagine ways that such things could benefit a government, and while there would be costs (both in local politics and in international relations), it seems most likely that some governments would consider it worth the risk – and North Korea doesn't seem to have much to lose.


We would like every computer to be like a castle, with a strong wall separating it from the bad things outside that can't be breached in non-obvious ways. But the way things are progressing, with increasingly complex systems depending on more people and on other systems, security is becoming more like biology than engineering. We can compare important government systems to people with compromised immune systems who are isolated from any risk of catching a disease: the consequences of an infection are worse, so greater isolation measures are required.

For regular desktop PCs, getting infected by a trojan is often regarded as being like catching a cold in winter. People just accept that their PC will be infected on occasion and don't make any serious effort to prevent it. After an infection is discovered, the user (or their management, for a corporate PC) tends not to be particularly worried about data loss, in spite of some high-profile data leaks from companies that do security work, the ongoing attacks against online banking, and webcam spying on home PCs. I don't know what it will take for users to start taking security risks seriously.

I think that secure boot is a good step in the right direction, but it's a long way from magically solving all security problems. I've previously described some of the ways that secure boot won't save you [1].

The problems of subverted developers don't seem to be an immediate concern (although we should consider the possibility that it's happening already without anyone noticing). The ongoing trend is that the value of computers in society is steadily increasing, which increases the rewards for criminals and spy agencies who can compromise them. Therefore it seems that we will definitely face the problem of subverted developers if we adequately address the current technical problems of flaws in software and inadequate access control. We just need to fix the problems which are most easily exploited, to force attackers onto the more difficult and expensive attacks. Making attacks more difficult is a really good thing: it decreases the number of organisations capable of attacking, even though it won't stop determined attackers.

For end-user systems the major problem seems to be running random programs from the Internet without a security model that adequately protects the system. Both Android and iOS make good efforts at protecting a system in the face of hostile applications, but both have been shown to fail in practice (it might be a good idea to have a phone for games that is separate from the phone used for calls etc). More research into OS security is needed to address this. In the meantime, users need to refrain from playing games and viewing porn on systems that are used for work, Internet banking, and other important things. While PCs are small and cheap enough that having separate PCs for important and unimportant tasks is practical, it seems that most users don't regard the problems as serious enough to be worth the effort.

2 comments to Security is Impossible

  • A person I knew used to repeat: “security is a practice” and “it’s all about trust levels”.

    Across my career, I’ve faced something which makes me very sad:

    In the enterprise world, security is “just one more aspect” of the project, which often fights with “company policies”, with “directions”, with “performance”, with “usability”, or with “profitability” (that last one being one of the big opponents in some cases).

    Because of that, I feel I’ve always had more secure systems at home than at work.

    I had an “offline” network at home, because I didn’t trust the hardware being physically connected to an untrusted network, even with supposed “filters”, and there was no wifi in that offline network. I didn’t have anything to hide, but if I had the time and space, I think I could do it again.

    And many security incidents that spread in the media are related to that fact (and to how it affects system maintenance).

    Historically, even the automated worms have relied on that fact, on every platform.

    An interesting and well written article.


  • etbe

    Brendan Scott wrote an interesting blog post about this. His idea that governments might stop using a project because it was once subverted doesn’t seem likely; changing projects for anything significant is very difficult. Also, having a project subverted isn’t necessarily much worse than a routine security flaw in a commonly used product.

    His conclusion that a project might benefit from being subverted once is interesting and plausible. But it would be better if government agencies got involved in QA before such things happen.