Biba and BLP for Network Services

Michael Janke has written an interesting article about data flows in networks [1]. He describes how data from the Internet should be considered to have low integrity (he refers to it as “untrusted”) and how data needs to be of higher integrity as you get closer to the more important parts of the system.

It seems to me that his ideas are very similar in concept to the Biba Integrity Model [2]. The Biba model is based around the idea that a process can only write data to a resource of equal or lower integrity and only read data from a resource of equal or higher integrity; this is often summarised as “no read-down and no write-up”. In a full implementation of Biba the OS would label all data (including network data) with its integrity level and prevent any communication that violates the model (except of course for certain privileged programs – for example the file or database that stores user passwords must have high integrity but any user can run the program to change their password). A full Biba implementation would not work for a typical Internet service, but considering some of the concepts of Biba while designing an Internet service should lead to a much better design (as demonstrated in Michael’s post).
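The two Biba rules can be sketched in a few lines of Python. The integrity levels here are hypothetical, chosen just to make the checks concrete:

```python
# Hypothetical integrity levels, ordered from low to high.
LEVELS = {"untrusted": 0, "user": 1, "system": 2}

def biba_can_read(subject_level: str, object_level: str) -> bool:
    """Biba simple integrity property: a subject may only read from
    objects of equal or higher integrity (no read-down)."""
    return LEVELS[object_level] >= LEVELS[subject_level]

def biba_can_write(subject_level: str, object_level: str) -> bool:
    """Biba *-property: a subject may only write to objects of
    equal or lower integrity (no write-up)."""
    return LEVELS[object_level] <= LEVELS[subject_level]
```

So a “system” process may write a log entry readable by “untrusted” code, but must never read untrusted data directly; that asymmetry is the whole point of the model.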

While considering the application of Biba to network design it makes sense to also consider the Bell LaPadula model (BLP) [3]. In computer systems designed for military use a combination of Biba and BLP is not uncommon. While a strict combination of those models would be an almost insurmountable obstacle to the development of Internet services, I think it’s worth considering the concepts.

BLP is a model that is primarily designed around the goal of protecting data confidentiality. Every process (subject) has a sensitivity label (often called a “clearance”) which is comprised of a sensitivity level and a set of categories, and every resource that a process might access (object) also has a sensitivity label (often called a “classification”). If the clearance of the subject dominates the classification of the object (i.e. the level is equal or greater and the set of categories is a superset) then read access is permitted; if the clearance of the subject is dominated by the classification of the object then write access is permitted; and the clearance and classification have to be equal for read/write access to be permitted. This is often summarised as “no write-down and no read-up”.
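The dominance relation and the two BLP properties can be sketched as follows (the levels and category names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    level: int             # e.g. 0=Unclassified, 1=Secret, 2=Top Secret
    categories: frozenset  # compartments such as {"crypto", "payroll"}

def dominates(a: Label, b: Label) -> bool:
    """a dominates b when a's level is equal or greater and
    a's category set is a superset of b's."""
    return a.level >= b.level and a.categories >= b.categories

def blp_can_read(clearance: Label, classification: Label) -> bool:
    # Simple security property: no read-up.
    return dominates(clearance, classification)

def blp_can_write(clearance: Label, classification: Label) -> bool:
    # *-property: no write-down.
    return dominates(classification, clearance)
```

Note the symmetry: read requires the subject to dominate the object, write requires the object to dominate the subject, so read/write access is only possible when the two labels are equal.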

SGI has published a lot of documentation for their Trusted Irix (TRIX) product on the net; the section about mandatory access control covers Biba and BLP [4]. I recommend that most people who read my blog not read the description of how Biba and BLP work, it will just give you nightmares.

The complexity of either Biba or BLP (including categories) is probably too great for consideration when designing network services, which have much lower confidentiality requirements (even the loss of a few million credit card numbers is trivial compared to some of the potential results of leaks of confidential military data). But a simpler case of BLP with only levels is worth considering. You might have credit card numbers stored in a database classified as “Top Secret” and not allow less privileged processes to read from it. The data about customers’ addresses and phone numbers might be classified as “Secret” and all the other data might merely be “Classified”.
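A level-only scheme like the one above reduces dominance to a simple numeric comparison. A minimal sketch, with hypothetical store and process names:

```python
# Hypothetical level-only classifications for a web service's data stores.
CLASSIFICATION = {
    "card_numbers_db": 3,     # "Top Secret"
    "customer_contacts": 2,   # "Secret"
    "catalogue_db": 1,        # "Classified"
}

# Clearance assigned to each process in the service.
PROCESS_CLEARANCE = {
    "payment_gateway": 3,
    "order_processor": 2,
    "search_frontend": 1,
}

def may_read(process: str, store: str) -> bool:
    """With levels only, 'dominates' is just >=: a process may read a
    store only if its clearance meets the classification (no read-up)."""
    return PROCESS_CLEARANCE[process] >= CLASSIFICATION[store]
```

Under this scheme the search frontend can query the catalogue but can never touch the card numbers, no matter what bugs it contains.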

One way of using the concepts of Biba and BLP in the design of a complex system would be to label every process and data store in the system according to its integrity and classification/clearance. Then for the situations where data flows to processes with lower clearance the code could be well designed and audited to ensure that it does not leak data. For situations where data of low integrity (e.g. data from a web browser) is received by a process of high integrity (e.g. the login screen) the code would have to be designed and audited to ensure that it correctly parsed the data and didn’t allow SQL injection or other potential attacks.
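One way of making such an audit systematic would be to enumerate the data flows and flag the ones that cross an integrity or clearance boundary. This sketch uses invented component names and combined Biba/BLP labels:

```python
# Each component labelled with both an integrity level (Biba) and a
# clearance level (BLP); the names and numbers are purely illustrative.
COMPONENTS = {
    "browser_input": {"integrity": 0, "clearance": 0},
    "login_handler": {"integrity": 2, "clearance": 2},
    "report_export": {"integrity": 1, "clearance": 1},
    "customer_db":   {"integrity": 2, "clearance": 3},
}

def flows_needing_audit(flows):
    """Flag flows where low-integrity data enters a higher-integrity
    sink (needs input validation) or data reaches a lower-clearance
    sink (needs leak review)."""
    flagged = []
    for src, dst in flows:
        s, d = COMPONENTS[src], COMPONENTS[dst]
        if s["integrity"] < d["integrity"]:
            flagged.append((src, dst, "validate input (Biba)"))
        if s["clearance"] > d["clearance"]:
            flagged.append((src, dst, "check for leaks (BLP)"))
    return flagged
```

The output of such a tool is simply a review checklist: every flagged flow is a place where the code must be audited rather than trusted.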

I expect that many people who have experience with Biba and BLP will be rolling their eyes while reading this. The situation that we are dealing with in regard to PHP and SQL attacks over the Internet is quite different to the environments where proper implementations of Biba and BLP are deployed. We need to do what we can to try and improve things, and I think that the best way of improving things in terms of web application security would involve thinking about clearance and integrity as separate issues in the design phase.

4 comments to Biba and BLP for Network Services

  • Russel –

The reference to integrity is the correct interpretation of how we build this part of our security model. I’ll probably use that term in the future.

    The integrity that I’m referring to though is at the ‘system’ level. Our experience has been that (for example) maintaining the integrity of the operating system that hosts an application after the application has been compromised from an internet sourced attack is essential to limiting the scope of an intrusion and recovering quickly from an intrusion.

As far as the application of the BLP model for data confidentiality, I suspect that we are doing something similar in concept with a part of our model that puts entire application stacks in separate network security containers based on the confidentiality requirements of the data that is handled by that application. The rules for crossing containers with network connections might be somewhat similar to BLP, where we prohibit applications with lower data confidentiality requirements from directly connecting to servers/applications/databases with higher confidentiality requirements (no read-up). We do however permit write-down as the preferred means of sending data from more confidential to less confidential systems, under the assumption that if the more confidential app is compromised, the data confidentiality is already lost. This is clearly a result of our thinking that the Internet is the problem, not our own employees. If we were to assume that data confidentiality from persons downloading databases to notebooks and losing them is the main consideration, then restricting write-down makes a lot of sense.

    I’ll post that part (and other parts) of our model soon.

    The separation of integrity from confidentiality when building the security model sounds valid, and will probably make our model clearer and easier to explain and maintain.

    I’ll have to admit that the models that we use are based on experience unwinding and investigating far too many compromised applications and systems, not on basic research. Had I done more research, I’d have less experience and probably have been better off.

  • Glen Turner

    I don’t really buy Michael’s idea. Mainly because of the handwaving about what happens at a trust boundary. Obviously this depends upon the application, but I would have expected a set of criteria that the trust boundary would meet before tagging the traffic with a greater classification.

    The second problem I’ve got is that the threat model ignores denial of service. The remark “The boundaries are also assumed to have audit logs of some kind” set my alarms ringing. One of the most expensive operations you can do is to write an audit message, the expense increasing as you become more paranoid. In a real-world system you simply can’t document each boundary crossing without handing the attacker a simple way to stop your service.

The third problem, related to the handwaving threat model, is this: “Traffic from high trust to low trust is presumed trusted.” This gives attackers carte blanche to stop one ring away from the core. Often this is enough for the attacker’s purpose. For example, enough to send SMTP spam, or to use (and thus implicate) the machine as a stepping stone to obscure the source of an attack on another machine. The model needs notions of traceability so that this traffic of odd origin is defeated.

  • Glen –

    Thanks for the critical comment. I intentionally kept the blog post very general in an attempt to keep the focus on the concept of trust boundaries and levels within and around an application stack as the critical security concept for mitigating intrusions, rather than implementation details that are highly dependent on specific applications and technology stacks.

My experience is that few if any application vendors or developers consider anything outside the application container (Tomcat, JBoss) when designing their application. They assume that the application run time environment will have full file system and network access, and that the application will use a database account with DBA/DBO privs. And of course when you query them for details on network, database and file system security you tend to get blank stares followed by ‘we don’t support….’.

The epidemic of SQL injection successes and the ease with which they seem to propagate to non-SQL servers in data centers is the evidence. SQL injection attacks that upload executables to blobs in the database, run the executables directly from the database and use them to attack nearby machines, for example.

In our case, the criteria for the trust boundary are largely set by the various state and federal regulations that we must meet, combined with our own internal rules for what constitutes a boundary. (For example – we always assume at least one boundary between the component that stores the data and the component that interacts with the user (the app server), and we build a network access control layer between those components.)

    For DOS prevention from external sources we front end critical applications with tools that mitigate DOS attacks up to a reasonable threshold. And yes, database audit logs are expensive and difficult to maintain. In cases where the log volume is high and the application still needs an audit trail, we look towards the application to maintain the trail rather than the database. And we do occasionally suffer DOS’s that break our logging infrastructure.

Logging is highly application and platform dependent though. Our experience is that for web servers, load balancers, firewalls & network devices, logging thousands of log messages per second is doable on commodity hardware with open source tools and a bit of care. For things like Windows event logs, and database audit records, logging at high rates can be a problem.

To your third point, we assume that unauthorized access to a higher trusted component automatically triggers a security event that makes the same intruder’s access to lower trusted components largely irrelevant. If they own the database, they own the data, no matter what happens to the app servers. Our goal is to prevent database ownership from propagating to other databases, to the operating system or to the management infrastructure.

    For something inherently insecure like SMTP, we corral the service into an untrusted network segment and block it from accessing more trusted servers/services. And if it happens to get compromised, we make sure that the compromise is contained to within the low trust network segment, and hopefully have enough logging in place to determine the extent of the compromise.

  • etbe

Michael: One essential point about BLP is that read-up and write-down are considered to be the same thing. It doesn’t matter whether a program with low clearance requests highly classified data or a process with high clearance offers it.

    http://en.wikipedia.org/wiki/Mandatory_access_control
    http://en.wikipedia.org/wiki/Discretionary_Access_Control

If you try to distinguish between read-up and write-down then you have a Discretionary Access Control (DAC) system, not a Mandatory Access Control (MAC) system (the URLs above explain the terms). As we know, DAC does work reasonably well in many situations, and if using a Unix system without SE Linux (or an equivalent technology) then it’s all you have got. One of the benefits of MAC (both the concept as well as the pure implementations) is that if a process with high clearance reads some data from a low integrity source and gets its state corrupted then its ability to leak the secret data is significantly diminished.

    As for the issue of whether the problem is Internet users or hostile employees, let’s not assume that all employees have a significant level of access. Let’s also keep in mind the fact that the design ideas we are discussing can be applied to an Intranet just as well as they can be applied to the Internet.

Some of the ideas that come from research do not apply well to real-world systems. BLP and Biba enforced by a MAC system can make things difficult to use.

    Glen: What happens at a trust boundary is always an issue, no matter how you deal with things. There have been numerous exploits of SUID root programs on Unix systems due to that trust boundary being mishandled in various ways.

The value in preventing a DOS attack depends greatly on the environment. The auditd program (that is used to collect SE Linux audit messages) has a configuration option to allow it to immediately halt the system if it runs out of disk space. Some users desire that a system immediately and unconditionally halt (possibly losing application data in the process) if it can’t write audit messages. Such users would be dealing with systems where they have ways of dealing with people who try DOS attacks (e.g. the system is in a locked room and anyone who gets in and does bad things gets a court-martial).

    http://tools.ietf.org/html/rfc4954

Michael: SMTP is not entirely inherently insecure. You can have SMTP AUTH (see the above URL) implemented on internal servers. Then only processes which are permitted to send email could send spam. A further protection would be to have an email policy server which limits where each machine can send email. If a server is only supposed to send logwatch messages then only permit it to send email to the sys-admin.
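Such a policy server could be as simple as a per-host table of permitted recipients. A minimal sketch with hypothetical host and address names (a real deployment would hook a check like this into the MTA’s policy mechanism):

```python
# Per-host recipient policy; None means the host is unrestricted.
# All host names and addresses here are hypothetical examples.
ALLOWED_RECIPIENTS = {
    "logserver.example.com": {"sysadmin@example.com"},  # logwatch only
    "mailhub.example.com": None,                        # full relay
}

def policy_check(client_host: str, recipient: str) -> str:
    """Return "OK" if the sending host may mail this recipient,
    "REJECT" otherwise. Unknown hosts are denied by default."""
    allowed = ALLOWED_RECIPIENTS.get(client_host, set())
    if allowed is None or recipient in allowed:
        return "OK"
    return "REJECT"
```

With a default-deny rule like this, a compromised log server can annoy the sys-admin but cannot be used to send spam to the world.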