Using the “ulimit” controls over process resource use it is possible to limit the RAM used by each process and to limit the number of processes per UID. The problem is that this is often only good for containing accidental problems, not for dealing with malicious acts.
For a multi-user machine each user needs to be allowed at least two processes to be able to do anything (i.e. the shell and a command that they execute). A more practical limit is five processes for a single shell session (one or two background jobs, a foreground job where one process pipes data to another, and the shell). But even five processes is rather small (a single Unix pipeline can have more than that). A shell server probably needs a limit of 20 processes per user if each user will have the possibility of running multiple logins. For running the occasional memory intensive process such as GCC the per-process memory limit needs to be at least 20M, if the user was to compile big C++ programs then 100M may be needed (I’ve seen a G++ process use more than 90M of memory when compiling a KDE source file). This means that a single user who can launch 20 processes which can each use 20M of memory could use 400M of memory, and if they have each process write to its pages in a random order then 400M of RAM would be essentially occupied by that user.
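As a concrete sketch, limits like these can be set with the bash “ulimit” builtin; the numbers are the illustrative ones from above, not recommendations:

    # Per-UID process limit and per-process address space limit (bash builtin).
    ulimit -u 20        # at most 20 processes for this UID
    ulimit -v 20480     # at most 20M of virtual memory per process (value in KB)
    # For users compiling big C++ programs, something closer to 100M
    # (ulimit -v 102400) would be needed instead.
    # Worst case: 20 processes * 20M each = 400M claimed by a single user.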
If a shell server had 512M of RAM (which until quite recently was considered a lot of memory – the first multi-user Linux machine I ran on the net had 4M of RAM) then 400M of that could be consumed by a single hostile user. Leaving only 100M for the real users might make the machine unusable. Note that the “hostile user” category also encompasses someone who gets fooled by the “here’s a cool program you should run” trick (which is common in universities).
I put my first SE Linux Play Machine [1] on the net in the middle of 2002 and immediately faced problems with DOS attacks. I think that the machine had 128M of RAM, and because the concept was new (and SE Linux itself was new and mysterious) many people wanted to log in. Having 20 shell users logged in at one time was not uncommon, so a limit of 50 processes for users was minimal. Given that GCC was a necessary part of the service (users wanted to compile their own programs to test various aspects of SE Linux) the memory limit per process had to be high. The point of the Play Machine was to demonstrate that “root” was restricted by SE Linux such that even if all Unix access control methods failed then SE Linux would still control access (with the caveat that a kernel bug still makes you lose). So as all users logged into the same account (root), the process limit had to be adequate to handle all their needs; 50 processes was effectively the bare minimum. 50 processes with 5M of memory each is more than enough to cause a machine with 128M of RAM to swap to death.
One thing to note is that root-owned system processes count towards the ulimit for user processes, as SE Linux does not have any resource usage controls. The aim of the SE Linux project is access control, not protection against covert channels [2]. This makes it a little harder to restrict things, as the number of processes run by daemons such as Postfix varies a little over time, so the limits have to be a little higher to compensate. While Postfix itself runs with no limits, the processes that it creates still count towards the per-UID total that is checked when determining whether user processes can call fork().
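A rough way of gauging how much headroom such daemons consume is to count the processes that the UID already owns before choosing a limit (a suggestion, using GNU ps):

    # Count the processes currently owned by root; a per-UID process limit
    # has to leave headroom above this number for actual logins.
    ps -u root --no-headers | wc -l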
So it was essentially impossible to implement any resource limits on my Play Machine that would prevent a DOS. I changed the MOTD (message of the day – displayed at login time) to inform people that a DOS attack is the wrong thing to do. I implemented some resource limits but didn’t seriously expect them to help much (the machine was DOSed daily).
Recently a user of my Play Machine accidentally DOSed it and asked whether I should install any resource limits. After considering the issue I realised that nowadays I can actually do so in a useful manner. My latest Play Machine is a Xen DomU to which I have now assigned 300M of RAM. I have configured the limit for root processes to be 45; as the system and my login comprise about 30 processes, that leaves 15 for unprivileged (user_r) logins. Of late my Play Machine hasn’t been getting a lot of interest, and having two people logged in at the same time is unusual, so 15 processes should be plenty. Each process is limited to 20M of memory, so overflowing the 300M of RAM should take a moderate amount of effort.
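For illustration, if pam_limits is enabled for login sessions then limits like these can be expressed in /etc/security/limits.conf (a sketch using the numbers above, not necessarily the exact configuration on my machine):

    # Hypothetical /etc/security/limits.conf entries (requires pam_limits).
    # "nproc" is the per-UID process count, "as" is per-process address space in KB.
    root    hard    nproc    45       # system daemons and logins all count here
    root    hard    as       20480    # 20M of address space per process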
Until recently I intentionally did not use swap space on that machine, to save on noise when there’s a DOS attack (on the assumption that the DOS attack would succeed regardless of the amount of swap). Now that I have put resource limits in place I have installed 400M of swap space. A hostile user can still easily prevent other unprivileged users from logging in by keeping enough long-running processes active, but they could achieve the same goal by having a program kill users’ shells as soon as they log in (which a few people did in the early days). However it should not be trivial for them to prevent me from logging in via a simple memory or process DOS attack.
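For reference, one simple way of adding 400M of swap is a swap file (a sketch; a dedicated partition or an extra Xen block device works just as well):

    # Create and enable a 400M swap file (run as root).
    dd if=/dev/zero of=/swapfile bs=1M count=400
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile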
Update: It was an email discussion with Cam McKenzie that prompted this blog post.
> A more practical limit is five processes for a single shell session (one or two background jobs, a foreground job where one process pipes data to another, and the shell).
Why is a process limit needed at all?
Can’t you just limit the basic resources (CPU time, RAM, storage and network)?
> For running the occasional memory intensive process such as GCC the per-process memory limit needs to be at least 20M, if the user was to compile big C++ programs then 100M may be needed (I’ve seen a G++ process use more than 90M of memory when compiling a KDE source file). This means that a single user who can launch 20 processes which can each use 20M of memory could use 400M of memory,
Why can’t you just set a per-user limit instead of a per-process limit?
Olaf: These basic resources simply can’t be limited on a per-user basis with a stock Linux kernel. There have been experimental patches to allow overall per-user controls on memory and RAM use (which AFAIK have never come close to being accepted into kernel.org), and it is possible to limit overall network use per user with iptables and cron jobs (but it’s painful to do and not the problem I face).
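To sketch why it’s painful (the UID and the quota below are made-up examples, and the owner match only covers locally generated traffic):

    # Allow UID 1001 roughly 100MB of outbound traffic, then drop the rest.
    iptables -A OUTPUT -m owner --uid-owner 1001 -m quota --quota 104857600 -j ACCEPT
    iptables -A OUTPUT -m owner --uid-owner 1001 -j DROP
    # The quota counter does not reset by itself, so a cron job has to delete
    # and re-insert these rules periodically to give the user a fresh quota.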
If I could set a per-user limit I would. Of course I still face the issue of root:user_r not being equivalent to root:system_r or root:sysadm_r.
It’s like Linux is not suitable for systems where users can’t be trusted. I guess this is one of the reasons why virtual systems are so popular, because they do provide these kinds of controls and limits.
An OT question: what’s your opinion on world-readable home dirs?
Olaf: Trust is a relative thing. In university environments it was always common to have systems configured such that one user could break things for others. An occasional breakage was a learning experience, breakage attributed to malice could be punished.
I recall one occasion at university when I accidentally DOSed an important time-sharing system (a bug in my code combined with a bug in the ulimit controls which were apparently applied per session not per UID). As soon as I realised my mistake (minutes before other students realised) I packed up my stuff, ran to the station, and took the first train home. Social pressure from other students did a reasonable job of policing the system.
Virtual systems do help. But the total lack of IO bandwidth controls still allows the potential for DOS attacks unless you have separate spindles.
As for world-readable home directories, that is only suitable for certain small projects. If you have a machine purchased for a small group of people to use with a single project then it makes some sense. In all other cases (the vast majority of cases) it’s a bad idea.
> Olaf: Trust is a relative thing. In university environments it was always common to have systems configured such that one user could break things for others. An occasional breakage was a learning experience, breakage attributed to malice could be punished.
But what is the reason for that? Is it really so nice that users can break stuff for others, or does the system simply provide inadequate controls to prevent it (easily)?
> Social pressure from other students did a reasonable job of policing the system.
Depending on social pressure for security is not desirable if it can be avoided.
> As for world-readable home directories, that is only suitable for certain small projects. If you have a machine purchased for a small group of people to use with a single project then it makes some sense. In all other cases (the vast majority of cases) it’s a bad idea.
Shouldn’t the defaults be optimized for the majority of cases?
Olaf: I agree that it would be good to have such access controls as you describe and that it would be good to make it extremely difficult to break things for others.
But such goals conflict with other goals (such as making things fast), so we miss out on such features.
I agree that the defaults should be optimised for the majority of cases in most situations, except where the majority case will be dangerous for the minority. The Debian default of creating mode 755 home directories is a bad idea IMHO.
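For what it’s worth, the default mode that adduser uses for new home directories on Debian can be changed in /etc/adduser.conf (a sketch; existing home directories still need an explicit chmod):

    # /etc/adduser.conf: mode for home directories created by adduser.
    DIR_MODE=0750
    # Existing home directories have to be fixed by hand, e.g.:
    # chmod 750 /home/*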
“50 processes with 5M of memory each is more than enough to cause a machine with 128M of RAM to swap to death”
Could you please let me know how you were able to set per-process memory usage limits? In limits.conf I see “memlock – max locked-in-memory address space (KB)” and “rss – max resident set size (KB)”. Which one do I need to set, and what is the difference and significance between these two?
Scenario: I have a RHEL login server with 4GB of RAM, where I want to put a restriction such that no user process can take more than 512MB of RAM.
Is it possible to do it this way?
Thanks,
Sudheendra