Out Of Memory Errors and Apache

I’ve been having problems with one of my Xen virtual servers crashing with kernel error messages about OOM (Out Of Memory) conditions. One thing I had been meaning to do is determine how to take a core dump of a Xen domain and then extract data such as the process list from it. But tonight I ended up catching the machine after the problem occurred but before the kernel gave OOM messages, so I could log in to fix things.
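
A core dump of a Xen domain can be taken from the Dom0 with the xm toolstack; the sketch below is untested, and the domain name and output path are hypothetical:

  # run in the Dom0: dump the memory of the DomU named "web1" to a file
  xm dump-core web1 /var/tmp/web1.core

In principle the resulting image can then be examined with the crash utility against a matching vmlinux to recover data such as the process list.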

I discovered the following issues:

  1. 10+ instances of spf-policy.pl (a Postfix program to check the Sender Policy Framework data for a message that is being received), most in D state.
  2. All my Planet news feeds being updated simultaneously (four of them taking 20M each takes a bite out of 256M for a virtual machine).
  3. Over 100 Apache processes running in D state.

I think that there is a bug related to running so many instances of spf-policy.pl; I’ve also been seeing warning messages from Postfix about timeouts when running it.

For the Planet feeds I changed my cron jobs to space them out. Now unless one job takes more than 40 minutes to run there is no chance of them all running at the same time.
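
As an illustration, an /etc/cron.d arrangement along these lines (the exact times and script name are hypothetical) spaces the four feed updates across the hour so that under normal run times they never overlap:

  # /etc/cron.d/planet - stagger the feed updates so they don't pile up
  0  * * * * planet /usr/local/bin/update-planet feed1
  15 * * * * planet /usr/local/bin/update-planet feed2
  30 * * * * planet /usr/local/bin/update-planet feed3
  45 * * * * planet /usr/local/bin/update-planet feed4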

For Apache I changed the maximum number of processes from 150 to 40 and changed the maximum number of requests that a child process may serve before being replaced to 100 (it used to be a lot higher). If more than 40 requests come in at the same time then the excess ones will wait in the TCP connection backlog (of 511 entries) until a worker process is ready to service them. While keeping connections waiting is not always ideal, it’s better than killing the entire machine!
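
For concreteness, the relevant directives for the prefork MPM look roughly like this (a sketch, not my exact configuration):

  # apache2.conf (prefork MPM): trade queueing for bounded memory use
  <IfModule prefork.c>
      MaxClients          40    # was 150; hard limit on concurrent worker processes
      MaxRequestsPerChild 100   # recycle children to contain any memory leaks
  </IfModule>
  ListenBacklog 511             # excess connections queue in the kernel backlog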

Finally I installed my memlockd program so that next time I have paging problems the process of logging in and fixing them will be a little faster. Memlockd locks a specified set of files into RAM so that they won’t be paged out when memory runs low. This can make a dramatic difference to the time taken to log in to a system that is paging heavily. It can also run ldd on executables to discover the shared objects that they need, so it can lock them too.
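
As a rough sketch of how it is configured, /etc/memlockd.cfg lists one file per line, and a leading “+” tells memlockd to run ldd on that file and lock the shared objects it reports as well (the file list here is just illustrative):

  # /etc/memlockd.cfg - keep the emergency-login path resident in RAM
  +/bin/bash
  +/bin/ps
  +/usr/sbin/sshd
  /etc/passwd
  /etc/nsswitch.conf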

5 comments to Out Of Memory Errors and Apache

  • niq

    Sounds like a badly inefficient Apache setup. Are you running some buggy third-party modules? If not, use a threaded MPM to give you much more efficient memory use with large numbers of clients, and drop the limit on maxrequestsperchild (100 should only be if you have a big leak)!

  • Chris

    I’d recommend dumping mod_php and going to the threaded MPM (as niq says), using libapache2-mod-fcgid to run PHP processes.

  • Eric Wong

    I’d stick nginx in front of Apache as a reverse proxy to spoon-feed slow clients. This way Apache can spawn far fewer children because nginx will fully buffer HTTP requests and responses without needing a process or thread for each one.

    Standard Apache configurations usually require many processes/threads because slow client connections are the bottleneck.

  • etbe

    niq and Chris: Thanks for the suggestions. I will investigate the multi-threaded MPM, but as PHP doesn’t work with it I will have to do some other things to get it working (maybe fcgid). I had been thinking of doing something with PHP anyway, as I don’t want it running under the same UID as Apache (I want to have WordPress and Mediawiki running under different UIDs).

    Eric: I’m not familiar with nginx and it probably won’t be my choice, but the idea is a really good one. I have worked on machines with Squid as a proxy for Apache servers before. I will investigate the possibility of having one Squid instance on the Dom0 caching all the content for the Apache installations in all the DomUs. I expect that I can reduce the amount of RAM for each DomU by at least 16M by doing this (and 32M for the bigger DomUs). If this works out as well as I hope, I can probably get an extra DomU on the machine without buying extra hardware!

  • Killeader

    Hi, can you help me learn how to install memlockd on Debian sarge? The package is not compatible with this version. Tnx!