Swap Space

There is a widespread myth that swap space should be twice the size of RAM. This might have provided some benefit when 16M of RAM was a lot and disks had average access times of 20ms. Now disks can have average access times of less than 10ms, but RAM has increased to 1G for small machines and 8G or more for large machines. Improving the seek performance of disks by a factor of two to five while increasing the amount of data stored by a factor of close to 1000 is obviously not going to work well for performance.

A Linux machine with 16M of RAM and 32M of swap MIGHT work acceptably for some applications (although when I was running Linux machines with 16M of RAM I found that if swap use exceeded about 16M the machine became so slow that a reboot was often needed). But a Linux machine with 8G of RAM and 16G of swap is almost certain to be unusable long before the swap space is exhausted. Therefore giving the machine less swap space and having processes be killed (or malloc() calls fail, depending on the configuration and some other factors) is probably a better outcome.

There are factors that can alleviate the problems, such as RAID controllers that implement write-back caching in hardware, but these only have a small impact on the performance requirements of paging. The 512M of cache RAM that you might find on a RAID controller won't make much of a dent in the IO requirements of 8G or 16G of swap.

I often make the swap space on a Linux machine equal to the size of RAM when RAM is less than 1G, and half the size of RAM for machines with 2G to 4G of RAM. For machines with more than 4G of RAM I will probably stick to a maximum of 2G of swap. I am not convinced that any mass storage system I have used can handle the load from more than 2G of swap space in active use.
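That rule of thumb can be sketched as a small function (an illustrative sketch only, not a kernel requirement; the behaviour between 1G and 2G of RAM is not specified above, so treating that range as "half of RAM" is my assumption):

```python
def suggested_swap_mb(ram_mb):
    """Swap sizing heuristic from the text: swap equal to RAM below 1G,
    half of RAM for machines with 2G to 4G, capped at 2G above that.
    The 1G-2G range is not covered in the text, so it is treated here
    as "half of RAM" too (an assumption)."""
    if ram_mb <= 1024:
        return ram_mb
    if ram_mb <= 4096:
        return ram_mb // 2
    return 2048
```

So a 512M machine gets 512M of swap, while a 16G machine is capped at 2G.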

The myth about swap space size stems from some old versions of Unix that allocated a page of disk space for every page of virtual memory, so the swap size determined the total virtual memory size. On those systems swap space less than or equal to the size of RAM gave no benefit, and swap space less than twice the size of RAM was probably a waste of effort (see this reference [1]). However Linux has never worked this way; in Linux the virtual memory size is the size of RAM plus the size of the swap space. So while the “double the size of RAM” rule of thumb gave virtual memory twice the size of physical RAM on some older versions of Unix, it gives three times the size of RAM on Linux! Also swap spaces smaller than RAM have always worked well on Linux (I once ran a Linux machine with 8M of RAM and used a floppy disk as a swap device).

As far as I recall, some time ago (I can’t remember exactly when) the Linux kernel would by default permit overcommitting of memory. For example, if a program tried to malloc() 1G of memory on a machine that had 64M of RAM and 128M of swap then the system call would succeed. However if the program actually tried to use that memory it would end up getting killed.

The current policy is that /proc/sys/vm/overcommit_memory determines what happens when memory is overcommitted. The default value 0 means that the kernel will estimate how much RAM and swap is available and reject memory allocation requests that exceed that estimate. A value of 1 means that all memory allocation requests will succeed (you could have dozens of processes each malloc() 2G of RAM on a machine with 128M of RAM and 128M of swap). A value of 2 means that a stricter accounting policy will be followed; incidentally my test results don’t match the documentation for value 2.
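For reference, the kernel documentation describes the limit for value 2 as swap plus a percentage of RAM (controlled by /proc/sys/vm/overcommit_ratio, which defaults to 50). A sketch of that documented formula, which as noted above may not match observed behaviour:

```python
def commit_limit_mb(ram_mb, swap_mb, overcommit_ratio=50):
    """Documented commit limit when overcommit_memory == 2:
    CommitLimit = SwapTotal + RAM * overcommit_ratio / 100."""
    return swap_mb + ram_mb * overcommit_ratio // 100
```

On the 128M RAM / 128M swap example above this gives a 192M commit limit.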

Now if you run a machine with /proc/sys/vm/overcommit_memory set to 0 then you have an incentive to use a moderately large amount of swap, safe in the knowledge that many applications allocate memory that they never use. The fact that the machine would deliver unacceptably low performance if all the swap was in active use might therefore not be a problem. In this case the ideal size for swap might be the amount that is usable (based on the storage speed) plus a percentage of the RAM size to cater for programs that allocate memory and never use it. By “moderately large” I mean something significantly less than twice the size of RAM for all machines less than 7 years old.
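That sizing idea can be written as a formula (a sketch only; the 25% headroom figure for allocated-but-never-touched memory is an assumption for illustration, not a measured number):

```python
def swap_size_mb(usable_swap_mb, ram_mb, untouched_fraction=0.25):
    """Swap sizing under overcommit_memory == 0, per the text: the
    amount of swap the storage can service with acceptable performance,
    plus headroom for memory that is allocated but never touched
    (assumed here to be 25% of RAM)."""
    return usable_swap_mb + int(ram_mb * untouched_fraction)
```

For example, a machine whose disks can usefully service 2G of swap and which has 4G of RAM would get 3G of swap under these assumptions.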

If you run a machine with /proc/sys/vm/overcommit_memory set to 1 then the requirements for swap space should decrease, but the potential for the kernel to run out of memory and kill some processes is increased (not that it’s impossible to have this happen when /proc/sys/vm/overcommit_memory is set to 0).

The debian-administration.org site has an article about a package to create a swap file at boot [2] with the aim of making swap always be twice the size of RAM. I believe that this is a bad idea: the amount of swap which can be used with decent performance is a small fraction of the storage size on modern systems and often less than the size of RAM. Increasing the amount of RAM will not increase the swap performance, so increasing the swap space to match is not going to do any good.
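For context, creating a swap file at boot generally amounts to a handful of commands. The following sketch lists plausible steps; the path, size, and exact commands are assumptions for illustration, not taken from the package the article describes, and the commands require root:

```python
def swapfile_commands(path="/swapfile", size_mb=2048):
    """Return the shell commands a boot-time swap-file setup would
    typically run. An illustrative sketch, not the actual behaviour
    of the package mentioned above."""
    return [
        f"dd if=/dev/zero of={path} bs=1M count={size_mb}",  # allocate the file
        f"chmod 600 {path}",                                 # only root may read it
        f"mkswap {path}",                                    # write the swap signature
        f"swapon {path}",                                    # enable the swap file
    ]
```

The chmod step matters: a world-readable swap file would expose the paged-out memory of every process.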

39 thoughts on “Swap Space”

  1. Just wondering: why swap at all, these days? Usually when my system starts using swap, something is wrong, and I’d rather have the kernel kill the process _before_ my system starts paging and becomes unresponsive.

  2. For desktop machines, having the swap space set to more than the RAM size can be useful for hibernation, which stores a copy of what resides in RAM in the swap space.
    Also, swap is useful when you use tmpfs, whose contents are likely to be swapped out if you start using a lot of physical memory for applications.

  3. Joachim: You are correct that with the cheap and large memory modules available today there are many systems that don’t really need to swap. However even if a machine doesn’t really need to swap (the processes use less RAM than there is in the system) you can still gain a small improvement in performance by swapping out unused processes and using more memory for cache.

    Glandium: Good points about hibernation and tmpfs.

  4. I think for desktop machines a lot of swap can make sense. My desktop has 2G of RAM and 4G of swap and I’m constantly using more than 2G of the swap space…
    I agree that if the 4G in use were all in the working set at one time my system would be unusable. But most of the swapped-out pages belong to processes that just sit at the X11 message loop idling in a very small part of the code, so I only get very bad lag when switching to an application I haven’t used recently…
    Btw, not so long ago the kernel was able to schedule mplayer and amarok to keep up with playing music and video even when thrashing heavily on an application change; recent kernels seem to be a lot worse at that.
    For servers where the working set is roughly equal to the used memory, constantly using much swap is likely a bad idea.

  5. Swap is useful as a safety belt. When you get an application that starts eating up all your system memory, you’ll notice that your machine suddenly became slow, and, hopefully, will be able to kill that app before your swap gets exhausted. Without swap, at this point the machine usually becomes completely nonresponsive, and the kernel’s OOM killer doesn’t always trigger as soon as you want, and it doesn’t always kill the process you want to kill.

    (Also, if you’ve a laptop and want to use hibernation, make sure your swap size is greater than the RAM size.)

  6. | in Linux the virtual memory size is the size of RAM plus the size of the swap space.

    This is not true for all kernels. For early 2.4 kernels (up to 2.4.10) having less swap than RAM was basically useless. And at that time the swap=2xRAM was the official recommendation.


  7. Pingback: www.enchilame.com
  8. Pingback: meneame.net
  9. Martin: My laptop (my main “desktop” machine) has 1.5G of RAM and 800M of swap. It runs Xen and Dom0 (where all the “desktop” stuff happens) usually has 1.2G of RAM. I find that when it gets to more than 400M of swap in use the performance drops significantly and I have never used 800M of swap in regular use. What do you do with your desktop machine to use so much memory?

    Andreas: The first link suggests that a machine with 128M of RAM might need 128M or 196M of swap running 2.4.x compared to 64M for 2.2.x – but Linus notes that he never actually tested that. The last reference you cited explains the issue best: pages which are swapped back in still have space reserved in the swap. This is quite different from the description of ancient BSD systems (my first reference) which apparently reserved swap pages when memory was allocated. In Linux 2.4.x space would be reserved only once a page was swapped out, so pages which never needed to be swapped, or which were locked soon after they were allocated, would not have reserved space in the swap area.

    Linux (AFAIK) has never paged out read-only pages (i.e. pages from an executable on disk). I believe that some Unix systems would write executable pages to the swap on the theory that it might be faster than the filesystem containing the executables; Linux simply discards the pages in question. The RAM used on a Linux system falls into three categories: unpageable (locked application memory, kernel memory, and buffers for DMA), executable pages (discarded, not paged out), and pageable application memory. The non-pageable part may be significant on small machines, for example one of my Xen instances has 256M of RAM and the kernel reserves 14M at boot – and the same machine has 9M of RAM locked for essential processes via memlockd.

  10. As an HP servers/Linux support engineer, I am frequently asked how customers should set their swap. I always say it depends on how much virtual memory their applications need to allocate. It could be correct to set two or three times the RAM size, or one times as well. On servers, I prefer to have a system swapping rather than killing processes. That makes it easier to see that more RAM is necessary, or that some VM configuration should change.

    Linux uses swap differently from older Unixes; sysadmins should pay more attention to tuning swappiness than to worrying about swap size, since disks are much cheaper than memory nowadays.

  11. Celso: Disks have always been cheaper than RAM. The recent change has been the ratio of disk seek speed to RAM size. Paging is very seek intensive and therefore the maximum size of swap that is usable will be related to the seek times – which have not improved much over the last 10 years (compared to almost everything else in the computer industry).

    For some applications 3 times the RAM size will work, for others having no swap at all is the correct thing to do. I will write more about this in a future blog post (this is obviously a topic that is of interest so I will provide more information for the readers).

  12. Pingback: www.teknear.com
  13. Russell: The argument about seek times is not particularly compelling; I’d have thought that that plays a fairly minor role in the question of how much swap space to allocate. For handhelds, the issues to consider are very different; let’s ignore them for the moment. For desktop machines with plentiful hard disk space, the question is at what point it’s best to let the oom killer kill processes. This in turn is a matter of how much thrashing is happening rather than how much stuff is lying unused on swap space.

    Obviously multi-user computers can afford to have lots of stuff in swap space without hurting performance, because some of those users (and their processes) may be inactive.

    Similar comments apply even to single-user computers when running a graphical environment (X) where it’s easy to have many inactive programs in other workspaces.

    For many systems, the question is moot, in that they never run out of memory even with little or no swap allocated.

    My usage of memory-hogging programs is very different from most people’s, but, FWIW, I find that even if I have a single process whose virtual memory size is 2-3 times RAM size, then the system can still be responsive; it just depends on the memory access patterns of that program. Thus, I find that 2x is still about the right ratio for me for how big swap space should be compared to RAM, and in particular I’ve found that 1.5x is too small. Of course, other people’s needs may vary. All the same, I really suggest toning down the claims of the initial post.

  14. Peter: For most serious storage nowadays seek time is the most important factor in overall performance. Disks that can sustain 80MB/s are common, even the smallest hardware RAID devices can saturate a 4Gb/s (400MB/s) FC link and RAID arrays with multiple FC links are not uncommon.

    You are correct to note that stuff which is unused is not a factor, but there is currently no support for having the amount of IO for swap be a factor in the OOM killer.

    As for inactive programs in other workspaces, it’s a bit of a bummer when you switch sessions and suddenly your machine starts swapping without end.

    If you have only a single memory hungry process then the correct amount of swap is determined entirely by that program.

    The fact is that the 2*RAM advice is based on ancient versions of Unix and has never applied to Linux. I don’t believe that there is any cause to “tone down” any claims, merely to note exceptions to the general rule.

    The fact that the right amount of swap for a system could be anything from nothing to 10X RAM is entirely irrelevant when considering the validity of the “use 2*RAM” advice which is bandied about by people who don’t understand the issues involved.

  15. Pingback: Swap « Guadalinex
  16. Personalmente tengo 2gb de ram y 4gb de swap porque? Porque el autor de la nota usa su LinuX para navegar por internet a lo sumo (no se enojen) prueben poniendole un DOOM3 en ULTRA + wine con fotoshop CS y algun archivo grande. Todo depende para que usaran el sistema! Pero para el uso general que se le da a las maquinas con LinuX en oficinas estoy de acuerdo en que mas de 2gb de swap es al pedo, aparte teniendo un disco de 250-500gb ahorrar espacio para swap?! Saludos!

  17. Translation of wintch’s comment:
    Personally I have 2gb of ram and 4gb of swap. Why? Because the author of this post uses his Linux at most to surf the Internet (don’t get angry). Try running DOOM3 on ULTRA plus Photoshop CS under wine with some big file open. It all depends on what the system will be used for! But for the general use that Linux machines get in offices I agree that more than 2gb of swap is useless (literally, “is to fart”). Besides, with a 250-500gb disk, why save space on swap?! Greetings!

    I agree that there are specific cases where unusual amounts of swap are required. What I really object to is the use of formulas without understanding them or considering the implications. Myth based computing is a bad thing.

    I second the “fart on excess swap use” sentiment!

Comments are closed.