
Swap Space and SSD

In 2007 I wrote a blog post about swap space [1]. The main point of that article was to debunk the claim that Linux needs a swap space twice as large as main memory (in summary, that advice comes from BSD Unix systems, has never applied to Linux, and most storage devices aren’t fast enough to make such a large swap usable). That post was picked up by Barrapunto (Spanish Slashdot) and became one of the most popular posts I’ve written [2].

In the past 7 years things have changed. Back then 2G of RAM was still a reasonable amount and 4G was a lot for a desktop system or laptop. Now there are even phones with 3G of RAM, 4G is about the minimum for any new desktop or laptop, and desktop/laptop systems with 16G aren’t that uncommon. Another significant development is the use of SSDs which dramatically improve speed for some operations (mainly seeks).

As SATA SSDs for desktop use start at about $110, I think it’s safe to assume that everyone who wants a fast desktop system has one. As a major limiting factor in swap use is the seek performance of the storage, the use of SSDs should allow greater swap use. My main desktop system has 4G of RAM (it’s an older Intel 64bit system and doesn’t support more) and has 4G of swap space on an Intel SSD. My work flow involves having dozens of Chromium tabs open at the same time, and performance usually starts to drop when I get to about 3.5G of swap in use.
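For reference, the amount of swap in use can be read straight from /proc/meminfo at any time (a minimal sketch; `free -h` from procps reports the same figures in a friendlier format):

```shell
# Show total and free swap as reported by the kernel
grep -E 'SwapTotal|SwapFree' /proc/meminfo
```

Subtracting SwapFree from SwapTotal gives the amount of swap currently in use.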

While SSDs generally have excellent random IO performance, their contiguous IO performance often isn’t much better than that of hard drives. My Intel SSDSC2CT12 300i 128G can do over 5000 random seeks per second but for sustained contiguous filesystem IO can only do 225M/s for writes and 274M/s for reads. That contiguous IO performance is less than twice as good as a cheap 3TB SATA disk. It also seems that the performance of SSDs isn’t as consistent as that of hard drives; when a hard drive delivers a certain level of performance it can generally do so 24*7, but an SSD will sometimes reduce performance to move blocks around (the erase block size is usually a lot larger than the filesystem block size).
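Sustained contiguous figures like these can be roughly reproduced with dd (a crude sketch, not a proper benchmark tool like fio; the file path and size are arbitrary, and the page cache should be dropped for honest read numbers):

```shell
# Sequential write: create a 256M test file, forcing data to the device
dd if=/dev/zero of=/tmp/seqtest bs=1M count=256 conv=fsync

# Sequential read of the same file (the kernel cache will inflate this
# unless caches are dropped first, as root: echo 3 > /proc/sys/vm/drop_caches)
dd if=/tmp/seqtest of=/dev/null bs=1M

# Clean up
rm /tmp/seqtest
```

dd prints the throughput of each transfer on stderr when it finishes.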

It’s obvious that SSDs allow significantly better swap performance and therefore make it viable to run a system with more swap in use, but that doesn’t allow unlimited swap. Even when using programs like Chromium (which seems to allocate huge amounts of RAM that aren’t used much) it doesn’t seem viable to have swap much bigger than 4G on a system with 4G of RAM. Now I could buy another SSD and use two swap spaces for double the overall throughput (which would still be cheaper than buying a PC that supports 8G of RAM), but that still wouldn’t solve all problems.
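The two-swap-space idea works because the kernel stripes pages across swap devices that share the same priority. A hypothetical /etc/fstab sketch (the device names are made up):

```
# Two swap partitions at equal priority; the kernel interleaves pages
# across devices of the same priority, roughly doubling swap throughput
/dev/sda2   none   swap   sw,pri=10   0   0
/dev/sdb2   none   swap   sw,pri=10   0   0
```

With different priorities the kernel fills the higher-priority device first instead of striping.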

One issue I have been having on occasion is BTRFS failing to allocate kernel memory when managing snapshots. I’m not sure if this would be solved by adding more RAM as it could be an issue of RAM fragmentation – I won’t file a bug report about this until some of the other BTRFS bugs are fixed. Another problem I have had is the driver for my ATI video card failing to allocate contiguous kernel memory when running Minecraft; that one almost certainly wouldn’t be solved by just adding more swap – but might be solved if I tweaked the kernel to be more aggressive about swapping out data.
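Making the kernel more aggressive about swapping is a one-line sysctl change (a sketch; the default vm.swappiness is usually 60, and 100 here is just an illustrative value to tune to taste):

```
# /etc/sysctl.conf – higher values make the kernel more willing to swap
# out idle anonymous pages instead of dropping page cache.
# Apply with "sysctl -p" or automatically at boot.
vm.swappiness = 100
```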

In 2007 when using hard drives for swap I found that the maximum space that could be used with reasonable performance for typical desktop operations was something less than 2G. Now with a SSD the limit for usable swap seems to be something like 4G on a system with 4G of RAM. On a system with only 2G of RAM that might allow the system to be usable with swap being twice as large as RAM, but with the amounts of RAM in modern PCs it seems that even SSD doesn’t allow using a swap space larger than RAM for typical use unless it’s being used for hibernation.

Conclusion

It seems that nothing has significantly changed in the last 7 years. We have more RAM, faster storage, and applications that are more memory hungry. The end result is that swap still isn’t very usable for anything other than hibernation if it’s larger than RAM.

It would be nice if application developers could stop increasing the use of RAM. Currently it seems that the RAM requirements for Linux desktop use are about 3 years behind the RAM requirements for Windows. This is convenient as a PC is fully depreciated according to the tax office after 3 years. This makes it easy to get 3 year old PCs cheaply (or sometimes for free as rubbish) which work really well for Linux. But it would be nice if we could be 4 or 5 years behind Windows in terms of hardware requirements to reduce the hardware requirements for Linux users even further.

12 comments to Swap Space and SSD

  • I really like the idea of making things work on a just-replaced computer. Because free computers.

    Is it my imagination, or are people and companies keeping their MS-Windows computers longer? I know that the depreciation cycle hasn’t changed (and it takes a long time to change the law) but there seem to be plenty of 5+ year old PCs even in companies with decent budgets.

  • Anon

One issue you don’t mention here and in the old post is hibernation. Since it doesn’t seem to be possible to have a dedicated hibernation partition you have to have a swap partition as large as your RAM, whether you want to or not. I have 4G of RAM in my laptop and in order to have reliable hibernation I’m forced to keep a 4G swap partition around which I otherwise don’t want or need, and that really kind of sucks.

  • etbe

    Actually I mention hibernation twice in this post.

  • Andrew Dorney

I’d be more concerned about using an SSD for swap space given that SSDs can sustain a significantly smaller number of writes before they die compared to a regular HDD.

  • etbe

    http://etbe.coker.com.au/2012/05/22/another-usb-flash-failure/
    http://etbe.coker.com.au/2012/08/28/ssd-workstation/

Andrew: USB-Flash devices wear out quickly, I’ve written about this at the above URLs. I believe that SATA SSD devices are designed to sustain a significant number of writes, and I’ve had 3 of them in service since mid 2012 without any problems. On the workstation I’m using to write this comment writes to the root filesystem outnumber the writes to swap by a factor of 42, so if it does fail then it probably won’t be due to swapping.

    Admittedly if I was using a filesystem other than BTRFS the number of writes might be smaller (copy-on-write and snapshots cause more writes). But I want the features of BTRFS.
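That write ratio can be estimated from /proc/diskstats, which counts sectors written per device since boot (a sketch; the partition names sda1 and sda2 are hypothetical – substitute your root and swap partitions):

```shell
# Field 3 is the device name, field 10 is sectors written since boot
awk '$3 == "sda1" || $3 == "sda2" { print $3, $10 }' /proc/diskstats
```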

  • Andrew Dorney

    > On the workstation I’m using to write this comment writes to the root filesystem outnumber the writes to swap by a factor of 42

    Extra BTRFS writes aside, that’s impressive. I assumed swap was used a lot more than that, even on systems with plenty of RAM, for “more urgent than the regular file system but less urgent than RAM” lookups. Concerns allayed!

  • My main PC, that I use most of the time for surfing around, reading online and writing is actually a pimped FSC Futro S500 thin-client with 2x 1.2 GHz.
    It has two GB RAM only, and practically it never swaps.
    It is an embedded desktop if you like, with a power-consumption of less than 25 Watt.
    Instead of a hard disk it has a CompactFlash-card. It is possible to use the GIMP, but in practice you would rather keep to doing office-work and use the other heavier AMD-PC for things that do require computational power.
I am toying with the idea of using an Xbox 360 with a fast SSD for quick swapping, because it has only 512 MB of RAM but a very fast PowerPC-CPU.
The Xbox itself is available at quite low cost, but a modchip is necessary in order to be able to install and boot Linux, plus a fast SSD. It is quite adventurous and not really affordable right now, but I will probably try it out when there is a good occasion some time.

  • etbe

    http://etbe.coker.com.au/2014/04/27/swap-breaking-ssd/

    I’ve written a blog post doing the maths related to swap use and SSD.

I’m not sure swap is really useful anymore. Too often when a system is thrashing it will grind to a halt, and it’s easier to simply reboot than wait it out. Having the OOM killer do its thing is generally preferable.

    Now, Linux doesn’t actually do swapping, but paging. I’m not sure if it manages to collate reads into consecutive, large reads or if it actually loads one page at a time (with a potential 10ms seek time for every read – probably not?), but I wonder if swapping might be better? Just pick a large process, write it all to disk, and keep running the rest of the system for a bit without it.

  • etbe

Ketil: If you have swap available then there is almost always some used. Many applications write to heap pages that are not accessed again afterwards; this includes some long lived daemons. When such data is paged out it allows more memory for disk caching and probably improves performance a little. Admittedly a system with only a dozen megs of swap used probably isn’t getting a lot of benefit, but there is a theoretical benefit.

    My EeePC 701 has 1G of RAM and 128M of swap. It doesn’t swap enough to risk damaging the SD card (which probably won’t last nearly as long as the SATA SSDs I’m using in workstations) but it allows a bit of extra caching which is good considering how horribly slow the internal storage of the 701 is.

  • Duncan

    etbe> If you have swap available then there is almost always some used.

    Not really. RAM is cheap these days, and from my experience, once you reach something approaching 2 GiB plus 1.5 GiB per core (so 8 gigs on my quad-core, 11 gigs on my 6-core), most of the time there’s gigs of RAM free even with cache using as much as it can. IOW, absolutely zero swapping.

    My current main system is a 16 GiB 6-core, and true to that formula, I rarely go below 5 GiB RAM entirely free (that is, with cache and buffers included in used, not in free), so there’s absolutely zero swap pressure and no swap usage.

    In fact, current usage is ~1.6 GiB apps, rounding error (2 MiB) buffers, and ~2.9 GiB cache, so ~4.5 GiB used, thus over 11 GiB entirely free and unused. (That’s with firefox, a konsole, smplayer, pan and two copies of claws-mail, among other things, running on a Gentoo/KDE4 desktop but with unnecessary USE flags including semantic-desktop turned off at build time, so runtime usage is somewhat lower than it would be on the binary distributions that tend to enable pretty much everything at build time, thus loading more into memory for the same apps.)

With those numbers I wasn’t seeing any swap usage at all, so I finally disabled it in my kernel build and didn’t set up any partitions for it at my last storage upgrade. (This system didn’t work well with hibernate anyway, so no swap needed for that, tho it works very well with suspend2ram.)

  • etbe

Duncan: Firstly that does depend a bit on the kernel; the paging algorithms tend to change from time to time. Seeing a system with little load that has ~100K of swap in use isn’t that uncommon; sometimes the kernel swaps out some pages to make more room for cache.

    8G isn’t that much for a modern workstation. Recent versions of KDE and Chrome/Chromium can use that easily if you use Kmail etc.

    16G for a personal workstation is a lot and as you note swap isn’t really needed for that.