SSD for a Workstation

SSDs have been dropping in price recently so I just bought four Intel 120G devices for $115 each. I installed the first one for my mother-in-law, who had been complaining about system performance. Her system boot time went from 90 seconds to 20 seconds and a KDE login went from about 35 seconds to about 10 seconds. The real problem she had reported was occasional excessive application delay; while it wasn’t possible to diagnose that properly, I think it was a combination of her MUA doing synchronous writes while other programs such as Chromium were also doing disk IO. To avoid the possibility of a CPU performance problem I replaced her 1.8GHz E4300 system with a 2.66GHz E7300 that I got from a junk pile (it’s amazing what’s discarded nowadays).

I also installed an SSD in my own workstation (a 2.4GHz E4600). The boot time went down from 45s on Ext4 without an encrypted root to 27s with root on BTRFS, including the time taken to enter the encryption password (maybe about 23s excluding my typing time). The improvement wasn’t as great, but that’s because my workstation does some things on bootup that aren’t dependent on disk IO, such as enabling a bridge with STP (making every workstation a bridge is quieter than using switches). KDE login went from about 27s to about 12s, and the time taken to start Chromium and have it be usable (rather than blocking on disk IO) went from 30 seconds to an almost instant response (maybe a few seconds)! Tests on another system indicate that Chromium startup could be improved a lot by purging history, but I don’t want to do that. It’s unfortunate that Chromium only supports deleting recent history (to remove incriminating entries) but doesn’t support deleting ancient history that just isn’t useful.

I didn’t try to seriously benchmark the SSD (changing from Ext4 to BTRFS on my system would significantly reduce the accuracy of the results); I have plans to do that on more important workloads in the near future. For the moment even casual tests have shown a significant performance benefit, so it’s clear that an SSD is the correct storage option for any new workstation which doesn’t need more than 120G of storage space. $115 for an SSD vs $35 for a HDD is a fairly easy choice for a new system. For larger capacities the price of hard drives increases more slowly than that of SSDs.

In spite of the performance benefits I doubt that I will gain a real return on the time invested within the next year. The time taken to install the SSD equates to dozens of boot cycles, and with a typical workstation uptime in excess of a month it will take a long time to accumulate that many boots. One minor benefit is that deleting messages in KMail is now an instant operation, which saves a little annoyance, and there will be other occasional benefits.

One significant extra benefit is that an SSD is quiet and dissipates less heat, which might allow the system cooling fans to run more slowly. As noisy computers annoy me, an SSD is a worthwhile luxury. Also it’s good to test new technologies that my clients may need.

The next thing on my to-do list is to do some tests of ZFS with an SSD for the L2ARC and ZIL.
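
For reference, adding an SSD to an existing pool for those roles is a one-line operation per device. The pool name “tank” and the partition paths below are assumptions for illustration:

    # use one SSD partition as a read cache (L2ARC)
    zpool add tank cache /dev/sdc1
    # use another SSD partition as a separate intent log (ZIL)
    zpool add tank log /dev/sdc2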

13 comments to SSD for a Workstation

  • If you need more than 120 GiB, a common pattern is to use one SSD for / and one HDD for /home, which does not need such fast read times.
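
    As a rough sketch of that layout, /etc/fstab might contain something like the following (device names and mount options are assumptions, adjust for your hardware):

    # SSD holds the root filesystem
    /dev/sda1  /      ext4  noatime,discard  0  1
    # HDD holds /home, where capacity matters more than speed
    /dev/sdb1  /home  ext4  defaults         0  2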

  • Kevin Krammer

    I did such a replacement about two years ago and while I certainly liked the speed-up I found that the best part for me was the reduction in noise.
    The laptop’s fan basically never spun up during normal usage after the switch, and as that was the only moving component left I had a totally silent machine.

  • Julian Andres Klode

    Elessar: Really, you need a really really fast /home if you want Chromium to start fast. On my system, .config/chromium is roughly 450 MiB in size, and Chromium needs to read those files at startup.

    Likewise, for desktop startup, you need to read desktop configuration files and this also needs a fast /home.

  • etbe

    Elessar and Julian:

    I guess the main issue is how much storage you need and where you need it. My workstation currently has everything fitting in 120G with room to spare as the big files are on the server. My laptop also has significantly less than 120G in use because I don’t generally do big things when on the move (I might take a few gigs of video to watch when travelling, but I wouldn’t take my entire archive of interesting YouTube videos and conference videos).

    As 120G isn’t the limit (240G SSDs are affordable for desktop systems and Wikipedia suggests that up to 480G can be obtained in a 2.5″ form factor) I don’t expect to have any serious problems in this regard.

    In terms of managing storage I’ve recently moved to the model of having a ~/src filesystem on my workstations. The main /home filesystem will be configured for maximum reliability (e.g. BTRFS with RAID-1, which can prevent common data loss even on a single disk) while ~/src won’t (anything that hasn’t been committed to Git or uploaded to Debian probably isn’t that important).
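
    A minimal sketch of creating such a filesystem (device names are assumptions; on a single disk the same protection can be approximated by giving BTRFS two partitions of the one disk):

    # mirror both data and metadata so losing one copy isn't fatal
    mkfs.btrfs -m raid1 -d raid1 /dev/sdb1 /dev/sdc1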

    There are lots of other ways of separating storage if there were a real benefit. For example I have 2G of disk space used for email archives which aren’t accessed that often; if fast storage was really at a premium then I could put that on slow storage. But really it’s not an issue with current hardware. SSD capacities are way bigger than the data I use most of the time.

    It would be really nice if laptops were designed with something like 32G of fast flash storage on the motherboard as well as a SATA disk for large storage. 32G is enough for the OS and everything that needs to be fast, so the hard drive could be spun down whenever big files aren’t being used. My ThinkPad has an SD card reader built in; maybe I should try getting that going as an OS device (it might not work as /boot but for / it should be fine).
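
    Spinning the disk down when idle can be done with hdparm; the -S value is in units of 5 seconds, so this example (the device name is an assumption) sets a 10 minute timeout:

    # spin the HDD down after 10 minutes of inactivity
    hdparm -S 120 /dev/sdb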

    Kevin: Hopefully when I move to a SATA SSD or root on SD for my ThinkPad the system heat problems will be reduced. In warmer weather the noise from the cooling fans is really annoying.

  • The one I’m hanging out for is bcache. That’s essentially an SSD cache layer in front of the hard disk(s), for both reads and writes, so it should in theory allow SSD speeds with HDD capacity (and HDD redundancy, if you’re using RAID). And because it’s supposed to be a drop-in cache, you don’t have to change your FS or reinstall with ZFS.

    The only problem is it’s taking _ages_ to get merged into the mainline Linux kernel, and the price of SSDs is really dropping fast (which pushes up the practicality and need for this)… so hopefully it gets merged sooner rather than later.
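
    Once it’s merged, usage should look something like the following, going by the bcache-tools documentation (device names are assumptions):

    # format the HDD as a backing device and the SSD as a cache device
    make-bcache -B /dev/sdb
    make-bcache -C /dev/sdc
    # attach the cache set using the UUID printed by make-bcache,
    # then use /dev/bcache0 like a normal block device
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach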

  • etbe

    Nick: Bcache sounds interesting, but it’ll be a couple of years before the next Debian release, which limits my ability to use it in the near future.

    By the time of the next Debian release we should have a lot of new BTRFS features that make it compete better with ZFS.

  • Thiago Jung Bauermann

    Have the reliability problems been solved already?

    I’ve read stories of SSD drives failing within one or two years after being bought.

  • etbe

    Thiago: Sometimes HDDs fail within a year or two…

    I’m not aware of SSDs having serious problems if you get a good brand. But as always if you want to keep your data then you use RAID, good backups, or both.

  • Re: Have the reliability problems been solved already?

    It seems to matter a lot which brand of SSD you get. Last I read, Intel was the most reliable – some stats on return rates, from the end of 2011, from a retailer:
    http://www.behardware.com/articles/843-7/components-returns-rates-5.html

  • etbe

    http://www.behardware.com/articles/862-7/components-returns-rates-6.html

    Nick: Thanks for that. Above is a link to a more recent version of the same thing.

  • etbe

    http://blog.cihar.com/archives/2012/07/13/intel-ssd-firmware-update-linux/?utm_source=rss2

    The above post about using GRUB to load an ISO image for updating the firmware in an Intel SSD is worth noting. The links from behardware show that Intel is among the best options, so methods of making Intel SSDs work well are noteworthy.
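
    One way to do this is to boot the vendor ISO via SYSLINUX’s memdisk from a GRUB 2 menu entry; a sketch follows (the memdisk path and the ISO filename are assumptions, adjust for your system). The entry can go in /etc/grub.d/40_custom, followed by a run of update-grub:

    menuentry "Intel SSD firmware update" {
        # memdisk ships with SYSLINUX; the iso parameter tells it
        # to treat the initrd that follows as an ISO image
        linux16 /boot/memdisk iso
        initrd16 /boot/issdfut.iso
    }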

  • pablo

    Could you please explain how you use the workstations as a switch replacement? Sounds interesting.

  • etbe

    auto xenbr0
    iface xenbr0 inet static
        address 10.0.0.1
        network 10.0.0.0
        netmask 255.255.255.0
        broadcast 10.0.0.255
        # the Ethernet ports to put in the bridge
        bridge_ports eth0 eth1 mb0
        # run the Spanning Tree Protocol
        bridge_stp on
        # forwarding delay in seconds
        bridge_fd 3

    pablo: On a Debian system I use something like the above in /etc/network/interfaces after installing the bridge-utils package. That will put the devices eth0, eth1, and mb0 into the bridge so the three Ethernet ports look like they are on a switch. The forwarding delay is set to 3 seconds, which is quite low; for big and complex networks that might cause problems, but for my home network it’s fine and it ensures that I don’t get DHCP timeouts.
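
    To check that the bridge came up with the expected ports and STP state, bridge-utils provides the brctl command:

    # list bridges and their member ports
    brctl show
    # show spanning tree details for the bridge
    brctl showstp xenbr0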