New version of Bonnie++ and Violin Memory


I have just released version 1.03e of my Bonnie++ benchmark [1]. The only change is support for direct IO (via the -D command-line parameter). The patch for this was written by Dave Murch of Violin Memory [2]. Violin specialise in 2RU storage servers based on DRAM and/or Flash storage. One of their products is designed to handle a sustained load of 100,000 write IOPS (in 4K blocks) and 200,000 read IOPS for its 10-year life (though it’s not clear whether you could do 100,000 writes AND 200,000 reads in the same second). The only pricing information they have online is a claim that flash costs less than $50 per gig. That would be quite affordable for dozens of gigs and not really expensive for hundreds of gigs, but as they are discussing a device with 4TB capacity it sounds rather expensive – though of course it would be a lot cheaper than using hard disks if you need that combination of capacity and performance.
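Direct IO bypasses the kernel page cache, so the benchmark measures the storage device rather than RAM. On Linux it is requested with the O_DIRECT open flag, which requires block-aligned buffers and transfer sizes. Here is a minimal sketch of the idea, assuming Linux (this is illustrative, not Bonnie++’s actual code; the file name and 4K block size are my assumptions):

```python
import mmap
import os
import tempfile

BLOCK = 4096  # O_DIRECT transfers must be block-aligned; 4K covers common devices

# An anonymous mmap is page-aligned, which satisfies O_DIRECT's
# buffer-alignment requirement (a plain bytes object might not).
buf = mmap.mmap(-1, BLOCK)
buf.write(b"\0" * BLOCK)

path = os.path.join(tempfile.mkdtemp(), "directio-test")
try:
    # O_DIRECT asks the kernel to bypass the page cache for this descriptor.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o600)
    written = os.write(fd, buf)
    os.close(fd)
except OSError:
    # Some filesystems (e.g. tmpfs) reject O_DIRECT with EINVAL.
    written = None
finally:
    if os.path.exists(path):
        os.unlink(path)
```

When the direct IO path works, the numbers reflect the device rather than cached writes, which is the point of the -D option.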

I wonder how much benefit you would get from using a Violin device to manage the journals for 100 servers in a data center. It seems that 1000 writes per second is near the upper end of the capacity of a 2RU server for many common work-loads; this is of course just a rough estimate based on observations of some servers that I run. If the main storage was on a SAN then using data journaling and putting the journals on a Violin device seems likely to improve latency (data is committed faster and the application can report success to the client sooner) while also reducing the load on the SAN disks (which are really expensive).
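The rough estimate above can be written out explicitly (the per-server rate is the post’s own rough figure, not a measured value):

```python
servers = 100
journal_writes_per_server = 1000   # rough per-server upper bound from the text
total_write_iops = servers * journal_writes_per_server

# The total happens to match the device's rated sustained write load
# of 100,000 IOPS, so one device could plausibly absorb the journal
# traffic of 100 such servers with no headroom to spare.
print(total_write_iops)  # 100000
```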

Now given that their price point is less than $50 per gig, a virtual hosting provider could offer really fast storage to their customers at quite an affordable price. $5 per month per gig for flash storage in a virtual hosting environment would be an attractive option for many people. Currently if you have a small service that you want hosted, a virtual server is the best way to do it, and as most providers offer little information on the disk IO capacity of their services it seems quite unlikely that anyone has taken serious steps to prevent high load from one customer degrading the performance of the rest. With flash storage you not only get a much higher number of writes per second, but one customer writing data won’t seriously impact read speed for other customers (with a hard drive, one process that does a lot of writes can cripple the performance of processes that do reads).
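As a back-of-envelope check, charging $5 per month per gig against hardware that costs $50 per gig would recover the raw flash cost in under a year, well inside the quoted 10-year device life (both prices are from the text; the amortisation is just arithmetic and ignores chassis, power, and margin):

```python
flash_cost_per_gig = 50      # vendor's claimed hardware cost ($/GB)
monthly_charge_per_gig = 5   # hypothetical hosting price from the text ($/GB/month)

months_to_recover_hardware = flash_cost_per_gig / monthly_charge_per_gig
print(months_to_recover_hardware)  # 10.0
```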

The experimental versions of Bonnie++ have better support for testing some of these usage scenarios. One new feature is measuring the worst-case latency of all operations in each section of the test run. I will soon release Bonnie++ version 1.99 which includes direct IO support; it should show some significant benefits for all usage cases involving Violin devices, ZFS (when configured with multiple types of storage hardware), NetApp Filers, and other advanced storage options.
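Worst-case latency measurement is conceptually simple: time every operation and keep the maximum, rather than only reporting the average. A minimal sketch of the idea (not Bonnie++’s actual implementation; the temp file, iteration count, and per-write fsync are my illustrative choices):

```python
import os
import tempfile
import time

worst_latency = 0.0
block = b"\0" * 4096

with tempfile.TemporaryFile() as f:
    fd = f.fileno()
    for _ in range(100):
        start = time.monotonic()
        os.write(fd, block)   # the operation being benchmarked
        os.fsync(fd)          # flush so the timing reflects the storage, not the cache
        elapsed = time.monotonic() - start
        worst_latency = max(worst_latency, elapsed)

print(f"worst-case write latency: {worst_latency * 1000:.2f} ms")
```

An average can hide a single multi-second stall; tracking the maximum is what exposes the latency spikes that matter for interactive workloads.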

For a while I have been dithering about the exact feature list of Bonnie++ 2.x. After some pressure from a contributor to the OpenSolaris project I have decided to freeze the feature list at the current 1.94 level plus direct IO support. This doesn’t mean that I will stop adding new features in the 2.0x branch, but I will avoid doing anything that can change the results. So in future, benchmark results made with Bonnie++ version 1.94 can be directly compared to results made with version 2.0 and above. There is one minor issue: new versions of GCC have in the past changed some of the benchmark results (the per-character IO test was the main one) – but that’s not my problem. As far as I am concerned Bonnie++ benchmarks everything from the compiler to the mass storage device in terms of disk IO performance. If you compare two systems with different kernels, different versions of GCC, or other differences then it’s up to you to make appropriate notes of what was changed.

This means that the OpenSolaris people can now cease using the 1.0x branch of Bonnie++, and other distributions can do the same if they wish. I have just uploaded version 1.03e to Debian and will request that it goes in Lenny – I believe that it is way too late to put 1.9x in Lenny. But once Lenny is released I will upload version 2.00 to Debian/Unstable and that will be the only version supported in Debian after that time.


7 thoughts on “New version of Bonnie++ and Violin Memory”

  1. -dsr- says:

    Thanks for writing Bonnie++. We used multiple copies running in parallel over the past week to stress a server sufficiently to uncover a RAM problem that memcheck couldn’t find.

  2. Chris Samuel says:

    So no random data for testing instead of just NULLs (which give compressed ZFS volumes unreal numbers) ? :-(

    Please ? Pretty please ?

    I did submit a patch a long time ago.. ;-)

  3. etbe says:

    dsr: Thanks for that, it’s good to know that people find it useful.

    Chris: I seem to have lost the patch, could you please re-send it?

  4. Chris Samuel says:

    Thanks Russell, I thought I’d resent it to you after we chatted at LUV, but I see no evidence of that in my sent folder.. Sorry!

    Now winging its way to you..

  5. Bradley says:

    I wouldn’t want to use a single box to handle hundreds of servers’ journals…

    I have recollections of people talking about using local nvram cards as local journal devices (data=journaled), but last time I wanted to do something like that (several years ago):

    – I couldn’t find anyone who’d actually done it
    – the kernel didn’t support using the same journal device for multiple devices, so you’d have to partition up the NVRAM if you had more than one partition
    – it was very hard to buy the cards, certainly in Australia (and google doesn’t suggest that anything has changed – I can’t even find who the manufacturer was; there’s a umem kernel module but I can’t find their webpage now)

    Has anything changed in the last few years? All google turns up is which doesn’t have a date, but it’s probably a bit old – it uses a 2.4 kernel on RH8 on a P4 with 512MB of RAM (and actually only simulates the NVRAM with a RAM disk)


  6. etbe says:

    It seems that UMEM has been bought out; the above URL has information on their latest products (including a PCIe card). They do provide better performance than flash-based devices allow (claiming 6,700,000 IOPS).

  7. Brian says:

    Just an FYI in case nobody else has mentioned it yet…

    Bonnie++ 1.03e no longer builds on Solaris. O_DIRECT is not something that Solaris knows about. Apparently the analog would be the directio(3C) function:

    Bonnie++ 1.03d builds fine.


Comments are closed.