When configuring servers, the trade-offs between RAM and disk are well known. If your storage is a little slow then you can often alleviate the performance problems by installing more RAM to increase caching and avoid swapping. If you have more than adequate disk IO capacity then you can over-commit memory and swap out the things that don't get used much.
One that often doesn't get considered is the trade-off between RAM and CPU. I just migrated a server image from a machine with two P4 CPUs to a DomU on a machine with Opteron CPUs. The P4 system seemed lightly loaded (a maximum of 30% CPU time in use over any 5 minute period) so I figured that if two P4 CPUs are 30% busy then a single Opteron core should do the job. It seems that when running 32-bit code, 30% of two 3.0GHz P4 CPUs is close to the CPU power of one core of an Opteron 2352 (2.1GHz). I'm not sure whether this is due to hyper-threading actually doing some good or to inefficiencies in running 32-bit code on the Opteron – but the Opteron is not giving the performance I expected in this regard.
Now having about 90% of the power of that CPU core in use might not be a problem, except that the load came in bursts. When a burst took the machine to 100% CPU use, a server process kept forking off children to answer requests. As all the CPU power was being used it took a long time to answer queries (several seconds), so the queue grew without bound. Eventually there were enough processes running that all memory was in use, the machine started thrashing, and the kernel's out-of-memory handler started killing things.
I rebooted the DomU with two VCPUs (two Opteron cores) and there was no problem: performance was good, and because the load bursts last less than a minute the load average stays below 1.
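For reference, here is a minimal sketch of the relevant settings in an xm domain config file (the file name, domain name, and memory value are made up for illustration) – Xen's xm config files are evaluated as Python:

    # /etc/xen/example-domu.cfg -- hypothetical DomU config fragment
    name   = "example-domu"
    memory = 1024      # MB of RAM for the DomU
    vcpus  = 2         # two VCPUs so a single load burst can't saturate the DomU

If the DomU kernel supports VCPU hotplug then "xm vcpu-set" can change the number of active VCPUs at run time (up to the configured maximum), which avoids the reboot.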
It seems that the use of virtual machines increases the scope of this problem. The advantage of virtual machines is that you can add extra virtual hardware more easily (up to the limit of the physical hardware of course) – I could give the DomU in question six Opteron cores in a matter of minutes if it was necessary. The disadvantage is that the CPU use of other virtual machines can impact the operation of yours. As there seems to be an exponential relationship between the number of CPU cores in a system and the overall price, it's not feasible to just put in 32-core Xen servers. While CPU power has generally been increasing faster than disk performance for a long time (at least the last 20 years), it seems that virtualisation provides a way of using a lot of that CPU power. It is possible to have a 1:1 mapping of real CPUs to VCPUs in Xen DomUs: if you were to install eight DomUs that each had one VCPU on a server with eight cores then there would be no competition between DomUs for CPU time – but that would significantly increase the cost of running them (some ISPs offer this for a premium price).
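To sketch what that 1:1 mapping looks like, the cpus option in the xm config pins a DomU's VCPUs to particular physical cores (the domain file names and core numbers here are invented, and core 0 is left for the Dom0):

    # /etc/xen/domu1.cfg -- hypothetical, pins this DomU's one VCPU to core 1
    vcpus = 1
    cpus  = "1"

    # /etc/xen/domu2.cfg -- the second DomU gets core 2 to itself, and so on
    vcpus = 1
    cpus  = "2"

With each DomU pinned to its own core, a CPU-hungry neighbour can't steal your cycles – which is exactly what you pay the premium for.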
In this example, if I had a burst of load on the service in question at the same time as other DomUs were using a lot of CPU time (which is quite possible, as the other DomUs are the clients of the service in question) then I might end up with the same problem in spite of having assigned two VCPUs to the DomU.
The real solution is to configure the server to limit the number of children that it forks off; the limit can be set high enough to allow 100% CPU use at times of peak load without being so high that the machine starts swapping.
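As a minimal sketch of this in Python (the port, the limit of 50 children, and the trivial request handler are all made-up values – a real server such as Apache has its own directive for this, MaxClients in the prefork MPM):

    #!/usr/bin/env python
    # Sketch of a forking TCP server that caps concurrent children.
    # Pick MAX_CHILDREN so that MAX_CHILDREN * memory-per-child stays
    # comfortably below the DomU's RAM.

    import os
    import signal
    import socket

    MAX_CHILDREN = 50      # enough to saturate the CPU, not enough to swap
    active_children = 0

    def reap_children(signum, frame):
        """Collect exited children and free up slots."""
        global active_children
        while True:
            try:
                pid, _ = os.waitpid(-1, os.WNOHANG)
            except OSError:
                break              # no children left
            if pid == 0:
                break              # remaining children still running
            active_children -= 1

    signal.signal(signal.SIGCHLD, reap_children)

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 8000))
    server.listen(128)     # the kernel queues connections while we're at the limit

    while True:
        try:
            conn, addr = server.accept()
        except socket.error:
            continue               # accept() was interrupted by SIGCHLD
        if active_children >= MAX_CHILDREN:
            conn.close()           # shed load instead of forking ourselves to death
            continue
        pid = os.fork()
        if pid == 0:               # child: answer one request and exit
            server.close()
            conn.sendall(b"hello\n")
            conn.close()
            os._exit(0)
        # parent: track the child and keep accepting (a production version
        # would block SIGCHLD around this update to avoid a race)
        active_children += 1
        conn.close()

The interesting design decision is what to do at the limit: closing the connection sheds load immediately, while simply not calling accept() would leave connections queued in the kernel's listen backlog until a child exits.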
I wonder how this goes with ISPs that offer Xen hosting. It seems that it would only take one customer on the same Xen server as you hitting such a situation to generate enough disk IO (from thrashing) to cripple the performance that you get.