Cnet has an article on the design of the Google servers [1]. It seems that their main servers are 2RU systems with a custom Gigabyte motherboard that takes only 12V DC input. The PSUs provide 12V DC and each system has a 12V battery backup to keep things running until a generator starts in the event of a power failure. They claim that they get better efficiency with small batteries local to the servers than with a single large battery array.
From inspecting the pictures it seems that the parts most likely to fail are attached with Velcro. The battery is at one end, the PSU is at the other, and the hard disks are along one side. It appears possible to replace the PSU or the battery while the server is running in the rack.
The hard disks are separated from the motherboard by what appears to be a small sheet of aluminium, which gives two paths for air to flow through the system. The thermal characteristics of the motherboard (CPUs) and the hard drives are quite different, so having separate air flows seems likely to allow warmer air to be used in cooling the system (thus saving power).
Google boast that their energy efficiency now matches what the rest of the industry aims to do by 2011!
The servers are described as taking up 2RU with two CPUs each, which gives a density of one CPU per RU. This surprised me as some companies such as Servers Direct [2] sell 1RU servers that have four CPUs (16 cores). Rackable Systems [3] (which just bought the remains of SGI) sells 2RU half-depth systems (which can allow two systems in 2RU of rack space) that have four CPUs and 16 cores (again 4 CPUs per RU). Rackable Systems also has a hardware offering designed for Cloud Computing servers; those CloudRack [4] systems have a number of 1RU trays, and each CloudRack tray can hold as many as two server boards, each with two CPUs (4 CPUs in 1RU), and 8 disks.
While I wouldn’t necessarily expect that Google would have the highest density of CPUs per rack, it did surprise me to see that they have 1/4 the CPU density of some commercial offerings and 1/8 the disk density! I wonder if this was a deliberate decision to use more server room space to allow slower movement of cooling air and thus save energy.
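To make the comparison concrete, here is a trivial sketch of the density arithmetic. It assumes two CPUs and two disks per Google server; the disk count is implied by the 1/8 figure rather than stated directly.

```python
# Density comparison between the Google 2RU server and a CloudRack tray.
# Assumption: two CPUs and two disks per Google server.
systems = {
    # name: (rack units, CPUs, disks)
    "Google custom server": (2, 2, 2),
    "CloudRack 1RU tray": (1, 4, 8),  # two dual-CPU boards, 8 disks
}

for name, (ru, cpus, disks) in systems.items():
    print(f"{name}: {cpus / ru:g} CPUs/RU, {disks / ru:g} disks/RU")

# Google custom server: 1 CPUs/RU, 1 disks/RU
# CloudRack 1RU tray: 4 CPUs/RU, 8 disks/RU
# That is 1/4 the CPU density and 1/8 the disk density.
```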
It’s interesting to note that Google have been awarded patents on some of their technology related to the batteries. Are there no journalists reading the new patents? Surely anyone who saw such patents awarded to Google could have published most of this news before Cnet got it.
Now, I wonder how long it will take for IBM, HP, and Dell to start copying some of these design features. Not that I expect them to start selling their systems by the shipping crate.
I wish I had read the date of the article. Got me.
They don’t look 19″ wide, so more of them can fit in than regular 2RU servers. Probably a better measure is CPUs per container: they can apparently fit 1160 servers per container, versus say the Sun container, which has 280RU available; at four CPUs per RU that would be 1120 CPUs, against 2320 CPUs for Google’s 1160 dual-CPU servers. So Google are achieving twice the density. They say in a video that they have some storage trays which have more disks.
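For anyone checking the arithmetic, a minimal sketch (assuming two CPUs per Google server and the four CPUs per RU figure from the post):

```python
# Container-level CPU density: Google vs a conventional 4-CPUs-per-RU fit-out.
google_cpus = 1160 * 2        # 1160 servers per container, 2 CPUs each = 2320
conventional_cpus = 280 * 4   # Sun container: 280RU at 4 CPUs/RU = 1120

print(google_cpus / conventional_cpus)  # ~2.07, i.e. about twice the density
```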
Bart: What makes you believe it’s a joke?
James: Based on my measurements from the pictures my best estimate of the width was about 15 inches. That is notably smaller than the 19 inch standard rack width, but definitely too wide to allow two side by side in a single rack (half-width rack-mount servers are not uncommon).
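For what it’s worth, such an estimate can be made by scaling against a component of known size in the photo; something like the following, with hypothetical pixel values (a 3.5″ hard drive is 4 inches wide):

```python
# Estimate chassis width from a photo by scaling against a known part.
# The pixel measurements below are hypothetical, for illustration only.
KNOWN_WIDTH_INCHES = 4.0   # a 3.5" hard drive is 4 inches (101.6mm) wide
drive_width_px = 80        # hypothetical: drive width measured in the photo
chassis_width_px = 300     # hypothetical: chassis width in the same photo

inches_per_px = KNOWN_WIDTH_INCHES / drive_width_px
print(chassis_width_px * inches_per_px)  # 15.0 inches
```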
I guess that they have their own rack style that supports ~15 inch wide servers. For the number of servers that they run it would not be difficult to get unique racks manufactured. Probably no-one manufactures racks that fit well in a shipping container anyway.