Pollution and Servers

There is a lot of interest in making organisations “green” nowadays. One issue is how to make the IT industry green. People are talking about buying “offsets” for CO2 production, but the concern is that some of the offset schemes are fraudulent. Of course the best thing to do is to minimise the use of dirty power as much as possible.

The first thing to do is to pay for “green power” (if available) and, if possible, install solar PV systems on building roofs. While the roof space of a modern server room would only supply a small fraction of the electricity needed (maybe less than is needed to power the cooling), every little bit helps. The roof space of an office building can supply a significant portion of its electricity needs; two years ago Google started work on installing solar PV panels on the roof of the “Googleplex” [1] with the aim of supplying 30% of the building’s power needs.

For desktop machines a significant amount of power can be saved if they are turned off overnight. For typical office work the desktop machines should be idle most of the time, so if the machine is turned off outside business hours then it will use something close to 45/168 of the power that it might otherwise use (45 business hours out of the 168 hours in a week). Of course this requires that the OS support hibernation (which isn’t supported well enough in Linux for me to want to use it) or that applications can be easily stopped and restarted so that the system can be booted every morning. One particular corner case is that instant-messaging systems need to be server-based with an architecture that supports storing messages on the server (as Jabber does [2]) rather than requiring that users stay connected (as IRC does). Of course there are a variety of programs to proxy the IRC protocol, and using screen on a server to maintain a persistent IRC presence is popular among technical users (for a while I used that at a client site so that I could hibernate the PowerMac I had on my desktop when I left the office).
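
As a rough sketch of that arithmetic (the 9 hour day and 5 day week are assumptions, as is the machine drawing negligible power while switched off):

```python
# A minimal sketch of the overnight-shutdown arithmetic. The 9 hour
# working day and 5 day week are assumptions, as is the machine drawing
# negligible power while switched off.
HOURS_PER_WEEK = 24 * 7   # 168
BUSINESS_HOURS = 9 * 5    # 45

fraction_on = BUSINESS_HOURS / HOURS_PER_WEEK
print(f"Powered on for {fraction_on:.0%} of the week")        # about 27%
print(f"Saving roughly {1 - fraction_on:.0%} of the power")   # about 73%
```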

It seems that most recent machines have BIOS support for booting at a pre-set time. This would allow the sys-admin to configure the desktop machines to boot at 8:00AM on every day that the office is open. That way most employees will arrive at work to find that their computer is already booted up and waiting for them. We have to keep in mind the fact that when comparing the minimum pay (about $13 per hour in Australia) with typical electricity costs ($0.14 per kWh – a desktop that draws about 100W over a ten hour day uses roughly 1kWh, or $0.14 of electricity per day) there is no chance of saving money if employee time is wasted. While companies are prepared to lose some money in the process of going green, they want to minimise that loss as much as possible.
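
Putting those figures together, here is a minimal sketch of the break-even arithmetic (the wage and electricity figures are the ones quoted above; everything else will vary):

```python
# How much minimum-wage employee time costs the same as a full day of
# desktop electricity, using the figures quoted above.
WAGE_PER_HOUR = 13.00        # minimum pay in Australia, AUD
POWER_COST_PER_DAY = 0.14    # desktop electricity per day, AUD

break_even_seconds = POWER_COST_PER_DAY / WAGE_PER_HOUR * 3600
print(f"One day of desktop power costs the same as about "
      f"{break_even_seconds:.0f} seconds of minimum-wage time")
# About 39 seconds - so an employee waiting even a minute for a boot
# costs more than the day's electricity saving, hence the 8AM pre-boot.
```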

The LessWatts.org project, which is dedicated to saving energy on Linux systems, reports that Gigabit Ethernet uses about 2W more power than 100baseT on the same adapter [3]. It seems most likely that similar savings can be achieved on other operating systems and with other network hardware. So I expect that using 100baseT speed would not only save about 2W at the desktop end, but would also save about 2W at the switch in the server-room and maybe 1W in cooling as well. If you have a 1RU switch with 24 Gig-E ports then running the entire switch at 100baseT speed could save 48W; compared to a modern 1RU server which might draw a minimum of 200W that isn’t very significant.
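
If the 2W per port is worth chasing, the link speed can be capped administratively. Here is a minimal sketch, assuming Linux with the ethtool utility installed and an interface named eth0 (both are assumptions – adjust for your environment):

```python
# Cap NICs at 100baseT to save roughly 2W at each end of the cable.
# Assumes Linux with ethtool installed and must be run as root; the
# interface names are examples only.
import subprocess

INTERFACES = ["eth0"]  # adjust for your machines

for iface in INTERFACES:
    # Leave autonegotiation on but only advertise 100baseT full duplex
    # (bitmask 0x008), which is gentler on switches than forcing the
    # speed with autoneg off.
    subprocess.run(["ethtool", "-s", iface, "advertise", "0x008"],
                   check=True)
```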

The choice of server is going to be quite critical to power use. It seems that all vendors are producing machines that consume less power (if only so that they can get more servers installed without adding more air-conditioning), so some effort in assessing power use before purchase could produce some good savings. When it comes time to decommission old servers it is a good idea to measure the power use and decommission the most power hungry ones first whenever convenient, as in the sketch below. I am not running any P4 systems 24*7 but I have a bunch of P3 systems running as servers; this saves me about 40W per machine.
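
A trivial way to act on such measurements is to rank the machines by measured draw (the host names and wattages below are made-up examples, not measurements):

```python
# Rank servers by measured power draw to pick decommissioning targets.
# The host names and wattages are hypothetical examples.
measured_watts = {"oldweb": 180, "p3mail": 45, "p4build": 220}

for host, watts in sorted(measured_watts.items(),
                          key=lambda kv: kv[1], reverse=True):
    kwh_per_year = watts * 24 * 365 / 1000
    print(f"{host}: {watts}W, about {kwh_per_year:.0f}kWh per year")
```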

It’s usually the case that the idle power is a significant portion of the maximum power use. In the small amount of testing I’ve done I’ve never found a case where idle power was less than 50% of the maximum power – of course if I spun down a large number of disks when idling this might not be the case. So if you can use one virtual server that’s mostly busy instead of a number of mostly idle servers then you can save significant amounts of power. Before I started using Xen I had quite a number of test and development machines and often left some running idle for weeks (if I was interrupted in the middle of a debugging session it might take some time to get back to it). Now if one of my Xen DomUs isn’t used for a few weeks it consumes little electricity beyond what the host would draw anyway. It is also possible to suspend Xen DomUs to disk when they are not being used, but I haven’t tried going that far.
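
A rough consolidation example using that 50% idle figure (the wattages and server count are assumptions for illustration):

```python
# Compare several mostly-idle servers against one consolidated host.
# Assumes idle power is 50% of maximum and uses made-up wattages.
MAX_WATTS = 200
IDLE_WATTS = MAX_WATTS * 0.5
N_SERVERS = 5

separate = N_SERVERS * IDLE_WATTS   # five machines sitting mostly idle
consolidated = MAX_WATTS            # one mostly-busy machine at full load
print(f"Separate: {separate:.0f}W, consolidated: {consolidated:.0f}W, "
      f"saving {separate - consolidated:.0f}W")
# 500W vs 200W - a 300W saving in this example.
```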

Xen has a reputation for preventing the use of power-saving features in hardware. For a workstation this may be a problem, but for a server that is actually getting used most of the time it should not be an issue. KVM development is apparently making good progress, and KVM does not suffer from any such problems. Of course the downside to KVM is that it requires an AMD64 (or Intel clone) system with hardware virtualisation, and such systems often aren’t the most energy efficient. A P3 system running Xen will use significantly less power than a Pentium-D running KVM – server consolidation on a P3 server really saves power!

I am unsure of the energy benefits of thin-client computing. I suspect that thin clients can save some energy, as the clients take ~30W instead of ~100W, so even if a server for a dozen users takes 400W there will still be a net benefit. One of my clients does a lot of thin-client work, so I’ll have to measure the electricity use of their systems.
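
The rough arithmetic, using the estimates above (none of these wattages are measurements):

```python
# Net saving from thin clients, using the rough estimates quoted above.
USERS = 12
DESKTOP_WATTS = 100
THIN_CLIENT_WATTS = 30
SERVER_WATTS = 400

desktops = USERS * DESKTOP_WATTS
thin_clients = USERS * THIN_CLIENT_WATTS + SERVER_WATTS
print(f"Desktops: {desktops}W, thin clients plus server: "
      f"{thin_clients}W, net saving {desktops - thin_clients}W")
# 1200W vs 760W - a net saving of 440W for a dozen users, if the
# estimates hold. Measurement is still needed to confirm them.
```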

Disks take a significant amount of power. For a desktop system they can be spun down at times (an office machine can be configured so that the disks spin down during a lunch break). This can save 7W per disk – the exact amount depends on the type of disk and the efficiency of the PSU (see the Compaq SFF P3 results and the HP/Compaq Celeron 2.4GHz results on my computer power use page [4]). Network booting of diskless workstations could save 7W for the disk (and also reduce the noise, which makes the users happy) but would drive the need for Gigabit Ethernet, which then wastes 4W per machine (2W at each end of the Ethernet cable).
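
On Linux the spin-down timeout can be set with hdparm; a minimal sketch, assuming the disk is /dev/sda (the device name and timeout are examples):

```python
# Set a 10 minute spin-down timeout so a desktop disk stops during a
# lunch break. Assumes Linux with hdparm and root access; /dev/sda is
# an example device name. For hdparm -S, values from 1 to 240 are
# multiples of 5 seconds, so 120 means 600 seconds.
import subprocess

subprocess.run(["hdparm", "-S", "120", "/dev/sda"], check=True)
```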

Recently I’ve been reading about the NetApp devices [5]. By all accounts the advanced features of the NetApp devices (which include their algorithms for using NVRAM as a write-back cache and the filesystem journaling which allows most writes to be full stripes of the RAID) allow them to deliver performance that is significantly greater than a basic RAID array with a typical filesystem. It seems to me that a small number of disks in a NetApp device might replace a larger number of disks that are directly connected to hosts. Therefore the use of NetApp devices could save electricity.

Tele-commuting has the potential to save significant amounts of energy in employee travel. A good instant-messaging system such as Jabber could assist tele-commuters (it seems that a Jabber server is required for saving energy in a modern corporate environment).

Have I missed any ways that sys-admins can be involved in saving energy use in a corporation?

Update: Albert pointed out that SSDs (Solid State Disks) can save some power. They also reduce the noise of the machine, both by removing one moving part and by reducing heat (and therefore the operation of the cooling fan). They have less capacity than hard disks, but are large enough for an OS to boot from (some companies deliberately use only a small portion of the hard drives in desktop machines to save space on backup tapes). It’s strange that I forgot to mention this as I’m about to buy a laptop with an SSD.

7 comments to Pollution and Servers

  • What kinds of hibernation problems are you having? I’m seeing it work fine.

    The number one thing you can do to break the user’s habit of leaving the computer on is to make it start quickly — not just OS boot, but total time from first touch to seeing the first email of the day. Office workers don’t leave the electric pencil sharpener running, because it starts right up when you put the pencil in.

  • Hi Russell! Great post. The one thing I think you’ve missed is the heat factor of computers. Most server rooms require insane amounts of air conditioning, which of course is very expensive to operate.

    While I’m glad computer chip manufacturers are working to produce more energy efficient chips, I hope that they are able to reduce the heat dissipation of them as well. While the two seem to go hand in hand (i.e. – if the chip uses less power, it should generate less heat), I’ve found a variety of situations where it doesn’t always add up.

    Solid state drives can purportedly decrease boot up time and improve system responsiveness, and for corporations that use SANs, they are more than enough for workstations to boot from. Using SSDs instead of hard drives should also reduce heat dissipation.

    I feel we’re at a critical junction for hard drives and SSDs – you really need at least 2GB of storage for an operable OS (though much less is definitely doable), and it’s now possible to get 2 and 4GB chips for less than the cost of small hard drives. Granted the hard drives are 10X the size of the chips, but how often do corporate users need access to that much storage locally?

    Good posts about the EEE – Acer just came out with an Atom netbook too!

  • etbe

    Don: I attended Matthew Garrett’s talk at LCA 2008 about suspend to disk where he described how hibernation works and all the strange things it has to do to work. It doesn’t seem like something that I want to use for my most important data. Lots of things can appear to work fine but have hidden risks.

    I agree about booting. Having machines boot quickly is a good feature, and apparently SSDs support faster boots, as do newer OS features such as “upstart” (in theory even if not yet in practice). But I still think that booting machines automatically at 8AM is not going to significantly increase power use and will provide a benefit.

    Albert: For home air-conditioning systems you typically expect the electrical energy used to be about 1/3 of the heat energy removed. So if you want to remove 3000J of heat energy from a room then you need 1000J of electrical energy. If the A/C systems for server rooms run at the same efficiency then you merely add 1/3 to the power use. But given the technical challenges of providing such large amounts of very cold air in a small space it seems likely that the efficiency is lower. But I don’t have any numbers.

    Good point about SSD, I had entirely forgotten about them while writing this post.

    Also that’s an interesting link about Japan. More appropriate office clothing is a good idea.

  • I really enjoyed reading your article and feel this is a really important subject. You may also want to think about:

    – One of the difficulties (from a management point of view) is that a lot of this is about detail. Now detail isn’t bad, but it does make for lousy initiatives and bullet points, so it doesn’t present as well as ‘big bang’ items, like, say, consolidation or virtualisation. So, if what you really do is go round every server, examine load profiles, check memory sizing, check the possibilities for consolidation with others, the only thing there that your manager would be happy presenting upwards would be ‘consolidation’. Ho, hum.

    – You are right that, at least on paper, a thin client approach can make very worthwhile savings against conventional desktop computing. Remember, however, that in practice you may well do two other, less desirable, things at the same time:

    You will probably leave the host machine on 24/7, whereas the desktop machines were probably only on for ‘office hours’ (whatever they were).

    You may well move the server machine into the data centre. Air con may not have been critical in the general office environment (depending on building, climatic conditions, time of year, etc), but it almost certainly is in the data centre.

    So, although there is a big potential gain from the thin client approach, it may not be as big as it looks at first blush.

    Efficient power supplies.
    Many cheap power supplies are really rather poor in a variety of ways, including power efficiency. The trouble is the way the market has developed: there seems to be little between the ‘cheap and cheerful – but don’t believe the spec’ bargain basement models and the ‘idiotic power output – look at the aggressive protection grille’ gamers’ ones. Probably if you get your equipment from top rank suppliers – the HPs and Suns of this world – you can almost take for granted getting decent power supplies. From the less well known people, who knows? And you probably have five year old data centre equipment still doing the job it was designed for, but, back then, people didn’t take it so seriously, so the efficiency of that equipment is really unknown. Unless you go round doing testing.

    (And you can probably add efficient UPSes to that.)

    CRTs
    I still have CRTs and I’m not going to throw them out while they still do the job (that would be un-green, but in a different way!), but you do have to be aware of the power wrinkles. While generally they use somewhat more operating power than flat panel devices, the bigger difference concerns standby. While a typical flat panel reduces power considerably when it blanks, if a CRT is on it consumes a level of power that is close to the active power. So the power switch is critical with CRTs.

    And, of course, a monitor with a screen saver running is active, but banning screen savers is probably too joyless to contemplate just yet. Maybe a bit of user democracy is called for here – explain the trade-offs and see what the users think is correct behaviour. You may be surprised.

    More modern CPUs tend to consume less power for a given level of computational performance than older ones, so it can make great sense to consolidate onto more modern hardware. But there is a danger here; if you take this to excess, you never have a settled system that ‘just works’. You are always moving processes and servers around and you are always having the unexpected little niggles associated with that. It may save you power, but it irritates your users, which isn’t ideal.

    The point about SSDs is interesting. I had been wondering about the possibilities of using an SSD for an external journal (and the performance implications of that), but put myself off the idea because of the wear-out implications. This may be worth following up, I don’t know. Even ‘more performance for the same watts’ can be green.

    I also suspect that there are worthwhile gains to be made from storage architecture in many applications; starting from the bottom, file systems have an impact as do the drives themselves, RAID/non-RAID and NAS vs ‘local’ disks (and ability to spin down disks). But I can’t see me getting the time to do that experiment, either; at some point you just have to say that life’s just too short.

    My suspicion is that a single, high performance, disk subsystem (RAID, NAS) beats a multitude of individual hard disks distributed about the individual servers, at least for usual load profiles, but there are also ‘single point of failure’ considerations which might make that undesirable. And it’s only a suspicion.

    Don Marti wrote:
    “Office workers don’t leave the electric pencil sharpener running, because it starts right up when you put the pencil in.”

    While that’s true, it is also the case that electric pencil sharpeners don’t generally offer users the opportunity of leaving the motor running when inactive. When there is no pencil inserted, or the button isn’t pushed, they just don’t run. The IT industry has been less successful in depriving end users of the option to do bad things. We expect to be able to have the choice of bad config options, and woe betide any supplier that doesn’t give us them.

    And why have electric pencil sharpeners anyway? Is the use of a ‘plain ole’ mechanical pencil sharpener too much like hard work for modern office workers?

    Albert Lash wrote:
    “While I’m glad computer chip manufacturers are working to produce more energy efficient chips, I hope that they are able to reduce the heat dissipation of them as well. While the two seem to go hand in hand (i.e. – if the chip uses less power, it should generate less heat), I’ve found a variety of situations where it doesn’t always add up.”

    While I agree with you that the attempts by chip makers to reduce watts per unit of computation are good, I don’t follow the point about “if the chip uses less power, it should generate less heat” being anything other than universally applicable – surely that’s always true: if watts go in, they get dissipated somewhere (exactly where might be a bit obscure in, say, PoE, but the result is dissipation somewhere). Anything else would violate conservation of energy.

    (Oh, and an optical pyrometer is a useful tool for a quick look around to see what is getting hot. And while temperature and power dissipation are not the same thing, it does give you a pointer to where you ought to be expending your efforts and to local hot-spots.)

  • […] the corporate governance nonsense instead of seeing Green IT as a vital pragmatic step, mentioning points I’d seen put more clearly elsewhere, as well as advertising Vista virtualisation and some panoramic webcam from Microsoft (who employ […]

  • etbe

    The LugRadio event review has one interesting claim (or a quote of a claim, to be precise): that charging devices via USB is more efficient than charging via a power point. This should be easy to test, so I’ll do so in the near future.

    Mark: You make many great points which require a lot of consideration. I’ll write some more posts about this issue in the near future, and I also have some plans for some tests to provide some more raw data about these issues.

    Don, Albert, Mark, and MJ, thank you all for your comments. I’m proud of the fact that my blog is read by you and other people like you, and that you consider it worth the effort of writing such thoughtful comments.