Could we have an Open Computing Cloud?

One of the most interesting new technologies to appear recently is Cloud Computing; the most popular instance seems to be Amazon EC2 (Elastic Compute Cloud). I think it would be good if there were some open alternatives to EC2.

Amazon charges $0.10 per compute hour for a virtual machine that has one Compute Unit (equivalent to a 1.0 to 1.2GHz 2007 Opteron core) and 1.7G of RAM. Competing with this will be hard: 10 cents an hour works out to only $876.60 per annum (24 × 365.25 × $0.10), and you would have to undercut that by enough to compensate for the great bandwidth that Amazon has on offer.

The first alternative that seems obvious is a cooperative model. In the past I’ve run servers for the use of friends in the free software community. It would be easy for me to do such things in future, and Xen makes this a lot easier than it used to be. If anyone wants a DomU for testing something related to Debian SE Linux then I can set one up in a small amount of time. If there were free software to manage such things then it would be practical to have some sort of share system for community members.

The next possibility is a commercial model. If I could get Xen to provide a single Amazon Compute Unit to one DomU (no more and no less) then I wouldn’t notice it on some of my Xen servers. 1.7G of RAM is a moderate amount, but as 3G seems to be typical for new desktop systems (Intel is still making chipsets that support a maximum of 4G of address space [2], and when you subtract the address space for video and PCI you might as well only get 3G) it would not be inconceivable to run 1.7G DomUs on idle desktop machines. But it’s probably more practical to have a model with less RAM. For my own use I run a number of DomUs with 256M of RAM for testing and development, and the largest server DomU I run is 400M (that is for ClamAV, SpamAssassin, and WordPress). While providing 1.7G of RAM and 1CU for less than 10 cents an hour may be difficult, providing an option of 256M of RAM and 0.2CU (burstable to 0.5CU) for 2 cents an hour would give the same aggregate revenue for the hardware: five such DomUs add up to 1CU, 10 cents an hour, and 1.25G of RAM, which fits within the resources of one Amazon-sized instance, while also offering a cheaper service for people who want that. 2 cents an hour is more than the cost of some of the Xen server plans that ISPs offer [3], but if you only need a server for part of the time then it would have the potential to save some money. A sketch of what such a capped DomU might look like is below.
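As a rough illustration only (not a tested configuration: the names, paths, and the mapping of credit-scheduler weight/cap values to Compute Units are all my assumptions), a Xen 3.x xm-style config for the 256M plan might look like this (xm config files use Python syntax):

    # Hypothetical /etc/xen/small-domu.cfg -- all names and paths are illustrative.
    name   = "small-domu"
    memory = 256                           # MiB, the cheap plan discussed above
    kernel = "/boot/vmlinuz-2.6.18-xen"    # assumed Dom0 kernel path
    disk   = ['phy:/dev/vg0/small-domu,xvda,w']
    root   = "/dev/xvda1 ro"
    vif    = ['bridge=xenbr0']

    # The CPU share itself is set from Dom0 with the credit scheduler, e.g.:
    #   xm sched-credit -d small-domu -w 64 -c 50
    # -c 50 caps the DomU at half a core (the 0.5CU burst ceiling), and a
    # low weight such as 64 approximates a 0.2CU share when the host is busy.

The cap gives a hard ceiling while the weight only matters under contention, which is roughly the “0.2CU burstable to 0.5CU” behaviour described above.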

For storage, Amazon has some serious bandwidth inside its own network for transferring the image to the machine for booting. To do things on the cheap the way to go would be to create a binary diff of a common image. If everyone who ran virtual servers had images of the common configurations of the popular distributions then creating an image to boot would only require sending a diff (maybe something based on XDelta [4]). Transferring 1GB of filesystem image over most network links is going to be unreasonably time consuming; transferring a binary diff between a stock up-to-date CentOS or Debian install and a usable system image based on that distribution with all updates applied is going to be much faster, as in the sketch below.
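As a minimal sketch of that workflow (assuming the xdelta3 command-line tool is installed; all file names here are made up for illustration):

    #!/usr/bin/env python
    # Ship a DomU image as a binary diff against a shared base image.
    # Assumes the xdelta3 CLI is available; file names are hypothetical.
    import subprocess

    BASE  = "debian-base.img"   # stock image both ends already have
    FULL  = "customer.img"      # customised image we want to boot remotely
    DELTA = "customer.xd3"      # small diff that actually crosses the network

    # Sender: encode the difference between the base and the full image.
    subprocess.check_call(["xdelta3", "-e", "-s", BASE, FULL, DELTA])

    # Receiver: rebuild the full image from the shared base plus the diff.
    subprocess.check_call(["xdelta3", "-d", "-s", BASE, DELTA, FULL])

If both ends keep the same base image, only the comparatively small delta ever needs to cross the network.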

Of course something like this would not be suitable for anything that requires security. But there are many uses for servers that don’t require much security.

9 comments to Could we have an Open Computing Cloud?

  • Spotted Eucalyptus a while back. Haven’t had a chance to use it, but it might be worth looking at.

  • PlanetLab provides an open hosting platform for just this type of computing. In exchange for hosting 2 nodes, you have access to over 800 nodes worldwide. You can add yourself to as many of the nodes as you like, but the resources are shared among the other containers hosted on each.

    You can also download our self contained distro and start your own community.

    Check us out. http://Www.planet-lab.org

  • Well, PlanetLab seems useful if you need 800 computers, but you could do a better job of informing users that PlanetLab also exists as the self-contained package “MyPLC”, distributed as an RPM.

    Cooperating to pay for servers is a good idea, though I’m not sure who would pay 200€ up front to do this. At the moment I use Hadoop, which works for calculating batch jobs but not for running servers.

  • Anonymous

    This doesn’t directly address your idea for an open implementation, but:

    Gandi Hosting provides EC2-style cloud services for significantly less cost. They charge $14 per month per share, and they provide an API to dynamically create and destroy domains using those shares.

  • Gandi is linked from the Xen hosting post, and as stated on Gandi’s site you would have to pay €70 per month for 1.5GB.

  • etbe

    Niall and Faiyaz: Thanks for those URLs, I’ll have to check them out.

    Erik: While you would have to pay €70 per month on Gandi for that much RAM, if you can get by with less RAM (as you can for most applications) the cost is much lower. One issue with EC2 is that transfers between the running instance and the persistent storage (the EBS) are billed, so you want to minimise the amount of IO; having more RAM for cache is therefore very desirable.

    A Gandi instance gives you dedicated persistent storage with as much access as you want for no extra cost. So if the instance has little RAM for cache it wouldn’t matter as much (financially at least).

  • etbe

    http://www.planet-lab.org/hosting

    From the above URL it seems that the hosting requirement for Planet-Lab is to have two servers completely run by Planet-Lab (which, while not unreasonable, precludes the possibility of running software in a Xen DomU on hardware that is also used for other purposes). Also their distribution is based on Fedora Core 4 (WTF?). The page also omits any mention of what hardware is required for the two nodes (is a couple of old P3 machines good enough?).

    http://www.planet-lab.org/hardware

    Oh, the above URL has hardware specifications (there are some broken links so it wasn’t easy to find): P4-3.2GHz or better, 4G of RAM, and 320G of disk. They also require a system management controller such as HP iLO or Dell DRAC, so basically a second-hand high-end HP server from 4 years ago should do the job.

    Also, when discussing Planet-Lab it seems that “you” means “your educational or industrial research organisation”, not “you as a person”, and it seems to cater only for people who have specific network research projects in mind. While that is fine for people who work in such environments, it does limit its usefulness.

    http://www.planet-lab.org/FAQ

    The FAQ has a link for the source code and mentions the possibility of using the software to create a private network. It seems to me that the first thing to do with that software would be to get it running on a recent CentOS release.

  • Pak Tam

    I was looking for some ideas and stumbled upon your posting :) so I decided to say thanks. Pak Tam

  • Good post. Eucalyptus would take a lot of the pain away, but is probably LAN-oriented.

    I think the biggest issue for potential donated-DomU hosters would not be the CPU/memory load but the network/disk load; intensive network usage in particular would be annoying (even expensive) for many hobbyists.

    FWIW, we in Apache SpamAssassin are continually running into problems getting hold of donated server resources for our backends (spamtrap processing etc.). We use a little EC2, but it’s expensive :(