I have just read an interesting post speculating about the possibility of open source hardware [1].
To some extent things have been following a trend in that direction. Back in the bad old days every computer manufacturer wanted to totally control their market segment and prevent anyone else from “stealing their business”. Anti-competitive practices were standard in the computer industry; when you bought a mainframe you were effectively making a commitment to buy all peripherals and parts from the same company. The problems were alleviated by government action, but the real change came from the popularity of PC clones.
White-box clones, where every part came from a different company, truly opened up hardware development, and it wasn’t all good. When running a simple single-tasking OS such as MS-DOS the problems were largely hidden, but when running a reliable multi-tasking OS such as Linux hardware problems became apparent. The PCI bus (which autoconfigured most things) reduced the scope of the problem, but there are still ways that white-box machines can fail you. Now when I get a white-box machine I give it away to members of my local LUG. My time is too valuable to waste on debugging white-box hardware; I would rather stick to machines from IBM and HP, which tend to just work.
Nowadays I buy only name-brand machines, where all the parts were designed and tested to work together. This doesn’t guarantee that the machine will be reliable, but it does significantly improve the probability. Fortunately modern hardware is much faster than I require for the work I do, so buying second-hand name-brand machines (for less money than a new white-box machine) is a viable option.
The PCI bus [2] standard from Intel can be compared to some of the “Open Source” licenses from companies under which anyone can use the software but only one company can really be involved in developing it.
One significant impediment to open hardware development is the proprietary nature of CPU manufacture. Currently there are only a few companies that have the ability to fabricate high-end CPUs, so projects such as OpenRISC [3], which develop free CPU designs, will be limited to having their CPUs implemented with older technology (which means lower clock speeds). This doesn’t mean that they aren’t useful, however; tailoring factors such as the number of registers, the bus width of the CPU, and the cache size to match the target application has the potential to offset the performance loss from a lower clock speed. But it does mean that an OpenRISC or similar open core is unlikely to be ideal for your typical desktop machine.
If companies such as Intel and AMD were compelled to fabricate any competing CPU design at a reasonable cost (legislation in this regard is a possibility, as the two companies collectively dominate the world computer industry and it would be easy for them to form a cartel), then designs such as OpenRISC could be used to implement new CPUs for general purpose servers.
Another issue is the quality of support for some optional extra features which are essential for some operations. For example, Linux software RAID is quite good at what it does (basic mirroring, striping, RAID-5, and RAID-6), but it doesn’t compare well with some hardware RAID implementations (which are actually implemented in software running on a CPU on the RAID controller). With an HP hardware RAID device you can start with two disks in a RAID-1 and then add a third disk to make it a RAID-5 (I’ve done it). Adding further disks to make it a larger RAID-5 is possible too. Linux software RAID does not support such things (and I’m not aware of any free software RAID implementation which does). It would certainly be possible to write such code, but no one has done so – and HP seem happy to make heaps of money selling their servers with the RAID features as a selling point.
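To see why growing a RAID-1 into a RAID-5 is a natural operation, consider how RAID-5 parity works: the parity block is the byte-wise XOR of the data blocks, and with only one data disk the parity equals the data, so a two-disk RAID-5 is effectively a mirror. A minimal sketch in Python (purely illustrative, not how any kernel or controller actually implements it):

```python
from functools import reduce

def parity(blocks):
    """RAID-5 style parity: byte-wise XOR across equal-sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# Two data disks plus a parity disk (ignoring real RAID-5's rotating parity)
d0 = b"hello world!"
d1 = b"linux raid 5"
p = parity([d0, d1])

# If any one disk fails, the XOR of the survivors recovers it
assert parity([d1, p]) == d0
assert parity([d0, p]) == d1

# With a single data disk the parity equals the data itself,
# which is why a two-disk RAID-5 is equivalent to a RAID-1 mirror
assert parity([d0]) == d0
```

Converting a mirror to a three-disk RAID-5 is then conceptually just treating one copy as the parity and restriping the data, which is why a controller can do it online.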
Finally there’s the issue of demand. When hardware without free software support (such as some video cards which need binary-only drivers for best performance) is discussed, there is always a significant group of people who want it. The binary-only drivers in question are of low quality, often don’t support the latest kernels, and have such a history of causing crashes that kernel developers won’t accept bug reports from people who use them – but still people use them. In the short term at least I expect that an open hardware design would deliver less performance, and in spite of its potential to offer better reliability the majority of the market would not accept it. Production volume is the major factor determining the price of electronics gear, so it would also cost more.
I think that both IBM and HP provide hardware that is open enough for my requirements; they both have engineers working on their Linux support, and the interfaces are well enough documented that we generally don’t have any problems with them. Intel, AMD, and the major system vendors are all working on making things more open, so I expect some small but significant improvements in the near future.