New Dell Server

My Dell PowerEdge T105 server (as referenced in my previous post [1]) is now working. It has new memory (why replace just the broken DIMM when you can replace both?) and a new BIOS (Dell released an “Urgent” update yesterday that fixes a problem with memory timing and Opteron CPUs). The BIOS update can be installed from a DOS executable (traditionally run from a floppy disk) or an i386 Linux executable. As I didn’t have a floppy drive in my new server I had to use Linux (not that I object to using Linux, but I’d rather have had the technician do it all for me). I booted into rescue mode from a Fedora 9 CD that I had handy, mounted a USB stick on which I had stored the BIOS update, and then ran it.

The Dell service was quite good: on-site service, and the problem was fixed approximately 27 hours after I called them. Replacing a couple of DIMMs is hardly a test of skill for the repair-man (unlike the time in Amsterdam when a Dell repair-man swapped a motherboard in a server with only 20 minutes of down-time). So I haven’t seen evidence of them doing anything really great, but getting someone on-site close to 24 hours after the report is quite decent, especially considering that I paid for the cheapest support that they offer.

When I got it working I was a little surprised by the memory speed; I had hoped that a new 2GHz Opteron would perform similarly to an Intel E2160 and better than an old Pentium-D (see the results here [2]). Also the memtest86+ run took ages on the step of writing random numbers (I don’t recall ever seeing that step on previous runs, let alone having a system spend half an hour on it). It seems that the CPU (Opteron 1212) doesn’t perform well at random number generation.

In terms of actual operation all I’ve done so far is to install Debian. The process of installing Debian packages was quite fast (even with a RAID-1 reconstruction occurring at the same time) and the boot time is also very quick.

The hard drive “rails” seemed a little flimsy. They attach to the drive via screws that end in pins: the screws thread into the plastic rail and the pins simply sit in the holes in the drive where mounting screws would normally go. I think that it would make more sense to have them screw onto the disk rather than onto the plastic. Then if the plastic part that connects the two sides were to break the rails would still be usable. In fact they could just make the “rails” be separate rails, as most other manufacturers do.

One thing that surprised me was the lack of PS/2 keyboard and mouse ports. I had expected that such ports would last longer than serial ports and floppy drives. However my Dell has a power connector for a floppy drive and a built-in serial port (with some BIOS support for management via a serial port – I have not investigated this because I always plan to use a keyboard and monitor). Of course I expect that most other machines will start shipping without PS/2 ports now and I will have to dispose of my stockpile of PS/2 keyboards and mice. I generally like to keep a few on hand so that I can give friends and relatives a chance to try a selection and discover which type suits them best. But I probably don’t need a dozen of them for that purpose.

While a comment on my previous post noted that the floppy drive bay can be used for another disk, it seems that a disk is not going to fit in there easily. It looks like I might be able to install a disk there from the front if I unscrew the face-plate – but that’s more effort than I’m prepared to exert for testing the system (for production I will only have two disks).

In terms of noise, the Dell seems considerably better than a NEC machine which was designed for desktop use. Of course it’s difficult to be certain as part of the noise is from hard disks and one of the disks I’ve installed in the Dell is a WD “Green” disk and the other may have newer technology to minimise noise. Also the mounting brackets for disks in a server may be better at damping vibrations than screwing a disk to the chassis of a desktop machine. Finally the NEC machine does seem to make more noise now than it used to, so maybe it would be best to compare after a few months’ use to allow for minor wear on the moving parts.

I was initially going to run Debian/Etch on the machine. But as Debian didn’t recognise the built-in Ethernet card and the Xen kernel crashed when doing intensive disk IO I was forced to use CentOS. CentOS 5.1 didn’t start my DomUs for some reason (which I never diagnosed) but CentOS 5.2 worked perfectly.

Finally I was shocked when I realised that the Dell has no sound hardware! When the CentOS post-install program said that it couldn’t find a sound device I thought that meant that it didn’t support the hardware (it’s the sort of thing that sometimes happens when you get a new machine). But it actually has no sound support! It seems really strange that Dell designs a desk-side server (which is quiet) and doesn’t include sound support. If nothing else, using something like randomsound to take input from the microphone line as a source of entropy would be useful on servers.
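The core idea behind randomsound is that only the low-order bits of microphone noise are unpredictable, so the raw samples are mixed through a cryptographic hash before being credited as entropy. A minimal sketch of that whitening step (the function name and chunk size are my own, not from randomsound):

```python
import hashlib

def whiten_audio_noise(samples: bytes, chunk: int = 4096) -> bytes:
    """Mix raw microphone samples into uniformly distributed bytes.
    Only a few bits per sample are truly unpredictable, so a
    cryptographic hash is used to concentrate and whiten them."""
    out = bytearray()
    for i in range(0, len(samples), chunk):
        out += hashlib.sha256(samples[i:i + chunk]).digest()
    return bytes(out)
```

A real entropy daemon would also estimate how much entropy each chunk actually contains and feed the result to the kernel pool, rather than trusting the hash output blindly.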

While the seven USB ports initially seemed like a lot, being forced to use them for keyboard, mouse, and sound (if I end up using it on a desktop) means that there would only be four left.

Shared Context and Blogging

One interesting aspect of the TED conference [1] is the fact that they only run one stream. There is one lecture hall with one presentation and everyone sees the same thing. This is considerably different to what seems to be the standard practice for Linux conferences (as implemented by LCA, OLS, and Linux Kongress), where there are three or more lecture halls with talks in progress at any time. At a Linux conference you might meet someone for lunch and start a conversation by asking “did you attend the lecture on X?”. As there are more than two lecture halls the answer is most likely to be “no”, which then means that you have to describe the talk in question before getting to what you really want to discuss (such as how a point made in that lecture might impact the work of the people you are talking to). In the not uncommon situation where there is an interesting implication of combining the work described in two lectures it might be necessary to summarise both lectures before describing the implication of combining them.

Now there are very good reasons for running multiple lecture rooms at Linux conferences. The range of topics is quite large and probably very few delegates will be interested in the majority of the talks. Usually the conference organisers attempt to schedule things to minimise the incidence of people missing talks that interest them; one common way of doing so is to have conference “streams”. Of course when you have, for example, a “networking” stream, a “security” stream, and a “virtualisation” stream, you will have problems when people are interested in the intersection of some of those areas (virtual servers do change things when you are working on network security).

There are some obvious comparisons between Planet installations (as aggregates of RSS feeds) and conferences (as aggregates of lectures). On Planet Debian [2] there has traditionally been a strong shared context with many blog posts referring to the same topics – where one person’s post has inspired others to write about similar topics. After some discussion (on blogs and by email) it was determined that there would be no policy for Planet Debian and that anyone who doesn’t want to read some of the content should filter the feed. Of course this means that the number of people who read (or at least skim) the entire feed will drop and therefore we lose the shared context.
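Filtering the feed is straightforward in principle, assuming posts carry category tags. A minimal sketch using only Python’s standard library (real feeds involve namespaces and messier markup, and a real reader would use a proper feed library):

```python
import xml.etree.ElementTree as ET

def filter_feed(rss_xml: str, wanted: set) -> str:
    """Drop <item> elements whose <category> tags contain none of
    the wanted categories, leaving the rest of the feed intact."""
    root = ET.fromstring(rss_xml)
    channel = root.find("channel")
    for item in list(channel.findall("item")):
        cats = {c.text for c in item.findall("category")}
        if not cats & wanted:
            channel.remove(item)
    return ET.tostring(root, encoding="unicode")
```

The catch, of course, is that this only works when authors categorise their posts consistently – which is itself a form of the content policy being debated.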

Planet Linux Australia [3] currently has a discussion about the issue of what types of content to aggregate. Michael Davies has just blogged a survey about what types of content to include [4]. I think it’s unfortunate that he decided to name the post after one blogger whose feed is aggregated on that Planet, as that will encourage votes on the specific posts written by that person rather than the general issue. But I think it’s much better to tailor a Planet to the interests of the people who read it than to include everything and encourage readers to read a sub-set.

When similar issues were in discussion about Planet Debian I wrote about my ideas on the topic [5]. In summary I think that the Gentoo idea of having two Planet installations (one for the content which is most relevant and one for everything that is written by members) is a really good one. It’s also a good thing to have a semi-formal document about the type of content that is expected – this would be useful both for using a limited feed for people who go significantly off-topic and as a guideline for people who want to write posts that will be appreciated by the majority of the readers. Planet Ubuntu has a guideline, but it was not very formal last time I checked.

Finally in regard to short posts, they generally don’t interest me much. If I want to get a list of hot URLs then I could go to any social media site to find some. I write a list post at most once a month, and I generally don’t include a URL in the list unless I have a comment to make about it. I always try to describe each page that I link to in enough detail that if the reader can’t view it then they at least have some idea of what it is about (no “this is cool” or “this sucks” links).

LUV Meeting July 2008

At the last two meetings of LUV [1] I’ve given away old hardware. This month I gave away a bunch of old PCI and AGP video cards, a heap of PC power cables, and some magnets (which I received for free because they were in defective toys that could seriously injure or kill children). One new member was particularly happy that at the first meeting he attended he received some free hardware (I hope it works – most of that stuff hasn’t been tested for over a year and I expect that some would fail). Also there was another guy giving away hardware, so I might have started a trend of giving away unused hardware at meetings (he was giving away some new stuff in the original boxes, mostly USB and firewire cables).

For a long time (many years) at LUV meetings there have been free text books given away. One member reviews books and then gives them away after he has read them.

At the meeting Ralph Becket gave a presentation on the Mercury functional language. It was interesting to note that Mercury can give performance that is close to C (within 80% of the speed of C) on LZW compression (which is apparently used as a benchmark for comparing languages). Given the number of reasonably popular languages which don’t give nearly that level of performance I think that this is quite a good result.
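LZW makes a good benchmark because it is short but exercises dictionary lookups and byte handling heavily. A minimal sketch of the compression side (dictionary codes only, no bit-packing, which a real implementation would add):

```python
def lzw_compress(data: bytes) -> list:
    """Minimal LZW: emit a dictionary code for each longest-match
    sequence, growing the dictionary with each new sequence seen."""
    dictionary = {bytes([i]): i for i in range(256)}
    w = b""
    codes = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc          # keep extending the current match
        else:
            codes.append(dictionary[w])
            dictionary[wc] = len(dictionary)  # learn the new sequence
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes
```

The inner loop is dominated by hash-table operations, which is exactly the kind of work where a compiled declarative language like Mercury can plausibly approach C.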

After the meeting Richard Keech demonstrated his electric car. It’s a Hyundai Getz which has had the engine replaced by an electric motor but which still uses the manual gearbox. Richard did a bit of driving around with various LUV members as passengers to demonstrate what the car can do. Unfortunately I didn’t get a chance to be involved in that, so I’ll have to do so next time I meet him. One thing to note is that Richard’s car was not built that way by Hyundai, it was a custom conversion job. The down-side to this of course is that it would have cost significantly more than a vehicle with the same technology that was manufactured. One design trade-off is that Richard had batteries installed in the space for a spare tire. Last year the RACV magazine published a letter I wrote suggesting that small cars should be designed without a spare tire and that owners of such cars should rely on the RACV to support them if they get a flat tire [2]. My opinion has not changed in the last year: I still think that cars which are driven in urban areas don’t really need spare tires, so I don’t think that Richard is losing anything in this regard.

The motor driving Richard’s car runs on three-phase AC and a solid-state inverter is used to convert 185V DC to about the same voltage of three-phase AC (I didn’t write notes so I’m running from memory). Apparently on long drives the inverter gets cooler rather than hotter – I had expected that there would be enough inefficiency in the process of converting DC to AC that it would get hot.

In a previous conversation Richard told me that he can drive his car 75Km on one charge and that it takes him 8 hours to charge when using an Australian mains (240V) plug rated at 10A. When designing such a vehicle it would be trivial to make it use a 20A plug for a 4 hour charge or even a two-phase plug for even shorter charging (I’m sure that Richard could have requested these options if he wanted them). But an 8 hour charge allows the vehicle to be completely charged during a working day and the use of the most common type of plug (the type used in every home and office) means that it can be charged almost anywhere (the standard mains circuit used in Australia is rated at 15A so special wiring is needed for a 20A socket). There is such a power point mounted on the outside of my house not too far from where a visitor could park their car. I anticipate that in a few years’ time it will not be uncommon for people who visit me to charge their car during their visit. Richard’s ratio of an hour of charge to almost 10Km of driving means that someone who visits for dinner could get enough charge into their car to allow for 30Km of driving before they leave. 30Km is about the driving distance to go from my house to a location on the other side of the city that is just outside the main urban area, so probably at least half of Melbourne’s population lives within a 30Km driving distance from my house. Not that I expect friends to arrive at my house with their car battery almost flat, but it does make it easier to plan a journey if you know that at point A you will be able to get enough charge to get you to point B.
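The numbers above all follow from the figures Richard quoted; none of this is anything measured from the car, just the arithmetic written out:

```python
# Figures from the conversation: 75Km range, 8 hour charge from a 240V 10A socket.
RANGE_KM = 75
CHARGE_HOURS = 8
VOLTS, AMPS = 240, 10

charger_kw = VOLTS * AMPS / 1000                  # 2.4 kW drawn from the socket
km_per_hour_of_charge = RANGE_KM / CHARGE_HOURS   # ~9.4 Km: "almost 10Km"
dinner_visit_km = 3 * km_per_hour_of_charge       # ~28 Km after a 3 hour visit
```

So a roughly three-hour dinner visit yields close to the 30Km of driving mentioned above, and halving the charge time would need a socket supplying twice the current.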

I think it’s a good thing to have members of LUGs give things away to other people and to demonstrate technology that is of wide interest. I hope to see more of it.


The History of MS

Jeff Bailey writes about the last 26 years of Microsoft [1]. He gives Microsoft credit for “saving us from the TRS 80”; however, CP/M-86 was also an option as the OS for the IBM PC [2]. If MS hadn’t produced MS-DOS for a lower price then CP/M would have been used (in those days CP/M and MS-DOS had the same features and essentially the same design). He notes the use of Terminate and Stay Resident (TSR) [3] programs. As far as I recall the TSR operation was undocumented and was discovered by disassembling DOS (something that the modern MS EULAs forbid).

Intel designed the 8086 and 80286 CPUs to permit code written for an 8086 to run unchanged in “protected mode” on an 80286 (as noted in the Wikipedia page about the 80286 [4]). Basically all that you needed to do to write a DOS program with the potential of being run directly in protected mode (or easily ported) was to allocate memory by requesting it from the OS (not just assuming that every address above your heap was available to write on) and to address memory only by the segment register returned from the OS when allocating memory (i.e. not assuming that incrementing a segment register is equivalent to adding 16 to the offset). There were some programs written in such a manner which could run on both DOS and text-mode OS/2 (both 1.x and 2.x); I believe that such programs were linked differently. The term Fat Binary [5] is often used to refer to an executable which has binary code for multiple CPUs (e.g. PPC and M68K CPUs on the Macintosh); I believe that a similar concept was used for DOS / OS/2 programs but the main code of the application was shared. Also compilers which produce object code which doesn’t do nasty things could have their object code linked to run in protected mode. Some people produced a set of libraries that allowed linking Borland Turbo Pascal code to run as OS/2 16-bit text-mode applications.
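The segment-arithmetic trap is easy to show concretely. In real mode the hardware computes a linear address as segment*16 + offset, so many programs juggled segment values directly; in protected mode a segment value is a selector (an index into a descriptor table), so that arithmetic is meaningless. A small illustration of the real-mode rule:

```python
def real_mode_linear(seg: int, off: int) -> int:
    """Real-mode 8086 address translation: segment * 16 + offset."""
    return (seg << 4) + off

# In real mode these two seg:off pairs alias the same byte, which is
# why DOS programs freely "renormalised" pointers between the two forms.
assert real_mode_linear(0x1000, 0x0010) == real_mode_linear(0x1001, 0x0000)

# In 80286 protected mode, segment 0x1001 is simply a different selector
# with no address relationship to 0x1000 -- hence the rule of only using
# segment values handed out by the OS.
```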

The fact that OS/2 (the protected-mode preemptively multi-tasking DOS) didn’t succeed in the market was largely due to MS. I never used Windows/386 (a version of Windows 2.x) but used Windows 3.0 a lot. Windows 3.0 ran in three modes: “Real Mode” (8086), “Standard Mode” (80286), and “Enhanced Mode” (80386). Real Mode was used for 8086 and 8088 CPUs, for 80286 systems if you needed to run one DOS program (there was no memory for running more than one), and for creating or adjusting the swap-file size for an 80386 system (if your 80386 system didn’t have enough swap you had to shut everything down, start Real Mode, adjust the swap file, and then start it again in Enhanced Mode). Standard Mode was the best mode for running Windows programs (apart from the badly written ones which only ran in Real Mode), but due to the bad practices implemented by almost everyone who wrote DOS programs MS didn’t even try to run DOS programs in 286 protected mode, and thus Standard Mode didn’t support DOS programs. Enhanced Mode allowed multitasking DOS programs but as hardly anyone had an 80386 class system at that time it didn’t get much use.

It was just before the release of Windows 3.1 that I decided to never again use Windows unless I was paid to do so. I was at a MS presentation about Windows 3.1 and after the marketing stuff they had a technical Q/A session. The questions were generally about how to work around bugs in MS software (mainly Windows 3.0) and the MS people had a very detailed list of work-arounds. Someone asked “why don’t you just fix those bugs?” and we were told “it’s easier to teach you how to work around them than to fix them”. I left the presentation before it finished, went straight home, and deleted Windows from my computer. I am not going to use software written by people with such a poor attitude if given a choice.

After that I ran the DOS multi-tasker DesqView [6] until OS/2 2.0 was released. DesqView allowed multitasking of well-behaved DOS programs in real mode (Quarterdeck was the first company to discover that almost 64K of address space above the 1MB boundary could be used from real mode on an 80286 – a significant benefit when you were limited to 640K of RAM), as well as multitasking less well behaved DOS programs with more memory on an 80386 or better CPU.
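That almost-64K region (the High Memory Area) falls straight out of the segment arithmetic: the highest real-mode address, 0xFFFF:0xFFFF, translates to a linear address well above the 1MB mark, and on a 286 or later (with the A20 line enabled) those addresses no longer wrap around to zero:

```python
# Highest address expressible in real mode: segment 0xFFFF, offset 0xFFFF.
top = 0xFFFF * 16 + 0xFFFF       # 0x10FFEF, beyond the 1MB boundary

hma_bytes = top - 0x100000 + 1   # addressable bytes above 1MB
# hma_bytes == 65520, i.e. 16 bytes short of a full 64K
```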

OS/2 [7] 2.x was described as “A Better DOS than DOS, a Better Windows than Windows”. That claim seemed accurate to me. I could run DOS VM86 sessions under OS/2 which could do things that even DesqView couldn’t manage (such as having a non-graphical DOS session with 716K of base memory in one window and a graphical DOS session in another). I could also run combinations of Windows programs that could not run under MS Windows (such as badly written Windows programs that needed Real Mode as well as programs that needed the amount of memory that only Standard or Enhanced mode could provide).

Back to Bill Gates, I recently read a blog post Eight Years of Wrongness [5] which described how Steve Ballmer has failed MS stockholders by his poor management. It seems that he paid more attention to fighting Linux, implementing Digital Restrictions Management (DRM), and generally trying to avoid compatibility with other software than to actually making money. While this could be seen as a tribute to Bill Gates (Steve Ballmer couldn’t do the job as well), I think that Bill would have made the same mistakes for the same reasons. MS has always had a history of treating its customers as the enemy.

Jeff suggests that we should learn from MS that the freedom to tinker is important as is access to our data. These are good points but another important point is that we need to develop software that does what users want and acts primarily in the best interests of the users. Overall I think that free software is quite well written in regard to acting on behalf of the users. The issue we have is in determining who the “user” is, whether it’s a developer, sys-admin, or someone who wants to just play games and do some word-processing.