
Car Drivers vs Mechanics and Free Software

In a comment on my post about Designing Unsafe Cars [1] Noel said “If you don’t know how to make a surgery, you don’t do it. If you don’t know how to drive, don’t drive. And if you don’t know how to use a computer, don’t expect anybody fix your disasters, trojans and viruses.” Later he advocated using a taxi.

Now I agree about surgery – apart from corner cases such as medical emergencies in remote places and large-scale disasters. I also agree that it’s good to avoid driving if you aren’t very good at it (that would be much better than the current fad of sub-standard drivers buying large 4WD vehicles).

But I don’t think that people who lack computer skills should avoid using computers.

When cars were first invented everyone who owned one was either a mechanic or employed one. Driving a car often involved being well out of range of anyone else who might know how to fix it, so either the car owner or their chauffeur had to be able to fix almost any problem. As the car industry evolved, the level of mechanical knowledge required to own and operate a car steadily decreased. I expect that a significant portion of drivers don’t know how to top up the oil or radiator water in their car and probably don’t know the correct air pressure for their tires. To a large extent I don’t think this is a problem: owning a car involves regularly taking it to be serviced, where professionals will (or at least should) check every aspect of the car that is likely to fail. If I used my windscreen-washer less frequently I could probably avoid opening the bonnet of the car between scheduled services!

When budgeting for car ownership you just have to include regularly spending a few hundred dollars to pay for an expert to find problems and fix them – with of course the occasional large expense for when something big breaks.

When the computer industry matures I expect that the same practice will occur. Most people will buy computers and plan to spend small amounts of money regularly to pay people to maintain them. Currently most older people seem to plan to have a young relative take care of their PC for them – essentially free mechanic services. The quality of such work will vary of course, and poorly designed OSs that are vulnerable to attack may require more support than can be provided for free.

Due to deficiencies in city design it is almost essential to drive a car in most parts of the US and Australia – as opposed to countries such as the Netherlands where you can survive quite well without ever driving. When a service is essential it has to be usable by people who have little skill in that area. It would be good if driving wasn’t necessary – I would be happy if I never drove a car again.

The need to use computers however will continue to increase. So we need to make them more available to users and to support users who can’t disinfect computers etc. The only skill requirements for using a computer should be the ability to use a keyboard and a mouse!

This requires a new industry in supporting PCs. Geek Squad in the US [2] seems to be the organisation that is most known for this. I expect that there will be multiple companies competing for such work in every region in the near future, just as there are currently many companies competing for the business of servicing cars.

We need support for free software from such companies. Maybe existing free software companies such as Red Hat and Canonical can get into this business. One advantage of having such companies supporting software is that they would have a strong commercial incentive to avoid having it break – unlike proprietary software vendors who have little incentive to do things right.

The next issue is the taxi analogy. Will software as a service with Google subsidising our use of their systems [3] take over any significant part of the market?

Of course the car analogy breaks down when it comes to privacy: no-one does anything remotely private in a taxi, while lots of secret data is stored on a typical home computer. Google is already doing some impressive security development work which will lead towards low maintenance systems [4] as well as protecting the privacy of the users – to the extent that you can trust whoever runs the servers.

My parents use their computer for reading email, browsing the web, and some basic wordprocessing and spreadsheet work. The mail is on my IMAP server, so all I need is some way to store their office documents on a server and they will pretty much have a dataless workstation. Moving their collection of photos and videos of their friends and relatives to a server will be harder, as transferring multiple gigabytes of data on a cheap Australian Internet access plan is a problem.


Laptop Reliability

Update: TumbleDry has a good analysis of the Square Trade report [0]. It seems that there are significant statistical problems in Square Trade’s analysis and a possible conflict of interest.

Square Trade did a survey of laptop reliability and wrote an interesting article about the results [1]. One thing to keep in mind when reading them is that the usage patterns vary greatly by type of product (netbook vs laptop) and probably by brand.

Their statistics indicate that netbooks are less reliable than laptops, but I think that my actions in taking my EeePC to places such as the beach are probably not uncommon – a netbook is ideal when you might need access to a computer at a moment’s notice. I expect that the reliability of my laptop has increased because I bought a netbook!

Their statistics show that Lenovo is far from the most reliable brand, which makes me wonder what the usage scenarios for such machines are. I’ve been using Thinkpads happily for over 11 years. I have had many warranty repair jobs – I lost count long ago. But I don’t think that this indicates a problem with Thinkpads: my use is very demanding, I have done a lot of traveling, and I have done coding in planes, trains, trams, and taxis in many countries. So instead of criticising IBM/Lenovo for having their machines break, I praise them for repeatedly repairing them no matter how they wear out in my use. The speed and quality of the repair work is very impressive. Based on this I have been strongly recommending Thinkpads to everyone I know who seems likely to wear laptops out through basically doing everything that a laptop is designed to do all day every day! Among people I know the incidence of laptop warranty repairs is probably about 100%, as the number of systems that are never repaired is outweighed by the systems that are repaired multiple times.

Generally Thinkpads seem fairly well built to me – I’ve been surprised at how many times they didn’t break when I expected them to.

When articles like the one from Square Trade are discussed people usually cite personal anecdotes. My above anecdote covers just over 11 years of intensive use of four different Thinkpads. Of course it doesn’t prove much about the inherent reliability of Thinkpads (through random selection I could have received four Thinkpads that were significantly more or less reliable than average). But having dealt with IBM/Lenovo service in a few countries I can confirm the quality of their repair work. Every time they have returned my machine rapidly, they never complained about my policy of giving them a system with the hard drive removed, and in all but one case they completely solved the problem on the first try. One time they forgot to replace a broken rubber foot, and to make up for this they sent me a complete set of spare parts by courier – they had repaired the other two faults without problem.

Now comparing the reliability of rack-mount servers would be a lot easier. The vast majority of such systems are stored safely in racks all the time and tend not to be mishandled. My experience with servers from Dell, HP, IBM, and Sun is that apart from routine hard drive failures they all run well if you keep them cool enough. But of course as I haven’t run more than a few dozen of any of those brands at one time I don’t have anything near a statistically significant sample.


Planning Servers for Failure

Sometimes computers fail. If you run enough computers then you encounter failures regularly. If the computers are important then you need to plan for the failure.

An ideal situation is to have redundant servers. But misconfigured clusters can cause more downtime than they prevent, and properly implementing a cluster requires more expensive hardware (you need at least two servers plus hardware to allow a good node to kill a bad node) as well as more time (from people who may charge higher rates).

Most companies don’t have redundant servers. So if you have some non-redundant servers there seem to be two reasonable options. The first one is to use more expensive hardware on a support contract. If a server is really important to you then get a 24*7 support contract – it only takes an extra mouse click (and a couple of thousand dollars) when ordering a Dell server. I am not going to debate the relative merits of Dell vs IBM vs HP at this time, but I think that most people will agree that Dell offers significant advantages over a white-box server, both in terms of quality (the low incidence of failure) and support (including 24*7 part replacement).

The second option is to have a cheap server that can be easily replaced. This IS appropriate for some tasks. For example I have installed many cheap desktop systems with two IDE disks in a RAID-1 array that run as Internet gateway systems for small businesses. The requirements were that they be quiet, use little power (due to poorly ventilated server rooms / cupboards), be relatively reliable, and be reasonably cheap. If one of those systems suddenly fails and no replacement hardware is available then someone’s desktop PC can be taken as a replacement, having one person unable to work due to a lack of a PC is better than having everyone’s work impeded by lack of Internet access!
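For a cheap gateway box of that sort the RAID-1 setup is only a couple of commands. The following is a sketch, not exact commands – the device names and filesystem choice are assumptions to check against the actual hardware:

```shell
# Sketch: mirror two IDE disks for a small gateway box.
# /dev/hda1 and /dev/hdb1 are assumptions - check /proc/partitions first.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1
mkfs.ext3 /dev/md0
cat /proc/mdstat    # watch the initial sync progress
```

If one disk dies the system keeps running on the other, and “mdadm /dev/md0 --add /dev/hdb1” re-adds a replacement disk to be resynced.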

This ability to swap hardware is dependent on the new hardware being reasonably similar. Finding a desktop PC in an office today which can support two IDE disks and which has an Ethernet port on the motherboard and a spare PCI slot is not too difficult. I expect that in the near future such machines will start to disappear, which will be an incentive for using systems with SATA disks and USB keyboards as routers.

This evening I had to advise someone who was dealing with a broken server. The system in question is mission critical, was based on white-box hardware, and had four SATA disks in an LVM volume group for the root filesystem. This gave a 600G filesystem with less than 10G in use. If the person who installed it had chosen to use only a single disk (or even better two disks in a RAID-1 array) then there would have been a wide range of systems that could take the disks and be used to keep the company running for business tomorrow. But finding a computer that can handle four SATA disks is a little more tricky.
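If you inherit such a system, the volume group can at least be shrunk back to fewer disks while it runs. A sketch, assuming a VG named rootvg and a surplus disk /dev/sdd1 (both names are hypothetical):

```shell
# With under 10G in use there is no need for four physical volumes.
pvmove /dev/sdd1            # migrate all allocated extents off this PV
vgreduce rootvg /dev/sdd1   # remove the now-empty PV from the VG
pvremove /dev/sdd1          # clear the PV label from the disk
```

Repeating that for each surplus disk leaves a system whose disks can be moved to almost any machine.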

Running a mission critical server without RAID is obviously very wrong. But using four disks in an LVM volume group both increases the probability of a data destroying disk failure and makes it more difficult to replace the computer itself. Some server installations are fractally wrong.

First Dead Disk of Summer

Last night I was in the middle of checking my email when I found that clicking on a URL link wouldn’t work. It turned out that my web browser had become unavailable due to a read error on the partition for my root filesystem (the usual IDE uncorrectable error thing). My main machine is a Thinkpad T41p; it is apparently possible to replace the CD-ROM drive with a second hard drive to allow RAID-1, but I haven’t felt inclined to spend the money on that. So any hard drive error is a big problem.

Fortunately I had made a backup of /home only a few days ago. I use offline IMAP for my email so that my recent email (the most variable data that matters to me) is stored on a server with RAID-1 as well as on my laptop and my netbook. The amount of other stuff I’ve been working on in my home directory is fairly small, and the amount of that which isn’t on other systems is even smaller (I usually build packages on servers and then scp the relevant files to my laptop for Debian uploads, bug reports, etc).

The first thing I did was to ssh to one of my servers and paste a bunch of text from various open programs into a file there. That included the contents of all open programs, the URLs of web pages I was reading, and the contents of an OpenOffice spreadsheet which I couldn’t save directly (it seems that a read-only /tmp will prevent OpenOffice from saving anything). Then I used scp to copy 600M of ted.com videos that I hadn’t backed up; I don’t usually back up such things, but I don’t want to download them twice if I can avoid it (I only have a quota of 25G per month).

After that I made new backups of all filesystems, starting with /home. I then used tar to back up the root filesystem.
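A root filesystem backup of that sort can be a single tar command. This is a sketch (the destination path is an example); the --one-file-system option stops tar from descending into /proc, /sys, and other mounted filesystems:

```shell
# Archive the root filesystem without crossing mount points.
tar --one-file-system -czf /mnt/backup/root.tar.gz -C / .
```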

The hard drive in the laptop only had a single bad sector, so I could have re-written it so that it would be remapped (as I have done before with that disk), but I think that on a five year old disk it’s probably best to replace it. I had been thinking of installing a larger disk anyway.

To restore, I started from a month-old backup of the root filesystem and then used “diff -r” to discover what had changed; it took me less than an hour to merge the changes from the corrupted root filesystem into the restored one.
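The merge process can be sketched as follows (all paths are examples): restore the backup, then use diff to find what the corrupted filesystem has that the backup lacks.

```shell
# Restore a month-old backup, then compare it with the corrupted copy.
tar -xzf /mnt/backup/root-monthold.tar.gz -C /mnt/restored
diff -r /mnt/restored /mnt/corrupted > /tmp/root.diff
# Review /tmp/root.diff and copy the files that legitimately changed, e.g.:
# cp /mnt/corrupted/etc/some.conf /mnt/restored/etc/some.conf
```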

Now I have lots of free disk space and no data loss!

I am now considering making an automated backup system for /home. My backup method is to make an LVM snapshot of the LV and then copy that – this captures the encrypted data, so I can safely store it on USB devices while traveling. I could easily write a cron job that uses scp to transfer a backup to one of my servers at some strange time of the night.
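A minimal sketch of such a script (the VG and LV names, snapshot size, and server name are all assumptions, and it pipes over ssh rather than scp’ing a temporary copy):

```shell
#!/bin/sh
# Snapshot the LV holding /home, copy the (still encrypted) block device
# to a server, then drop the snapshot.  All names here are assumptions.
lvcreate --snapshot --size 1G --name home-snap /dev/vg0/home
dd if=/dev/vg0/home-snap bs=1M | gzip | \
  ssh backup.example.com "cat > backups/home-$(date +%Y%m%d).img.gz"
lvremove -f /dev/vg0/home-snap
```

A crontab entry such as “17 3 * * * /usr/local/sbin/backup-home” would run it at a suitably strange time of the night.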

The next issue is how many other disks I will lose this summer. I have installed many small mail server and Internet gateway systems running RAID-1, and with record temperatures expected this summer it seems likely that some of them will have disks die.


What to do When You Break a Server

Everyone who does any significant amount of sysadmin work will break a server. Most people who have any significant experience will have broken several. Anyone who has never broken one should be treated with suspicion by other members of the sysadmin team – they probably haven’t learned the caution that most of us learn from stuffing something up badly.

When you break a server there are some things that you can do to greatly mitigate the scale of the disaster. Firstly, carefully watch what happens. For example, don’t type “rm -rf *” and then walk away – watch the command, and if it takes unusually long then press ^C and double-check that you are removing the right files. Fixing a half broken server is often easier than fixing one that has been broken properly and completely!

If you don’t know what to do then do nothing! Doing the wrong thing can make things worse. Seek advice from the most experienced sysadmin who is available. If backups are inadequate then leave the server in a broken state while seeking advice, particularly if you were working at the end of the day and it doesn’t need to be up for a while. There is often time to seek advice by email before doing something.

Do not reboot the system! Even if your backups are perfect there is probably some important data that has been recently modified and can be salvaged. Certain types of corruption (such as filesystem metadata corruption) will leave good data in memory where it can be recovered.

Do not logout! If you do then you may not be able to login again and this may destroy your chances of fixing the system or recovering data.

Do not terminate any programs that you have running. There have been more than a few instances where the first step towards recovering from disaster involved using an open editor session (such as vi or emacs). If the damage prevents an editor from being used (EG by removing the editor’s program or a shared object it relies on) then having one running is very important.
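The same reasoning applies to open shell sessions. Even if the damage has removed /bin/ls or /bin/cat, a shell that is already running can do a surprising amount with just its builtins – a couple of illustrative examples:

```shell
echo *          # list the current directory without /bin/ls
# Read a file without /bin/cat, using only shell builtins:
while IFS= read -r line; do echo "$line"; done < /etc/hostname
```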

These procedures are particularly important if you are unable to visit the server. For example when using a hosted server at an ISP the more cost effective plans give you no option to ever gain physical access. So plugging the disks into another machine for recovery is not an option.

Does anyone have any other suggestions as to what to do in such a catastrophe?

PS This post is related to the fact that I had to recover the last couple of weeks of blog comments and posts from Google’s cache…

Update: I got my data back; now I have to copy one day of blog stuff from one database to another.


Exetel Stupidity

Anand Kumria has an ongoing dispute with Exetel, the latest is that a director of Exetel has libeled him in a blog comment [1].

Having public flame-wars with customers generally isn’t a winning move for a corporation. But doing so in the context of the blog world is a particularly bad idea. The first issue is that almost everyone who regularly reads Anand’s blog will trust him instead of a corporation (Anand is well regarded in the free software community). So it’s not as if accusing Anand of lying will gain anything.

But when a director of the company starts doing this it makes the issue more dramatic and interesting to many people on the net. Now Anand’s side of the story will get even more readers. Of course Anand’s side was always going to get more readers than Exetel’s – I’m sure that Anand’s blog is more popular than Steve Waddington’s. I wouldn’t be surprised if my blog was more popular than Anand’s, and now my readers will be following the Exetel saga for the Lulz. I’m sure that I won’t be the last person to comment on this.

The most amazing thing is that Steve Waddington talks about having to pay when a complaint is made to the TIO (Telecommunications Industry Ombudsman). So I guess that means I should start complaining whenever I get bad service from an ISP and cost them some money! I should have stayed with Optus and started complaining all the time when they caused me problems!

One thing that Steve and people like him should keep in mind is that members of our community are not only heavy users of the Internet, we generally recommend ISPs to other people, and many of us make money working for ISPs. If you want your ISP to get good reviews and to be able to hire good staff then attacking people like Anand is not the way to go.


Ffmpeg and Video on a Viewty Phone

I recently decided to copy some of my FLV (Flash Video) collection to my LGU990 Viewty mobile phone [1]. I was inspired by the ffmpeg cheat sheet [2].

deb http://www.coker.com.au lenny selinux-mm

Firstly I installed the version of ffmpeg that comes from the Debian-Multimedia repository [3]. Then I spent about an hour getting it to build with some patches to not require text relocations. The change involves putting --disable-mmx in the CONFIG_OPT setting for i386 and patching rgb2rgb.c to not compile MMX code. I’ve put the packages in the above APT repository.

My first attempt was to produce AVI files similar to the ones that my Viewty makes when it saves a video. Running “ffmpeg -i file.avi” displays the file details, which include mpeg4 video and mp3 audio. By default ffmpeg creates AVI files that use mp2 audio, and my phone doesn’t like them. I tried using “-acodec libmp3lame”, which gave a file that (according to ffmpeg) had the same encoding as files produced by my phone, but my phone didn’t accept it either. I eventually gave up and used mpeg4, which worked without any problem. I never had a reason to desire AVI files, it just seemed likely that the easiest format would be the one that the phone uses to encode the videos it creates – obviously a bad assumption.

Just use MPEG4 for a Viewty phone!
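For the simple case a single command is all that is needed – this is a sketch assuming the Debian-Multimedia build with libfaac, and the frame size and bitrates are illustrative values to adjust, not the exact ones I used:

```shell
# Straightforward MPEG4 encode of an FLV for the phone; frame size and
# bitrates are example values, not tested settings.
ffmpeg -i input.flv -vcodec mpeg4 -s 320x240 -b 512k \
       -acodec libfaac -ab 96k output.mp4
```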

One of the videos I tried caused various parts of the phone to crash. It would crash the preview screen (which shows icons of the videos), and crash the player after playing for a couple of seconds. Crashing the player caused a soft-boot of the phone (it restarted and asked for my PIN code). I expect that anyone who was suitably motivated could create a hostile mpeg4 file that can exploit a Viewty.

The next problem I had was that most videos had a blank icon for the preview. It seems that the Viewty uses the first frame as the icon, which doesn’t work well for videos that fade in; as such videos comprise a large portion of my collection, that is a problem. The solution is to take 100ms of video from later on in the flv file and prepend it to the start. The ffmpeg FAQ has some information on how to do this [4]; it involves converting the data to a format that can be concatenated and then converting it back. It’s rather ugly, and it would be good if someone added a feature to ffmpeg to support multiple -i options.

I used the below Makefile to convert all the flvs in my collection to mp4 format in a subdirectory named “mp4”. I used a Makefile for this so that I can just run “make” whenever I add more flvs to my collection, and “make -j2” to use both cores of my Opteron CPU. It takes 0.1 seconds of data from 20 seconds into the video and appends the entire video to that. It creates two pipes for the temporary data, which tends to be about 30 times larger than the source flv file.

MP4S:=$(shell for n in *.flv ; do echo $$n | sed -e s/^/mp4\\// -e s/flv$$/mp4/ ; done)
all: $(MP4S)

mp4/%.mp4: %.flv
	mkdir -p mp4
	mkfifo $@.aud $@.vid
	sh -c "(ffmpeg -ss 20 -t 0.1 -i $< -vn -f u16le -acodec pcm_s16le -ac 2 -ar 22050 - ; ffmpeg -i $< -vn -f u16le -acodec pcm_s16le -ac 2 -ar 22050 - ) > $@.aud &"
	sh -c "(ffmpeg -ss 20 -t 0.1 -i $< -an -f yuv4mpegpipe - ; ffmpeg -i $< -an -f yuv4mpegpipe - ) > $@.vid &"
	ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 22050 -i $@.aud -f yuv4mpegpipe -i $@.vid -y $@
	rm $@.aud $@.vid


How to Choose a NetBook

I’ve previously written some suggestions for people choosing a portable computer [1]. Basically it’s about how to start by choosing the correct type of portable computer – if you don’t know whether you want a NetBook or a Laptop then you are really lost.

Now there are a range of NetBook type devices which vary greatly in size, weight, price, screen resolution, and keyboard quality.

Probably the first thing to consider is whether a NetBook will be your only portable PC, or even your only PC. I have an EeePC 701 and a Thinkpad T41p (old, but still more than adequate for my needs). When I’m at home I have a server that I use for compiling and other heavy tasks. So while my Thinkpad is old and I wouldn’t consider using it for all my work, as I have a server I find that I don’t need anything better. My EeePC is small and under-powered for even medium size compiles, but for most other tasks it works quite well. The low screen resolution is annoying, as is the tiny keyboard (which prevents me from touch-typing). But my plan is to spend much more time carrying my EeePC in case of emergencies than I will ever spend using it – so saving size and weight is more important than having a more capable computer.

If I had no laptop then I would have chosen a more powerful NetBook (such as an EeePC 900 or 901 – I bought my EeePC when it was outdated). If I had no server then I would have bought a more powerful laptop a while ago (at least something that can run Xen and KVM).

Now in terms of specific features, the first thing to consider when choosing a laptop or NetBook is whether you can touch-type. If you can then having a keyboard that permits it is a major feature, which then drives the decision of whether your NetBook use will be intensive enough that touch-typing is required (my use of my EeePC does not require touch-typing – I’m annoyed every time I type on it but I deal with it). Of course I do have the option of using a USB keyboard.

When considering reviews of NetBook keyboards one issue that seems relevant is the size of your hands. If the reviewer has fingers that are significantly thinner or fatter than yours then the review of the keyboard may not be relevant to you. I suggest always testing a keyboard before making a purchase decision on a portable computer.

The screen resolution on NetBooks is a significant issue. For most tasks my EeePC 701 is adequate (not great), but there are some programs that require higher resolution; among other things this rules out playing most games (of course the slow CPU also rules out many games). Note that if you hold down the ALT key you can click on the middle of a window and drag it around, so you can work with windows that are larger than your screen (this is essential for programs that have large dialog boxes).

The low resolution of the screen on my EeePC means that there is little space for a task-bar or for windows to be tiled. So while I can comfortably work with 10 windows on one desktop on my 1400*1050 resolution Thinkpad, I struggle with 5 windows on the 800*480 display of my EeePC. Some coding and sysadmin tasks are best done with multiple Xterms open at once; my performance on those tasks is significantly decreased when using my EeePC. So while either machine can be used effectively for a single SSH session, if I need to have 8 sessions open at once then I will have to use my Thinkpad. If I was going to be routinely doing such tasks while on the move then I would have bought a NetBook with a greater display resolution.

The next issue is storage. The machines that are most commonly identified with the NetBook image use flash storage. This makes them resistant to being dropped but also dramatically reduces the storage space (or increases the price). If you have a bigger machine at home then a NetBook with flash storage works well. The 4G of internal storage in my EeePC plus the 8G SD card I always have installed works quite well for me. But I also carry a few USB flash storage devices for extra capacity. Anyone who plans to use a NetBook as their primary PC would need to buy a model with a hard disk, and even for some more casual uses the storage capacity of the flash based models may not be adequate.

It seems to me that anyone who requests advice on buying a NetBook without specifying some detail about these issues will end up receiving recommendations for devices that fit the usage scenarios of other people. A machine that perfectly meets the needs of one of your friends may be totally inappropriate for your use.

My final suggestion is to consider the outdated models as well as the current ones. For certain usage scenarios the original EeePC is still a better machine than most of the newer and more expensive NetBooks that are on the market now. My use case of carrying an EeePC everywhere just in case a server happens to crash (or I need to check my mail) is one where the EeePC 701 is slightly better suited than most newer machines – saving a small amount of weight and space is important enough for me to accept the significant feature loss as a reasonable trade-off. As an aside, I’m disappointed in the apparent lack of small NetBooks on sale at the moment; it seems that every manufacturer is now making NetBooks which are significantly bigger than the original EeePC and only slightly smaller than laptops.

I’m Not a Fan

Fandom is something that has never made sense to me. If a sport such as football interested me then I would play it. Being good at something is not a requirement for participation, in fact my observation is that amateurs who kick a ball around for fun (without even keeping score) appear to enjoy it more than professional players. Also there are a huge number of support roles, anyone who really wants to support a professional sports team can apply for work – a sports team is a medium size corporation and they have all manner of jobs available.

I have had a few people try to convince me that I am somehow missing out on an important experience by not attending sporting events. But the fact is that there is more than enough fandom in and around the computer industry, so if I was interested in fandom then I could get it without going anywhere near a stadium.

One of the advantages of the free software community is the minimal amount of fandom. There are many opposing opinions which get some degree of fan attention (EG GPL vs BSD license, Linux vs FreeBSD vs OpenBSD, and GNOME vs KDE). But this is smaller than it might be due to the fact that free software development and support are activities that tend to drag in everyone who has interest and skill. When you have been involved in software development or support it becomes difficult to sustain the level of prejudice that is required to be a real fan.

This is not to say that there is a lack of strong advocacy. When a project has received contributions from hundreds of people and support from thousands of others there will be a huge number of people who feel that they have some degree of ownership of the project and therefore react strongly when it is criticised. But I think that there is a vast difference between someone who defends a project that they have worked on and someone who merely barracks for a team that doesn’t include them.

I am continually disappointed by Microsoft fans. Anyone who really wants to support Microsoft knows where they can send their CV. I can’t understand why someone would want to barrack for a team that rejects them – such as people who are unable to get employed by MS but who are still fans.

In the free software community every person who contributes is a member of the team. That includes all contributions, a large portion of which do not include patches to source code.


Hating Microsoft

In mailing list discussions I’ve seen Windows users get rather unhappy when people talk about “Hating Microsoft” – this often includes claims that it’s supposedly “unprofessional” to hate one vendor. Some go as far as to claim that it’s a good idea to avoid hiring someone who says that they Hate Microsoft – not that I would want to work for anyone who would reject someone’s CV based on a mailing list discussion.

The thing that they need to understand is that when someone says “I Hate Microsoft” it’s usually in a similar manner to someone saying “I Hate Broccoli” – it’s more of an expression of distaste than real hatred. The IHateMicrosoft.com site has animated pictures resembling nuclear explosions [1], which is good for a laugh (the site also lists some real reasons for avoiding MS). But there doesn’t seem to be any evidence of real hatred for MS; even in the US there doesn’t appear to be anyone wanting to use violence to solve the MS problem.

Abortion doctors are hated, MS isn’t.

The next thing that people need to know is that a significant portion of the “I Hate Microsoft” sentiment comes from people who spend about 40 hours a week being paid to use MS software. I am fortunate that it’s been a few years since I have had to use MS software in any way, and many years since I was forced to use it in any serious way (IE anything other than using Windows as an SSH and email client), so I have little immediate need to get angry at them. But people who are forced to use or support MS software on a daily basis will often get unhappy about the situation.

It’s little things like an ActiveX bug that exposes Outlook and Internet Explorer to remote compromise [2] that can really annoy people. There was never a need for ActiveX, and certainly never a need to have it work via email or be enabled by default. But MS released their software to work in that way and now all the users have to wait patiently for a fix (or scramble for a work-around).

Another issue that seems to get some complaints is the use of terms such as “M$” and “Microsloth” to refer to Microsoft. If that annoys you then please get a grip on yourself! It’s a software company not a religion! Official company documents should have all trademarks spelled correctly, but for casual discussion on a mailing list I think that such slang terms are appropriate. If nothing else you can take it as a declaration of possible bias.

I don’t use such terms, but again that may be because I am fortunate enough to not use MS software. When someone is unable to avoid using inferior software due to the anti-competitive actions of MS it is understandable that they may vent their frustration by misusing trademarked names.

Remember that English is a lot different from the languages used when programming computers. Using “M$” instead of “Microsoft” will not give a syntax error or an error about using an undeclared variable. The word “hate” has different meanings depending on context.