Is the PC Dying?

I just read an interesting article about the dispute between Microsoft and Apple over what counts as a PC [1]. Steve Jobs predicted a switch from desktop PCs to portable devices, while Steve Ballmer of Microsoft claimed that the iPad is just a new PC.

Defining a PC

I think that the defining characteristic of the IBM compatible PC was its open architecture. Right from the start the PC could have its hardware expanded by adding new circuit boards into slots on the motherboard (similar to other systems of that era such as the Apple II and the S-100 bus machines). The deal with IBM included Intel sharing its CPU designs with other manufacturers such as NEC and AMD, from the 8086 until the mid-90’s. AMD specialised in chips that were close copies of Intel chips at lower prices and higher clock rates, while NEC added new instructions. Compaq started the PC clone market as well as the laptop market. System software for IBM compatible PCs was primarily available from IBM and Microsoft in the early days, along with less popular variants such as CP/M-86, Novell NetWare, and others. In the late 80’s there was OS/2 as an alternative OS and Windows as one of several optional GUI environments to run on top of MS-DOS or PC-DOS. In the mid-90’s PCs were used for running protected mode OSs such as Linux and Windows NT.

Now if we look at a system such as a Netbook then it clearly misses some of the defining characteristics of the desktop PC. I can’t upgrade a Netbook in any meaningful way – changing a storage device or adding more RAM does not compare to adding an ISA/MCA/EISA/VL-Bus/PCI/PCIe expansion card. With my EeePC 701 I don’t even have an option of replacing the storage as it is soldered to the motherboard! A laptop allows me to add a PCMCIA or PC-Card device to expand it, but with a maximum of two cards and a high price this isn’t a great option.

What is Best for Home Users?

For a while now my parents have been using 3G net access for their home Internet use [2]. So a laptop now provides greater benefits for their use than it did when they used cable and ADSL net access. My parents have been considering getting a new monitor (1920*1080 resolution monitors are getting insanely cheap nowadays), and driving such a monitor effectively might require a more capable PC. I recently bought myself a nice refurbished Thinkpad for $796 [3]. It seems likely that I could find a refurbished Thinkpad at auction which is a little older and slower for a lower price; even buying an old T41p would be a reasonable option. This would give my parents the option of using the Internet not only when on holidays, but also in a different part of their house when they are at home.

The Apple iPad would probably be quite a reasonable Internet platform for my parents if it weren’t for the fact that it uses DRM. While it’s not a great platform for writing, my parents probably don’t do enough writing for that to be a huge problem. So I might look for a less restrictive tablet platform for them. At the moment the best resolution for a tablet seems to be 1024*768, but I expect that some tablets with a higher resolution (maybe with a hybrid tablet/laptop design like the Always Innovating Smartbook [4]) will be released soon. I hope that the iPad and other closed devices don’t get any serious market share, but it seems likely that OSs such as Android, which are only slightly more open, will have a significant market share.

Ultra-Mobile Design vs PCs Design

One significant problem with ultra-mobile devices is that they make significant engineering trade-offs to achieve their small size. For a desktop system there are lots of ways of doing things inefficiently: running the AMD64 or i386 architecture (which is wasteful of energy) and having lots of unused space inside the box in case you decide to upgrade it. But for a laptop there are few opportunities for being inefficient, and for a tablet or smart phone everything has to be optimised. When the optimisation of a device starts by choosing a CPU that’s unlike most other systems (note that there is a significant range of ARM CPUs that are not fully compatible with each other), it becomes very difficult to produce free software to run on it. I can salvage a desktop PC from a rubbish bin and run Linux on it (and I’ve done that many times), but I wouldn’t even bother trying to run Linux on an old mobile phone.

It seems that in the near future my parents (and many other people with similar needs) will be best suited by having a limited device such as a tablet that stores all data on the Internet and not having anything that greatly resembles a PC. In many ways it would be easier for me to support my parents by storing their data in the cloud and then automatically backing it up to removable SATA disks than with my current situation of supporting a fully capable PC and backing it up to a USB device whenever I visit them.

I’m also considering what to do for some relatives who are about to go on a holiday in Europe and want to be able to send email etc. It might not be possible just yet, but an ideal way of doing this would be to provide them with something like an iPad that they can use with a local 3G SIM for the country that they stay in. They could then upload all their best photos to some server that I can back up, and send email to everyone they know. An iPad isn’t good for this now, as you don’t want to go on holiday in another country while carrying something that is really desirable to thieves.

Ultra Mobile Devices are Killing PCs

It seems to me that Google Android and the Apple iPad/iPhone OS are taking over significant parts of the PC market. The people who are doing traditional PC things are increasingly using Laptops and Netbooks, and the number of people who get the freedom that a PC user did in the 80’s and 90’s is decreasing rapidly.

I predict that by 2012 the majority of Linux systems will be running Google Android on hardware that doesn’t easily allow upgrading to more open software. At the moment probably the majority of Linux systems are wireless routers and other embedded devices that people don’t generally think about. But when iPad type devices running a locked-down Linux installation start replacing Ubuntu and Fedora desktop systems people will take notice.

I don’t think that the death of the PC platform as we know it will kill Linux, but it certainly won’t do us any good. If there were smarter people at Microsoft then they would be trying to work with the Linux community on developing innovative new ways of using desktop PCs. Of all Microsoft’s attempts to leave the PC platform, the only success has been the Xbox, which is apparently doing well.

Tablet devices such as the iPad could work really well in a corporate environment (where MS makes most of its money). On many occasions I’ve been in a meeting that had to adjourn because someone needed to go to their desk to look something up. If everyone had an iPad type device at their desk that used a wired network when it was available and encrypted wireless otherwise, then for a meeting everyone could take their tablet without its keyboard and be able to consult all the usual sources of data without any interruption.
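The network behaviour described here (wired when docked, encrypted wireless otherwise) is just a preference-ordered failover policy. A minimal sketch, with all interface names invented for illustration:

```python
# Hypothetical connection chooser for such a tablet: prefer the docked
# wired link, fall back to encrypted wireless, and never use open wireless.
def choose_link(available):
    def rank(link):
        if link["type"] == "wired":
            return 0
        if link["type"] == "wireless" and link.get("encrypted"):
            return 1
        return None  # unencrypted wireless is unacceptable

    acceptable = [(rank(l), l["name"]) for l in available if rank(l) is not None]
    return min(acceptable)[1] if acceptable else None

links = [
    {"name": "office-wifi", "type": "wireless", "encrypted": True},
    {"name": "dock-eth0", "type": "wired"},
    {"name": "cafe-open", "type": "wireless", "encrypted": False},
]
# Docked at the desk: the wired link wins.
assert choose_link(links) == "dock-eth0"
# In the meeting room (no wired link): encrypted wireless is chosen.
assert choose_link([links[0], links[2]]) == "office-wifi"
```

The point of ranking rather than hard-coding is that the tablet switches silently as links come and go, which is what makes carrying it into a meeting seamless.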

Could a high-resolution version of the iPad kill MS-Windows in the corporate environment?

Mail Server Security

I predict that over the course of the next 10 years more security problems will be discovered in Sendmail than in Postfix and Qmail combined – greater in both number and severity.

I also predict that today’s versions of Postfix and Qmail will still be usable in 10 years’ time, and that no remote security problems other than DoS attacks will be discovered in them.

I’ve been having arguments about MTA security with Sendmail fans for over 10 years. I would appreciate it if the Sendmail fans would publish their own predictions, then we can wait 10 years and see who is more accurate.

I don’t recommend using Qmail (Postfix is what I use), but I think that its author wrote code that is unlikely to be exploited.

Predictions for 2009 and Beyond

Stewart has made some predictions for the future of computing [1].

He predicts that within 2 years the majority of consumer machines will be laptops and have SSDs (not rotational media). I predict that by the end of next year more than half of all new consumer machines sold will be laptops (defined as portable machines with the display and keyboard forming part of a single unit), and that more than half of such machines will have an SSD as the primary storage (i.e. used for booting and for most common file access). I predict that by the end of 2010 the majority of all computers shipped (in all form factors, including games consoles and servers) will have SSDs as their primary storage. I predict that in late 2010 rotational media will start to go away for most tasks, but for at least the next year the model will be SSDs for small/light/fast operations and rotational media for large capacity. I’m not disagreeing with Stewart, just being more precise. Also, while Val made some good points about the reliability of SSDs [2], I don’t think that this will be an obstacle in the low end of the market. There is little evidence of computers failing in the consumer market due to being unreliable – it seems that Microsoft has conditioned people to expect unreliability.

I predict that Sun will not release ZFS under the GPL in time for anyone to care. The release of OpenSolaris was way behind schedule and I don’t expect anything different this time around.

Stewart predicts that in five years Linux will have significantly more desktop market share than Apple. I agree and also predict that Apple will convert to the Linux kernel. I predict that Apple will become the first Linux distributor to make any significant hardware sales for the mainstream computer market (Linux bundled with hardware has already done well for mobile phones, routers, Tivo, and similar devices where the user doesn’t know what OS is running).

I predict the death of Windows mobile. I predict that in five years the mobile phone/PDA market will be dominated by Android with a variety of other Linux based phones. I predict that some time after five years the iPhone will go away.

Chris Samuel has made some predictions too [3]. He predicts that within two years “The distinction between laptops, netbooks and mobile phones will get even more blurred with consumers demanding mobiles with more power and lighter and lighter laptops/netbooks”. I believe that the difference between laptops, netbooks, and mobile phones is primarily one of IO (size of keyboard, sockets for peripherals, and size of screen). For desktop use the only application I use which requires more RAM or CPU power than my EeePC 701 can provide is Firefox; a combination of more efficient JavaScript interpretation and better coding practices by web designers would solve that problem. A significant portion of the mass of a laptop is dedicated to supporting IO ports and maintaining the structural integrity of the device. A common feature in science fiction is laptops that can be rolled up, stretched to size, etc. (the Thinkpad Butterfly keyboard was an attempt at a first step towards this which failed due to issues of mechanical strength).

As some Netbook class systems already have 3G networking built in it seems a logical extension to have telephony functions built in to a laptop. I predict that laptops with full telephony support will go on sale in 2010.

One promising feature in regard to laptop IO is the new DisplayPort [4] video connector. It will only be an incremental improvement in the space taken by IO ports, and I am not expecting anything revolutionary in the near future. I predict that HDMI will be a failure in the market and that DVI will never gain critical market share; it will be VGA and DisplayPort on most systems by 2012.

Predicting that technological developments won’t happen is always risky, but I predict that the mechanical issues which separate the heavier laptops and desktop replacements from netbooks (in terms of making a large display and keyboard that won’t break frequently) won’t be solved within five years. On the same note, I don’t expect anyone to try building a mobile phone which can have a full-size screen and keyboard connected to it (although it would be possible to do so). So I expect that the phone/PDA, Netbook, and laptop distinction will remain for at least the next 5 years.

One thing that would make sense is to have a small device (PDA or mobile phone) store data that is security relevant and connect it to full-size machines for serious work. So for example you could use a desktop machine for Internet banking (maybe in an Internet cafe) and have your mobile phone ask you to confirm the transaction and then authenticate you to the bank server. I predict a larger role for PDAs and mobile phones as computers as soon as people start to take security seriously. I won’t try and guess when that might be, but I predict that it won’t be for at least five years.
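The confirmation step described above can be sketched as a simple message authentication scheme. Everything below is invented for illustration (the key provisioning, function names, and transaction format are all hypothetical); the point is only that the untrusted desktop never holds the secret, so it can initiate a transaction but not approve one:

```python
import hmac, hashlib
from typing import Optional

# Hypothetical sketch: the phone holds a secret shared with the bank, the
# untrusted desktop merely relays messages, and the bank only acts on a
# transaction once the phone (and its user) has signed off on it.

PHONE_SECRET = b"provisioned-when-the-phone-was-enrolled"

def phone_confirm(transaction: str, user_approves: bool) -> Optional[bytes]:
    # Runs on the phone: display the transaction, sign it only if approved.
    if not user_approves:
        return None
    return hmac.new(PHONE_SECRET, transaction.encode(), hashlib.sha256).digest()

def bank_verify(transaction: str, tag: bytes) -> bool:
    # Runs at the bank: recompute the MAC with its own copy of the secret.
    expected = hmac.new(PHONE_SECRET, transaction.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

tx = "pay $500 to account 12345678"
tag = phone_confirm(tx, user_approves=True)
assert tag is not None and bank_verify(tx, tag)
# A transaction altered by a compromised desktop fails verification:
assert not bank_verify("pay $5000 to account 99999999", tag)
```

Because the desktop (or Internet cafe machine) never sees the secret, malware there can request a transaction but cannot approve a modified one, which is what makes the small trusted device valuable.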

I predict that increasing oil prices will make a significant impact on the price of computers before the end of 2010. Not that I expect prices to suddenly jump upwards; it’s more likely that prices will steadily increase while at the same time new technology to reduce production expenses in other areas is introduced.

I also predict that increasing oil prices will increase the desire to maintain systems for longer periods of time without maintenance. For example my Thinkpad T41p has had a few significant part replacements (a couple of motherboards, half the case, and a few keyboard replacements). This is OK while plastic costs almost nothing and manufacturing expenses are also very low. But in future I expect that people will want laptops that can run for years without needing part replacements and which have a service life of 10 years or more. This requirement for strength will counteract the demand for laptops that are as light as netbooks.

Bill Joy

Some time ago Bill Joy (who is famous among other things for being a co-founder of Sun) [1] wrote an article for Wired magazine titled “Why the future doesn’t need us” [2]. He wrote many sensible things but unfortunately focussed on the negative issues and didn’t receive a good response. On reading it today I thought more highly of the article than I did in 2000 when it was printed, largely due to having done some background research on the topic. I’ve recently been reading Accelerating Future [3] which has a more positive approach.

Now a talk by Bill Joy from 2006 has been published on the TED.com web site [4]. He starts by talking about “super empowered individuals” re-creating the 1918 flu. He also claims that “more technology super-empowers people more”. Such claims seem to be excessively hyped; I would be interested to see comments from someone who has a good knowledge of current bio-technology as to the merits of those claims.

He talks briefly about politics and has some good points, such as “the bargain that gives us civilisation is the bargain to not use power” and “we can’t give up the rule of law to fight an asymmetric threat – which is what we seem to be doing”.

He mentions Moore’s law and suggests that a computer costing $1000 then (2006) might cost $10 in 2020. He seems to be forgetting the cost of the keyboard and other mechanical parts. I can imagine a high-end CPU which cost about $800 a couple of years ago being replaced by a $2 CPU in 2020, but I don’t expect a decent keyboard to be that cheap any time before we get fully automated nano-factories (which is an entirely separate issue). Even the PSU (which is subject to a significant amount of government regulation for safety reasons) will have a floor cost that is a good portion of $10. Incidentally the keyboard on my EeePC 701 sucks badly; I guess I’m spoiled by the series of Thinkpad keyboards that I keep getting under warranty (which would cost me a moderate amount of money if I had to pay every time I wore one out). I will make a specific prediction: by 2015 one of the better keyboards will comprise a significant portion of the entire cost of a computer system (more than a low-end computer unit), such that in some reasonable configurations the keyboard will be the most expensive part.
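As a back-of-envelope check, the $1000-to-$10 claim implies a specific rate of cost reduction, which is easy to compute:

```python
import math

# The claim: a $1000 computer in 2006 costs $10 in 2020.
cost_start, cost_end = 1000.0, 10.0
years = 2020 - 2006

halvings = math.log2(cost_start / cost_end)  # halvings needed for a 100x drop
halving_period = years / halvings            # years per halving

print(f"{halvings:.2f} halvings, one every {halving_period:.1f} years")
# → 6.64 halvings, one every 2.1 years
```

A halving roughly every two years is close to the classic reading of Moore’s law for transistor costs, which is why the figure is plausible for the silicon. The objection above is that keyboards, cases, and PSUs don’t ride that curve at all.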

It would be good if PCs could be designed to use external PSUs (such as the Compaq Evo models that took laptop PSUs). Then the PSU, keyboard, and monitor would be optional extras thus giving a small base price. Given that well built PSUs and keyboards tend not to wear out as fast as people want to replace computers, it seems that financial savings could be provided to most customers by allowing them to purchase the computer unit without the extra parts. People like me who type enough to regularly wear out keyboards and who keep using computers for more than 5 years because they still work are in a small minority and would of course be able to buy the same bundle of computer, PSU, and keyboard that new users would get.

In 2020 a device the size of an iPAQ H39xx with USB (for keyboard and mouse) and the new DisplayPort [5] digital display interface (which is used in recent Lenovo laptops [6]) would make a great PDA/desktop. You could dock a PDA the way some people dock laptops now and carry all your data around with you.

Bill cites an example of running the Mac interface on an Apple ][ (does anyone know of a reference for this?) as an example of older hardware being more effective with newer software. It’s a pity that new software often needs such powerful hardware. E.g. it’s only recently that hardware developments have overtaken the developments of OpenOffice to make it deliver decent performance.

He has an interesting idea of using insurance companies to replace government regulation of food and drugs. The general concept is that if you can convince an insurance company that your new drug is not a great risk to them, so that they charge a premium you can afford, then you can sell it without any further testing. Normally I’m in favor of government regulation of such things, but given the abject failures of the US government this idea has some merit. Of course there’s nothing stopping insurance companies from just taking a chance and running up debts that they could never hope to repay (in a similar manner to so many banks).

Finally I think it’s interesting to note the camera work that the TED people used. My experience of being in the audience for many lectures (including one of Bill Joy’s lectures in the late 90’s) is that a speaker who consults their notes as often as Bill does gives a negative impression to the audience. Note that I’m not criticising Bill in this regard, often a great talk requires some significant notes – very few people can deliver a talk in a convincing manner entirely from memory. It seems to me that the choices of camera angle are designed to give a better impression than someone who was seated in the audience might receive – there’s no reason why a video of a talk should be spoiled by seeing the top of the speaker’s head while they consult their notes!

How Many Singularities?

There is a lot of discussion and speculation about The Singularity. The term seems to be defined by Ray Kurzweil’s book “The Singularity Is Near” [1] which focuses on a near-future technological singularity defined by significant increases in medical science (life extension and methods to increase mental capacity) and an accelerating rate of scientific advance.

In popular culture the idea that there will only be one singularity seems to be well accepted, so the discussion is based on when it will happen. One of the definitions for a singularity is that it is a set of events that change society in significant ways such that predictions are impossible – based on the concept of the Gravitational Singularity (black hole) [2]. Science fiction abounds with stories about what happens after someone enters a black hole, so the concept of a singularity not being a single event (sic) is not unknown, but it seems to me that based on our knowledge of science no-one considers there to be a black hole with multiple singularities – not even when confusing the event horizon with the singularity.

If we consider a singularity to merely consist of a significant technological change (or set of changes) that change society in ways that could not have been predicted (not merely changes that were not predicted) then it seems that there have been several already, here are the ones that seem to be likely candidates:

0) The development of speech was a significant change for our species (and a significant change OF our species). Maybe we should consider that to be singularity 0 as hominids that can’t speak probably can’t be considered human.

1) The adoption of significant tool use and training children in making and using tools (as opposed to just letting them learn by observation) made a significant change to human society. I don’t think that with the knowledge available to bands of humans without tools it would have been possible to imagine that making stone axes and spears would enable them to dominate the environment and immediately become the top of the food chain. In fact as pre-tool hominids were generally not near the top of the food chain they probably would have had difficulty imagining being rulers of the world. I’m sure that it led to an immediate arms race too.

2) The development of agriculture was a significant change to society that seems to have greatly exceeded the expectations that anyone could have had at the time. I’m sure that people started farming as merely a way of ensuring that the next time they migrated to an area there was food available (just sowing seeds along traditional migration routes for a hunter-gatherer existence). They could not have expected that the result would be a significant increase in the ability to support children, a significant increase in the number of people who could be sustained by a given land area, massive population growth, new political structures to deal with greater population density, and then the wiping out of hunter-gatherer societies in surrounding regions. It seems likely to me that the mental processes needed to predict the actions of a domestic animal (in terms of making it a friend, worker, or docile source of food) differ from those needed to predict the actions of other humans (whose mental processes are similar) and from those needed to predict the actions of prey that is being hunted (you only need to understand enough to kill it).

3) The invention of writing allowed the creation of larger empires through better administration. All manner of scientific and political development was permitted by writing.

4) The work of Louis Pasteur sparked a significant development in biology which led to much greater medical technology [3]. This permitted much greater population densities (both in cities and in armies) without the limitation of significant disease problems. It seems that among other things the world wars depended on developments in preventing disease which were linked to Pasteur’s work. Large populations densely congregated in urban areas permit larger universities and a better exchange of knowledge, which permitted further significant developments in technology. It seems unlikely that a population suffering the health problems that were common in 1850 could have simultaneously supported large-scale industrial warfare and major research projects such as the Manhattan Project.

5) The latest significant change in society has been the development of the Internet and mobile phones. Mobile phones were fairly obvious in concept, but have made structural changes to society. For example I doubt that hand-writing is going to be needed to any great extent in the future [4], the traditional letter has disappeared, and “Dates” are now based on “I’ll call your mobile when I’m in the area” instead of meeting at a precise time – but this is the trivial stuff. Scientific development and education have dramatically increased due to using the Internet and business now moves a lot faster due to mobile phones. It seems that nowadays any young person who doesn’t want to be single and unemployed needs to have either a mobile phone or Internet access – and preferably both. When mobile phones were first released I never expected that almost everyone would feel compelled to have one, and when I first started using the Internet in 1992 I never expected it to have the rich collaborative environment of Wikipedia, blogging, social networking, etc (I didn’t imagine anything much more advanced than file exchange and email).

Of these changes the latest (Internet and mobile phones) seems at first glance to be the least significant – but let’s not forget that it’s still an ongoing process. The other changes became standard parts of society long ago. So it seems that we could count as many as six singularities, but it seems that even the most conservative count would have three singularities (tool use, agriculture, and writing).

It seems to me that the major factors for a singularity are an increased population density (through couples being able to support more children, through medical technology extending the life expectancy, through greater food supplies permitting more people to live in an area, or through social structures which manage the disputes that arise when there is a great population density) and increased mental abilities (which includes better education and communication). Research into education methods is continuing, so even without genetically modified humans, surgically connecting computers to human brains, or AI we can expect intelligent beings with a significant incremental advance over current humans in the near future. Communications technology is continually being improved, with some significant advances in the user-interfaces. Even if we don’t get surgically attached communications devices giving something similar to “telepathy” (which is not far from current technology), there are possibilities for significant increments in communication ability through 3D video-conferencing, better time management of communication (inappropriate instant communication destroys productivity), and increased communication skills (they really should replace some of the time-filler subjects at high-school with something useful like how to write effective diagrams).

It seems to me that going from the current situation of something significantly less than one billion people with current (poor) education and limited communications access (which most people don’t know how to use properly) to six billion people with devices that are more user-friendly and powerful than today’s computers and mobile phones combined with better education as to how to use them has the potential to increase the overall rate of scientific development by more than an order of magnitude. This in itself might comprise a singularity depending on the criteria you use to assess it. Of course that would take at least a generation to implement, a significant advance in medical technology or AI could bring about a singularity much sooner.
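The order-of-magnitude claim is easy to sanity-check with rough arithmetic. The population figures come from the paragraph above; the multipliers for better devices and education are invented purely for illustration:

```python
# Rough arithmetic behind the "order of magnitude" claim. Most of the factor
# comes from the population of effective contributors alone; the assumed
# gains from tools and education are illustrative guesses, not measurements.
current_contributors = 1.0e9   # generous: "significantly less than" a billion
future_contributors = 6.0e9
device_gain = 1.5              # assumed gain from more usable, powerful devices
education_gain = 1.5           # assumed gain from teaching people to use them

factor = (future_contributors / current_contributors) * device_gain * education_gain
print(f"overall factor: {factor}x")  # → overall factor: 13.5x
```

Even with modest multipliers the result clears an order of magnitude, which is why the claim doesn’t depend on any single technology arriving.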

But I feel safe in predicting that people who expect the world to remain as it is forever will be proven wrong yet again, and I also feel safe in predicting that most of them will still be alive to see it.

I believe that we will have a technological singularity (which will be nothing like the “rapture” that was invented by some of the most imaginative interpretations of the bible). I don’t believe that it will be the final singularity unless we happen to make our species extinct (in which case there will most likely be another species to take over the Earth and have its own singularities).

My Prediction for the iPhone

I have previously written about how I refused an offer of a free iPhone [1] (largely due to its closed architecture). The first Google Android phone has just been announced, and the TechCrunch review is interesting – while the built-in keyboard is a nice feature, the main thing that stands out is the open platform [2]. TechCrunch says “From now on, phones need to be nearly as capable as computers. All others need not apply”.

What I want is a phone that I control, and although most people don’t understand the issues enough to say the same, I think that they will agree in practice.

In the 80’s the Macintosh offered significant benefits over PCs, but utterly lost in the marketplace because it was closed (less available software and less freedom). Due to being used mainly in Macs and similar machines, the Motorola 68000 CPU family also died out, and while it’s being used in games consoles and some other niche markets, the PPC CPU family (the next CPU used by Apple) also has an uncertain future. The IBM PC architecture evolved along with its CPU from a 16 bit system to a 64 bit system and took over the market because it does what users want it to do.

I predict that the iPhone will be just as successful as the Macintosh OS and for the same reasons. The Macintosh OS still has a good share of some markets (it has traditionally been well accepted for graphic design and has always provided good hardware and software support for such use), and is by far the most successful closed computer system, but it has a small part of the market.

I predict that the iPhone will maintain only a small share of the market. There will be some very low-end phones that retain the extremely closed design which currently dominates the market, and the bulk of the market will end up going with Android or some other open phone platform that allows users to choose how their phone works. One issue that I think will drive user demand for control over their own phones is the safety issues related to child use of phones (I’ve written about this previously [3]). Currently phone companies don’t care about such things – the safety of customers does not affect their profits. But programmable phones allow the potential for improvements to be made without involving the phone company – while with the iPhone you have Apple as the roadblock.

Now having a small share of the mobile phone market could be very profitable, just as the small share of the personal computer market is quite profitable for Apple. But it does mean that I can generally ignore them as they aren’t very relevant in the industry.

I’m Skeptical about Robotic Nanotech

There has been a lot of fear-mongering about nanotech. The idea is that little robots will eat people (or maybe eat things that we depend on such as essential food crops). It’s unfortunate that fear-mongering has replaced thought and there seems to have been little serious discussion about the issues.

If (as some people believe) nanotech has the potential to be more destructive than nuclear weapons then it’s an issue that needs to be discussed in debates before elections and government actions to alleviate the threat need to be reported on the news – as suggested in the Accelerating Future blog [0].

I predict that there will be three things which could be called nanotech in the future:

  1. Artificial life forms as described by Craig Venter in his talk for ted.com [1]. I believe that these should be considered along with nanotech because the boundary between creatures and machines can get fuzzy when you talk about self-replicating things devised by humans which are based on biological processes.
    I believe that artificial life forms and tweaked versions of current life forms have significant potential for harm. The BBC has an interesting article on the health risks of GM food which suggests that such foods should be given the same level of testing as pharmaceuticals [2]. But that’s only the tip of the iceberg; the potential use of Terminator Gene technology [3] in biological warfare seems obvious.
    But generally this form of nanotech has the same potential as bio-warfare (which currently has significantly under-performed when compared to other WMDs) and needs to be handled in the same way.
  2. The more commonly discussed robotic nanotech: self-replicating machines which can run around to do things (e.g. work inside a human body). I doubt that tiny robots can ever be as effective at adapting to their environment as animals, and I also doubt that they can self-replicate in the wild. Currently we create CPU cores (the most intricate devices created by humans) from very pure materials in “clean rooms”. Making tiny machines in clean rooms is not easy; making them in dirty environments is going to be almost impossible. Robots as we know them are based around environments that are artificially clean, not natural environments. Making robots that can self-replicate in a clean room when provided with pure supplies of the necessary raw materials is a solvable problem, but I predict that self-replication outside such controlled environments will remain in science fiction.
  3. Tiny robots manufactured in factories to work as parts of larger machines. This is something that we are getting to today. It’s not going to cause any harm as long as the nano-bots can’t be manufactured on their own and can’t survive in the wild.

In summary, I think that the main area that we should be concerned about in regard to nano-bot technology is as a new development on the biological warfare theme. This seems to be a serious threat which deserves the attention of major governments.

Future Versions of Windows

There is currently a lot of speculation about the future of Windows following the massive failure of Vista in the market.

One theory that is being discussed is that Microsoft will cease kernel development and adopt a Unix kernel in the same way that Apple adopted a BSD based kernel.

I predict that MS in its current incarnation (*) will never do that. Having an OS kernel that enables easy porting of code to/from other platforms is entirely against their business model, which relies on incompatibility to lock customers in. Whatever kernel MS uses, it has to be incompatible in some ways with everything else. One easy way of achieving this would be to have a shared object (DLL) interface published, and have the interface between libc and other libraries and the kernel be undocumented and ever-changing (simply renumbering the system calls on every minor version increment would be a good start). The DLL interface could then have the complex APIs that MS loves to force on their victims (see Stewart Smith’s post about getting a file size in Windows for an example of the horror [1]).

The advantage of this approach would be that MS could cease developing an OS kernel (something that they were never much good at) and concentrate on owning the proprietary DLLs. There would be nothing stopping them from using a Linux kernel for this, as long as they release all source to the kernel they use (including the patch to renumber the system calls) they would be within the terms of the GPL.
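The renumbering scheme is easy to model. The following toy sketch (all names, versions, and numbers invented) shows why a program that bypasses the vendor DLL and hard-codes raw system call numbers breaks on every minor version, while code going through the DLL keeps working:

```python
# Toy model of the renumbering idea. Raw system call numbers change on every
# minor version; only the vendor's DLL, which ships with the kernel, carries
# the current mapping from stable names to raw numbers.

SYSCALL_TABLES = {
    "1.0": {"open": 5, "read": 3, "write": 4},
    "1.1": {"open": 3, "read": 5, "write": 4},  # renumbered in the update
}

def kernel_dispatch(version, number, *args):
    # The kernel resolves a raw number to an operation for this version.
    by_number = {n: name for name, n in SYSCALL_TABLES[version].items()}
    op = by_number.get(number)
    if op is None:
        raise OSError(f"bad syscall {number}")
    return f"{op}({', '.join(map(str, args))})"

def dll_call(version, name, *args):
    # The vendor DLL always knows the right numbers for its own version.
    return kernel_dispatch(version, SYSCALL_TABLES[version][name], *args)

# Code using the DLL keeps working across the version bump:
assert dll_call("1.0", "read", "fd0") == "read(fd0)"
assert dll_call("1.1", "read", "fd0") == "read(fd0)"
# A program that hard-coded 1.0's number for "read" now invokes "open":
assert kernel_dispatch("1.1", 3, "fd0") == "open(fd0)"
```

The DLL and kernel are rebuilt together, so the churn costs the vendor nothing while making the raw kernel interface useless as a porting target.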

My specific prediction is that some time between Jan 2011 and Dec 2016 Microsoft will release systems with the majority of the kernel code coming from BSD or Linux as their primary desktop and server operating systems.

Could people who disagree please make specific predictions for the future (including dates and actions) so that we can determine who was most accurate?

(*) For future incarnations of Microsoft after Chapter 11, or after being split up in the way that AT&T was, there seems to be no possibility of predicting their actions.