There is currently a lot of speculation about the future of Windows following the massive failure of Vista in the market.
One theory being discussed is that Microsoft will cease kernel development and adopt a Unix kernel, in the same way that Apple adopted a BSD-based kernel.
I predict that MS in its current incarnation (*) will never do that. Having an OS kernel that enables easy porting of code to/from other platforms is entirely against their business model, which relies on incompatibility to lock customers in. Whatever kernel MS use, it has to be incompatible in some ways with everything else. One easy way of achieving this would be to publish a shared object (DLL) interface while keeping the interface between libc (and the other libraries) and the kernel undocumented and ever-changing (simply renumbering the system calls on every minor version increment would be a good start). The DLL interface could then carry the complex APIs that MS loves to force on their victims (see Stewart Smith’s post about getting a file size in Windows for an example of the horror [1]).
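To give a flavour of that horror, here is a minimal sketch (not Stewart’s exact example, and with the error handling pared right down) contrasting the multi-call Win32 route to a file size with the single POSIX stat() call:

```c
/* Illustrative sketch only; not Stewart's exact example.
 * Via the Win32 DLL interface, getting a file size takes a handle
 * and three API calls, where POSIX needs a single stat(). */
#ifdef _WIN32
#include <windows.h>

long long file_size(const char *path)
{
    HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return -1;

    LARGE_INTEGER size;
    BOOL ok = GetFileSizeEx(h, &size);
    CloseHandle(h);
    return ok ? size.QuadPart : -1;
}
#else
#include <sys/stat.h>

long long file_size(const char *path)
{
    struct stat st;
    return stat(path, &st) == 0 ? (long long)st.st_size : -1;
}
#endif
```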
The advantage of this approach would be that MS could cease developing an OS kernel (something that they were never much good at) and concentrate on owning the proprietary DLLs. Nothing would stop them from using a Linux kernel for this; as long as they released all the source to the kernel they use (including the patch to renumber the system calls) they would be within the terms of the GPL.
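As a sketch of what that could look like in practice, the stable DLL export below hides a syscall number that is free to change with every minor release. Every name and number here is invented for illustration; it assumes a glibc-style syscall() wrapper and a hypothetical HYPOTHETICAL_OS_MINOR version macro.

```c
/* Hypothetical sketch: hiding an unstable syscall ABI behind a
 * stable DLL export. All names and numbers here are invented. */
#define _GNU_SOURCE
#include <unistd.h>        /* syscall() */

/* Suppose the renumbering patch shuffles the syscall table on every
 * minor release, so the raw numbers are useless to third parties: */
#if HYPOTHETICAL_OS_MINOR >= 4
#  define SYS_getpid_shuffled 4711
#else
#  define SYS_getpid_shuffled 217
#endif

/* The only documented, stable interface is this exported function;
 * only the vendor's own DLL knows the current number underneath. */
long os_get_process_id(void)
{
    return syscall(SYS_getpid_shuffled);
}
```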
My specific prediction is that some time between Jan 2011 and Dec 2016 Microsoft will release systems with the majority of the kernel code coming from BSD or Linux as their primary desktop and server operating systems.
Could people who disagree please make specific predictions for the future (including dates and actions) so that we can determine who was most accurate?
(*) For future incarnations of Microsoft, after Chapter 11 or being split up in the way that AT&T was, there seems to be no way to predict their actions.
On one of the benchmarks that matter most to the customers who actually sign the big checks — battery life on a ThinkPad — the MSFT OS is still ahead of Linux. (Windows XP as of last summer.)
MSFT could pay ten or fifteen kernel teams for different “Windows” products, and just have ISVs write to .NET or a future “managed” target that runs on all of them.
My prediction: Microsoft will never use enough kernel code from BSD or Linux to oblige it to give customer-visible credit or to release source.
(They’ll read the code and borrow ideas, of course.)
It won’t serve us to underestimate the opponent. Microsoft has some very smart people working for it, people who are capable of writing good code. The reason that so much out of Microsoft sucks is the huge amount of legacy they are dragging around: every badly designed interface they ever put out has users, and those users (and the customers of those users, who are also Microsoft customers) scream if anything breaks, even if it is undocumented behavior. If they didn’t have that severe handicap, they’d leave us in the dust, with their combination of top talent and advance knowledge of the hardware (they know what’s coming out of Intel, AMD, and nVidia long before any free software developers do). They’d be a year ahead at any given time.
And that’s a good reason not to listen to people like Ian Murdock, always complaining that Linux is insufficiently careful to preserve legacy ABIs. It’s the fact that Linux breaks ABIs and Windows doesn’t that’s allowed Linux to catch up so nicely (though still with some significant gaps, as Don points out).
Don: Good point about battery life (and you could also mention the support for suspend), but it doesn’t seem relevant to the discussion about kernel choice. It should be an easy thing to fix if you have some time and some developers.
I doubt that MS could pay for 15 kernel teams; I doubt that they could find that many good programmers. If I wanted to start 15 kernel teams I would begin by creating a large number of primary schools that teach children well, as the start of a 15-year process. Creating 15 independent kernels of the quality that users want would be a project on the scale of the Manhattan Project.
Your point about borrowing ideas is good.
Joe: They don’t drag that much legacy around. Programs written for proprietary Unix systems in the 80s are still being run on Linux today, while programs for Windows 3.1 generally stopped working a very long time ago. I know people who support systems such as Windows 95 (long out of support from MS) because they run programs that don’t run on newer systems.
MS are quite good at going to great lengths to work around bugs in popular programs, but in general they aren’t so good at preserving compatibility.
Hmmm –
didn’t they “get” their whole TCP/IP stack from the BSDs already – years ago, while they were still fighting Novell in the server market?
And haven’t some of their “new” command line functions reminded us of Unix for a while now?
I bet that they “borrow” lots of code from there already – and with BSD licenses, that’s an OK thing to do, AFAIK (or as far as I understand that legal stuff).
cheers,
Wolfgang
Vista’s problems in the market have little to do with its kernel, so they have no reason to abandon it. The failure of Vista stems from the shell (the user interface). It is horribly designed, inconsistent, and too different from their previous interfaces with no obvious benefit. Microsoft will take a page from the Apple and Gnome playbook and finally realize that user interface design and simplicity matter. They will release a version of Windows with a Vista-based kernel but a simplified shell.
http://www.kuro5hin.org/?op=displaystory;sid=2001/6/19/05641/7357
Wolfgang: The above URL has some interesting information on the history of Windows TCP/IP. It seems that MS bought code from a company that used some BSD code, but that the majority of such code is now long gone.
I don’t know what the “new” command line functions are. I am sure that they get “inspiration” from GPL code. But this is far from taking an entire kernel from BSD or Linux, which is what I predict.
skelter: They had lots of grand plans for filesystem integration with databases etc. Those plans were abandoned, apparently due to kernel development issues.
How big are the kernel teams behind Plan 9 or QNX? If you decide that you’re going to go after a specific target market, and have plenty of build/test infrastructure behind you, kernels are a doable project. Look for MSFT to try “HPC Windows/.NET HPC Edition” first; regular Windows doesn’t seem to be catching on in the HPC market, and giving all the bragging rights to Linux must be a kick in the corporate pride.
(I do predict that MSFT will have to throw out its driver model, either doing drivers itself based on docs the hardware vendors provide, or insisting that vendors commit them to a Novell-like build/test farm. Making the stability of an OS depend on joe.random@hw-labs.example.com is futile in the long run.)
Microsoft already has a Unix-like kernel: it was called Services for Unix, and it is now embedded in Windows 2008. So yes, Windows can now be the best Unix server you have ever seen, and it will run native source code (updated for machine specifics, of course). It just runs as a subsystem, in the same way the old POSIX and OS/2 subsystems did.
Microsoft has approximately 20,000 developers spread across multiple countries with access to source code, so why do you think they are unable to write whatever code they like, i.e. have 9 kernel teams if it makes commercial sense to do so? Highly unlikely, I agree, but possible.
Absolutely; the new interface is both GUI and command driven. They need to cater for the ponytailed sandal wearers who can’t use a mouse, but do want to script their life and make it easier. With the new version of Exchange and PowerShell there are actually more features driven from the command line than from the GUI, which I think is a first.
Don: I expect that the kernel teams for Plan 9 and QNX are quite small. Plan 9 appears to have never had widespread use, and QNX was used in a limited market segment.
Developing an OS kernel to run on a small set of hardware is much easier than writing a general-purpose OS to run on lots of different hardware manufactured by a variety of third parties that you don’t control. If you add in scaling from embedded to HPC then it becomes even harder again.
A company such as SGI could easily write an entire OS kernel if they reassigned the people who currently work on Linux kernel development. Of course the OS kernel in question would not do all the things that Linux does.
Alphag: A layer to make POSIX APIs work on an NT kernel is not the same thing as an OS kernel.
Having 9 viable kernel teams requires having 9 groups of competent programmers. I doubt that there are enough people in the world who have such skills.
So they (1) find a hardware vendor who wants to get into HPC and get them to do a unique architecture, like SGI Altix, and make it available for licensing (2) write an OS for that, and (3) sell it as “Windows” something. They don’t have to do one OS that works for a wide range of hardware and apps, since apparently that’s only possible if the code is out there for experimentation and discussion by people who have diverse uses for it.
Don: You are correct that they can create a set of OSs which can all be sold under the same name, as we know they have already done that in several ways. The problem is that the different OSs don’t run the same software in the same way.
Remember when OS/2 was advertised as “A better DOS than DOS and a better Windows than Windows”? The advert was correct: I could set up two DOS VMs running two DOS programs that could not run on the same DOS configuration, and I could have three Windows VMs running three Windows programs of which no two would run on the same Windows configuration.
Now of course you could have one OS emulate another (as NT emulated Windows 3.11), and if you tried really hard you could get it to work as well as OS/2 did. Another possibility is to have multiple VMs running complete OSs (which means different login sessions etc, which is painful to use).
The benefits to MS of taking Linux kernel source, making some simple changes for gratuitous incompatibility, and then using it for a new OS are significant. After all, it has worked reasonably well for Apple. :-#