The NT kernel originally followed a microkernel design, much like the kernel of OS X, which is still built on Mach. And no, Apple did not invent that concept, and neither did Microsoft; the microkernel idea came out of academic research, and Mach itself was developed at Carnegie Mellon.
Microkernels are modular and produce a lot of communication overhead, which makes them less efficient than monolithic kernels like Linux. That's why Microsoft largely gave up on the microkernel design and moved many parts back into the kernel proper. And that's also why Linux never went the microkernel route in the first place. Your "beach ball" in OS X is --the-- symptom of that communication overhead and of how it actually slows the system down.
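To make the overhead argument concrete, here is a minimal user-space sketch in C (my own illustration, not actual kernel code): the same trivial "service" is invoked once as a direct function call - the monolithic style - and once via message passing over pipes to a separate process - the microkernel style. The exact numbers don't matter; the point is the order-of-magnitude gap between the two loops.

    /* overhead.c - direct call vs. message passing (illustration only) */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static int service(int x) { return x + 1; }      /* the "kernel service" */

    static double now(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        enum { N = 100000 };
        int req[2], rep[2];
        if (pipe(req) || pipe(rep)) return 1;

        if (fork() == 0) {                   /* child = the "server module" */
            close(req[1]); close(rep[0]);    /* close the unused pipe ends */
            int x;
            while (read(req[0], &x, sizeof x) == sizeof x) {
                x = service(x);
                write(rep[1], &x, sizeof x);
            }
            _exit(0);
        }

        volatile int sink = 0;               /* keeps the compiler honest */
        double t0 = now();
        for (int i = 0; i < N; i++) sink = service(sink);    /* direct calls */
        double t1 = now();
        for (int i = 0; i < N; i++) {                        /* messages */
            int x = sink;
            write(req[1], &x, sizeof x);
            read(rep[0], &x, sizeof x);
        }
        double t2 = now();

        close(req[1]);                       /* server sees EOF and exits */
        wait(NULL);
        printf("direct calls:    %f s\nmessage passing: %f s\n",
               t1 - t0, t2 - t1);
        return 0;
    }

Compile it with cc overhead.c -o overhead and run it; every message-passing round trip pays for two copies and two scheduler trips, which is exactly the cost a monolithic kernel avoids.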
Modular kernels supposedly are more "beautiful in design" and supposedly also more robust, because in theory a failing module shouldn't be able to bring down the entire system. However, our "beach ball of death" shows us a different reality - it is usually caused by an interruption in that communication flow: the respective kernel modules wait for all eternity for a synchronization that can no longer happen because one or more of them went south.
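And here is what that "waiting for all eternity" failure mode looks like in miniature - again a user-space C sketch of my own, not actual OS X code: two "modules" each hold one lock and wait for the other's. Run it and the process hangs for good, which is precisely the beach-ball experience.

    /* deadlock.c - two "modules" waiting forever on each other (on purpose) */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    static void *module_one(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock_a);
        sleep(1);                     /* give module_two time to take lock_b */
        pthread_mutex_lock(&lock_b);  /* blocks forever: module_two holds it */
        return NULL;
    }

    static void *module_two(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock_b);
        sleep(1);
        pthread_mutex_lock(&lock_a);  /* blocks forever: module_one holds it */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, module_one, NULL);
        pthread_create(&t2, NULL, module_two, NULL);
        puts("two modules are now waiting for each other...");
        pthread_join(t1, NULL);       /* never returns */
        return 0;
    }

Compile with cc -pthread deadlock.c; inside a kernel there is no window to force-quit, so the whole machine sits there spinning its cursor.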
Andrew S. Tanenbaum, the father of Minix and Amoeba, champions modular kernels. Hurd also is a modular kernel, which might be the reason it never gets finished.
Communication within a monolithic kernel is faster and more efficient, and, as Linux demonstrates, in practice even more robust than most modular designs. On the very same hardware - a Mac - OS X usually is the slowest of all operating systems at any given task, while Linux usually is the fastest. The NT family sits between the two.
And, yes, the NT family also has a POSIX subsystem to allow source-code compatibility with compliant Unix (console) applications. Furthermore, the NT family of operating systems never had a single bit of DOS in its core. It has a DOS-compatible subsystem (NTVDM) that is launched when needed. And for clarification: what most people call a "DOS prompt" on NT systems actually is an NT command prompt and NOT DOS-compatible by default. It -looks- like DOS, but it uses a completely different command interpreter (CMD.EXE), not the DOS command interpreter (COMMAND.COM). Nor is it DOS that's running there; it's an NT console, and applications running in that mode are true NT applications with full access to all NT APIs. The DOS subsystem is only activated when a legacy app requires it.
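You can see the difference with a minimal console program (a sketch of my own, for illustration only): it talks to the Win32 API directly, which no COMMAND.COM-era DOS program ever could.

    /* ntconsole.c - a true NT console application, not a DOS program */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* GetStdHandle returns an NT object handle - no DOS equivalent */
        HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
        char msg[64];
        DWORD written;
        int len = snprintf(msg, sizeof msg, "Hello from NT process %lu\r\n",
                           (unsigned long)GetCurrentProcessId());
        /* WriteConsoleA is a Win32 call, not an INT 21h DOS service */
        WriteConsoleA(out, msg, (DWORD)len, &written, NULL);
        return 0;
    }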
About the VMS->WNT myth: Dave Cutler, the mind behind VMS, joined Microsoft rather late in the design process of Win NT. He added his famous name to an already very advanced project. Not unlike Steve Jobs adding his name to the Mac project.
You should also note that Windows NT already was an enterprise-ready 32-bit operating system with support for multiprocessing, pre-emptive multitasking AND multiple hardware platforms (MIPS, Alpha, PowerPC and i386) years before Apple bought NeXTstep - back when Apple was still selling an operating system with only cooperative multitasking.
I've been using Windows NT since version 3.5 in business environments, and when that platform began its success story, Apple was struggling for survival and a total nobody in the business world. In all honesty, they still are a nobody in the business world. I've yet to see the first company in real life that runs its servers on OS X. Everybody uses an NT-based product, or Solaris, or something from the Linux family. Even a few BSDs are out there, but OS X is --NOWHERE-- to be seen.
So much for the reality in the server rooms. And on the business desktop, almost everything you can find is also a member of the NT family (2000, XP, Vista). The German Axel Springer Verlag is probably the largest company trying to migrate to OS X desktops. Time will tell if they succeed.
OS X is NOT a good platform for enterprise-level rollouts or mass deployments. NT, on the other hand, was designed from the ground up for customized, network-based deployment.
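For instance, NT-family unattended setups are driven by plain-text answer files. Here is a minimal sketch (key names as used in the Windows deployment documentation; the exact set varies by version, and all values here are placeholders):

    ; unattend.txt - minimal unattended-setup answer file (illustrative)
    [Unattended]
    UnattendMode = FullUnattended
    OemPreinstall = Yes
    TargetPath = \WINNT

    [GuiUnattended]
    ; TimeZone is a numeric index (e.g. 4 = Pacific)
    AdminPassword = "ChangeMe!"
    TimeZone = 4

    [UserData]
    FullName = "Standard User"
    OrgName = "Example Corp"
    ComputerName = WS-0001

Point the installer at a file like this and the whole setup runs without a single click - which is exactly what makes the scenario below practical.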
You guys should try to deploy 1,000 or more desktops with a standardized software installation on both OS X and NT. Something tells me you will no longer be so excited about OS X after that experience.
But I don't even have to deploy the software images myself. You see, I can order 1,000 computers from Dell and --THEY-- install --my-- software image on each machine for me. Try getting that service from Apple.
Dell service sucks? And maybe HP's service sucks, too? You live in a dream world. These guys have an enterprise-grade support organization the consumer company Apple can only dream of. Not only do they fix a broken machine within four hours WITHOUT any discussion, they also have what I would call "SWAT teams" for seriously complex network problems - like the kind you run into when you are moving a dozen agencies into a 30-floor skyscraper. (Yes, I've been there and done that with them - I'm not talking fiction here.)
Oh, and about the "cheap parts" that Dell supposedly puts into their machines: you don't have a clue what you're talking about. Dell's parts come from the same Chinese factories your Apple parts come from. Open a 30" Apple Cinema Display and a 30" Dell display and tell me if you find a real difference - except maybe that the Dell has more features, like a card reader. Then tell me if you find different chipsets or hard disks in a Dell.
There's no shame in this, but face it: Your Mac is a PC in a designer case running a FreeBSD-based operating system with a proprietary GUI, nothing more.
A few last comments about security:
There are no secure systems - at least not as long as there are human users.
However, Microsoft went to great lengths to make Vista secure but still usable, and they did the same with their servers.
In reality, you -will- protect your entire network with security appliances that filter all incoming and outgoing network traffic, and you will also install anti-virus software on all machines. And in a business environment you -will- do the same on Macs, no matter how secure you believe they are.
The most dangerous malware is the kind nobody knows about, because the truly dangerous criminals want access to your network without being noticed. And they want to be able to come back.
Furthermore, the most dangerous factor for a system's security still comes from within the company or organization: the disgruntled employee who willingly installs the backdoor software. The outside break-in is mostly a movie myth - such attempts usually end in a denial of service and unresponsive servers, not in a security breach. No, successful attackers normally have help from the inside.
But even without those scary movie scenes, every normal corporate network is protected by multiple layers of defense and you have to pass several firewalls -and- security appliances and traffic filters before you even hit your first server. By that time, most malware has already been filtered out.
Then you have another filter on your mail and file servers.
Then another one on your workstations.
And still, the crap gets through. And this would also happen if everybody were using Unix systems, because then the folks would invest their energy in exploiting those systems - written by humans, used by humans, therefore flawed by design.
Sure, Unixes probably are more secure than Windows-based systems, although I'm no longer so sure now that Vista 64 is out - it is rather well locked down. But a lot of that security -does- come from the fact that there still is not much interest in exploiting those systems. It's also a cost-benefit issue: if it's easier to build a botnet with 100 million home computers running XP than with 10 million home computers running OS X, why the heck would I even think about trying to hack OS X?
I know that this is regarded as a FUD argument, but I think it is a very valid argument.
Anyway. There are high-traffic websites that run entirely on Windows - eBay, for example, or the World Health Organization and several other United Nations agencies. And they all work without show-stopping security issues. So obviously even non-Unix-based networks can be made secure.