Funny, in your first statement you imply that not everything is virtualized, and in the second you state that it is. Betting and knowing are two different things. I guess it would be possible to be totally virtualized for simple web and file sharing.
We're about 80% virtualized; VMware and Gartner both tell us that's really high.
Where did I say that everything is virtualized? I said that most things are, including the bulk of, if not all of, the services offered by the companies I've listed.
80% might be really high right now, but the trend is clear: these numbers are growing. It can take years to migrate servers from one environment to another, and sometimes it makes sense to retire bare-metal servers by attrition. That number will only go up, because it makes no sense for it not to.
My point was simply that virtualization is where it's at, and Apple is woefully behind the times by not embracing it. Admittedly, that phrasing implies Apple is trying to be a leader in the server arena, which is definitely not a given.
Right, but you said we could allocate more RAM and CPU if we were virtualized versus being on a physical machine. I just don't see that happening.
If you have the hardware for this, why not? Provided you have the physical space for it, you can expand a cluster to be as large as you wish by continuing to add new nodes to it (which would up the number of CPU cores, RAM slots, etc.). I'm not aware of any constraints on how large a cluster can be, and handing an individual guest more CPU or RAM out of that pool is just a management call (rough sketch below).
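To make that concrete, here's a minimal sketch of what "allocate more RAM and CPU" looks like on the virtualized side. It assumes a Linux/KVM host managed through the libvirt Python bindings; the guest name "db01" and the target numbers are made up for illustration.

import libvirt

# Connect to the local hypervisor and look up a running guest.
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("db01")   # hypothetical guest name

# Grow the running guest to 8 vCPUs and 16 GiB of RAM without a reboot.
# Memory is given in KiB, and live changes only succeed if the guest was
# defined with enough maximum-vCPU / maximum-memory headroom.
dom.setVcpusFlags(8, libvirt.VIR_DOMAIN_AFFECT_LIVE)
dom.setMemoryFlags(16 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

conn.close()

The equivalent change on a physical box means downtime and a screwdriver, which is really the whole argument.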
Additionally, not everything is x86. It's not really possible to virtualize your high-performance database clusters.
It's probably not as practical to virtualize things if you can't do para-virt either, which is why I should have said (and didn't) that this applies to the Windows, Linux, and Solaris worlds. However, even the FreeBSD world is starting to support Xen guests, if it doesn't already, so it seems like it's only a matter of time before para-virt drivers follow.
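For what it's worth, here's a rough way to see what a guest has actually been handed. This assumes a Linux guest; the /sys/hypervisor path is Linux-specific and won't tell you much elsewhere.

from pathlib import Path

# On a Linux guest, /sys/hypervisor/type reports the hypervisor
# (e.g. "xen") when a paravirtual interface is exposed to it.
hv = Path("/sys/hypervisor/type")
if hv.exists():
    print("hypervisor interface exposed:", hv.read_text().strip())
else:
    print("no /sys/hypervisor entry; bare metal, or fully virtualized without one")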
This just leaves Apple, the only vendor not in on the game.
What I'm stating is not myth. It's based on testing in our environments. I see it every day.
I'm not sure what your argument is. If you give something additional hardware, its ability to scale and do more things simultaneously (and therefore perform better) is pretty obvious, no?
Maybe this is not so if you just have 5 users or something, but this would fall under my fringey category.