I was talking about the "Server" model of the Mac Pro - not the OS X Server software - I'm sure OS X Server will continue, since the Mac mini Server seems like a success. However, since Lion the emphasis of OS X Server has shifted distinctly towards the "small workgroup server" rather than the "big iron server" (see the wails from Snow Leopard Server users when Lion came out). I suspect that, for many people, the Time Machine facilities will be the USP.
Small home environments are going to be covered by a Time Capsule or an AirPort Extreme (given that the latter can take a USB drive) if Time Machine is the one and only criterion.
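For what it's worth, pointing a client at either box is a one-liner with `tmutil` once the share exists; a minimal sketch in Python (the AFP URL and share name are made-up placeholders, not anything from this thread):

```python
import subprocess

# Point Time Machine at a network share (Time Capsule, or a USB disk
# hanging off an AirPort Extreme). URL/share below are made-up examples.
DEST = "afp://backup@airport-extreme.local/BackupDisk"

# setdestination needs root; -p prompts for the share's password.
subprocess.run(["sudo", "tmutil", "setdestination", "-p", DEST], check=True)

# Kick off an immediate backup to confirm the destination works.
subprocess.run(["sudo", "tmutil", "startbackup", "--block"], check=True)
```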
"Small workgroup" is generally more indicative of the number of people than of the individual workload each one puts onto the server.
For example, at one point ActiveSan had (or was going to have) a product called InnerPool (the company didn't survive, so I'm pointing to a discussion elsewhere; essentially a sizable chunk of those folks got absorbed by Quantum/StorNext).
http://www.xsanity.com/forum/viewtopic.php?p=46623#46623
It put the XSan metadata controller on a single card with an SSD. The new Mac Pro is a bit more than a single card with an SSD, but for some workgroups it would work better as a metadata server than a mini would (more CPU horsepower, and dual Ethernet connections so that one could sit on a dedicated metadata VLAN). It is metadata that is being stored, so bulk data capacity inside the device isn't an issue for a reasonably sized XSan workgroup deployment.
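To make the "dedicated metadata VLAN" point concrete: XSan/StorNext clients find the metadata controllers via the fsnameservers file, so one sanity check is that every listed controller address actually sits on the metadata subnet. A hedged sketch in Python (the file path follows the usual OS X Xsan layout as I recall it, and the subnet is an invented placeholder):

```python
import ipaddress

# Xsan/StorNext lists metadata controller addresses in fsnameservers.
FSNAMESERVERS = "/Library/Preferences/Xsan/fsnameservers"

# Hypothetical dedicated metadata VLAN; substitute your own subnet.
METADATA_NET = ipaddress.ip_network("10.10.10.0/24")

with open(FSNAMESERVERS) as f:
    for line in f:
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        addr = ipaddress.ip_address(line.split()[0])
        flag = "ok" if addr in METADATA_NET else "NOT on metadata VLAN"
        print(f"{addr}: {flag}")
```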
While the number of people isn't completely decoupled from how much workload they throw at the metadata server, a relatively small number of users with lots of metadata blocks to grab and/or manipulate can generate a workload that would test a limited Mac mini. Higher concurrent user loads tend to mean higher server workloads.
Given the new direction Apple is taking with the new Mac Pro, I don't think they are going to terminate XSan anytime soon. The new Mac Pro may be an oddball for a rack-based SAN farm, but the Mac mini isn't a perfect fit out of the box either.
Similar issues apply for the new Mac Pro versus the mini if running as a ZFS "head node" (http://getgreenbytes.com/solutions/zevo/). The server-side computational levels are high. It doesn't necessarily have to be just file-serving duties; ZFS is simply illustrative that there is more than just the XSan metadata workload, even within the file-system space. In short, it is the server-side computational requirements that would separate a Mac Pro Server from a Mac mini server. [I know some will view the difference between the two purely as an internal storage capacity difference, but I think that is a warped viewpoint.]
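As a rough illustration of what a head node spends its cycles on: a mirrored pool with compression and checksumming enabled is all server-side CPU work. A sketch assuming some ZFS port on the box (ZEVO, or a current OpenZFS build) exposing the standard zpool/zfs CLI; the pool name and device identifiers are placeholders:

```python
import subprocess

# Build a mirrored pool from two placeholder devices, enable compression
# (all of this burns head-node CPU), then ask for a health summary.
subprocess.run(["sudo", "zpool", "create", "tank",
                "mirror", "/dev/disk2", "/dev/disk3"], check=True)
subprocess.run(["sudo", "zfs", "set", "compression=on", "tank"], check=True)

# 'zpool status -x' prints a single line if every pool is healthy.
out = subprocess.run(["zpool", "status", "-x"],
                     capture_output=True, text=True, check=True)
print(out.stdout.strip())
```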
The Server version of the mini is just a SKU with a different configuration; the bulk of the device is the same. The configuration tweak Apple could do with a Mac Pro Server model is possibly to drop the second GPU, which isn't likely to be useful in most server contexts. They could swap that BOM cost decrease for an SSD capacity increase or just a general price reduction (I would not hold my breath on the latter).
Alternatively, they could swap the 2nd GPU for a drive card. Server OS/apps on RAID 1 would have as much service uptime as a mini in mirror mode. [It doesn't look like there is room for a 2.5" drive (unless super ultra slim) on a card, but a second SSD would provide redundancy/capacity without a whole lot of drama. It is just a cut-down version of the GPU card with the drive slot on it; just remove the GPU stuff. It would be a smaller, cheaper card with mainly "delete this stuff" engineering.]
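That mirror is nothing exotic; the mini's out-of-the-box mirror mode is just AppleRAID, and the same setup on two internal SSDs is a couple of diskutil calls. A sketch (the set name and disk identifiers are placeholders; check `diskutil list` for the real ones):

```python
import subprocess

# Create a RAID 1 (mirror) set for the server OS/apps, the same
# AppleRAID mirror a Mac mini server uses. Identifiers are examples.
subprocess.run(["sudo", "diskutil", "appleRAID", "create", "mirror",
                "ServerBoot", "JHFS+", "disk1", "disk2"], check=True)

# Confirm both members show as online before trusting it for uptime.
subprocess.run(["diskutil", "appleRAID", "list"], check=True)
```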
Fine - the issue with the new Mac Pro as a server is that 2/3 of the beast is filled with expensive "workstation-class" graphics cards that won't do much good in a headless server.
OS X has terminal-server-like capabilities. It isn't necessarily headless from a "logged in and using graphics on the server" point of view.
http://appleinsider.com/articles/11/03/31/mac_os_x_10_7_lion_to_introduce_multi_user_screen_sharing
Nor, as I pointed out, do there have to be two GPU cards present if it is a separate SKU sold with the OS X Server app bundled.
VNC over a hardwired 1GbE local LAN link isn't that bad for a lot of usages. (Of course, in the distant past I've used 300 baud modems and X Windows over 10Mb/s Ethernet to get remote work done, so I may have a different, more appreciative perspective.)
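Back-of-the-envelope numbers bear that out; even the pathological zero-compression case is usable on a wired gigabit link, and real VNC encodings send far less than this:

```python
# Worst case: shipping raw, uncompressed frames of a 1920x1080 desktop.
width, height, bytes_per_pixel = 1920, 1080, 4
frame_bytes = width * height * bytes_per_pixel        # ~8.3 MB per frame

link_bytes_per_sec = 1_000_000_000 / 8                # 1GbE ~= 125 MB/s

fps_uncompressed = link_bytes_per_sec / frame_bytes
print(f"{fps_uncompressed:.0f} fps with zero compression")   # ~15 fps

# Hextile/ZRLE-style encodings typically cut desktop traffic by an order
# of magnitude, so interactive use over a local wire is comfortable.
```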
(Mind you, a quick Google shows that OpenCL-accelerated DBMSs, webservers etc. are being developed so I wouldn't count it out in the longer term - but here and now...)
Webservers no. DBMS not so sure.
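The DBMS case is at least plausible because a WHERE-clause scan is embarrassingly parallel. A toy sketch of the idea using pyopencl (purely illustrative; not any shipping database's code, and the column data and threshold are invented):

```python
import numpy as np
import pyopencl as cl

# Evaluate a WHERE-style predicate over a 1M-row integer column on
# whatever OpenCL device is available. Toy example, invented data.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

column = np.random.randint(0, 1000, size=1 << 20).astype(np.int32)
hits = np.empty_like(column)

kernel_src = """
__kernel void where_gt(__global const int *col,
                       __global int *hit,
                       const int threshold) {
    int i = get_global_id(0);
    hit[i] = col[i] > threshold;   /* predicate: value > threshold */
}
"""

mf = cl.mem_flags
col_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=column)
hit_buf = cl.Buffer(ctx, mf.WRITE_ONLY, hits.nbytes)

prg = cl.Program(ctx, kernel_src).build()
prg.where_gt(queue, column.shape, None, col_buf, hit_buf, np.int32(500))
cl.enqueue_copy(queue, hits, hit_buf)

print(f"{hits.sum()} of {column.size} rows match the predicate")
```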
How are SMB and Active Directory not cross-platform c.f. the protocols that Apple have used previously?
How long did it take Samba to match all of the latest updates to AD (primary domain controller, etc.) and to SMB 3.0 (formerly 2.2)?
They may have originated from Windows, and MS may not be too chuffed with Samba, but they're pretty much industry standard.
These are de facto standards that are reverse engineered and eventually dribble out to multiple platforms. That is a huge stretch of the "cross platform" definition. For significant blocks of time, the standard as it evolves only works on one platform. Then folks uncork that, and Microsoft tweaks it again. Rinse and repeat. But yeah, if you lag sufficiently far behind the point when the standard is deployed on Windows... then yes, it is effectively cross-platform.
If Windows weren't the overwhelmingly dominant player, that wouldn't be happening. That only underlines what OS X Server is actually primarily competing with.
It is questionable whether Apple is going to be able to keep up with this over time. They may have been frustrated by Samba development decisions (which is why they rolled their own SMB stack), but they still have to prove they can actually keep up with the Microsoft treadmill.