
dyn

macrumors 68030
Aug 8, 2009
2,708
388
.nl
You may use whatever you want but the things you named are simply not very good arguments in favour of using Solaris over something else like FreeBSD. Other systems also have these options thus the question remains unanswered: why choose Solaris?
 

Mr-Stabby

macrumors 6502
Sep 1, 2004
338
324
We run purely Mac OS Lion Server on our 120 client network at work. It runs File Services, a SAN for Shared Video editing projects, and the usual things like print services, web services, DNS, DHCP etc etc. Our users all have networked home folders run by AFP and Open Directory. No Windows network is involved.

The thing Mac OS X Server has going for it is that it's designed for running Macs and iOS devices, and you can set up a network with it fairly easily with limited admin experience. It allows you to control networked Macs in a way that other 3rd party alternatives just don't.

However, when I first set up our network, Tiger Server had just come out. I got the impression then that Apple were very interested in the server market. They were selling Xserves and developing OS X Server, Xsan, the Xserve RAID etc. There were no iPhones or iPads to speak of, so they were focused purely on Macs, with iPods as a sideline. Though OS X Server had bugs, you got the impression that Apple were putting their all into it, especially as they eventually made products like Final Cut Server. That is why I decided to invest in it (it was still quite expensive then), as it seemed the obvious choice for our large Mac network.

However, this really isn't the case any more. We upgraded to Lion Server purely to support our new Lion Macs, and it's terrible. Networked home folders via AFP are now completely unreliable. They crash often and randomly kick people off. Up until 10.7.3, I think, you couldn't even kick off a crashed user without restarting the AFP service. The exact same hardware was running Snow Leopard without a single issue. (We're actually rolling back to Snow Leopard soon, despite the Lion issues it will no doubt bring up.)

I can't go home after a day's work without worrying that the server will crash overnight or some service will start misbehaving the next day. That's Lion Server all over. You really cannot trust it to run 24/7 in an enterprise environment.

What's worse is that you feel so alone owning Mac servers. If a problem appears on a Windows server, you can google the error number or message and within seconds you've got 50 pages telling you what the error is and possible solutions. With Macs, I often get very bizarre AFP errors and other Open Directory related errors, and they're almost impossible to interpret yourself. So you google them, and you get absolutely nothing, except the odd Apple support page where the small number of enterprise admins left running Macs are pulling their hair out over the same problem. Hardly anyone is running Mac servers any more, so there is no support network to investigate these problems. No one at Apple is any help either. When you've got 50 people literally screaming at you through your office door demanding you fix the server, it's so demoralising.

The perfect solution for me would be if a 3rd party company actually made an equivalent to Mac OS X Server with a decent GUI that ran on a standard Mac and had all the necessary services like AFP, Open Directory, MCX etc. Does such a thing exist? One that is actually supported? If so I would move to it in a heartbeat. Unfortunately we invested so much in Xserves just before they were EOL'd that we can't replace them for a few years yet.

The moment I knew Apple were going to sideline OS X Server was when the price was dropped to £30. There is no way any real development or effort is going to go into OS X Server again, so why should I keep running it?
 

Don.Key

macrumors regular
Jan 11, 2005
132
6
Let me bump this one up.

I plan to dump OS X SL Server soon in favor of Solaris (OpenIndiana, actually) servers, which are the de facto standard in our shop now.

What daemons do you use to emulate OS X Server under other UNIXes?

We absolutely need:

Network Authentication and accounts
Network Homes (Both on-line and synced)

We would really love to have:

Network search a-la Spotlight

Can it be done?
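(For reference, my current understanding of the usual open-source stand-ins, as a sketch rather than a tested recipe: OpenLDAP, optionally with Kerberos and the clients' nss/pam LDAP modules, covers network authentication and accounts; netatalk covers AFP network homes. A netatalk 3.x-style afp.conf fragment for auto-exported homes might look like this; the basedir path is an assumption, and 2.x uses AppleVolumes.default instead:)

```ini
; afp.conf -- netatalk 3.x style (netatalk 2.x uses AppleVolumes.default)
[Global]
; advertise over Bonjour so Macs see the server
zeroconf = yes

[Homes]
; export each LDAP user's home directory as an AFP home share
; adjust basedir to wherever homes actually live
basedir regex = /export/home
```

(Synced mobile homes and Spotlight-style network search are the hard parts; those lean on client-side MCX/Portable Home Directories and server-side indexing, which as far as I know have no drop-in open-source equivalents.)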
 

radiogoober

macrumors 6502a
Jun 7, 2011
972
1
I ran Lion Server last night for about 2 hours. I installed it, turned on File Sharing, and it completely wrecked my "server", which was already serving as a file hub and Time Machine hub. It absolutely refused to let other computers back up to different partitions on an external drive (this works fine without Lion Server). I hated it, and for a small household most of the services are absolutely worthless.

I restored via Time Machine to a previous backup, thankfully, and wrote Apple a long letter detailing my experience and they issued a refund in about an hour. A++ service.
 

radiogoober

macrumors 6502a
Jun 7, 2011
972
1

That setup is incredible, but expensive. I think it's like $1100-$1200. But it gives you dual Thunderbolt ports, and you could add a little card for 4 eSATA ports and another card for a pair of gigabit Ethernet ports (link aggregation). So you could add an awesome rack-mount storage system and have a ton of bandwidth to the rest of the network.

I went with their much cheaper solution, the little metal rack that holds two Mac minis and has front USB outlets. I'll post pics in a minute.
 

Mr-Stabby

macrumors 6502
Sep 1, 2004
338
324
The xMac-Mini Server is rather expensive isn't it? $1295 and you have to buy a Mac Mini on top of that?
 

TX65

macrumors newbie
Jan 27, 2010
14
0
I used to own 3 Xserves and they were great and the server OS was easy to set up and reliable.

When Apple discontinued the Xserve, it was clear that Apple was abandoning the serious server market and I sold my Xserves.
 

radiogoober

macrumors 6502a
Jun 7, 2011
972
1
The xMac-Mini Server is rather expensive isn't it? $1295 and you have to buy a Mac Mini on top of that?

Yup. It's incredibly expensive, but... if you had the money (I'm not saying you don't, I mean people in general) and you put a top of the line Mac mini in it, plus the cards for dual gigabit Ethernet and 4x eSATA, it'd be pretty damn powerful. But insanely expensive, and honestly not worth it. :)

------

Here is my RackMac mini:

http://www.sonnettech.com/product/r...mactech&utm_medium=banner&utm_campaign=rmm300

IMG_0502.jpg



IMG_0504.jpg


It has a base model Mac mini in it. Instead of using Lion Server, I'm just running plain Lion on it. File sharing is enabled, as is screen sharing, so I can do whatever I need to do over the network connection. It has a Thunderbolt MyBook Duo (2x3TB Thunderbolt drives), a 3TB FireWire drive, and a pair of large USB2 disks. They serve as a file server and as backup destinations for three computers' Time Machine backups, plus a bootable clone of the server HD in case something goes bad.

I actually have the server hosting my iTunes library instead of just storing my iTunes data on the file server. My reasoning was that the server will always be on, so let it go ahead and actually serve iTunes. I can still access my entire library from my other computers through Home Sharing (in iTunes). My only complaint with this setup is that iTunes will not remember playback positions on shared music, so it sucks for listening to lengthy radio shows that I enjoy (O&A!)

To add to the library, I've "shared" the "Automatically Add to iTunes" folder on the server to my computer. I just drop stuff in it and it gets added to iTunes on the server. I've configured a couple of scripts, along with the Hazel app, to automatically convert any torrent-downloaded TV shows to .m4v and send them to iDentify. All I have to do in iDentify is verify the metadata and click one button, then it gets sent to the server. So the whole thing is pretty smooth and automated.
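For anyone who wants to replicate it, the conversion step can be little more than a wrapper around HandBrakeCLI that Hazel invokes on a finished download. This is an illustrative sketch, not my actual script: the function name, output directory, and preset name are made up (preset names vary by HandBrake version), and HANDBRAKE lets you point at a different binary for testing.

```shell
# convert_to_m4v: hypothetical Hazel-invoked helper.
# Transcodes $1 into directory $2 (default ~/Converted) as .m4v via
# HandBrakeCLI, then prints the output path so a follow-up rule can
# hand the file to iDentify for metadata tagging.
convert_to_m4v() {
    in=$1
    out_dir=${2:-$HOME/Converted}
    mkdir -p "$out_dir"
    base=$(basename "$in")
    out="$out_dir/${base%.*}.m4v"     # swap the extension for .m4v
    "${HANDBRAKE:-HandBrakeCLI}" -i "$in" -o "$out" --preset "AppleTV 2" || return 1
    printf '%s\n' "$out"
}
```

Hazel just runs the function on whatever lands in the watched downloads folder, so the whole pipeline stays hands-off until the metadata check in iDentify.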

No, the rack isn't near finished, and I have a ton of wiring to do. I'll showcase some more stuff in another thread if anyone is interested.
 

besson3c

macrumors member
Apr 9, 2003
98
0
Respectfully, I think many of you are missing the point.

As has been said, nobody runs an OS on bare metal hardware these days, this model is too outdated and expensive. Virtualization is where it is at.

Not only does Apple not provide a decent VM host environment (nor is one available with features you'd expect from a server, as has been stated here), but OS X is not suitable for running as a VM guest either, since your best case scenario is getting it to work fully virtualized (rather than para-virtualized, which offers much faster network and disk access). Last I checked FreeBSD lacks para-virt drivers too.

As has also been said, the rackable hardware thing is problematic too.


I think Apple killed off OS X server because Windows does well with small business, and Linux does well with larger business. I don't think there is really much room for Apple to want to compete. A lot would be needed for Apple to become a viable competitor, technologically speaking (another thing being a more sophisticated software update mechanism, or native package management).

I don't recommend anybody get into OS X Server products, unless you are just running a website hosting pictures of your cat or something.
 

Alrescha

macrumors 68020
Jan 1, 2008
2,156
317
As has been said, nobody runs an OS on bare metal hardware these days, this model is too outdated and expensive.

Respectfully, I think you need to get out more. Virtualization has come a long way in forty years, but to say that 'nobody' runs an OS directly on the hardware is simply preposterous.

A.
 

besson3c

macrumors member
Apr 9, 2003
98
0
Respectfully, I think you need to get out more. Virtualization has come a long way in forty years, but to say that 'nobody' runs an OS directly on the hardware is simply preposterous.

A.


Sorry, I should have stated this as nobody being *interested* in running it that way, where "nobody" is any business with the sense to know better.
 

Mr-Stabby

macrumors 6502
Sep 1, 2004
338
324
As has been said, nobody runs an OS on bare metal hardware these days, this model is too outdated and expensive. Virtualization is where it is at

We do. Virtualisation is impractical for a lot of people, including ourselves.

We have a Mac server purely for running Final Cut Server, and another server for running user home directories. If you have user home directories on the same server as Final Cut Server when it is transcoding a large amount of videos, even when virtualised, Final Cut Server will use 100% processing power, leaving none for running directory and file services. So your clients will crash or at least be dog slow. Yes you can give processor usage quotas to each virtual server, but why do that when you can give 100% to Final Cut Server, which allows it to be faster, and get another server for running other services. We actually run 4 XServes all told doing different things. When you deal with large amounts of users, virtualisation can be impractical, depending on what services you provide.
 

belvdr

macrumors 603
Aug 15, 2005
5,945
1,372
Sorry, I should have stated this as nobody being *interested* in running it that way, where "nobody" is any business with the sense to know better.

We have the sense to know that virtualizing some servers won't deliver the performance required. I can't say I know of a large environment that's 100% virtualized.
 

besson3c

macrumors member
Apr 9, 2003
98
0
We do. Virtualisation is impractical for a lot of people, including ourselves.

We have a Mac server purely for running Final Cut Server, and another server for running user home directories. If you have user home directories on the same server as Final Cut Server when it is transcoding a large amount of videos, even when virtualised, Final Cut Server will use 100% processing power, leaving none for running directory and file services. So your clients will crash or at least be dog slow. Yes you can give processor usage quotas to each virtual server, but why do that when you can give 100% to Final Cut Server, which allows it to be faster, and get another server for running other services. We actually run 4 XServes all told doing different things. When you deal with large amounts of users, virtualisation can be impractical, depending on what services you provide.


That sounds more like a bug, or an underpowered cluster, than an inherent problem. Given that you can build clusters to be more powerful than any single machine in any number of ways, I cannot wrap my head around what would prevent Final Cut Server from working in a VM cluster if Apple were to expend the appropriate effort to get it to work.

I'm not criticizing your decision to go with bare metal hardware, obviously you need to go with what works right now, but I don't buy your premise (if this is what you intended) that virtualization is inherently impractical for certain services. I've come across VM systems that service literally tens of thousands of users doing everything from email to running large databases to heavily trafficked websites. AFAIK, with the right approach and resources you can run pretty much anything faster and usually cheaper as well.

----------

We have the sense to know that virtualizing some servers won't deliver the performance required. I can't say I know of a large environment that's 100% virtualized.

Are you kidding? Pick any large tech company, their stuff is virtualized, including Google search.

You have this literally backwards, you go with virtualizing stuff when you *want* things to be faster.

With an intelligently built cluster you can provide more CPU cores, RAM, and since VM clusters are often attached to high performance SANs or RAID arrays, faster I/O too. Not only that, but when it comes to scalability a VM cluster will help you with load balancing, dynamic allocation of resources during heavy usage periods, not to mention disaster recovery, affordable remote console support, and portability.

There is really no argument that supports running well behaved services on bare metal hardware, except for maybe your fringy stuff that has fixable issues.
 

belvdr

macrumors 603
Aug 15, 2005
5,945
1,372
AFAIK, with the right approach and resources you can run pretty much anything faster and usually cheaper as well.

Faster? Virtualization causes a performance penalty.

Are you kidding? Pick any large tech company, their stuff is virtualized, including Google search.

You have this literally backwards, you go with virtualizing stuff when you *want* things to be faster.

With an intelligently built cluster you can provide more CPU cores, RAM, and since VM clusters are often attached to high performance SANs or RAID arrays, faster I/O too. Not only that, but when it comes to scalability a VM cluster will help you with load balancing, dynamic allocation of resources during heavy usage periods, not to mention disaster recovery, affordable remote console support, and portability.

There is really no argument that supports running well behaved services on bare metal hardware, except for maybe your fringy stuff that has fixable issues.

How can I provide more CPUs or RAM by virtualizing? Answer: you will oversubscribe your resources. Oversubscribe your RAM and have services that use it and you will swap to disk on your host.
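To put made-up numbers on that point, a quick back-of-the-envelope check (purely illustrative shell arithmetic) shows where the swapping comes from:

```shell
# Toy overcommit arithmetic: five guests promised 16 GB each on a 64 GB host.
# Numbers are invented for illustration only.
host_gb=64
guest_gb="16 16 16 16 16"
total=0
for g in $guest_gb; do total=$((total + g)); done
spill=0
if [ "$total" -gt "$host_gb" ]; then spill=$((total - host_gb)); fi
# If the guests actually touch all the RAM they were promised, the excess
# has nowhere to live but the host's swap.
echo "guests promised ${total}GB on a ${host_gb}GB host -> ${spill}GB spills to swap"
```

Whether that spill ever hurts depends on how much of the promised RAM the guests actually use at once, which is exactly the bet oversubscription makes.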

My disks sit on a SAN, but I can't use VMDKs for high availability clusters, like SQL Server. I have to use RDMs. Now I have issues because I can't have ALUA enabled on my RDMs, so I have separate HBAs for just those LUNs. Also, now that I have RDMs, I lose vMotion and snapshots.

Again, I don't know of one company that's 100% virtualized. Your statements conflict even with what the virtualization companies state. I don't know of one virtualization company that claims things will run faster after being P2Ved.
 

besson3c

macrumors member
Apr 9, 2003
98
0
Faster? Virtualization causes a performance penalty.


On a single machine, yes, but on a VM cluster this slight performance hit is not only compensated for but hopefully outweighed, netting greater performance than you'd get on that same original single machine.

----------

And you will inherently oversubscribe your resources. My disks sit on a SAN, but I can't use VMDKs for high availability clusters, like SQL Server. I have to use RDMs. Now I have issues because I can't have ALUA enabled on my RDMs, so I have separate HBAs for just those LUNs. Also, now that I have RDMs, I lose vMotion and snapshots.

Again, I don't know of one company that's 100% virtualized.


I think your perceptions are either a little off or at least a little outdated. I'd go as far as to say that far more stuff is virtualized than not, or is at least headed in that direction.

One company? I'd be willing to bet Google is, as is Amazon. I'd be willing to bet that iCloud is as well, perhaps all of Apple.

----------

How can I provide more CPUs or RAM by virtualizing? Answer: you will oversubscribe your resources. Oversubscribe your RAM and have services that use it and you will swap to disk on your host.



Or, have sufficient hardware in the cluster not being consumed, so it can be offered to your service as needed.

Oversubscription occurs when you lack sufficient hardware; it is not inherent to virtualization.

----------

It is bewildering to me where these sorts of myths about virtualization come from. I don't mean to mock anybody here; I've heard all of this stuff before, and it's not uncommon.

The fact is that the jury is not out on virtualization, it is where it is at, period, particularly when your OS supports para-virtualization (i.e. Linux/Windows).
 

belvdr

macrumors 603
Aug 15, 2005
5,945
1,372
I think your perceptions are either a little off or at least a little outdated. I'd go as far as to say that far more stuff is virtualized than not, or is at least headed in that direction.

One company? I'd be willing to bet Google is, as is Amazon. I'd be willing to bet that iCloud is as well, perhaps all of Apple.

Funny, in the first statement you imply that not everything is virtualized and in the second you state that it is. Betting and knowing are two different things. I guess it would be possible to be totally virtualized for simple web and file sharing.

We're about 80% virtualized; VMware and Gartner both tell us that's really high.

Or, have sufficient hardware in the cluster not being consumed, so it can be offered to your service as needed.

Oversubscription occurs when you lack sufficient hardware; it is not inherent to virtualization.

Right, but you said we could allocate more RAM and CPU if we were virtualized versus being on a physical machine. I just don't see that happening.

Additionally, not everything is x86. It's not really possible to virtualize your high performance database clusters.

What I'm stating is not myth. It's based on testing in our environments. I see it every day.
 

besson3c

macrumors member
Apr 9, 2003
98
0
Funny, in the first statement you imply that not everything is virtualized and in the second you state that it is. Betting and knowing are two different things. I guess it would be possible to be totally virtualized for simple web and file sharing.

We're about 80% virtualized; VMware and Gartner both tell us that's really high.

Where did I say that everything is virtualized? I said that most things are, including the bulk of, if not all of the services offered by the companies I've listed.

80% might be really high right now, but the trend is clear: these numbers are growing. It can take years to migrate servers from one environment to another; sometimes it makes sense to retire bare metal servers by attrition. That number will only go up, because it makes no sense for it not to.

My point was simply that virtualization is where it's at, and Apple is woefully behind the times by not embracing this, although admittedly that language implies that Apple is trying to be a leader in the server arena, which is definitely not a given.


Right, but you said we could allocate more RAM and CPU if we were virtualized versus being on a physical machine. I just don't see that happening.

If you have the hardware for this, why not? Provided you have the physical space for it, you can expand a cluster to be as large as you wish by continuing to add new nodes (which ups the number of CPU cores, RAM slots, etc.). I'm not aware of any constraints on how large a cluster can be.

Additionally, not everything is x86. It's not really possible to virtualize your high performance database clusters.

It's probably not as practical to virtualize stuff if you can't do para-virt either, which is why I should have said (and didn't) that this applies to the Windows, Linux, and Solaris worlds. However, even the FreeBSD world is starting to support Xen guests, if it doesn't already, so it seems like it's only a matter of time before they work on para-virt drivers.

This just leaves Apple, the only vendor not in on the game.

What I'm stating is not myth. It's based on testing in our environments. I see it every day.

I'm not sure what your argument is. If you provide additional hardware, its ability to scale and do more simultaneously (and therefore perform better) is pretty obvious, no?

Maybe this is not so if you just have 5 users or something, but this would fall under my fringey category.
 

belvdr

macrumors 603
Aug 15, 2005
5,945
1,372
Where did I say that everything is virtualized? I said that most things are, including the bulk of, if not all of the services offered by the companies I've listed.

No, you said this:

As has been said, nobody runs an OS on bare metal hardware these days, this model is too outdated and expensive. Virtualization is where it is at.

This just leaves Apple, the only vendor not in on the game.

They're not in the server game unless it's for the SOHO environment. Given that, no wonder they don't concern themselves with virtualization. They're mainly aimed at the consumer at this point.

I'm not sure what your argument is. If you provide additional hardware to stuff, it's ability for it to scale and do more stuff simultaneously (and therefore perform better) is pretty obvious, no?

Sure, you can provide additional RAM or CPU, but when disk is the limitation, it doesn't matter what you do otherwise. You're assuming RAM and/or CPU are always the bottleneck. For example, I see disk bottlenecks when virtualized, but not when using a physical machine.
 

besson3c

macrumors member
Apr 9, 2003
98
0
No, you said this:

Fair enough, I meant "nobody" in a hyperbolic way.


Sure you can provide additonal RAM or CPU, but when disk is the limitation, it doesn't matter what you do otherwise. You're assuming RAM and/or CPU are always the bottleneck.

If disk is the bottleneck and we are talking about the same given disk source, it is going to be the bottleneck whether on physical hardware or in a VM cluster. One difference is that it is generally more cost-effective to connect a single consolidated VM cluster to your SAN or disk array than it is to connect a bunch of physical servers to it. Depending on the layout of the machine room, this also often allows you to consolidate SANs/disk arrays, which is another potential substantial cost saving. The other difference is that in some cases you can lessen your load by adding more disks to the array, which is generally more sensibly done with one big honkin' over-engineered disk source than by trying to maintain a bunch of different storage systems.

----------

I'm sorry you have nothing to add to the discussion. Maybe you'd like to head on over to another thread to discuss there.



Agreed. I didn't even know we were arguing. It seemed like a legitimate discussion to me.

:confused:


I'm not a regular here, but it's so amusing to me how every forum seems to have their own version of these sorts of characters :)

P.S. I'm rather fond of applesauce.
 