
theRick119

macrumors member
Original poster
Apr 27, 2008
83
1
We have this Mac Pro handling all of our files over AFP.

We recently ramped up for the holiday season and have approximately 25 users constantly reading and writing press-quality InDesign files to and from this server.

Since ramping up our staff, opening files, saving files, and searching the shared drives have become painfully slow. Looking at Activity Monitor, we see that the AFP service is constantly running at about 350% and mds is taking up most of the rest. Basically, we are at 100% capacity.

We want to be able to keep all of our files in a centralized location, but I'd appreciate advice on the best upgrade paths to consider.

Should we be looking at swapping to another Mac Pro with more cores? Or should we consider switching to something other than AFP on a different platform?

We don't have a networking specialist, but we'd be happy to explore all options.

Thanks
 

Silencio

macrumors 68040
Jul 18, 2002
3,532
1,664
NYC
More info required:

- What version of Mac OS X or Mac OS X Server are you running?
- How much RAM is installed?
- What hard drive(s) are installed, and how are they formatted / partitioned?
- How is your Mac Pro connected to your network?

That Mac Pro ought to be able to handle the load just fine, but you'll need to invest in some upgrades to get performance up to par.
 

theRick119

macrumors member
Original poster
Apr 27, 2008
83
1
Thank you for replying, Silencio.

- What version of Mac OS X or Mac OS X Server are you running?
10.5.8 Server

- How much RAM is installed?
14GB

- What hard drive(s) are installed, and how are they formatted / partitioned?
There is a 320GB 7200RPM drive with an 8MB cache installed as the OS drive. It has a single HFS+ Journaled partition.
There are three 2TB WD Caviar Green 64MB cache drives mirrored via SoftRAID, also with a single HFS+ Journaled partition.

- How is your Mac Pro connected to your network?
A single ethernet port connected to a ProCurve 1800-24G switch.

That Mac Pro ought to be able to handle the load just fine, but you'll need to invest in some upgrades to get performance up to par.

That would be great if we could use our existing hardware. I would assume any / all upgrades would ultimately be less expensive than a new system.
 

Silencio

macrumors 68040
Jul 18, 2002
3,532
1,664
NYC
Hmm, you could consider upgrading to Snow Leopard or Lion Server, as they both made some improvements to AFP performance.

You have three 2TB drives, all mirrored? I've never set up a mirrored RAID configuration like that before, but I'd imagine running it in software puts some load on the CPU.

Also, the Western Digital Green drives are not your ticket to high-performance RAID configurations. You'll want to use Western Digital RE4s, or Blacks at the very least. Using higher-performance drives ought to help, but it seems like your workload is such that a hardware RAID of some sort should be strongly considered.
 

mainstay

macrumors 6502
Feb 14, 2011
272
0
BC
Agreed on the Green drives. They are not high-performance drives, and switching to enterprise drives in a hardware RAID would help.

Can you moderate / tone down your Spotlight indexing? Exclude some directories?

If you are using an offsite backup, it can trigger Spotlight re-indexing. Exclude those directories too.

Or maybe it is stuck on a particular file?

See: https://forums.macrumors.com/posts/4419601/
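
If you want to experiment, mdutil can turn indexing off per volume. A quick sketch (adjust the volume name to match your share):

sudo mdutil -i off /Volumes/YourShare   # stop Spotlight indexing on that volume
sudo mdutil -E /Volumes/YourShare       # erase the existing index

You can also add directories to System Preferences > Spotlight > Privacy on the server to exclude just those trees.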

In the long term, you may need to look at load balancing with a second server and splitting up the server's roles. (I hate upgrading in a panic situation; I usually triage, get people through the emergency, and then carefully and thoughtfully plan a proper solution.)

I would not do an upgrade to SL Server at this point, but definitely consider it for the spring, when (presumably) your business is not in high-stress mode.


EDIT: Reading on other sites, other resolutions include:

- Disabling Spotlight on client machines (understood, this isn't ideal)
- Checking for fragmentation of the data drives
- Checking for people running applications from the server because they had created what they thought were "shortcuts"
- Ensuring all clients (and the server) are fully updated

From: http://www.ijoel.com/groups/tech/we..._High_CPU_Utilization_on_Mac_OS_X_Server.html

Here is a list of the top "fixes" for this scenario, but by and large, these items ultimately did not resolve the issue for the majority of people:

1. Setting the AFP WAN threshold
2. Turning off Spotlight on the server by marking the AFP volume as private
3. Turning off Time Machine even though no backup volumes were defined
4. Setting the kern.maxfiles=200000 and kern.maxfilesperproc=5000 sysctls
5. Turning .DS_Store writing off on both clients and servers
6. Turning off the SMB (Windows Samba) file server
7. Disabling auto-disconnect in AFP after idle time
8. Removing Spotlight indexing on all AFP volumes and deleting the .Spotlight-V100 directories on those volumes
9. Verifying that host cache flushing is disabled on the external RAID array
10. Setting the following default, as a preference for all groups: defaults write com.apple.desktopservices DSDontWriteNetworkStores true
11. Disabling Kerberos for AFP authentication
12. Changing the Fibre Channel topology to point-to-point for all 4 connections to the Promise VTrak array
13. Stopping Spotlight indexing with the command: touch /Volumes/Sharename/.metadata_never_index
14. Renaming odpac.bundle in /System/Library/KerberosPlugins/KerberosAuthDataPlugins/ to odpac.bundle_DISABLED
15. Keeping at least 10% free space on the volume with the home dirs, for performance reasons

So we broke down and ordered 32GB of RAM for each of our home directory servers (an upgrade from 8GB), and we upgraded the home directory server to Mac OS X Server 10.6.2.

It's only been two weeks, but I have been monitoring every day for those two weeks, and it looks like the issue has been resolved by the RAM upgrade and the upgrade to 10.6 Server.

and this:

I resolved the problem quickly by transferring the network users' cache to the local iMac drive. Your server needs to be configured in advanced mode so you can use Workgroup Manager. For instructions, go to page 10, "Reducing AFP load on your servers", of this PDF: http://www.afp548.com/filemgmt_data/files/Leopard Server Quickstart Guide.pdf
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
There is a 320GB 7200RPM drive with an 8MB cache installed as the OS drive. It has a single HFS+ Journaled partition.
There are three 2TB WD Caviar Green 64MB cache drives mirrored via SoftRAID, also with a single HFS+ Journaled partition.

I suspect that tweaking the WAN threshold and quantum on the server/clients may help.

http://www.afp548.com/article.php?story=20060329213629494

There's a good chance the problem is a negative feedback loop: a loaded, unbalanced server will look like it is on a WAN (not a local LAN), so the chunks being sent/received get smaller. That causes more load on the server (since it is harder to track a file broken up into smaller chunks)... which causes another trip through the feedback cycle (the server looks slower to more clients, so they go into the higher-overhead chunking mode).
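
For the curious, the tweak in that article boils down to a couple of defaults writes on each client. The plist and key names below are from memory, so treat them as placeholders and verify against the article before changing anything:

sudo defaults write /Library/Preferences/com.apple.AppleShareClient afp_wan_threshold -int 1000
sudo defaults write /Library/Preferences/com.apple.AppleShareClient afp_wan_quantum -int 131072

The idea is to raise the latency threshold so clients keep using large LAN-sized quanta instead of dropping into the small WAN-sized chunks described above.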


25 users pulling large files from effectively a single spindle is likely a problem. Is there a reason for the triple redundancy? Double is "good", so "triple" is better?
Mirroring means that reads/writes are effectively going to a single drive. [SoftRAID may be smart enough to spread the reads out a bit, but that won't work for writes, nor when checking reads for errors by comparing between the mirrored copies.] So you will have 15-25 folks requesting data from 15-25 different places on the same hard drive, which drives up the time it takes to find each of those data chunks.

Personally, I'd peel off one of the 2TB drives and move all the "old and not going to change" projects onto that new archive volume. Back it up (there's your two copies) and restore any files as necessary (there shouldn't be many, since they are not active).

That would require some downtime but isn't a major upgrade. With the Thailand floods, HDDs are expensive to get these days anyway. There should be two somewhat helpful benefits:

1. It should give you more free space on the "active projects" volume.
2. It splits off some of the workload when folks search old project content to copy/paste into new projects.

The first should help slightly with a fragmented drive: at least for a while, new large files will find chunks of storage clustered more closely together. The second may remove some of the overhead that mds is adding. It will also slightly improve average access latency on the active-projects drive, since it removes head seeks for Spotlight data.

A minor project would be to add an external eSATA box and pair the "old projects" disk back up with a partner, running those two as RAID 1. That way, if someone accidentally modified an old project, the modification would still have a live clone.
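
Moving the old projects over is nearly a one-liner once the peeled-off drive is formatted. A sketch, with made-up paths:

sudo rsync -aE /Volumes/Projects/Archive/ /Volumes/OldProjects/

The -E flag on Apple's bundled rsync carries resource forks and extended attributes along, which matters for old design files. Spot-check a few files before deleting anything from the active volume.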


You'd need to collect more data about disk I/O throughput, but I suspect it is low once you factor in the number of concurrent users. Activity Monitor will tell you what the MB/s I/O rate is. I'd be surprised if you're getting over 40 MB/s once the metadata + large-file traffic gets heavy. So with 15 actively concurrent users: 40/15 = 2.6 MB/s each... yeah, opening/saving files will be slow.
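
Activity Monitor's Disk Activity tab shows this, or from Terminal (the disk identifier may differ on your box):

iostat -d -w 5 disk1

That prints KB/t, tps, and MB/s for the drive every 5 seconds; watch it while people are actually opening and saving files.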


A single ethernet port connected to a ProCurve 1800-24G switch.

Even if the disk isn't bottlenecked, this may also be an issue. Since there is only one 1Gb/s port, 15 actively concurrent users works out to 66Mb/s each, which is about 8MB/s. If the disk per user is capped around 2MB/s, that isn't a problem. However, if you uncork the disk to 10MB/s, the network would be the new bottleneck.
 

Silencio

macrumors 68040
Jul 18, 2002
3,532
1,664
NYC
Even if the disk isn't bottlenecked, this may also be an issue. Since there is only one 1Gb/s port, 15 actively concurrent users works out to 66Mb/s each, which is about 8MB/s. If the disk per user is capped around 2MB/s, that isn't a problem. However, if you uncork the disk to 10MB/s, the network would be the new bottleneck.

Setting up link aggregation with the ProCurve switch and the Mac Pro's two Ethernet ports is worth a try.
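
It can be done in System Preferences > Network (Manage Virtual Interfaces), or by hand with something like:

sudo ifconfig bond0 create
sudo ifconfig bond0 bonddev en0
sudo ifconfig bond0 bonddev en1

The two ProCurve ports have to be configured as an LACP trunk first, or the link will drop. Keep in mind a single client still tops out at 1Gb/s; aggregation only raises total throughput across clients.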

Good advice from everybody else on this thread.
 

theRick119

macrumors member
Original poster
Apr 27, 2008
83
1
Extremely helpful replies, I very much appreciate it.

In summary, it looks like our short-term solution requiring no new software or hardware (or maybe one more drive) would be to disable Spotlight on non-production directory trees, abandon our double redundancy (at least internally), and create some load balancing by separating our "active" projects from our archived projects.

We'll certainly look at tweaking the WAN threshold as well, and we already have a LaCie 4big Quadra; it sounds like that might be a good home for the archived projects.


Regarding a long-term solution that we can tackle when things are a little calmer, it sounds like an upgrade to 10.6 Server would be a good place to start.

What are your recommendations on a hardware RAID configuration? Is there a best-choice controller that is reasonably priced? Is the Apple Mac Pro RAID Card the best option, or are there better options?

Would a hardware RAID solution with our existing drives provide a reasonable performance upgrade? Or are we best served doing everything at once?

Where do you recommend purchasing enterprise-grade drives?

Thanks again for all the helpful advice.
 

rwwest7

macrumors regular
Sep 24, 2011
134
0
The hard drives are no doubt your choke point. Did a consultant recommend 3 × 2TB drives mirrored with software? That's a very odd config. When you're talking multi-user access, a single 2TB drive is just a bad idea. Think about it this way: what happens when one user is accessing a file on the low end of the drive while another user is trying to read a file stored on the high end? The single head is going to go nuts. You can have all the RAM in the world, but it's not going to speed up the IOPS of your HDD.

Six 300GB drives in a RAID 5 are waaaaaaaay better than a single 2TB drive: you have six heads covering less ground. Personally, I would never put any single drive over 1TB in a production environment. Bigger is not always better.

If you want to do it right, don't settle for anything less than a RAID 5 with one standby drive, running 10K SAS HDDs.
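
Rough numbers, just to make the point: a 7200rpm SATA drive is good for maybe 75-100 random IOPS, and a mirror behaves like a single drive in the worst case. Six spindles in a RAID 5 get you into the 400-500 IOPS range on reads, and 10K SAS drives run roughly 140 IOPS apiece. Spread 25 users across one spindle versus six and it's not even close.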
 

Alrescha

macrumors 68020
Jan 1, 2008
2,156
317
Looking at Activity Monitor, we see that the AFP service is constantly running at about 350% and mds is taking up most of the rest. Basically, we are at 100% capacity.

I would suggest that since AFP is *able* to run at nearly 100% across four cores, you do not have obvious bottlenecks in memory, disk I/O, or network throughput. I would concentrate on ways of making AFP more efficient. The tuning page referenced by deconstruct60 and upgrading to 10.6 would be at the top of my priorities.

A.
 

rwwest7

macrumors regular
Sep 24, 2011
134
0
We recently ramped up for the holiday season and have approximately 25 users constantly reading and writing press-quality InDesign files to and from this server.

There are three 2TB WD Caviar Green 64MB cache drives mirrored via SoftRAID.

Thanks

Anyone reading the bolded should be able to see an obvious bottleneck in disk I/O. A WD Caviar "Green" drive is not meant to be constantly read from. True, tuning AFP preferences may speed things up a bit, but think about what that does!!! It reduces disk I/Os.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I would suggest that since AFP is *able* to run at nearly 100% across four cores, you do not have obvious bottlenecks in memory, disk I/O, or network throughput.

Not really true. High CPU rates can show up as a symptom of bottlenecked disk I/O. On the Mac, the usual indicators (Activity Monitor and top) don't break down CPU data into a "doing I/O work" category; it is usually just User, System, Idle. An I/O bottleneck can show up as a high 'System' CPU load, depending upon the locks and process task switching involved. If the I/O is jammed up, some processes may spend lots of time trying to hand off their time/resource allocation to other processes... which also have blocked I/O, so they hand off too... Lots of thrashing around (consuming CPU) without getting anything useful done.

For example, there is a report here:
https://discussions.apple.com/thread/1685965?start=75&tstart=0

of someone doing a trace (DTrace? or a Mach utility) on a high-load machine and finding lots of swtch_pri calls. That is basically processes/threads trying to "give up and go to the back of the line". Usually that is a smart move when waiting on a glacially slow I/O request to return. But if the number of threads (users with files open) just keeps getting bigger, you enter a negative feedback loop. The line gets longer and the disk gets slower.

Likewise, if spin locks are being used for some critical sections (a tight loop that keeps testing to acquire a lock on a resource), the CPU load can go up while, again, nothing particularly useful is getting done. The long lock wait's root cause can be I/O (since disks are 1,000-10,000 times slower than CPUs, processes can blow a huge number of cycles on this).

For a file server application, if the CPU load is high but the users are getting slow file service, then the high load is only a side-effect symptom of a deeper-seated problem.

Ideally, the server software would also start to avoid system calls and spin locks when the load goes very high. In some cases it would make more sense to sleep/idle instead of trying to transfer control to another thread. Then you might see the 'idle' percentage go up.
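
If you want to see this on the box in question, the quickest check is the aggregate header from top:

top -l 2 | grep "CPU usage"

and compare the user/sys/idle split while things are slow. On 10.5+ you can also count system calls per process with the stock DTrace one-liner below (note swtch_pri itself is a Mach trap, so it may not show up under the syscall provider):

sudo dtrace -n 'syscall:::entry { @[execname] = count(); }'

Let it run for ten seconds or so, Ctrl-C it, and see whether the AFP process's time is going to real work or to scheduler churn.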
 

chown33

Moderator
Staff member
Aug 9, 2009
10,999
8,888
A sea of green
... Usually that is a smart move when waiting on a glacially slow I/O request to return. But if the number of threads (users with files open) just keeps getting bigger, you enter a negative feedback loop. The line gets longer and the disk gets slower.

Just wanted to say: That's a positive feedback loop.

Positive and negative refer to the sign of the error term. They do not refer to the output or outcome.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
What are your recommendations on a hardware raid configuration?

You may or may not need one. For example, if you could fit your "projects on a deadline" into 240-340GB of space, you could RAID-1 two SSDs. If you stick to 3Gb/s SSDs, that may end up just as expensive as some fancy RAID-5 solution with 2-3 times as many disks to keep up on IOPS performance.

If the InDesign projects average 8GB in size, you could keep 25 projects on a 240GB SSD with some space to spare.

As the costs of SSDs come down you will be able to fit more "critical timeline" projects onto two SSDs at reasonable costs. Right now OWC has a 240GB SSD for $390. http://eshop.macsales.com/shop/internal_storage/Mercury_Extreme_SSD_Sandforce/Solid_State_Pro/

However, if you need hundreds of GB for "projects on a deadline", then HDDs are still the more viable option for a couple more years.


Is there a best choice controller that is reasonably priced? Is the Apple Mac Pro Raid Card the best option? Or are there better options?

The Apple card is probably not a good choice. In fact, I suspect the product will disappear with the next Mac Pro update (there should be RAID 0/1 support in the core offering, with no need for a PCIe card); Apple will leave it purely as a 3rd-party opportunity.

RAID-5 has problems that require the system to have a UPS (you'll need graceful shutdowns for any extended power outage), which drives up system cost.

If the number of users is going to increase over time then RAID-5 is an option to consider.




Would a hardware raid solution with our existing drives provide a reasonable performance upgrade?

With 25 users you should switch to "enterprise" drives, or at least drives oriented to RAID environments. Those drives typically have settings that better handle highly concurrent disk I/O requests.

Switching out the whole drive set is going to be costly for the next 4 to 5 months.

Also, 500GB and 1TB drives would be a better building block than 2TB. One reason you may have chosen to mirror three drives is to avoid a second failure while trying to recover from a single drive failure (e.g., the second disk of a pair also fails before the data is duplicated), hence the third disk. The problem, however, is that the more disks you have, the higher the likelihood of a failure. The other way to avoid the problem is to use smaller disks so you can recover faster.

So, for instance, four 1TB drives could be set up as two 1TB RAID-1 volumes: the same total capacity, 2TB, but spread out over two sets.
[If you're willing to put a storage drive into one of the Mac Pro's ODD bays, it can all be done internally.]
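
Setting up each pair is straightforward with the built-in tools; the disk identifiers below are placeholders, so check diskutil list first. On 10.6 it looks like this (older releases spell it diskutil createRAID mirror instead):

diskutil appleRAID create mirror Active JHFS+ disk2 disk3
diskutil appleRAID create mirror Archive JHFS+ disk4 disk5

That's Apple's software mirror rather than SoftRAID, but the layout is the same idea: two independent mirrored pairs instead of one triple mirror.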


Or are we best served to do everything at once?

Depends upon what kind of outage windows the business can tolerate. If swapping in 5 new drives for the 4 present can be done quickly, then a switch (and, if needed, a fall back to the old set) can be done quickly. If the "convert to" process is long (8-10 hours) and the fallback is equally long, can the server be out of action for almost 24 hours?

----------

Just wanted to say: That's a positive feedback loop.

It is negative on the root cause: the disks get slower. That's the core problem. The side-effect symptom happens to be positive, but that isn't important.
 

Alrescha

macrumors 68020
Jan 1, 2008
2,156
317
An I/O bottleneck can show up as a high 'System' CPU load, depending upon the locks and process task switching involved.

Agreed, but the OP specifically said that AFP is getting 350% of the 400% available CPU and that mds is getting the rest. No high 'System' indication. If AFP can get that kind of CPU utilization, that is an indication that it is not waiting for much.

My speculation is that AFP itself is being pushed to transaction rates that it cannot support, and the solution is to reduce the number of transactions by tuning (more data per transaction, for instance).

A.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Agreed, but the OP specifically said that AFP is getting 350% of the 400% available CPU and that mds is getting the rest. No high 'System' indication. If AFP can get that kind of CPU utilization, that is an indication that it is not waiting for much.

Depends upon what part of the Activity Monitor screen you are looking at. The top pane, with its "%CPU per process ID", mixes the two. Only the aggregate indicators at the bottom split out the "user space" versus "kernel space" (i.e., system) breakdowns. When you trap into a kernel call, you're not really changing PIDs.

There is nothing for AFP to "compute" that requires that much effort: copy/push file block requests from/to the disk, and add/remove the TCP/IP wrapper they go/come in. There is no huge computational load there. This is why SMB NAS servers work with Atom and ARM chips in them.

An x86 core pushing 1 billion no-ops through its pipeline will show up in the stats as a high CPU load, yet it is not doing anything. That is not a sign of being CPU-limited. A 12-core Mac Pro with the same set of disks and a single Ethernet connection would push no more data to the clients but would probably show a higher CPU "load". It is a giant load of "nothing".

The idle percentage rises when the scheduler can't find any processes to put onto the cores, so the kernel knows nothing is happening. But the kernel stats have no idea whether a process is in a "busy wait" loop that doesn't really accomplish anything productive.
 

Alrescha

macrumors 68020
Jan 1, 2008
2,156
317
Depends upon what part of the Activity Monitor screen you are looking at. The top pane, with its "%CPU per process ID", mixes the two.

Well, this is not my experience. If I run a program that does a lot of disk I/O but doesn't do much with the data, kernel_task gets the bulk of the CPU in Activity Monitor, followed by the PID of the actual program I'm running.

That said, I can't speak to what AFP might be doing internally. Nor is Intel my first language/architecture...

A.
 