(Moderators: This is a work-related problem, and we have Mac users as customers, but the question is technical and not product-related. I've removed all commercial info and feel the question has merit in this forum as is. I've read the FAQ and user rules and believe this is fair use, but if you require it I'm happy to annotate my company and product names. What I'm asking is really an OS X 10.6 vs. OS X Server AND TCP/IP question, without reference to particular applications. This note needn't accompany the posting.)
I'm testing a multi-platform app that uses TCP/IP for client/server communications. To simulate a typical environment, a script calls a script... to start a number of parallel operations over TCP/IP. The data is the same for all instances but can be anything the disks will hold. For testing, both clients and server are actually running on the same computer.
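For readers who want to reproduce the shape of this setup, here is a minimal sketch of that kind of loopback load test: one in-process server draining data, and N concurrent "users" each pushing a payload over TCP. The user count, payload, and buffer sizes are stand-ins for the real scripts and files, not our actual test harness.

```python
import socket
import threading

N_USERS = 20                    # stand-in for the real user count
PAYLOAD = b"x" * 64 * 1024      # stand-in for the real test files

def handle(conn):
    # Drain everything the client sends; a real server would process it.
    with conn:
        while conn.recv(65536):
            pass

def serve(listener):
    # Accept connections until the listener is closed, one thread each.
    while True:
        try:
            conn, _ = listener.accept()
        except OSError:
            return
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

def user(port, results, i):
    # One simulated "user": connect, push the payload, disconnect.
    with socket.create_connection(("127.0.0.1", port), timeout=10) as c:
        c.sendall(PAYLOAD)
    results[i] = True

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # any free port
listener.listen(128)              # backlog matters under bursty connects
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

results = [False] * N_USERS
threads = [threading.Thread(target=user, args=(port, results, i))
           for i in range(N_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(results), "of", N_USERS, "users completed")
```

At real scale the interesting failures show up in the connect and sendall calls, which is exactly where ours do.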
Our baseline Linux server runs fine with 100 simultaneous "users" and 8 files each: 6 text and binary files up to 1 MB, a 4.5 MB bitmap photo image, and a 17 MB bitmap photo image. (Yes, we chose the bitmaps to load the system.) The clients compress and decompress them before and after they go through the network.
With OS X 10.5 Server, our very lightly configured Intel Xserve blade can handle it too. It has four cores at 2.* GHz (just under 3) and 4 GB of physical memory. We had it configured as a lightweight machine, mostly to support testing client software.
With OS X 10.6.4, my desk Mac Pro chokes at something over 60 'users', even though it has 8 cores (two quad-core CPUs at 2.66 GHz, 1066 MHz bus) and 16 GB of physical memory, and even with only the 6 little files and the smaller (4.5 MB) photo. Take out the photo and I can get 100 'users' running, with file sizes up to a 1 MB spreadsheet, without a problem.
When it chokes, I get TCP/IP timeout errors, leading to broken-pipe errors, all from deep in the stack.
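For anyone wondering how those failures surface at the application level, here is a sketch of a send wrapper that distinguishes the two failure modes described above (a timeout, then a broken pipe) and retries with a fresh connection. The `connect` factory and `FlakyConn` stand-in are hypothetical, purely for illustration.

```python
import socket
import time

def send_with_retry(connect, payload, attempts=3, backoff=0.5):
    """Send payload, reconnecting on timeout or broken pipe; True on success."""
    for attempt in range(attempts):
        try:
            conn = connect()
            try:
                conn.sendall(payload)
                return True
            finally:
                conn.close()
        except socket.timeout:
            pass  # peer stalled mid-transfer; retry on a new connection
        except BrokenPipeError:
            pass  # peer closed the connection mid-send; retry likewise
        time.sleep(backoff * (attempt + 1))
    return False

class FlakyConn:
    """Stand-in connection that fails twice, then succeeds."""
    fails = 2
    def sendall(self, data):
        if FlakyConn.fails > 0:
            FlakyConn.fails -= 1
            raise BrokenPipeError
    def close(self):
        pass

ok = send_with_retry(FlakyConn, b"data", attempts=3, backoff=0)
print("succeeded:", ok)
```

Retrying papers over the symptom rather than curing it, but it makes the failure counts visible instead of just killing the run.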
I copied all the sysctl parameters relating to networking from the Xserve (OS X Server 10.5) to my Mac Pro (OS X 10.6.4): number of processes per UID, number of files, etc. There were fewer failures but no complete success.
One of our frighteningly smart, but not very Mac-knowledgeable, resources believes that it's the OS getting in the way and that we should buy Server for the Mac Pro. They also believe that having the same parameters in OS X and OS X Server is as good as we can get.
I suspect that the Mac Pro, good though it is, with twice the cores and four times the memory, is still less capable than the Xserve in terms of TCP/IP throughput. I'm certainly willing to hope, though: if there were a software change that made this work, I'd be on it.
The next step is dual-booting into Linux and hoping that TCP/IP support is better there.
Does anyone have experience using OS X Server on a Mac Pro (mid-2009 production) as a way to get better TCP/IP performance versus OS X 10.6.4?
I think copying the sysctl parameters may be less than a complete effort: not only are 10.5 Server and 10.6.4 different, the hardware is different and built for different tasks. There may be different parameters to set, and different values needed, to get maximum throughput without timeouts.
Second question: does anyone have experience tuning Mac OS X 10.6 for maximum TCP/IP throughput, in particular tuning out the timeouts and broken pipes?
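Besides kernel sysctls, some knobs live in the application itself: per-socket buffer sizes and a generous operation timeout can sometimes help before (or instead of) kernel tuning. A sketch, with illustrative sizes only; note that the kernel may clamp or adjust the value you request, so always read it back:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Request larger per-socket buffers (256 KB here, purely illustrative);
# the kernel clamps these to its own limits, so read back the result.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 256 * 1024)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)

# A generous per-operation timeout, so a briefly stalled transfer fails
# gracefully in the app rather than erroring deep in the stack.
s.settimeout(30.0)

sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("send buffer now:", sndbuf)
s.close()
```

If the read-back value is far below what you asked for, the relevant kernel limit (the sysctl side) is the ceiling to raise.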
Of course, researching all these settings is my next step (RTFM).
If someone's been down this path and has experience, I'd love to hear from them, particularly whether spending $499 for Server will solve the problem, or whether the bandwidth problem is something I can tune out. Anyone? This is too long as it is, but I can supply blow-by-blow sysctl values, etc., if anyone needs them.
Many thanks
Bill