MacWorld compares it to a Mac Pro, which was faster than the MBP on both sites.

Edit: Oh I see... :p

Odd. Bare Feats' system had a lot more RAM too, and only showed a small lead. Looks like the third test will decide the true winner.
 
I'm going with bad drivers or different setups for the tests. I would personally expect the 8600 to be closer to the 7600 in terms of performance than what it is currently showing.

Maybe UT2004 even needs a patch to get maximum performance from the new graphics card.
 

The MacWorld tests compared the SR MBP to the old 17" MBP.

The BareFeats test was comparing the SR MBP to the 15" 2.33GHz MBP.

The 17" C2D MBP's X1600 isn't underclocked at all. The 15" C2D MBP is slightly underclocked (423MHz vs 470MHz for core... and the 15" CD MBP is quite heavily underclocked...).

So, that would account for it.

Refs: 17" MBP C2D X1600 Clock Speed vs 15" MBP C2D.
 
Oh, didn't know the 15.4" and 17" models ran at different speeds. Guess that's one more point against the guy who seriously believes the new GPU is 2-3x faster...

Though if the MacWorld test is against the 17", the speed difference vs the old 15.4" will be even better then.
 
The games were running on OS X:

All systems were running Mac OS X 10.4.9 with 2GB of RAM.

But it sounds like Apple is looking into this particular issue:

We’ve contacted MacSoft and Apple, and they are looking into the problem. Until we figure out the reason for the Unreal Tournament results, we’ll wait to publish a Speedmark score.
 
Indeed.

Where we'll see improvement (assuming good drivers) will depend a lot on workload.

The mem bandwidth of the 8600M GT vs the Mobility X1600 using Apple stock speeds is about 70% better, and fillrate is about 120% faster, based on the Bare Feats figures and what I know of the chip itself. Shaders, OTOH, should be in the range of 3-5x faster, I think. Not sure, since the "unified" SM4.0 shaders are quite different from SM3.0...

Fillrate-bound stuff could see a doubling of framerate, it appears, and shader-bound stuff even more.
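For anyone who wants to check the arithmetic, here's a rough Python sketch. The memory clocks and unit counts (1400MHz effective GDDR3 and 8 ROPs for the 8600M GT; ~800MHz effective memory and 4 ROPs for the 15"'s X1600) are my assumptions for illustration, not figures from the Bare Feats article:

```python
def bandwidth_gbs(effective_mem_mhz, bus_bits):
    # Peak memory bandwidth in GB/s: effective clock * bus width in bytes.
    return effective_mem_mhz * 1e6 * (bus_bits / 8) / 1e9

def fillrate_gpix(core_mhz, rops):
    # Peak pixel fillrate in Gpixels/s: core clock * ROP count.
    return core_mhz * 1e6 * rops / 1e9

# 8600M GT at Apple stock speeds (assumed: 475MHz core, 1400MHz effective
# GDDR3 on a 128-bit bus, 8 ROPs) vs the Mobility X1600 in the 15" C2D MBP
# (assumed: 423MHz core, ~800MHz effective memory, 128-bit bus, 4 ROPs).
bw_new, bw_old = bandwidth_gbs(1400, 128), bandwidth_gbs(800, 128)
fr_new, fr_old = fillrate_gpix(475, 8), fillrate_gpix(423, 4)

print(f"bandwidth: {bw_old:.1f} -> {bw_new:.1f} GB/s (+{(bw_new/bw_old-1)*100:.0f}%)")
print(f"fillrate : {fr_old:.2f} -> {fr_new:.2f} Gpix/s (+{(fr_new/fr_old-1)*100:.0f}%)")
```

That prints roughly +75% on bandwidth and +125% on fillrate, which is about where my figures above came from.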
 
I don't know how you calculated that, but from Wikipedia:
http://en.wikipedia.org/wiki/GeForce_7_Series
7600 GS, 400/800MHz core/mem, 12/5 pixel/vertex shaders
7600 GT, 560/1400MHz core/mem, 12/5 pixel/vertex shaders.

From nvidia:
http://www.nvidia.com/object/go_7_series_techspecs.html
That page sucked, so:
http://www.laptopvideo2go.com/forum/index.php?showtopic=8896&pid=50215&st=20&#entry50215
7600 Go: 450/800 is said to be stock for one Asus laptop; another has 450/700.

http://www.nvidia.com/object/geforce_8600M.html
8600M GT, 475/1400 MHz, 32 stream processors.

Anyway, I don't know how to calculate this stuff. Since stream processors can be used for everything, they'll be a little more powerful, but one stream processor calculates one pixel at a time, while one shader unit calculated 4 components, which I think means 4 pixels? Or something like that? In that case the 32 stream processors would be equivalent to 8 regular shaders, the 7600 series seems to have 17 shaders, and 17*450 > 8*475. But there's probably more to it =P. I know the 8600 series could do twice as many texture-whatevers as the 8800 series could, whatever that means.
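In case it helps, here's that rough math as a few lines of Python, using the same assumptions as above (32 stream processors treated as 8 four-component shaders, 17 shaders at ~450MHz for the 7600 Go):

```python
# Reproducing the rough comparison above: treat the 8600M GT's 32 scalar
# stream processors as 32/4 = 8 "classic" 4-component shaders, then weight
# each card by its core clock. (This model turns out to be wrong for G8x,
# as the reply below explains, but it reproduces the numbers here.)
g7600 = (12 + 5) * 450     # 12 pixel + 5 vertex shaders at ~450 MHz
g8600 = (32 / 4) * 475     # 8 "equivalent" shaders at 475 MHz core
print(g7600, g8600, g7600 > g8600)  # 7650 3800.0 True
```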

Anyone who understands **** like that better? Anyway, I only care about FPS.

I would want to be able to play Supreme Commander (doubt it), Crysis (maybe?) and StarCraft 2, but I guess I'm screwed?

Performance in OS X on my 6800 LE is ****.
 
Quick misconception correction here -- 475MHz is the core speed, but shaders on the G8x cores have their own separate clock speed -- and on the 8600M GT, it's in the 1GHz range.

Secondly, the "components" -- it means (for ATI) each shader can handle up to 3 vector and one scalar instruction per cycle, vs each stream processor handling 2 scalar. (The G8x arch converts everything to scalars.) However, stream processors are much more efficient in other ways too -- for example, the split fragment/vertex config of older graphics cards is gone, with each stream processor able to work as either (or as a geometry shader, something totally new), so there are never any idle units when there's data to be processed. There are often idle units on older archs.

Bear in mind this is pretty simplified and from memory. Basically, it's a significantly different arch, but (with appropriate coding/drivers) should be a much more efficient one. Granted, perhaps 3-5x is a little excessive, but that's what I get for doing top-of-the-head calcs in the early AM :)
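As a rough re-run of the numbers with that correction -- the ~1GHz shader clock and the X1600's 12 pixel + 5 vertex units are assumed figures for illustration:

```python
# Per-cycle throughput using the model above: each ATI SM3.0 shader issues
# up to 3 vector + 1 scalar instruction per cycle at the core clock; each
# G8x stream processor issues ~2 scalar ops per cycle at its own shader
# clock (assumed ~1 GHz). This ignores the utilization gains from the
# unified design, so it understates the real-world gap.
x1600  = (12 + 5) * 4 * 423e6   # units * components/cycle * core clock
g8600m = 32 * 2 * 1.0e9         # stream processors * ops/cycle * shader clock
print(f"X1600 ~{x1600/1e9:.0f} Gops/s, 8600M GT ~{g8600m/1e9:.0f} Gops/s, "
      f"~{g8600m/x1600:.1f}x on paper")
```

That comes out around 2.2x on paper before counting the efficiency gains, which is why I say 3-5x was probably on the high side.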

Finally, I'm not sure why you're comparing everything to a 7600 when the MBPs have never had 7600s? I was always trying to compare to the X1600 as found in the previous MBP. I remember the 8600GT was around 20% faster than the 7600GT in 3DMark06, if that's any help? :)
 
I referenced the 7600 because it's in the iMac, which also uses a mobile CPU, so it's a fair comparison in terms of performance impact. Also, the 8600 is meant to be a replacement for the 7600.
 
Ah.

Well, the 7600GT in the 24" iMac is pretty comparable to the 8600M in terms of fillrate etc. It's also in a much bigger case, so it's unlikely to be underclocked -- in fact, it wouldn't surprise me if they were able to use desktop timings on it.
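A quick sanity check on "pretty comparable", assuming 8 ROPs on both parts and the desktop 560MHz clock for the 7600GT from the Wikipedia figures quoted earlier:

```python
# Peak pixel fillrate = core clock * ROP count (8 ROPs assumed for both).
print("7600GT  :", 560e6 * 8 / 1e9, "Gpix/s")   # 4.48
print("8600M GT:", 475e6 * 8 / 1e9, "Gpix/s")   # 3.8
```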

I expect any significant differences between the two currently are more likely attributable to an immature driver for the 8600M, as it's a substantially different arch from the previous generation, vs the 7600GT being from a well-established and mature arch (the 7xxx parts are quite heavily derived from the 6xxx parts). The slightly faster CPU might help a little, too... :)

Having said that, the 8600M's not doing too badly...
 