We have the actual timings and it's not underclocked by anything like that.
Firstly, if you look up the specs of the desktop 7600GT, you'll find it's capable of pretty similar theoretical speeds to the 8600M GT.
Secondly, you're comparing 3DMark06 scores. 3DMark06 is a synthetic benchmark that attempts to calculate what sort of performance a card might have with future games by running various tests and weighting the scores from each test.
In the case of the '06 version, shader performance is quite heavily weighted, because future games are expected to use shaders much more heavily. On paper, the 8600M GT's shaders should have MUCH better performance than the 7600GT's, so that is most likely what accounts for the difference in 3DMark06 scores.
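To make the weighting point concrete, here's a minimal sketch of how a weighted composite score can flip a comparison. The weights and sub-scores are made up purely for illustration; they are not 3DMark06's actual formula or real results for either card.

```python
# Illustrative sketch of a weighted composite benchmark score.
# All weights and sub-scores are hypothetical, NOT 3DMark06's real formula.

def composite_score(sub_scores, weights):
    """Weighted sum of the individual test scores."""
    return sum(sub_scores[test] * weights[test] for test in weights)

# Hypothetical per-test results (arbitrary units)
older_card = {"fillrate": 1000, "texturing": 1200, "shaders": 400}
newer_card = {"fillrate": 900,  "texturing": 800,  "shaders": 900}

# With an even weighting the two cards come out roughly tied...
even_weights = {"fillrate": 1/3, "texturing": 1/3, "shaders": 1/3}
# ...but if shader tests are weighted heavily, the newer card pulls well ahead.
shader_heavy = {"fillrate": 0.2, "texturing": 0.2, "shaders": 0.6}

print(composite_score(older_card, even_weights), composite_score(newer_card, even_weights))
print(composite_score(older_card, shader_heavy), composite_score(newer_card, shader_heavy))
```

Same hardware, same sub-scores; only the weighting changes, and so does the "winner". That's essentially what a shader-heavy synthetic benchmark does.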
Note that 3DMark scores in general have frequently not reflected real-world performance accurately. Just recently, some reviews showed the new ATI HD 2900XT scoring close to the GeForce 8800 GTS in 3DMark, while the same reviews showed the 8800 GTS besting the 2900XT by 25-30% in some games under the right circumstances.
3DMark doesn't really tell us much beyond how well a certain card runs 3DMark.
Most current games also aren't using shaders heavily, so a large part of the 3DMark score comes from a part of the GPU that isn't really being taxed. And if we ignore the shaders, the desktop 7600GT is on a par with the 8600M GT for raw fillrate; it should beat it at texturing by dint of having more TMUs, but otherwise the two are pretty evenly matched.
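For a rough sense of that comparison, here's the standard back-of-envelope fillrate arithmetic. The clocks and unit counts below are approximate figures from memory of the published specs and may be slightly off; treat them as assumptions, not quoted numbers.

```python
# Back-of-envelope fillrates: pixel fillrate = core clock x ROPs,
# texel fillrate = core clock x TMUs. Clock/unit figures are approximate
# assumptions, not authoritative specs.

def fillrates(core_mhz, rops, tmus):
    pixel = core_mhz * rops / 1000.0   # Gpixels/s
    texel = core_mhz * tmus / 1000.0   # Gtexels/s
    return pixel, texel

desktop_7600gt  = fillrates(core_mhz=560, rops=8, tmus=12)
mobile_8600m_gt = fillrates(core_mhz=475, rops=8, tmus=8)

print("7600GT  (desktop): %.1f Gpix/s, %.1f Gtex/s" % desktop_7600gt)
print("8600M GT (mobile): %.1f Gpix/s, %.1f Gtex/s" % mobile_8600m_gt)
# Pixel fillrate lands in the same ballpark, while the extra TMUs give the
# 7600GT the edge in texturing -- and none of this touches the shader units.
```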
However, that's assuming the drivers are up to scratch. The 7600GT's G7x architecture is heavily derived from the previous generation, NV4x (ie the 6x00 series). It's well known and well understood from being around so long, and the drivers are mature and tuned for best performance. At least on Windows, the drivers also contain a variety of optimisations for specific titles to ensure they perform as effectively as possible.
The G8x series is new, and the "cut-down" parts (ie everything below the 8800 series) have literally just been launched. The drivers are correspondingly new, and it's unlikely they've been tweaked much for performance -- tweaks made for the "bigger, better" card often need further work before they carry over to the cut-down parts.
Thirdly, you're also comparing D3D performance under Windows to OpenGL performance under Mac OS X. They can differ hugely. Back before the multithreaded OpenGL stack appeared, World of Warcraft on the same machine was often showing close to double the FPS under XP compared to OS X.
Furthermore, there are already some noted discrepancies in the 8600M GT's performance: the older X1600 from the 17" beats it in UT2004 on the Mac. Given the 8600M GT blows the X1600 away on paper, it shouldn't be doing that, which right there suggests driver issues.
Finally, those 3DMark scores are all from different machines, as are the iMac and MBP scores. There's always potential for a bottleneck of some kind coming from elsewhere, be it memory, hard disk, whatever.
I'll also note that it doesn't matter if the X1600 was underclocked, if the enclosure is slim, if Apple has traditionally underclocked parts, or even if Steve Jobs himself whispers "it's underclocked by 30%!" in your ear as you sleep every night, when we have the actual timings used. We have no need to "trust you" when we can look at them, compare them against nVidia's recommendations (which are on their website), and SEE whether it's underclocked or not. The memory is underclocked by ~10%; the core is underclocked by ~1%, which is more likely just clock error from the crystal used.
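The arithmetic is trivial to check yourself. The clock values below are illustrative placeholders consistent with the ~10% and ~1% figures above, not the readings quoted from the machine; substitute the real actual and reference clocks to verify.

```python
# How underclocked is it, as a percentage of nVidia's reference clock?
# The MHz values here are illustrative assumptions consistent with the
# ~10% / ~1% figures in the post -- plug in the real readings to verify.

def underclock_pct(actual_mhz, reference_mhz):
    return (reference_mhz - actual_mhz) / reference_mhz * 100.0

core   = underclock_pct(actual_mhz=470, reference_mhz=475)   # ~1%
memory = underclock_pct(actual_mhz=635, reference_mhz=700)   # ~9-10%

print("Core underclock:   %.1f%%" % core)
print("Memory underclock: %.1f%%" % memory)
# Nowhere near 30% on either clock.
```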
You certainly don't "know" it's underclocked by 30%.
Please stop claiming the patently ludicrous.