
streetfunk

macrumors member
Original poster
Feb 9, 2023
89
46
Hello,


here: Mac16,11.
It seems my M4 Pro (14-core) is a bit of a dud.
I'd like to figure out if I can do something about that.

Geekbench tells me: Apple M4 Pro @ 4.39 GHz!
But it's supposed to run at 4.49-4.51 GHz, no?

My Geekbench 6 values:
single-core: 3569 / 3561
multi-core: 21283 / 21138

The single-core average on the M4 Pro is around 3750, I'd say. Top benchmarks reach about 3950.
Mine is 3560. WTF.


What can I do?
(The system has only been running for about two days.)


I need to decide whether I should send it back, which I would hate to do.
I need single-core speed by any means! A 10% gain would already be close to enough to justify a new Mac purchase, and a 15% gain on a new model would definitely make me jump. Instead I now have at least 7-8% less speed than I hoped for. I fear I can't stomach that.

Who else?
Please feel free to post here too ;)

Any tech insights or tips, anybody?
 

Mike Boreham

macrumors 68040
Aug 10, 2006
3,931
1,909
UK
I have often noticed my Geekbench scores to be a bit down on the published ones. I suspect the reason is other installed software, background processes and connected devices. Published data doesn’t say what else was running or connected, or environmental conditions.

People who chase Geekbench scores (which I have been guilty of sometimes) are likely to pull every trick in the book to score a few more points. In the past I found I could make a difference by using external cooling fans and ice packs, as well as shutting everything else down.

Can you really detect a 7-8% difference in normal usage? If you return it you would likely find the replacement was no better.

Personally I would never upgrade for only a 15% increase in speed.
 

pshufd

macrumors G4
Oct 24, 2013
10,155
14,579
New Hampshire
I have often noticed my Geekbench scores to be a bit down on the published ones. I suspect the reason is other installed software, background processes and connected devices. Published data doesn’t say what else was running or connected, or environmental conditions.

People who chase Geekbench scores (which I have been guilty of sometimes) are likely to pull every trick in the book to score a few more points. In the past I found I could make a difference by using external cooling fans and ice packs, as well as shutting everything else down.

Can you really detect a 7-8% difference in normal usage? If you return it you would likely find the replacement was no better.

Personally I would never upgrade for only a 15% increase in speed.

My primary system is an iMac Pro which I use next to my Mac Studio. The Mac Studio is faster in every way, but a few hundred milliseconds of responsiveness doesn't matter in terms of getting work done, unless you have a workflow that requires sustained high performance to finish a long process. I run those on machines with faster processors when I need them.
 

streetfunk

macrumors member
Original poster
Feb 9, 2023
89
46
I have often noticed my Geekbench scores to be a bit down on the published ones. I suspect the reason is other installed software, background processes and connected devices.
OK. How much of a percentage gap are we talking about here, in your own measurements / your cases?

People who chase Geekbench scores (which I have been guilty of sometimes) are likely to pull every trick in the book to score a few more points. In the past I found I could make a difference by using external cooling fans and ice packs, as well as shutting everything else down.
Some exceptionally good benchmark scores might have been taken in a fridge, or who knows how.
But the average is still way higher than mine.


Otherwise: some valid questions from you, but I'd like to keep it on topic.

Thanks for the feedback!

My primary system is
This is off topic and not welcome here!
It happens in practically every thread nowadays. I'm tired of that.
I have a specific problem. Please keep it on topic.

Thanks
 

Mike Boreham

macrumors 68040
Aug 10, 2006
3,931
1,909
UK
OK. How much of a percentage gap are we talking about here, in your own measurements / your cases?
It was a few years ago now, and I don't have the details, but similar to yours I think.

Two other thoughts:

1. I always ran Geekbench a few times in a row and believed the highest score, reasoning that there are lots of things that could cause a score to be low (background processes) but nothing that could make it too high. So the highest of a few runs is likely to represent the underlying performance.

2. I am not a chip expert, but I believe chips are not like machined parts, capable of being exactly the same to a very close tolerance. I think there will be some natural variability, with some chips faster and some slower. So you might get lucky by returning it, if you think you would notice the difference.
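Point 1 above can be sketched in a few lines: background activity can only steal CPU time, so noise biases scores downward, and the maximum of several runs is the best estimate of the machine's ceiling. The numbers here are made up for illustration, not real Geekbench data:

```python
# Hypothetical single-core scores from repeated benchmark runs on one machine.
runs = [3569, 3561, 3712, 3650]

# Noise only lowers scores, so the max best estimates the hardware ceiling.
best = max(runs)

# Percentage spread between best and worst run, attributable to noise.
spread = (best - min(runs)) / best * 100

print(f"best estimate: {best}, noise spread: {spread:.1f}%")
```

With a spread of a few percent between runs on the same machine, a 7-8% gap to someone else's published score is well within what background load alone can explain.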
 

streetfunk

macrumors member
Original poster
Feb 9, 2023
89
46
1. I always ran Geekbench a few times in a row and believed the highest score, reasoning that there are lots of things that could cause a score to be low (background processes) but nothing that could make it too high. So the highest of a few runs is likely to represent the underlying performance.
OK, I ran Geekbench 6 a few more times. This time I quit everything else first.
The results were better now. Not great ones, but within the pack.

Also: Geekbench 6 now has far more test results for M4/M4 Pro Macs.
It gives a much better idea of what we can expect.

I also found one very weak test result from somebody else.
I mean, like REALLY weak!

Meanwhile mine now shows the expected clock frequency, 4.51 GHz. That's good news.
At least it's running at the "has to be" clock speed 😂

So my concerns have normalized.
I think I now see a realistic picture of my own M4 Pro.
It's definitely not a beast, but also not a sick lemon.
Mac16,11: SC 3712 / MC 22385 (Geekbench 6). The MC score is finally not that bad.


Also, I really wonder if these machines get slightly better after some initial run-in time?
I mean, I didn't run these tests within the first minutes. It already had a few hours of runtime.

Thanks a lot @Mike Boreham for your feedback!
That was helpful ;)
 

Mike Boreham

macrumors 68040
Aug 10, 2006
3,931
1,909
UK
Also, I really wonder if these machines get slightly better after some initial run-in time?
I mean, I didn't run these tests within the first minutes. It already had a few hours of runtime.

Quite possibly. New machines index for some time, and not all processes are user processes that you can shut down yourself. Activity Monitor can show you what is going on before you run Geekbench. Anything with mds in the name is Spotlight indexing, but other processes may be busy too. In the View menu make sure you have "All Processes" selected, not "My Processes".
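The same check works from Terminal. A quick sketch: the `ps` line is generic, while the mds process names and the commented `mdutil` command are macOS specifics:

```shell
# List the top CPU consumers right now; on macOS, Spotlight indexing
# shows up here as mds, mds_stores, or mdworker processes.
ps -Ao %cpu,comm | sort -rn | head -5

# macOS-only: report whether Spotlight indexing is enabled on the boot volume.
# mdutil -s /
```

If something indexing-related is near the top of that list, it's worth waiting for it to finish before benchmarking.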
 

Mike Boreham

macrumors 68040
Aug 10, 2006
3,931
1,909
UK
Quite possibly. New machines index for some time, and not all processes are user processes that you can shut down yourself. Activity Monitor can show you what is going on before you run Geekbench. Anything with mds in the name is Spotlight indexing, but other processes may be busy too. In the View menu make sure you have "All Processes" selected, not "My Processes".
Coincidentally today’s Eclectic Light article is about exactly this issue.
 

throAU

macrumors G3
Feb 13, 2012
9,269
7,433
Perth, Western Australia
Modern machines have so much going on in the background, especially when new and indexing, updating, running AI on your photo library, etc.

Plus environmental conditions to take into account.

Stress less. Use the machine - or don't, return it and roll the dice.
 

dmccloud

macrumors 68040
Sep 7, 2009
3,155
1,910
Anchorage, AK
Many benchmarks recommend only running them under certain conditions:

- machine is connected to AC power.
- any login items that run at startup are disabled.
- the machine is shut down and allowed to cool down before powering back on to run the benchmark(s) in question.

In my mind, only the first one is really valid, and even then only under certain conditions. I would rather benchmark my systems under actual operating conditions to get an accurate read of relative performance than to use a setup that is not indicative of my daily workflow to obtain idealized results.

Quite possibly. New machines index for some time and not all processes are user processes that you can shut down yourself. Activity Monitor can show you what is going on before your run Geekbench. Anything with mds in the name is Spotlight indexing. But other processes may be busy too. In the View menu make sure you have ‘All processes’ selected, not User ‘Processes’.

I use asitop to display CPU and RAM usage in Terminal instead of Activity Monitor. It gives me both E-core and P-core usage and clock speeds, as well as GPU usage, RAM usage, and power draw. One thing many people overlook is that Activity Monitor itself introduces some overhead due to the way it monitors all processes and calculates memory pressure.
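For anyone wanting to try it: asitop is a Python tool that wraps macOS's `powermetrics`, so it needs sudo to run. Assuming a standard pip setup on an Apple Silicon Mac:

```shell
# Install asitop (Apple Silicon Macs only; wraps the built-in powermetrics).
pip install asitop

# Run it; sudo is required because powermetrics needs elevated privileges.
sudo asitop
```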



[Attached screenshot: asitop output in Terminal, 2024-11-19]
 

throAU

macrumors G3
Feb 13, 2012
9,269
7,433
Perth, Western Australia
I'll just say: try not to get too hung up on benchmarking your personal Mac against some other machine, in some other room, at some other temperature, running some other suite of background applications than yours.

At the end of the day, as a Mac customer your choices are limited: there are a few fixed specs to choose from, and it's a case of buying the particular Mac model that fits your budget and usage pattern.
 

Analog Kid

macrumors G3
Mar 4, 2003
9,362
12,611
I would rather benchmark my systems under actual operating conditions to get an accurate read of relative performance than to use a setup that is not indicative of my daily workflow to obtain idealized results.

That's not a benchmark. A benchmark is meant to be comparable, and if you have a bunch of random variables in the mix, it's not comparable.
 

Mike Boreham

macrumors 68040
Aug 10, 2006
3,931
1,909
UK
That's not a benchmark. A benchmark is meant to be comparable, and if you have a bunch of random variables in the mix, it's not comparable.

Agreed, but I use benchmarks primarily to compare with my own systems, to confirm how much improvement I am getting. I keep a record of the benchmarks for all my devices and sometimes check whether macOS updates have had an effect.
 

dmccloud

macrumors 68040
Sep 7, 2009
3,155
1,910
Anchorage, AK
That's not a benchmark. A benchmark is meant to be comparable, and if you have a bunch of random variables in the mix, it's not comparable.

99% of the benchmark scores posted via Geekbench, 3DMark, etc. are not run under those "ideal" conditions, yet they are still used daily as tools for comparison between systems. Real-world conditions will always outweigh lab scenarios.
 

Analog Kid

macrumors G3
Mar 4, 2003
9,362
12,611
Agreed but I use benchmarks primarily to compare with my own systems to confirm how much improvement I am getting. I keep a record of the benchmarks of all my devices and sometimes check whether macOS updates have had an effect.
99% of the benchmark scores posted via Geekbench, 3D Mark, etc. are not run under those "ideal" conditions, yet they are still used as tools for comparison between systems on a daily basis. Real world conditions will always outweigh lab scenarios.

I understand, but if you aren't running that benchmark tool under tightly controlled conditions or running it a very large number of times randomly on each system to be confident you have a good statistical sampling of your system, you aren't able to do that comparison. Certainly not to the level of precision OP seems to be concerned about.

People often misunderstand the purpose of a benchmark-- a benchmark tests one very specific thing and only that thing. Different benchmarks test different things, but each reports one number for one thing.

Geekbench, for example, tests this one thing:

I know it looks like it's testing a lot of different things to give an idea of "real world performance", but it's not. It's testing one thing: the weighted average of exactly those workloads on every machine, every time it's run. If your workload doesn't happen to be the same as the Geekbench test suite, if you're not recognizing bicycles in images and planning map routes in Waterloo, Ontario in the same ratio Geekbench does, then your experience won't match what Geekbench tells you.

People tend to run benchmarks incorrectly and then try to extrapolate what they find beyond the domain of relevance. Every time someone tries to make a benchmark "more real world", it just makes that benchmark less meaningful. If what you want to know is how much faster a machine is at a specific, narrow, technical task, then benchmarks are your friend. If what you want to know is whether it'll be subjectively better at what you do with it, the only true way to measure is to use it for a few weeks and see what you subjectively think. If you think you're at risk of fooling yourself about which machine is better, then they're close enough that it doesn't matter.

Benchmarks are for geeks. Trial periods are for people. Running benchmarks once under bizarre and unrepeatable conditions is for MaxTech.
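The "one fixed mix of workloads" point can be illustrated with a toy composite score. The subtest names, numbers, and equal weights below are hypothetical, not real Geekbench workloads or weightings; the point is only that the composite is a fixed weighted combination:

```python
import math

# Hypothetical per-workload scores (made-up numbers, not real benchmark data).
subtests = {
    "file_compression": 3400,
    "html5_browser":    3900,
    "photo_filter":     3700,
    "object_detection": 3500,
}
weights = {name: 0.25 for name in subtests}  # assumed equal weights

# Weighted geometric mean: exp(sum of w_i * ln(score_i)).
composite = math.exp(
    sum(w * math.log(subtests[name]) for name, w in weights.items())
)
print(round(composite))
```

Change the weights and the composite changes, which is exactly the point: the single number measures that particular mix of workloads and nothing else.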
 

Mike Boreham

macrumors 68040
Aug 10, 2006
3,931
1,909
UK
I understand, but if you aren't running that benchmark tool under tightly controlled conditions or running it a very large number of times randomly on each system to be confident you have a good statistical sampling of your system, you aren't able to do that comparison. Certainly not to the level of precision OP seems to be concerned about.

People often misunderstand the purpose of a benchmark-- a benchmark tests one very specific thing and only that thing. Different benchmarks test different things, but each reports one number for one thing.

Geekbench, for example, tests this one thing:

I know it looks like it's testing a lot of different things to give an idea of "real world performance", but it's not. It's testing one thing: the weighted average of exactly those workloads on every machine, every time it's run. If your workload doesn't happen to be the same as the Geekbench test suite, if you're not recognizing bicycles in images and planning map routes in Waterloo, Ontario in the same ratio Geekbench does, then your experience won't match what Geekbench tells you.

People tend to run benchmarks incorrectly and then try to extrapolate what they find beyond the domain of relevance. Every time someone tries to make a benchmark "more real world", it just makes that benchmark less meaningful. If what you want to know is how much faster a machine is at a specific, narrow, technical task, then benchmarks are your friend. If what you want to know is whether it'll be subjectively better at what you do with it, the only true way to measure is to use it for a few weeks and see what you subjectively think. If you think you're at risk of fooling yourself about which machine is better, then they're close enough that it doesn't matter.

Benchmarks are for geeks. Trial periods are for people.
There used to be (maybe still is) a Photoshop-based benchmark that ran a collection of Photoshop actions, which I used a lot when I was working as a photographer. I used it for comparing swap disk configs and the effects of graphics card and RAM changes, as well as Photoshop settings. As you say, still a test, but more representative of my usage at the time.
 

streetfunk

macrumors member
Original poster
Feb 9, 2023
89
46
OK, I see, so I effectively have to shut down just about everything.
That was not the case so far when benchmarking my M4.
One time I had everything open that I would normally run; it was just hanging there idle.
Another time I had most things closed, but probably not exactly everything.

Thanks for the feedback!


Edit: and I renamed the thread.
 