"Synthetic benchmarks don't matter now, they just make people feel warm and fuzzy inside at something appearing to be better than something else whilst ignoring the most important elements. Real world usage."

Can you please help me with something? People diss GB5 and SPECint (or any other benchmark) in this forum, so can you show me a case where those benchmarks showed a 20% increase in single-core or multi-core performance, but in the "real world" things actually got worse? I reckon it might have happened once or twice in the history of benchmarking, and it was probably down to the systems having different DRAM sizes or SSD speeds (i.e. after benchmarking on a good machine, people stuck the new CPU into a worse computer and then showed bad "real world" results). I would be surprised to see the same system do great in benchmarks and then fail in real-world usage, because today's benchmarks run a LOT of real-world use cases and average them out (weighted toward integer work, which is usually the most important). I'll grant that if there is a specific program you run, you will get better correlation by benchmarking that specific software. But in general, if GB5 says one CPU is 20% faster than another, it won't be way off for the majority of use cases. Over any single "real world" use case your mileage may vary, of course, since a dedicated ASIC for a certain workload will skew the results, but on the whole these benchmarks track real-world usage (from the CPU's point of view) very well. And if you have a good cooling solution, they track real-world behaviour even better; the main weakness of benchmarks is that they don't show the cooling solution's impact on the scores.
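To make the "average them out" point concrete, here's a minimal sketch in Python of that kind of aggregation, assuming a geometric-mean combination of per-workload scores (the scheme Geekbench describes for its section scores). The workload names and numbers are invented for illustration:

```python
from math import prod

def section_score(workload_scores: dict[str, float]) -> float:
    """Geometric mean of per-workload scores, so that no single
    outlier workload can dominate the composite number."""
    scores = list(workload_scores.values())
    return prod(scores) ** (1.0 / len(scores))

# Invented single-core workload scores for two hypothetical CPUs.
cpu_a = {"crypto": 1200, "compile": 1100, "image": 1150, "physics": 1050}
cpu_b = {"crypto": 1500, "compile": 1250, "image": 1400, "physics": 1300}

advantage = section_score(cpu_b) / section_score(cpu_a) - 1
print(f"CPU B leads by {advantage:.1%}")  # ~21% on these made-up numbers
```

A composite like that is literally an average over many real-world-ish workloads, which is why a 20% gap in it rarely inverts across the majority of software.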
When you say "real world results" , isn't it just benchmarks of specific SW as well ? how would you define which SW needs to run in order to say if the CPU is better or not ? who gets to decide what SW to run ? today if you want to show Intel is better then AMD , you focus your review on gaming (single core) , if you want to show AMD demolish Intel you focus on multithread stuff such as rendering , so a "real world" review is no better then a GB5 results , I will say that at least in GB you get a level playing field in which every CPU gets the same suite of tests to run ,while a YouTube reviewer is WAY more biased.
Note that just by looking at the GB5 scores of an Intel chip vs an AMD chip, you can already deduce the shape of the results: the gaming benchmarks will favour Intel (since single-core performance dominates gaming), and AMD's rendering prowess follows from its much better multi-core score.
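As a back-of-the-envelope illustration of that deduction (the scores below are invented, not real GB5 results), treat a mostly single-threaded workload like gaming as tracking the single-core score, and an embarrassingly parallel one like rendering as tracking the multi-core score:

```python
# Invented GB5-style scores for two hypothetical CPUs; the deduction
# only needs to know which score is higher, not the exact values.
cpus = {
    "CPU X (higher single-core)": {"single": 1350, "multi": 9500},
    "CPU Y (higher multi-core)":  {"single": 1250, "multi": 13500},
}

workloads = [
    ("gaming (mostly single-core bound)", "single"),
    ("rendering (scales across all cores)", "multi"),
]

for label, key in workloads:
    # Pick whichever CPU has the higher score relevant to this workload.
    winner = max(cpus, key=lambda name: cpus[name][key])
    print(f"{label}: expect {winner} to lead")
```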
TLDR - folks should not discount all the "synthetic" benchmarks; they are the same as any other benchmark run by a reviewer who picks and chooses which software to test.