
NotTooLate

macrumors 6502
Jun 9, 2020
444
891
Synthetic benchmarks don't matter now; they just make people feel warm and fuzzy inside at something appearing to be better than something else, whilst ignoring the most important element: real-world usage.
Can you please help me with something? People diss GB5 and SPECint in this forum (or any other benchmark), so can you show me a case where those benchmarks showed a 20% increase in single/multicore performance, but in the "real world" things actually got worse? I reckon it might have happened once or twice in the history of benchmarking, and it's probably down to the systems having different DRAM sizes or SSD speeds (i.e. after the benchmarks ran on good machines, people stuck the new CPU in a worse computer and then showed bad "real world" results). I would be surprised to see the same system do great in benchmarks and then fail in "real world usage", because today's benchmarks run a LOT of real-world use cases and just average them out (biased toward integer, which is usually the most important). I'd agree that if there is a specific program you run, you will get better correlation by benchmarking that specific software, but in general, if GB5 thinks a CPU is 20% faster than another CPU, it won't be way off for the majority of use cases. Of course, for any single "real world" use case your mileage may vary, as a dedicated ASIC for a certain workload will skew the results, but in general these benchmarks track real-world usage (from the CPU's point of view) very well. If you have a good cooling solution they track real-world behaviour even better (the cooling solution's impact on the scores being the main thing the benchmarks don't show).

When you say "real world results", isn't that just benchmarks of specific software as well? How would you define which software needs to run in order to say whether a CPU is better or not? Who gets to decide what software to run? Today, if you want to show Intel is better than AMD, you focus your review on gaming (single core); if you want to show AMD demolishing Intel, you focus on multithreaded stuff such as rendering. So a "real world" review is no better than GB5 results. I will say that at least in GB you get a level playing field in which every CPU runs the same suite of tests, while a YouTube reviewer is WAY more biased.

Note that just from the GB5 scores of Intel vs AMD you can already deduce the gaming benchmarks favouring Intel (single-core performance dominates gaming) and AMD's rendering prowess (its multicore score is much better).

TL;DR: folks should not discount all the "synthetic" benchmarks; they are the same as any other benchmark done by a reviewer who picks and chooses which software to run for himself.
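To make the "run a lot of use cases and average them out" point concrete: SPEC publishes a geometric mean of per-test ratios against a reference machine, and as far as I know GB5 composes its score in a similar way. A tiny Swift sketch, with subtest names and ratios invented purely for illustration:

import Foundation

// Invented per-subtest ratios: reference-machine time divided by the
// time of the machine under test (>1.0 means faster than the reference).
let subtestRatios: [String: Double] = [
    "text compression": 1.22,
    "image compression": 1.18,
    "ray tracing": 1.31,
    "code compilation": 1.15,
]

// Composite score as a geometric mean, so no single subtest dominates.
let logs = subtestRatios.values.map { log($0) }
let composite = exp(logs.reduce(0.0, +) / Double(logs.count))
print("Composite ratio vs. reference: \(composite)")  // ≈ 1.21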
 
  • Like
Reactions: leman

Yebubbleman

macrumors 603
May 20, 2010
6,024
2,616
Los Angeles, CA
As we know, the Apple Silicon Macs will have high-performance cores and power-efficient, lower-performance cores. Will benchmarking them even be comparable to whatever Intel puts out in the future? I've been thinking about this, and here are a few points I've come up with.

1. For single-core benchmarks, you can benchmark any Intel core you want; they are all the same. You get your result.
For Apple Silicon though, a performance core will give a very different result to a low-power core. Sure, you could load the low-power core just hard enough that the benchmarking process doesn't get moved to a high-performance core, but that's totally missing the point of the low-power cores. The benchmark software developers will have to add code so we can choose which type of Apple Silicon core we are benchmarking.

2. For multicore benchmarking, on Intel it's easy. Just benchmark every core working hard and you have your result.
On Apple Silicon, what do we do? Do we benchmark every core together and just take the average result across both types of cores? Or do we benchmark only the high-performance cores and divide by the number of cores to get a "per core" result when using multiple cores at the same time?

3. Apple is moving to a single SoC. That's happening, but what about the future? What we consider to be a CPU and a GPU could be merged into one larger whole. Not like the current integrated solution; more like a different implementation that doesn't have the two as separate parts of the same package, but one unit doing both kinds of work. That would not be comparable to the standard dedicated CPU and GPU we are used to today. If Intel and co. stick with the dedicated model, then comparing it to Apple Silicon would be quite impossible indeed.

4. Apple could make improvements that enhance the overall user experience and make your apps run even better. These improvements could come through optimisation of some kind, not just faster cores with more flops. (I say flops and not tflops as eventually we will one day hit more than 1000 tflops.) This could make an Apple Silicon Mac with a lower benchmark score the better PC to use.

What are your opinions on this?

You raise interesting concerns, but I think that you (like all of us) are trained to think of Macs and Mac performance in terms of the kinds of Macs we've seen throughout, not just the Intel era, but the latter half of the PowerPC era as well. I think that the benchmarks will still translate into raw performance, so comparing an Apple Silicon Mac to either an Intel Mac or any x86-64 PC will still mean something. However the work is split between the high-efficiency and high-performance cores, a benchmark will still capture that Mac's maximum performance in the given applications. Unless Apple turns off the high efficiency cores when tasks need the high performance cores (I'm like 95% sure they don't do this), you're still going to get a measurement of how well said Apple Silicon Mac does against the computer you're comparing it to.
 
  • Like
Reactions: the8thark

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Every time benchmarks get brought up between iPad chips and Intel chips, people always claim it's not comparable. I don't think this will change when Macs switch over. Also, even on Macs, don't they benchmark differently in macOS and Windows? That's something to consider as well.

Going further than that: Apple's always tried to avoid directly competing with other manufacturers. Their marketing strategy is always to focus on things that only Apple has, like Mac OS, or the T-series chips, or their design, etc. etc. Even if these new chips outpace their x86 competition in any benchmark it's going to be claimed that it's "not comparable", and every time an x86 processor will outpace an Apple Silicon processor the peanut gallery will chime in with "Apple #rekt again!"

None of this takes into account the high/low power cores of course, but I don't think it'll matter anyway.
 

Joelist

macrumors 6502
Jan 28, 2014
463
373
Illinois
Part of the issue is that when Apple goes to AS, you will then be comparing a normal x86 CPU to an SoC. This creates problems because (especially with Apple) an increasing number of tasks go not to the CPU block in the SoC but rather to specialized co-processor blocks. Add in the asymmetric cores and you are into Apples/Oranges territory.

So how to compare?

You need task based tests instead of synthetic scores. For example, you create a list of operations to perform on the machines and perform them 100 or 1000 times on each. Average the times, thermals and such and compare.
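A rough sketch of that harness in Swift (the two tasks here are just placeholders for whatever operation list gets agreed on, and thermal readings would need platform-specific tooling, so only wall-clock time is averaged):

import Foundation

// Placeholder task list; in a real comparison these would be the agreed
// operations (export a photo, compile a project, apply a filter, ...).
let tasks: [(name: String, work: () -> Void)] = [
    ("sort 1M random integers", {
        let numbers = (0..<1_000_000).map { _ in Int.random(in: 0...1_000_000) }
        _ = numbers.sorted()
    }),
    ("sum 10M doubles", {
        _ = (0..<10_000_000).reduce(0.0) { $0 + Double($1) }
    }),
]

let runs = 100  // or 1000, as above

for task in tasks {
    var totalSeconds = 0.0
    for _ in 0..<runs {
        let start = DispatchTime.now()
        task.work()
        let end = DispatchTime.now()
        totalSeconds += Double(end.uptimeNanoseconds - start.uptimeNanoseconds) / 1e9
    }
    print("\(task.name): average \(totalSeconds / Double(runs)) s over \(runs) runs")
}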
 

thenewperson

macrumors 6502a
Mar 27, 2011
992
912
You need task based tests instead of synthetic scores. For example, you create a list of operations to perform on the machines and perform them 100 or 1000 times on each. Average the times, thermals and such and compare.

Will this help with the Apples/Oranges nature of the comparison between Intel CPUs and ASi SOCs?
 

ian87w

macrumors G3
Feb 22, 2020
8,704
12,638
Indonesia
Imo benchmarks already don't matter for the majority of people and the majority of computers, and haven't for the last 10 years or so, for regular tasks.
What's more important imo, and where Apple would probably build its case, is performance per watt and battery life (for laptops). It coincides with their going-green campaign as well.
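To put rough numbers on it (all invented purely for illustration), the metric is just score divided by average power draw, which is how a lower absolute score can still be the better laptop:

// Invented numbers, purely illustrative.
let laptopA = (score: 1200.0, averageWatts: 45.0)  // faster in absolute terms
let laptopB = (score: 1050.0, averageWatts: 15.0)  // slower, but far more frugal

let perfPerWattA = laptopA.score / laptopA.averageWatts  // ≈ 26.7
let perfPerWattB = laptopB.score / laptopB.averageWatts  // = 70.0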
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Imo benchmarks already don't matter for the majority of people and the majority of computers, and haven't for the last 10 years or so, for regular tasks.
From every YouTube video with benchmarks that I've seen, they seem to focus on two things: video rendering and video gaming. Occasionally I've seen 7zip compress/decompress, and very rarely code compiling, as other tests.

Though I would say, "Regular Users" are playing some video games, so we can't rule that out. In that sense benchmarks matter to people saying "hey I wanna play X game, how well does Y product run it?"
 

the8thark

macrumors 601
Original poster
Apr 18, 2011
4,628
1,735
You raise interesting concerns, but I think that you (like all of us) are trained to think of Macs and Mac performance in terms of the kinds of Macs we've seen throughout, not just the Intel era, but the latter half of the PowerPC era as well.
Add in the 68k era also. The first Mac I ever owned was a 512k Mac, upgraded from the stock 128k.

Unless Apple turns off the high efficiency cores when tasks need the high performance cores (I'm like 95% sure they don't do this), you're still going to get a measurement of how well said Apple Silicon Mac does against the computer you're comparing it to.
From what I watched in the WWDC 2020 videos (correct me if I'm wrong), I got the impression that it's up to the developers to correctly decide which tasks in their apps use which type of core. Also, do we even know if you can spread a single task or load across both the high-performance and the low-power cores at the same time?
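That impression lines up with the QoS model as I understand it: apps don't address cores directly, they tag work with a quality-of-service class and the scheduler decides where it lands, with background work tending toward the efficiency cores. A rough Swift sketch of what that tagging looks like (the two helper functions are made up for illustration):

import Foundation

// Hypothetical helpers standing in for real app work.
func performMaintenanceTask() { /* indexing, backups, ... */ }
func renderPreview() { /* work the user is actively waiting on */ }

// The app only states how urgent the work is; the scheduler, not the app,
// chooses the cores. On Apple Silicon, .background work tends to run on the
// efficiency cores and .userInitiated work favours the performance cores.
DispatchQueue.global(qos: .background).async {
    performMaintenanceTask()
}

DispatchQueue.global(qos: .userInitiated).async {
    renderPreview()
}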

Your overall post does make some good points. The maximum performance of the performance cores is still comparable to the maximum performance of Intel cores, which makes single-core comparisons at least somewhat meaningful. However, it gets murky when you consider multicore performance; comparing symmetric vs asymmetric cores is not really an apples-to-apples comparison.
I am sure every man and his dog will try to do it anyway to get some kind of comparative multicore result. Then it'll be up to each of us individually to decide how meaningful (or not) those results are.
 

thejadedmonkey

macrumors G3
May 28, 2005
9,240
3,498
Pennsylvania
Edit: My thoughts on the topic: Apple is going to have a CPU that's competitive with a newer i5. If they don't, they'll lose a lot of developers who use MacBooks (but aren't tied to macOS) for work.

I manage the computers at my company, and it is that bad.

Microsoft is coasting on the market share that Windows has, and has been for many years. They have dismissed their entire internal testing group, over 5,000 people, and replaced them with the Windows Insider folks (I am one of them). As Leo Laporte said in MacBreak Weekly, "Microsoft can't code their way out of a paper bag", a direct quote, and it is available on YouTube if you look for MacBreak Weekly. He has also stated on a repeated basis that "Friends don't let friends use Windows".

Microsoft is no longer actively managing Windows issues. They have started focusing on stealing people's data (to follow what Google and Facebook already do so successfully), on trying to turn everything into a service (see Office 365), and on cloud-based services.

The final point is that Microsoft has taken to doing substantial updates to Windows every six months or so. That is why the releases have been ramping up: 1903, 1909, 2004, etc. The thing is, they don't even let the previous version settle down before they push out a new one. The code base for Windows is slowly becoming ever more unstable. And there is no end in sight.

Microsoft became the size it is because of the market share of Windows and Office in the past. Apple got to the point it is at because of the iPhone. Microsoft's market share has next to nothing to do with the quality of its software, and much to do with how cheap the Intel/Windows platform is to buy into. If there were another consumer-level OS available on the Intel platform, maybe Microsoft wouldn't be in the position it is in now. However, in the end, it may be of interest to see where, in terms of market cap, Apple stood in comparison to Microsoft back then, and where they are now.
I think you're missing the larger picture(s) here. As a hiring manager, that's probably not good.

1) Everything is moving to the cloud today. This means that the days of popping a CD into a PC to install Office are long gone. The concept of a "file" that you keep on your PC is gone, because now you have "data" that you control, or allow others access to, or embed in other programs. And you're paying for a user to be able to manipulate and control this data. You could host it on a server at your company, but every company I've worked for in the last decade has moved to the cloud because it's easier than paying for server admins. You're not paying more, you're just outsourcing admin costs.

2) Windows is old. Like ancient. And Microsoft is improving it with every update. 2004 isn't some random blob of code that you don't need. It's a few app updates, UI improvements, and an update to the underlying display manager to allow monitors with different refresh rates to work well together, all while keeping the existing app base working properly. It's kind of like changing the engine on a car while it's being driven down the freeway. And Microsoft has been releasing on a six-month cycle like this for years now. It's probably better than the traditional massive update every three years, too (XP excluded).

3) Microsoft has something called a hardware abstraction layer. They can put Windows on ARM hardware, too, with nothing more than a recompile if they're lucky. The only difference between macOS and Windows in this regard is that Apple has invested in the silicon, whereas Microsoft (up until about a year ago) was happy to let its OEMs take the lead. They do have the SQ1 chip (ARM) in the Surface Pro X, but it's probably not as optimized as Apple's, given Apple has been making ARM a priority for the last decade.
 
  • Like
Reactions: the8thark

leman

macrumors Core
Oct 14, 2008
19,516
19,664
Every time benchmarks get brought up between iPad chips and Intel chips, people always claim it's not comparable.

This only applies to benchmarks like Geekbench, which occasionally use different code on different platforms. There are many ways to benchmark things, but it should be clear WHAT is being benchmarked. And achieving that can already be surprisingly difficult.

Let's take a look at one of the simplest types of benchmark: measuring how much time a system needs to run some specific computational task. For this kind of benchmark one would ideally take the same source code, compile it with a state-of-the-art compiler for each of the respective platforms, and measure the time. Benchmark done. But already here you can find issues. Maybe the code is written in a way that penalizes performance on a certain machine. Maybe one machine has dedicated computational hardware that allows it to run the task much faster, prompting some to claim that the comparison is not "fair" since the generated machine code is not comparable. Maybe the computational task itself is not a useful approximation of what one usually does.

Now for another type of benchmark: graphics. This is tricky, since you can't just use the same code. You need to use different APIs on different platforms. And even if two platforms support the same API, you can never be sure whether you are benchmarking the performance of the machine or the quality of the API implementation.

And still, I believe that these kinds of benchmarks are absolutely justified as long as the various caveats are addressed and documented. Because in the end, what we often care about is "which machine is faster if I need to do X". So if you can assume that your benchmark task is representative (like the SPEC suite, which includes tasks such as code compilation, graph manipulation, software image compression and ray tracing: types of algorithms that are useful and relevant), you can get a good idea about the relative performance of a target machine. Even cross-platform graphics benchmarks are useful, if they do the same amount of work: they won't give you a clear idea about which hardware is faster, but they can give you an idea of what performance you can expect from doing a particular kind of drawing work.

Going further than that: Apple's always tried to avoid directly competing with other manufacturers. Their marketing strategy is always to focus on things that only Apple has, like Mac OS, or the T-series chips, or their design, etc. etc. Even if these new chips outpace their x86 competition in any benchmark it's going to be claimed that it's "not comparable", and every time an x86 processor will outpace an Apple Silicon processor the peanut gallery will chime in with "Apple #rekt again!"

Once people see that their renders, compiles and statistical simulations are substantially faster on Apple hardware, they will start making their choices, no matter what the peanut gallery says. If Apple can deliver a 13" Pro that can smoke an i9 series CPU for doing phylogenetic simulations, my entire department will upgrade to ARM Macs in a heartbeat.
 

TrevorR90

macrumors 6502
Oct 1, 2009
379
299
I think benchmarks are mostly useless, but they can give you an idea of how much faster a processor or other hardware is versus the previous generation.
 

Tech198

Cancelled
Mar 21, 2011
15,915
2,151
Can you please help me with something? People diss GB5 and SPECint in this forum (or any other benchmark), so can you show me a case where those benchmarks showed a 20% increase in single/multicore performance, but in the "real world" things actually got worse? I reckon it might have happened once or twice in the history of benchmarking, and it's probably down to the systems having different DRAM sizes or SSD speeds (i.e. after the benchmarks ran on good machines, people stuck the new CPU in a worse computer and then showed bad "real world" results). I would be surprised to see the same system do great in benchmarks and then fail in "real world usage", because today's benchmarks run a LOT of real-world use cases and just average them out (biased toward integer, which is usually the most important). I'd agree that if there is a specific program you run, you will get better correlation by benchmarking that specific software, but in general, if GB5 thinks a CPU is 20% faster than another CPU, it won't be way off for the majority of use cases. Of course, for any single "real world" use case your mileage may vary, as a dedicated ASIC for a certain workload will skew the results, but in general these benchmarks track real-world usage (from the CPU's point of view) very well. If you have a good cooling solution they track real-world behaviour even better (the cooling solution's impact on the scores being the main thing the benchmarks don't show).

When you say "real world results", isn't that just benchmarks of specific software as well? How would you define which software needs to run in order to say whether a CPU is better or not? Who gets to decide what software to run? Today, if you want to show Intel is better than AMD, you focus your review on gaming (single core); if you want to show AMD demolishing Intel, you focus on multithreaded stuff such as rendering. So a "real world" review is no better than GB5 results. I will say that at least in GB you get a level playing field in which every CPU runs the same suite of tests, while a YouTube reviewer is WAY more biased.

Note that just from the GB5 scores of Intel vs AMD you can already deduce the gaming benchmarks favouring Intel (single-core performance dominates gaming) and AMD's rendering prowess (its multicore score is much better).

TL;DR: folks should not discount all the "synthetic" benchmarks; they are the same as any other benchmark done by a reviewer who picks and chooses which software to run for himself.

Well, if you ran the same software, that's the performance you'd get... Yeah, I guess it is a tad one-sided. But it's better than not knowing anything.
 

Kostask

macrumors regular
Jul 4, 2020
230
104
Calgary, Alberta, Canada
Edit: My thoughts on the topic: Apple is going to have a CPU that's competitive with a newer i5. If they don't, they'll lose a lot of developers who use MacBooks (but aren't tied to macOS) for work.


I think you're missing the larger picture(s) here. As a hiring manager, that's probably not good.

1) Everything is moving to the cloud today. This means that the days of popping a CD into a PC to install Office are long gone. The concept of a "file" that you keep on your PC is gone, because now you have "data" that you control, or allow others access to, or embed in other programs. And you're paying for a user to be able to manipulate and control this data. You could host it on a server at your company, but every company I've worked for in the last decade has moved to the cloud because it's easier than paying for server admins. You're not paying more, you're just outsourcing admin costs.

2) Windows is old. Like ancient. And Microsoft is improving it with every update. 2004 isn't some random blob of code that you don't need. It's a few app updates, UI improvements, and an update to the underlying display manager to allow monitors with different refresh rates to work well together, all while keeping the existing app base working properly. It's kind of like changing the engine on a car while it's being driven down the freeway. And Microsoft has been releasing on a six-month cycle like this for years now. It's probably better than the traditional massive update every three years, too (XP excluded).

3) Microsoft has something called a hardware abstraction layer. They can put Windows on ARM hardware, too, with nothing more than a recompile if they're lucky. The only difference between macOS and Windows in this regard is that Apple has invested in the silicon, whereas Microsoft (up until about a year ago) was happy to let its OEMs take the lead. They do have the SQ1 chip (ARM) in the Surface Pro X, but it's probably not as optimized as Apple's, given Apple has been making ARM a priority for the last decade.

1. This is a cop-out. Turning around and saying that everything is in the cloud is completely untrue. Where I work, the accounting department runs Simply Accounting, a common accounting package. I would like to move to the cloud version, but the company owner won't permit it, so we run it locally. There is limited bandwidth out of our facility because we are located outside the city limits, and the local service providers don't make any provision for genuine high-speed bandwidth outside the city limits. Continuing with the main part of this: Simply Accounting runs on our server, and does reasonably well. However, there are parts of Simply Accounting (and yes, I am talking about the latest, most up-to-date version) that DO NOT RUN ON A SERVER. They must run locally. This includes EFT (which we use for billing customers and paying vendors) and payroll. Those applications run locally on the head of accounting's machine. There are a LOT of custom-designed applications in all sorts of companies THAT DO NOT RUN ON THE CLOUD, and never will. So trotting out the old "everything is moving to the cloud" is at the very least premature, at this point in time blatantly false, and not an excuse for skipping proper software verification for software that makes up approximately 90% of the desktop computer market and a fairly high percentage of the server market. That is where the problem lies; it is not with me, nor is it that "I don't see the big picture". I may not "see the big picture", but I have a fairly good grasp of the totally obvious, and a very good grasp of what I am seeing every day.

2. Please feel free to educate us all on how Microsoft is "improving with every update". Even in the worst days of Windows 7, we didn't see updates like we have with Windows 10 (BTW, they just made it six months in a row with over 110 updates this month, including one which is currently being exploited). I would happily give up the "constant" improvements for fewer security issues. What Microsoft is doing right now with Windows is clearly absurd. It is not only doing end users no good, but actually making systems unusable and losing user data (users at home, not in company or enterprise settings). The fact that "Windows is ancient" is not due to decisions I have made, but to those made by Microsoft themselves.

3. The Hardware Abstraction Layer (HAL) is not designed to bridge architectures; it is designed to hide the lower-level hardware from being written to directly, which allows, for example, various chipsets to be used on Windows motherboards without needing to customize applications for every different chipset, video card, hard drive interface, etc. Recompiling the HAL for a different architecture and expecting it to work is unrealistic, and I can guarantee that it will NOT work without extensive testing and rework. It won't be a "simple recompile".

As for you, as a Hiring Manager, I would suggest that you stick to being a Hiring Manager, and leave people in the trenches to do their work.
 

thejadedmonkey

macrumors G3
May 28, 2005
9,240
3,498
Pennsylvania
1. This is a cop-out. Turning around and saying that everything is in the cloud is completely untrue. Where I work, the accounting department runs Simply Accounting, a common accounting package. I would like to move to the cloud version, but the company owner won't permit it, so we run it locally. There is limited bandwidth out of our facility because we are located outside the city limits, and the local service providers don't make any provision for genuine high-speed bandwidth outside the city limits. Continuing with the main part of this: Simply Accounting runs on our server, and does reasonably well. However, there are parts of Simply Accounting (and yes, I am talking about the latest, most up-to-date version) that DO NOT RUN ON A SERVER. They must run locally. This includes EFT (which we use for billing customers and paying vendors) and payroll. Those applications run locally on the head of accounting's machine. There are a LOT of custom-designed applications in all sorts of companies THAT DO NOT RUN ON THE CLOUD, and never will. So trotting out the old "everything is moving to the cloud" is at the very least premature, at this point in time blatantly false, and not an excuse for skipping proper software verification for software that makes up approximately 90% of the desktop computer market and a fairly high percentage of the server market. That is where the problem lies; it is not with me, nor is it that "I don't see the big picture". I may not "see the big picture", but I have a fairly good grasp of the totally obvious, and a very good grasp of what I am seeing every day.

2. Please feel free to educate us all on how Microsoft is "improving with every update". Even in the worst days of Windows 7, we didn't see updates like we have with Windows 10 (BTW, they just made it six months in a row with over 110 updates this month, including one which is currently being exploited). I would happily give up the "constant" improvements for fewer security issues. What Microsoft is doing right now with Windows is clearly absurd. It is not only doing end users no good, but actually making systems unusable and losing user data (users at home, not in company or enterprise settings). The fact that "Windows is ancient" is not due to decisions I have made, but to those made by Microsoft themselves.

3. The Hardware Abstraction Layer (HAL) is not designed to bridge architectures; it is designed to hide the lower-level hardware from being written to directly, which allows, for example, various chipsets to be used on Windows motherboards without needing to customize applications for every different chipset, video card, hard drive interface, etc. Recompiling the HAL for a different architecture and expecting it to work is unrealistic, and I can guarantee that it will NOT work without extensive testing and rework. It won't be a "simple recompile".

As for you, as a Hiring Manager, I would suggest that you stick to being a Hiring Manager, and leave people in the trenches to do their work.
1) A single piece of software that has local dependencies doesn't disprove my point. As a software developer, I'm well aware that the majority of the applications I use on a daily basis will never work in the cloud. But as a consumer, most of the apps I use on a daily basis are cloud-based, or offer a cloud component. To the point where I have a Windows tablet in S mode (so I can't install any non-store apps, but I don't have to deal with Windows slowdowns) and I don't have any issues with app availability.

2) 110 updates doesn't mean 110 new bugs. It's bug fixes and features too. I'm sure if Apple listed every individual fix instead of rolling them out into their giant updates, there would be 110 updates, too.

3) Uh, you're just wrong. Chipsets use drivers to interface with the OS, and drivers all have a fairly universal API to implement, such as DirectX, print spooler, and WDDM 2.7 (which came with the latest Windows update that you so detest)
The Windows NT kernel has a HAL in the kernel space between hardware and the executive services that are contained in the file NTOSKRNL.EXE, under %WINDOWS%\system32\hal.dll. This allows portability of the Windows NT kernel-mode code to a variety of processors, with different memory management unit architectures, and a variety of systems with different I/O bus architectures; most of that code runs without change on those systems, when compiled for the instruction set applicable to those systems. For example, the SGI Intel x86-based workstations were not IBM PC compatible workstations, but due to the HAL, Windows 2000 was able to run on them.

Since Windows Vista and Windows Server 2008, the HAL used is automatically determined during startup.

 

Kostask

macrumors regular
Jul 4, 2020
230
104
Calgary, Alberta, Canada
1) A single piece of software that has local dependencies doesn't disprove my point. As a software developer, I'm well aware that the majority of the applications I use on a daily basis will never work in the cloud. But as a consumer, most of the apps I use on a daily basis are cloud-based, or offer a cloud component. To the point where I have a Windows tablet in S mode (so I can't install any non-store apps, but I don't have to deal with Windows slowdowns) and I don't have any issues with app availability.

2) 110 updates doesn't mean 110 new bugs. It's bug fixes and features too. I'm sure if Apple listed every individual fix instead of rolling them out into their giant updates, there would be 110 updates, too.

3) Uh, you're just wrong. Chipsets use drivers to interface with the OS, and drivers all have a fairly universal API to implement, such as DirectX, print spooler, and WDDM 2.7 (which came with the latest Windows update that you so detest)



If you had read and understood what I was saying, you would not have posted this. The HAL itself must be written for the instruction set of the computer. You cannot take a HAL written for Intel and run it on an ARM instruction set computer. That is what I meant by bridging architectures. Your quoted part says the same: "...most of that code runs without change on those systems, when compiled for the instruction set applicable to those systems", and this was my point. For the full version of Windows 10 to run on ARM, it MUST be built for ARM. If this were not true, it would be possible to run off-the-shelf Windows 10 on an AS Mac, and that is NOT true.
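To make the "compiled for the instruction set" point concrete, even fully portable source still has to be built separately per architecture; a trivial Swift illustration (the same principle applies to the C a HAL is written in):

// The same source can target both architectures, but the compiler emits
// different machine code for each, and a binary built for one will not
// run natively on the other.
#if arch(arm64)
print("This binary was compiled for ARM64")
#elseif arch(x86_64)
print("This binary was compiled for x86_64")
#endif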

And for the record, I do not detest Windows. I detest what Microsoft has done to it over the last 3 years. It was one thing to get software updates and patches; it is entirely something else to issue patches (for a hundred or more issues at a time) that not only do not fix the problems they are supposed to, but also damage user data or accounts, or in some cases delete user accounts. While there are some feature additions (very few, mostly stuff that nobody has asked for), the vast majority are security patches, not all severe, but nonetheless significant enough to warrant a patch. Microsoft didn't need to make those 5,000 software testers disappear; they were doing great work finding those bugs and getting them taken care of before the patches went out. Now I guess we all work as software testers for Microsoft.

And it isn't 110 updates, it is over 700 updates in six months. Maybe 10 of those were feature additions. There's nothing like starting a Patch Tuesday update and waiting with bated breath to see if you will have a working system at the end, or if it will delete your user account. I know that is what happens when I update machines either at work or at home (and yes, I do have Windows machines at home, just like I have Macs at home). I can truthfully say that even though the Mac updates are huge (10.15.6 was 3.8GB), I have far less anxiety updating my Macs than I do my Windows machines. (My suspicion is that Apple is reloading a new, full copy of macOS, and not just patching the existing macOS version.)

I work with both Windows (and have since Windows 2.04) and Mac. I can actually see the end of Windows coming, as well as the end of Intel's pre-eminence as a CPU supplier (if that hasn't already happened). I cannot see the end of the Mac. We are beginning to see the implosion of the Wintel duopoly, unfortunately, and Microsoft doesn't seem to care. They certainly are not making the effort to keep Windows quality where it was in the past (if that was ever anything to write home about).
 

vladi

macrumors 65816
Jan 30, 2010
1,008
617
The Axx always destroys the 8xx in any kind of benchmark, yet in real-world tests the iPhone regularly loses to Android flagships when it comes to simple everyday performance such as opening apps, loading the web and games. Where Apple flexes its CPU is the camera, the only time all the cores get really busy.
 

thenewperson

macrumors 6502a
Mar 27, 2011
992
912
The Axx always destroys the 8xx in any kind of benchmark, yet in real-world tests the iPhone regularly loses to Android flagships when it comes to simple everyday performance such as opening apps, loading the web and games. Where Apple flexes its CPU is the camera, the only time all the cores get really busy.

"Regularly"? As far as I'm aware (for opening apps) these were tests iPhones won handily. And even when they were winning these thing Federighi came right out and said they were bogus anyway.
 

Arctic Moose

macrumors 68000
Jun 22, 2017
1,599
2,133
Gothenburg, Sweden
Their marketing strategy is always to focus on things that only Apple has, like Mac OS, or the T-series chips, or their design, etc. etc. Even if these new chips outpace their x86 competition in any benchmark it's going to be claimed that it's "not comparable", and every time an x86 processor will outpace an Apple Silicon processor the peanut gallery will chime in with "Apple #rekt again!"

Apple has definitely used a performance lead in the marketing strategy when there has been a lead (or even a perceived lead) to use.

 

Kostask

macrumors regular
Jul 4, 2020
230
104
Calgary, Alberta, Canada
Apple, from the days of the PPC, has always said that benchmarks don't matter, the user experience does. They were right then, and they are right now. The accelerators, which are in EVERY current SoC and will be in the AS Macs as well, are the most obvious example. The HEVC accelerator is what allows the iPad Pro to easily edit two incoming 4K video streams down to one combined 4K output stream. Points to be made:
a) if the benchmarks use the HEVC accelerator, they could inflate the numbers;
b) if the benchmarks do not use the HEVC accelerator, they would show CPU-only numbers, yet for people working with HEVC video those may not be an accurate reflection of the capabilities of the particular SoC (in this case, the iPad Pro) running this workload;
c) being a hardware-based accelerator, it is useless for video not using the format it was designed for.

This is why benchmarks should be looked at very carefully, and why people should not use a single number to assess the speed of a particular piece of hardware.
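As a concrete illustration of points a) and b): whether the hardware path even exists on a given machine is something software can query. A minimal Swift sketch using VideoToolbox's decode-capability check (only the decode query is shown, since that is the call I am sure of):

import CoreMedia
import VideoToolbox

// Asks whether the system can decode HEVC in dedicated hardware.
// If this returns false, HEVC work falls back to the CPU, and a "video"
// benchmark is then measuring something quite different.
let hasHardwareHEVCDecode = VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC)
print("Hardware HEVC decode available: \(hasHardwareHEVCDecode)")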
 

OldCorpse

macrumors 68000
Dec 7, 2005
1,758
347
compost heap
This is a silly discussion. Synthetic benchmarks are 100% useless, period. The only - ONLY - thing that matters is user experience. This is possible to measure, although it can be tricky.

Let's start with the simplest measurement: how long does it take to start up the computer and have it in a working state? Please answer in seconds. Back in the bad old days (which is still the case on older computers) you'd start your (usually Windows) computer and then go make yourself a coffee; meanwhile, the computer would shudder, flash, whine, shake, screech and change colors from the effort, and by the time you came back, it would sit there... frozen. Time for a reboot. Finally, after you've finished your coffee, the computer is hopefully ready to be used. You think this is a joke or old times? NO, it still happens when my wife has to fire up her work laptop (she works from home), and as soon as she does, Windows decides it needs to do all sorts of updates and this and that... it can literally take long enough that you can have breakfast before it's ready.

OK, so let's ignore those cases. I'm on a late 2009 top of the line (at the time) iMac. I try not to shut down, but when I do, or have to reboot, it takes a LONG time, compared to my iPad. I want my computer to be instant on - cut the time down to literally a couple of seconds. I hate waiting. So that, right there, is a measurement - the complication is of course that it's not all down to the hardware, since OS and externalities such as getting wi-fi or the network going also count. But that should presumably be the advantage of AS, since Apple can control the whole stack. BUT THE USER DOESN'T CARE WHY the computer is slow, just that it's slow. It's not an excuse to say, gee, Bob, it's slow because of windows updates and this and that, but hey, the chips are FAST, aren't you glad they're FAST???!!! No, Bob is not glad - Bob doesn't give a fig about how fast the processor is, only that it takes him 2 minutes before he can use the computer. That's all.

And so on. How long does it take to launch an app? Well, that's only partially down to the hardware; it's also about how the app has been programmed, security routines and so on. And the OEM can't control that, though Apple tries to set standards. The same goes for performance. How does the user measure performance? Not through some abstract numbers, but through app use. This makes it a bit more complicated, because even if you have the same app, it may be programmed more efficiently for one platform than the other, so its performance is not reflective of the hardware in any way, but of how the developer built it.

That's why Apple was smart to try to control the whole stack as much as possible, and set pretty strict standards for third party apps, and generally try to sandbox. This, if done well, can result in a better user experience, including speed.
 

the8thark

macrumors 601
Original poster
Apr 18, 2011
4,628
1,735
Let's start with the simplest measurement: how long does it take to start up the computer and have it in a working state? Please answer in seconds. Back in the bad old days (which is still the case on older computers) you'd start your (usually Windows) computer and then go make yourself a coffee; meanwhile, the computer would shudder, flash, whine, shake, screech and change colors from the effort, and by the time you came back, it would sit there... frozen. Time for a reboot. Finally, after you've finished your coffee, the computer is hopefully ready to be used. You think this is a joke or old times? NO, it still happens when my wife has to fire up her work laptop (she works from home), and as soon as she does, Windows decides it needs to do all sorts of updates and this and that... it can literally take long enough that you can have breakfast before it's ready.

I'd like to talk about the counter side to your point here: that older computers actually booted up quicker than modern computers.

Firstly, let's talk ancient times. The Commodore 64 booted up in about 3 seconds. I found a video on YouTube showing this. Sure, the video was shot with a potato of a video camera, but it's good enough to get the picture. Even without the extra cart, they booted up really fast.

Further on in history, I remember the previous Macs I've owned: 512k (upgraded 128k), SE, iMac G3, iMac 2006, iMac 2011 (my current daily driver). The Macs took progressively longer and longer to boot. That makes sense, as there's more hardware to check and more complex boot processes to execute the further forward in time you go. My current iMac takes the longest, but it's not overly long either.

I feel your wife's issue is based on the specific circumstances of her OS and hardware setup. It's not at all representative of the general average use case. Even PCs like the iPhone and iPad don't take that long to boot (as in boot, not wake from sleep or a suspended state).

********

As for the rest of your post, I generally agree with it. User experience benchmarks are always the most useful, and your last couple of sentences really sum it up well. If the 3rd party app experience is garbage, the crowds will blame Apple, claiming ****** hardware. They will either refuse to accept, or not fully understand, that it could be the 3rd party app developer who is at fault here. Setting these standards is good, so the overall experience is a much better one.
 