Oh, how nice of you😆. The point stands whether you "continue to interact" or not, so I'll let the historical record of this thread speak for itself.

Yes, it does mean that. Congratulations for not sharing your methodology to enable others to repeat your tests exactly as you allege you tested them.

And congratulations!

 
....So in that sense, I do conclude Apple doomed themselves to being thrown a lifesaver ring by Intel as a contingency for Apple having weakened the AIM alliance by picking favourites as the G3 was being superseded. I do put this one squarely on Jobs’… capriciousness: whereas others might call this shrewdness, on this point, I don’t. Intel was his own CYA for being capricious on that picking-favourites move years earlier.
Doesn't it make you wonder exactly when the first 'serious' discussion regarding Intel architecture happened within Apple? Which might also beg the question as to when it was that they first ran a version of MacOS on an Intel platform?

They were very rigorous with NDAs then, and were more able to restrict 'need to know' since it was a smaller and much more tightly focused company.
 
The iX has had thirteen generations (and several microarchitectures), with a new gen coming out every year or two. Despite a gen 2 Sandy Bridge and a gen 13 Raptor Lake both being called, say, i7, there’s a difference between those two buggers.


Apple chose to stick with it for the MB (until 2012), MBA and mini (both until 2011). Many others had moved on to iX by 2010.

Largely because Intel's iGPUs were seen as inadequate to run OS X at the time - the only Macs that moved to Arrandale/Ironlake were ones that also came with a separate dGPU. The C2D products used Nvidia chipsets to avoid X4500 graphics. Once HD3000 shipped with Sandy Bridge, Apple made the jump eagerly - it may have been a bit inferior to 320/330M, but it was serviceable, and the CPU gains were too much to ignore.

I think some might actually have forgotten how shakily the Intel era began. The first-generation MacBook Pros - the (2006, I believe) models built on the G4 15 and 17 inch design - were prone to severe heating and failure, which was a real irony considering that Apple's argument for the Intel transition was that they'd reached the end of the practicable thermal performance of the PPC architecture.

I still have a 15 and 17 inch first-gen MBP, and they both get too hot to use on the lap - and the 17-inch crashes and freezes if used for more than about 30 minutes due to heat.

I used to use a laptop cooling wedge with my first gen 15" MBP, the kind that plugged in with USB and had some fans blowing at the bottom of the case. That thing loved to run hot.

We do forget though that systems engineering at that point in time was fairly poor in general, and I think the switch to Intel was in part at least due to a need for a predictable architecture roadmap around which product planning/design/implementation could then be mapped out too. As such, I doubt Apple would still be around if they hadn't switched. Or at least not in the same form it is.

Yeah, and Apple had their own foibles then as well - the Apple-designed G5 chipset was always kinda crap, to the point that Apple had IBM design the chipset for the final multicore G5s (which is not only hilariously overbuilt but, I've heard, actually cost more than the CPUs). Moving to Intel allowed Apple to get out of that business for a bit.

I do wonder this, and I think Apple would still be around today, as would Macs — namely, because iPod was still a strong revenue stream, and iPhone, by then under planning, if not early development, was in the pipeline. This alone would have avoided much of the software re-working for little-endian x86 processing, maintaining RISC, big-endian code across the PPC Macs and ARM handhelds.
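(To make the endian point concrete, and purely as a toy sketch rather than anything drawn from Apple's actual porting work: the same 32-bit value comes out with its bytes in opposite orders on big-endian and little-endian machines, which is exactly why code that writes raw structs to disk or the wire needs reworking when it crosses that boundary.)

```python
# Toy illustration of byte order only -- not Apple tooling.
# The same 32-bit value, laid out big-endian (PowerPC-style)
# versus little-endian (x86-style).
import struct

value = 0x0A0B0C0D

big_endian = struct.pack(">I", value)     # b'\x0a\x0b\x0c\x0d'
little_endian = struct.pack("<I", value)  # b'\x0d\x0c\x0b\x0a'

print("big-endian bytes:   ", big_endian.hex())
print("little-endian bytes:", little_endian.hex())

# Any code that reads or writes raw structs without declaring a byte order
# is the kind of thing that needs reworking in a port between the two.
```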

Moreover, PA Semi were already up and coming with their multi-core chip design for portable applications, and that would have relieved some of the engineering issues IBM and Freescale were facing.

And had Apple actually and truly been all-in, commitment-wise, on the AIM alliance they helped to forge during the Sculley era (which they weren’t and probably hadn’t been since 1999), then this would have prompted better collaboration between IBM and Motorola/Freescale in chip design and deployment as the PPC750 was being superseded. But seeing as Apple had dropped their end of the alliance by picking one or the other partner to design an entire generation of CPUs — Motorola for the G4, IBM for the G5 — this had yielded, by 2004–05, two very different PowerPC-based evolutionary branches which had no chance of being dovetailed back together.

So in that sense, I do conclude Apple doomed themselves to being thrown a lifesaver ring by Intel as a contingency for Apple having weakened the AIM alliance by picking favourites as the G3 was being superseded. I do put this one squarely on Jobs’… capriciousness: whereas others might call this shrewdness, on this point, I don’t. Intel was his own CYA for being capricious on that picking-favourites move years earlier.

Apple went to IBM for the G5 largely because Motorola/Freescale weren't willing to deliver anything beyond the G4. The original G4 wasn't exactly a quantum leap either - it was the G3 with AltiVec/VMX bolted on, something IBM initially dismissed but later embraced and continues to use to this day. They later delivered the G4e, which was an actual redesign, but it still wasn't competitive with what was on the scope, and the G5 being a cut-down POWER4 chip meant IBM was never going to get it into a state where it would work in laptops, nor were they interested in actually making something suitable - Cell for Sony, Xenon for Microsoft, and the evolution of the G3 for Nintendo were the apples of their eye by then.
 
Which might also beg the question as to when it was that they first ran a version of MacOS on an Intel platform?
From what I understand the first two developer releases of Mac OS X's precursor Rhapsody (DR1 and DR2) were released for both PowerPC and x86 after being announced at the 1997 WWDC. This came after Apple acquired NeXT and began reshaping their OpenStep platform to look and feel more like Mac OS.

With this in mind, Apple must have been building Mac OS X as a multi-arch system for a very long time. Prior to OpenStep (the NeXT + Sun Microsystems collab), NEXTSTEP versions were built for 68K, x86, PA-RISC and SPARC platforms. It was much later that OpenStep was ported to PowerPC during Rhapsody development. So, technically, Mac OS X has some of its founding roots, in the form of the Mach kernel and Yellow Box (which evolved into AppKit/Cocoa), in both 68K and x86 land (think 68040 and 486DX era).

I imagine Apple had the Intel switch in their back pocket for a long time...

And had Apple actually and truly been all-in, commitment-wise, on the AIM alliance they helped to forge during the Sculley era (which they weren’t and probably hadn’t been since 1999), then this would have prompted better collaboration between IBM and Motorola/Freescale in chip design and deployment as the PPC750 was being superseded. But seeing as Apple had dropped their end of the alliance by picking one or the other partner to design an entire generation of CPUs — Motorola for the G4, IBM for the G5 — this had yielded, by 2004–05, two very different PowerPC-based evolutionary branches which had no chance of being dovetailed back together.

So in that sense, I do conclude Apple doomed themselves to being thrown a lifesaver ring by Intel as a contingency for Apple having weakened the AIM alliance by picking favourites as the G3 was being superseded. I do put this one squarely on Jobs’… capriciousness: whereas others might call this shrewdness, on this point, I don’t. Intel was his own CYA for being capricious on that picking-favourites move years earlier.

Exactly this. It was purely business to pursue the most profitable pathway. Apple have never been all-in on any one supplier or even technology for that matter. They would chop and change seemingly at a whim, which might not be more than a single business meeting with a supplier gone bad (i.e. not meeting Apple's expectations).

It reminds me of the whole Sapphire glass iPhone 6 fiasco. Apple "committed" to go all in (in the form of a $578M supply contract) and had the manufacturer pour all their money and assets into production of a single product for their biggest sole customer (Apple), then Apple pulled the pin at the last minute to drop the tech and the company (GT Advanced Technologies) tanked.
 
Doesn't it make you wonder exactly when the first 'serious' discussion regarding Intel architecture happened within Apple? Which might also beg the question as to when it was that they first ran a version of MacOS on an Intel platform?

The moment Jobs was brought back in as interim CEO, it was very likely already on the table as an item on which he, internally, ordered that parallel development proceed. NeXTStep/OpenStep, after all, was already built for Intel architecture (among others), so the foundational work to maintain a side-build, as a contingency, was probably always in place once Jobs took up Apple’s reins.

EDIT: I wrote this before I read @AphoticD ’s reply. :D

They were very rigorous with NDAs then, and were more able to restrict 'need to know' since it was a smaller and much more tightly focused company.

NDAs, whilst having a purpose in business for near-/mid-term applications, have no business being maintained once content/knowledge is obsoleted/expired/sunset.

Then again, an NDA, in perpetuity, makes it impossible to determine when there’s obsolescence of that knowledge.

That said, there is probably a great deal of ancillary information to come out of a quashed NDA for obsoleted products, such as the entire PowerPC era, which would answer some of the radical/root questions we often find ourselves discussing on this forum. My own prurient interest, of course, is the NDA-protected stuff around, say, the development stream for Snow Leopard on PowerPC — which, to Apple of 2024, is not information worth holding onto (especially if that stale information happened to be coupled with, say, a fee/cost for storing it under perpetual NDA protection).

tl;dr: NDAs should sunset the way patents sunset.
 
NDAs, whilst having a purpose in business for near-/mid-term applications, have no business being maintained once content/knowledge is obsoleted/expired/sunset.

Then again, an NDA, in perpetuity, makes it impossible to determine when there’s obsolescence of that knowledge.
NDAs are a necessary evil, and Apple's are not much different than anyone else's, except that as a company they are much more secretive than most, and have had a number of good reasons to be. They (Apple) don't tend to retire NDAs because even fairly generic ones can contain references to products or services which remain active, even if dormant.

And of course just because they can, which given it's their time, energy and IP, is not necessarily unreasonable. Unlike patents, an NDA may well protect the kernel of an idea, which couldn't otherwise be protected.

Back in 1992: System 7 on a 486, aptly code-named Star Trek.
Ha! That's the year I was thinking I'd seen referenced - thanks for the link!
 
NDAs are a necessary evil, and Apple's are not much different than anyone else's, except that as a company they are much more secretive than most, and have had a number of good reasons to be. They (Apple) don't tend to retire NDAs because even fairly generic ones can contain references to products or services which remain active, even if dormant.

And of course just because they can, which given it's their time, energy and IP, is not necessarily unreasonable. Unlike patents, an NDA may well protect the kernel of an idea, which couldn't otherwise be protected.

You will probably find chagrin in my take that IP & copyright protections, as written, are — statutorily — excessive and suppress fresh waves of creativity. They obstruct the time-tested means of being able to stand on the shoulders of giants from where new ideas and innovation get sprung.

The same goes for corporate “ideas” whose remits are passed: those ideas should not be closed in perpetuity. They merit statutory sunset provisions (which can, in their own way, also keep a company from devolving into being too sluggish/anti-competitive and/or monopolistic).

I also doubt there is much, if anything, substantive in, say, the PowerPC code base of Snow Leopard development builds (Table 1, namely Builds 10A246 and 10A250) which bears any significance to, say, Sonoma. An unexpired NDA for that is farcical in 2024.
 
You will probably find chagrin in my take that IP & copyright protections, as written, are — statutorily — excessive and suppress fresh waves of creativity. They obstruct the time-tested means of being able to stand on the shoulders of giants from where new ideas and innovation get sprung.

The same goes for corporate “ideas” whose remits are passed: those ideas should not be closed in perpetuity. They merit statutory sunset provisions (which can, in their own way, also keep a company from devolving into being too sluggish/anti-competitive and/or monopolistic).

I also doubt there is much, if anything, substantive in, say, the PowerPC code base of Snow Leopard development builds (Table 1, namely Builds 10A246 and 10A250) which bears any significance to, say, Sonoma. An unexpired NDA for that is farcical in 2024.
It isn't that I disagree necessarily, because much of your point is philosophically right. The problem however is that firstly, statutory protections can really only be applied to things and to some degree the concepts underpinning them, which does create an issue for companies such as Apple where there is a need (as far as they are concerned) to protect the roots of these things too, including research, materials development, manufacturing techniques, even the business decisions that took place. Secondly, NDAs can and often do relate to how ideas developed or are developed, not just what they are or were, or their relationship with individuals or other businesses.

I'm not able to say whether any particular NDA, past or present, should still protect whatever it was or is subject to, though I will say that cases where even the signing of an NDA is covered by an NDA do tend to suggest they are overused. I'd also say that, in my personal opinion, Apple appears to be a little NDA-happy. I've never worked for them, but I've worked with them on projects since 1991, and the NDAs I'm aware of seemed pretty much entirely justified.... at the time, at least.
 
An even better test, I later discovered, is the classic.mouseaccuracy.com web-based test. With a good mouse (or even a trackpad), a score of 17 is doable in Windows. Good luck hitting that in OSX though. You can even try it yourself... unless you're too scared to know the objective truth.

In the spirit of your anecdotal and unscientific test, my results using an MX Master mouse. I've been daily driving that mouse for about 6 years. I gave myself one practice try on each, then a proper go.
Mac : 15
Windows : 13

Still sounds like a skill issue to me.

If you need a ladder to get yourself out of the hole you've been digging, I can help you find one.
 
But you haven't. You've recalled in your experience a 20-40% increase in speed doing tasks. You haven't specified what tasks, what application(s), what OS or what hardware.
Because it's been universal. In the early part of the last decade, it was with a G5, Firefox browser, and a basic house-brand USB wireless mouse. Today, it's with an iMac and the white Apple USB mouse running the classic.mouseaccuracy.com test. The application and hardware make functionally no difference. Even varying the pointing device only changes the baseline score (the point from which we are subtracting 20-40%).

No matter what, we have that scientific/repeatable test, Microsoft's academic paper, inquiries in hundreds of forum threads spanning decades, and the existence of widely-adopted software like SmoothMouse. It's clearly not a made-up phenomenon.

I did the test and my results were abysmal...
Nuff said.

Yes, it does mean that. Congratulations for not sharing your methodology to enable others to repeat your tests exactly as you allege you tested them.
See my first paragraph in this post. Or, if you're too scared to bear witness to the truth firsthand, well, you know what they say about ignorance.

In the spirit of your anecdotal and unscientific test, my results using an MX Master mouse :
Mac : 15
Windows : 13

Still sounds like a skill issue to me.
That seems like a definite outlier. 15 actually sounds a bit high for OSX, but 13 is woefully low for Windows. If it helps, I did have a Logitech MX Revolution mouse for a time, and they may have improved things between it and the Master, but the Revolution definitely had some funky ballistics of its own. I found it frustrating in any operating system (even Windows) because it was clearly trying to add its own acceleration to the mix. Not sure if that's what yours is doing, but definitely a thought worth pondering.

Some of the best mice I've ever used were basic, plain jane, and ordinary; not trying to do anything goofy to the input data.

As you put it, "skill issue" is a possibility too, but I don't want to be one to judge or insult. Thanks for having the courage to give it a try.
 
Because it's been universal.
Yet I've never encountered this dispute before and never felt any discomfort over pointer control when switching from Mac to Windows.

Nuff said.
As I said, the result is pointless in isolation. My iMac has a Razer Orochi mouse attached, which is pretty bad without its needed driver/preference pane (installed, but it doesn't work on 10.11.6).

Despite that I just tried it again and on the site default settings scored 17 with 0 misclicks.

For a proper test, the kind I asked for but you've evaded detailing, I tried it on my dual-booting MacBook C2D 2.4GHz with a no-brand, inexpensive mouse attached:

OSX Mountain Lion, InterWeb browser: 12, with 1 misclick
Windows 8.1, Microsoft Edge browser: 11, with 4 misclicks
 
I like the MX Master. I wouldn't daily drive it for 6 years if it sucked. 😂
If I had results like that, I'd have dumped both OSX and the mouse long before 6 years elapsed. The Revolution had nice hardware (the buttons, switches, dials, and ergonomics were all top-notch), but the "help" it was trying to provide behind the scenes was extremely counterproductive. I can't imagine that they kept doing that too long afterward, but when I think of all of the horrible trackpads and trackpad drivers that were doing the exact same thing in the early 2010s, it has to be considered as a possibility.
 
No matter what, we have that scientific/repeatable test, Microsoft's academic paper
I'm assuming you're calling the earlier link you provided a "scientific and repeatable test."

I'd not consider telling people "go here and try this" to be a scientific and repeatable test.

You have a consistent platform, which could be the BASIS of a "scientific and repeatable test" but in and of itself isn't one as there are far too many variables unaccounted for.

For the record, I did try yesterday morning, while lying on the couch at about 5:00AM with my 2021 M1 Max MBP, halfway through my first cup of coffee of the day, using one hand on the trackpad while my other hand was giving my 1-year-old a bottle. That's far from ideal conditions, and I was able to get a score of 10-13 in several tests.

I would have tried when I got into work on a real mouse but I also...well sort of had work to do...and considered that a bit more of a priority.

If you were in an actual consistent, repeatable environment, ideally with someone using proper posture at a desk, and actually controlling for every variable EXCEPT for the operating system behind the hardware, and repeated the test with a statistically significant sample of users, you'd have a basis to claim you had made a scientific and repeatable test.

As I keep saying, though, your tests are your experience. They are anecdotal. Yes you've pulled an arbitrary "20-40%" out and given us some generalized numbers, but haven't truly provided data analysis either. Even if you want to limit this to your own experience, how about if you repeat the test several times on both platforms and report an average and standard deviation? That's getting closer to a "scientific" approach to this.
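Even something that simple would be a start. A quick sketch of the kind of summary I mean, with scores that are made up purely for illustration (not anything I've measured on either OS):

```python
# Sketch only: the scores below are made up for illustration,
# not real measurements on either OS.
from statistics import mean, stdev

windows_scores = [17, 16, 15, 17, 16]   # hypothetical repeated runs
osx_scores     = [14, 15, 13, 15, 14]   # hypothetical repeated runs

for label, scores in (("Windows", windows_scores), ("OSX", osx_scores)):
    print(f"{label}: mean = {mean(scores):.1f}, "
          f"sample std dev = {stdev(scores):.2f}, n = {len(scores)}")
```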

Also, what Microsoft academic paper are you referring to? The only one I've seen was your link that described how mouse acceleration worked in Windows. It's a reference document, presumably for developers to be able to access, or maybe someone with access to the information just wanted to write it up. It is not anything resembling a research paper. It does not explain the basis for the acceleration curves used, does not provide data on how they perform, or really anything beyond just explaining how the thing works. It could be a section of a proper academic paper, but it's by no means research.
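For anyone following along who isn't sure what an "acceleration curve" even means here: the general idea of pointer ballistics is a gain factor looked up from the device speed, so slow movements stay precise while fast ones cover more screen. The sketch below is only a generic illustration with made-up numbers, not the actual Windows or macOS transfer function:

```python
# Generic sketch of pointer "ballistics" -- NOT the actual Windows or macOS
# transfer function. Gain is interpolated from a small, made-up curve.

# (device speed in counts per interval, gain) pairs -- made-up numbers
CURVE = [(0.0, 1.0), (5.0, 1.5), (15.0, 2.5), (40.0, 3.5)]

def gain_for_speed(speed):
    """Linearly interpolate a gain factor from CURVE."""
    if speed <= CURVE[0][0]:
        return CURVE[0][1]
    for (s0, g0), (s1, g1) in zip(CURVE, CURVE[1:]):
        if speed <= s1:
            t = (speed - s0) / (s1 - s0)
            return g0 + t * (g1 - g0)
    return CURVE[-1][1]  # clamp beyond the last point

def pointer_delta(dx, dy):
    """Scale a raw mouse delta by the speed-dependent gain."""
    speed = (dx * dx + dy * dy) ** 0.5
    g = gain_for_speed(speed)
    return dx * g, dy * g

print(pointer_delta(2, 1))    # slow movement: gain stays close to 1:1
print(pointer_delta(30, 10))  # fast movement: noticeably larger gain
```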
 
An even better test, I later discovered, is the classic.mouseaccuracy.com web-based test. With a good mouse (or even a trackpad), a score of 17 is doable in Windows. Good luck hitting that in OSX though. You can even try it yourself... unless you're too scared to know the objective truth.

I've never seen this test before but just tried it

got 18 first try, in macOS

makes one wonder if the rest of your screed suffers from as much misplaced confidence
 
I like the MX Master. I wouldn't daily drive it for 6 years if it sucked. 😂

I tried to like the mx master but it just felt clunky to me

I find the g502 X gives similar ergonomics and functionality while still feeling light and breezy
 
I'm assuming you're calling the earlier link you provided a "scientific and repeatable test."

I'd not consider telling people "go here and try this" to be a scientific and repeatable test.

You have a consistent platform, which could be the BASIS of a "scientific and repeatable test" but in and of itself isn't one as there are far too many variables unaccounted for.

For the record, I did try yesterday morning, while lying on the couch at about 5:00AM with my 2021 M1 Max MBP, halfway through my first cup of coffee of the day, using one hand on the trackpad while my other hand was giving my 1-year-old a bottle. That's far from ideal conditions, and I was able to get a score of 10-13 in several tests.

I would have tried when I got into work on a real mouse but I also...well sort of had work to do...and considered that a bit more of a priority.

If you were in an actual consistent, repeatable environment, ideally with someone using proper posture at a desk, and actually controlling for every variable EXCEPT for the operating system behind the hardware, and repeated the test with a statistically significant sample of users, you'd have a basis to claim you had made a scientific and repeatable test.

As I keep saying, though, your tests are your experience. They are anecdotal. Yes you've pulled an arbitrary "20-40%" out and given us some generalized numbers, but haven't truly provided data analysis either. Even if you want to limit this to your own experience, how about if you repeat the test several times on both platforms and report an average and standard deviation? That's getting closer to a "scientific" approach to this.

Also, what Microsoft academic paper are you referring to? The only one I've seen was your link that described how mouse acceleration worked in Windows. It's a reference document, presumably for developers to be able to access, or maybe someone with access to the information just wanted to write it up. It is not anything resembling a research paper. It does not explain the basis for the acceleration curves used, does not provide data on how they perform, or really anything beyond just explaining how the thing works. It could be a section of a proper academic paper, but it's by no means research.

"Variables unaccounted for" or not, (and I'll be the first to admit that classic.mouseaccuracy.com isn't perfect at simulating real-world tasks, and favors coarse accuracy slightly over fine/moderate accuracy; it's just the closest/easiest starting point that we have, and it's certainly quite consistent) the point still stands. It is a fact that the Windows mouse acceleration curve is more natural and intuitive. Is it going to make a huge amount of difference for people who take five minutes to move the mouse in position over the web browser icon to double-click on it and open it up? Probably not. But for the rest of us seasoned mousers, it's meaningful.

However, there is a reason why Microsoft retooled their acceleration curve for XP. There is a reason why keyboard shortcuts are as popular as they are in OSX. There is a reason why those who can hit 17 on that test in Windows struggle to get to 12-15 on OSX under otherwise-identical conditions. There is a reason why OSX isn't that popular in the enterprise (or among home users, for that matter). And there is a reason why SmoothMouse was/is as popular as it was/is. I'd make the case that they are all one and the same, even if those affected don't realize it yet, and that is because pointing in OSX is as miserable as the sky is blue. You can deny it to the bitter end, call me names, ignore me, yell that the tests aren't "scientific enough," and insist that the sky most definitely is green. All I can do is point out the obvious objective truth.
 
However, there is a reason why Microsoft retooled their acceleration curve for XP. There is a reason why keyboard shortcuts are as popular as they are in OSX. There is a reason why those who can hit 17 on that test in Windows struggle to get to 12-15 on OSX under otherwise-identical conditions. There is a reason why OSX isn't that popular in the enterprise (or among home users, for that matter). And there is a reason why SmoothMouse was/is as popular as it was/is. I'd make the case that they are all one and the same, even if those affected don't realize it yet, and that is because pointing in OSX is as miserable as the sky is blue. You can deny it to the bitter end, call me names, ignore me, yell that the tests aren't "scientific enough," and insist that the sky most definitely is green. All I can do is point out the obvious objective truth.

Can you clarify something?

When you say "There is a reason why those who can hit 17 on that test in Windows struggle to get to 12-15 on OSX under otherwise-identical conditions", are you referring to you, yourself or do you have this data for multiple people?

Also, I seem to recall you stating that 17 was a best-case, occasional score for you in Windows. How consistently are you able to reach this score? Again, what is your standard deviation? Did you apply a Q test or do some other statistics to determine if this "sometimes 17" is an outlier?
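To be concrete about what I mean, a Q test on a handful of scores is nothing exotic. A rough sketch, with made-up numbers and the commonly tabulated 90% critical values (treat both as illustrative only):

```python
# Rough sketch of Dixon's Q test for a single suspected outlier.
# Scores are made up; critical values are the commonly tabulated
# 90%-confidence ones for small n (illustrative, not authoritative).
Q_CRIT_90 = {3: 0.941, 4: 0.765, 5: 0.642, 6: 0.560, 7: 0.507}

def dixon_q(scores):
    """Q statistic for the most extreme value: gap to nearest neighbour / range."""
    s = sorted(scores)
    rng = s[-1] - s[0]
    if rng == 0:
        return 0.0
    gap = max(s[1] - s[0], s[-1] - s[-2])
    return gap / rng

scores = [13, 14, 14, 15, 17]   # hypothetical repeated runs
q = dixon_q(scores)
n = len(scores)
print(f"Q = {q:.3f}, Q_crit(90%, n={n}) = {Q_CRIT_90[n]}")
print("flag as possible outlier" if q > Q_CRIT_90[n] else "keep the point")
```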

You were the one who threw out "scientific and repeatable", but that phrasing, especially claiming repeatability, is only valid within the confines of a controlled experiment with a large enough data set AND analysis of that data using proper statistical methods.

You threw out "science." I AM a scientist, and I'm asking you the sort of questions I'd ask a peer who made any sort of claim, no matter how believable or spectacular it was. BTW, half the point of graduate school is to learn how to critique others' research; those of you who have been there know what I'm talking about. I spent two years in a required hour-long class every Thursday afternoon where we'd read an assigned journal article and then spend the entire hour in class picking apart everything we could find that was wrong with it, and in those cases we were looking at actual published articles (sometimes with big names on the author line) that had been through the peer review process to be published in a journal. Two years is a long time and a lot of papers; we looked at top-tier journals (JACS and Angewandte Chemie for my field specifically, as well as both Science and Nature) along with some bottom-rung ones, and, well, no one is perfect.

I'll also point out that, through this, my sentiment at least (and I think most of us feel the same) is "That's great that Windows works better for you, but OS X still works better for me." I don't think any of us are denying YOUR individual conclusions; we are denying the generality. It doesn't exactly help your case when you need to throw in insults about those disagreeing with you, such as telling someone who's been using a mouse for tasks that require varying amounts of precision (some of them needing very high precision) quite literally for most of their life that their mouse skills must be "poor."

While we're at it too, if both ultimate speed and precision are the requirement, why not use a graphics tablet? The Wacom one I have, which is pretty much the bottom end of their range, lets me scoot across the screen insanely fast given that, by design, tablets are 1:1 mapped to the screen. Wacom still puts their own acceleration magic in, though, which lets you be very precise without feeling sluggish if you are working in a small area. Mine's done a LOT of Zoom lectures, writing on the virtual whiteboard, which is why I bought it in the first place. The learning curve on a tablet is high, but once you get there, it's an incredibly valuable and versatile input device. I know people who use them as their primary pointing device regardless of the task, or others who use them for specific tasks (such as photo editing).
 