
Blair Paulsen

macrumors regular
Jun 22, 2016
211
157
San Diego, CA USA
A slight slowdown is a serious issue for some programs/tasks, but not all. As long as Rosetta2 (and/or other tools) allows most app developers to keep up without committing a lot of resources to writing new code, I'd expect most will.

The more critical issue is the apps that need to utilize resources efficiently to perform heavy duty tasks. I'd also note that while the ARM transition is the reason for the season, there are a lot of programs in serious need of ground up re-writes for at least a portion of their code. Moreover, with multi-core becoming the norm even on smartphones, devs who only utilize a single core for most tasks need to step it up.
 

satcomer

Suspended
Feb 19, 2008
9,115
1,977
The Finger Lakes Region
Even if ARM Macs have impressive synthetic benchmarks, it won't get us very far if app developers aren't interested enough to devote sufficient coding resources. I could easily see a future where Apple's homegrown apps are blazing fast on new silicon, but not much else. For starters, what about Adobe? Will they cobble together a port just good enough to keep their Mac audience feeding them subscription income, or will they dig deep?

I think a number of plugins will disappear over time! It will take a new OS, new processors, and a new Xcode for the new chips! That is how it was during the PPC to Intel change! So wait at least until the new devices that support the new chips come out!
 

Blair Paulsen

macrumors regular
Jun 22, 2016
211
157
San Diego, CA USA
Part of what makes this discussion interesting is the big thing no one knows yet: will ARM be the next wave due to perf/watt advantages?

Apple can take the risk, in part, because pushing development of A-series ARM chips is of huge value in the iDevice world even if they never saw the inside of a Mac. They seem pretty confident they can deliver a desirable user experience into traditional laptop form factors - and an iMac could evolve (devolve?) into a really expensive stand flying the latest iPad.

Note: If the vast majority of your day is online, the data I/O available via your connection will impact performance long before lack of compute capacity in hardware in most cases. YMMV.
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
All of that is true. But if some software isn't ported over and we lose functionality, then speed makes no difference. If you are only running Apple-friendly apps, it makes sense. If you are running some of the other (quite amazing) apps that developers will just not waste time porting, then you are left worrying about the future. Do we even know what the landscape will look like post-Intel? Apple's pleb consumer vision is great for the stock price (lots of consumers of content) but likely bad for those trying to make complicated, high-end content. We may be forced out, which I am not happy about.

The changes in going from Intel to ARM are so minor I don't think we'll see much software opt not to port. Unless you're writing a lot of assembly (which isn't many pro apps), the short recompile time seems like a better investment than throwing in the towel completely.

Most developers are finding that they just recompile in Xcode and they're done. One app that runs on Intel and ARM comes out the other end. No code changes.

ARM is a standard architecture that runs standard code from standard compilers. It’s the exact same Xcode with the same languages and the same compiler used for Intel. It’s not really that big of a deal.
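
To give a rough idea of what that looks like in practice, here's a minimal sketch (ordinary portable C, hypothetical file name, not anyone's shipping code). The exact same source builds both slices of a universal binary with one clang invocation:

[CODE]
/* hello_universal.c - ordinary portable C, nothing architecture-specific */
#include <stdio.h>

int main(void) {
#if defined(__x86_64__)
    puts("Running the Intel slice");
#elif defined(__arm64__)
    puts("Running the Apple silicon slice");
#else
    puts("Running on some other architecture");
#endif
    return 0;
}

/* One compile produces a single fat binary containing both slices:
 *   clang -arch x86_64 -arch arm64 hello_universal.c -o hello
 * 'lipo -archs hello' then reports: x86_64 arm64
 */
[/CODE]

Xcode is doing essentially the same thing under the hood when it produces a universal build.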

Remember PowerPC -> Intel required a lot of developers to switch compilers, and that was a huge deal. Not a problem here.

These would be Catalina compliant apps or apps that have yet to be ported to Catalina?

Yep. That’s the line. Catalina was the prep for ARM. If you can run on Catalina, you can probably recompile on ARM with little to no problems.

If you don’t run on Catalina, you won’t make it to ARM. But if you don’t run on Catalina, you won’t run on newer Intel Macs anyway.
 
Last edited:

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
My '07 Intel iMac running Snow Leopard is good for nothing. I can run an older version of Firefox on it, but nothing that will work with modern security certificates and web standards. I can't even use it as a player for Apple Music. Something messed up Bootcamp on it, otherwise I would have been happy to just use it as a Windows computer.

The hardware still works fine, slow but quiet. It still looks like new. But Apple deems it not worthy of a functional OS and software.

I also have an ASUS laptop from around the same time that huffs and puffs, gets ridiculously hot under moderate loads, but modern browsers work on it, and it still gets used for the occasional game of C&C3.

Even an old version of Firefox, running on an Intel chip, is still a lot better than using a PPC browser. You can run SL on up to a 2010 Mac mini, if the above poster was looking to buy one for their parents.

You could probably get Windows 10 on that iMac - it would likely have the necessary drivers built in. Or could install Windows 7 via Bootcamp then upgrade to 10 from there. An SSD also makes a huge difference, even in old machines.
 

pasamio

macrumors 6502
Jan 22, 2020
356
297
The changes in going from Intel to ARM are so minor I don't think we'll see much software opt not to port. Unless you're writing a lot of assembly (which isn't many pro apps), the short recompile time seems like a better investment than throwing in the towel completely.

Actually I'd expect the pro apps would be the precise candidates for leveraging assembly or some of the more advanced Intel instructions to squeeze out that last little bit of performance from a platform. Multiplatform pro apps are also likely to have these sorts of architecture-specific code pieces, because assembly targeted at CPU capabilities doesn't really care which operating system you're running on.

ARM is a standard architecture that runs standard code from standard compilers. It’s the exact same Xcode with the same languages and the same compiler used for Intel. It’s not really that big of a deal.

As I suggest above, many high-end applications leverage the extra instructions that Intel provides (their vector instructions come to mind) to make their applications go faster, and these instructions don't exist on ARM to the best of my knowledge. ARM is a completely different architectural approach from Intel's CPU line, and whilst the standard compilers will happily take your code and compile equivalent instructions for your target CPU, depending upon how folk have implemented stuff like AVX, support could require some effort to re-evaluate.
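
To make that concrete, here's a contrived sketch (hypothetical function, not lifted from any real codebase) of the kind of per-architecture split I mean. The AVX path has no automatic ARM counterpart, so someone has to write and validate a NEON or plain C equivalent:

[CODE]
/* vec_add.c - hypothetical example of architecture-specific SIMD paths */
#include <stddef.h>

#if defined(__AVX__)
#include <immintrin.h>
/* x86 path: 8 floats per iteration with AVX */
static void add_arrays(float *dst, const float *a, const float *b, size_t n) {
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(dst + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; i++) dst[i] = a[i] + b[i];   /* scalar tail */
}
#elif defined(__ARM_NEON)
#include <arm_neon.h>
/* ARM path: NEON handles 4 floats per iteration; someone has to write this */
static void add_arrays(float *dst, const float *a, const float *b, size_t n) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        vst1q_f32(dst + i, vaddq_f32(vld1q_f32(a + i), vld1q_f32(b + i)));
    for (; i < n; i++) dst[i] = a[i] + b[i];   /* scalar tail */
}
#else
/* portable fallback: correct everywhere, fastest nowhere */
static void add_arrays(float *dst, const float *a, const float *b, size_t n) {
    for (size_t i = 0; i < n; i++) dst[i] = a[i] + b[i];
}
#endif
[/CODE]

If all you ever had was the plain C fallback, a recompile just works; it's the hand-tuned paths that need the re-evaluation.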

Remember PowerPC -> Intel required a lot of developers to switch compilers, and that was a huge deal. Not a problem here.

More than just compilers, the move to Intel killed support for OS9 apps or "Classic" apps and the apps that hadn't been moved to Mac OS X yet. Mac OS X 10.4 Tiger still included support for the Classic environment on PPC, but not on Intel, which meant those apps that hadn't made the transition to Mac OS X had no pathway to Intel. This cut off support for a large number of older but mostly functional apps that hadn't been updated for the new platform. Even back then, recompiling generic C code between PowerPC and Intel wasn't hugely problematic, and Apple had the original Rosetta that did a decent job of running PPC apps on Intel hardware (so long as you didn't need too exotic an instruction).

The move from OS 9 to Mac OS X also meant either rewriting your application in Cocoa or porting it to run with Carbon. Carbon itself had only existed since the year 2000, and Mac OS X itself was released in 2001 with a rather rocky start. The Intel transition, announced in 2005, caught many off guard, including Metrowerks, who ended up giving up entirely, having sold their Intel compilers earlier that same year. That move forced those who hadn't moved over to Apple's developer tooling to make the jump.

Yep. That’s the line. Catalina was the prep for ARM. If you can run on Catalina, you can probably recompile on ARM with little to no problems.


As you point out, with Catalina Apple already effectively cleaned house. Carbon never got 64-bit support and was killed long ago, and Apple have been progressively deprecating 32-bit-only APIs for a while now. Catalina's 64-bit mandate took the hit for removing the older applications that hadn't been updated to support 64-bit even after over a decade's opportunity. For those not doing something that leverages specific Intel CPU features, the transition is smooth, but with all of these larger code bases, moving and testing takes time.
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
Actually I'd expect the pro apps would be the precise candidates for leveraging assembly or some of the more advanced Intel instructions to squeeze out that last little bit of performance from a platform. Multiplatform pro apps are also likely to have these sorts of architecture-specific code pieces, because assembly targeted at CPU capabilities doesn't really care which operating system you're running on.

As I suggest above, many high-end applications leverage the extra instructions that Intel provides (their vector instructions come to mind) to make their applications go faster, and these instructions don't exist on ARM to the best of my knowledge. ARM is a completely different architectural approach from Intel's CPU line, and whilst the standard compilers will happily take your code and compile equivalent instructions for your target CPU, depending upon how folk have implemented stuff like AVX, support could require some effort to re-evaluate.

Depends. Stuff like AVX512 isn't on most Intel CPUs, so most applications should already have support for non-AVX512 CPUs. Adobe seems like the biggest problem here. They still do most stuff on the CPU, and would have done things in a non-Apple way. But... they already have most of their suite running on ARM, either via Windows on ARM or iOS. So I'd assume they have workable substitutes for any Intel-specific optimizations they did.

Metal or OpenGL/OpenCL code shouldn't require any changes. On ARM Macs, there will be new Metal optimizations available that Intel Macs won't get. But you can port over without taking those optimizations, and it will at least run just as well as it does on an Intel Mac. So any GPU centric program should be fine.

I would think the single product apps should be able to roll out relatively quick. The giant uber suites (hi Adobe) might take a bit more time just due to their density and testing schedules. And any Apple stuff (FCPX, Logic X) is built on Metal and Accelerate, so it'll be optimized day 1.

A slight slowdown is a serious issue for some programs/tasks, but not all. As long as Rosetta2 (and/or other tools) allows most app developers to keep up without committing a lot of resources to writing new code, I'd expect most will.

ARM builds are going to be automatically run by new versions of Xcode. There isn't really a long term future for Rosetta 2 for developers. It's really just going to be for older versions. Developers won't really get a choice to ignore ARM. Developers will download new Xcode, and they'll be producing universal binaries. They might not even be aware.

But as I said, there also isn't really much new code to be written, besides the Intel assembly edge case talked about above. Same code generally works just fine on ARM.

The more critical issue is the apps that need to utilize resources efficiently to perform heavy duty tasks. I'd also note that while the ARM transition is the reason for the season, there are a lot of programs in serious need of ground up re-writes for at least a portion of their code. Moreover, with multi-core becoming the norm even on smartphones, devs who only utilize a single core for most tasks need to step it up.

Devs should be using multiple cores no matter which architecture they're on. But Apple silicon has Intel-level single-core performance. It's not ideal, but it's just fine to be single core on Apple silicon.
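
For what it's worth, going multi-core doesn't depend on the architecture at all. A minimal libdispatch sketch in plain C (hypothetical file name and made-up workload, just to show the shape of it) builds identically for x86_64 and arm64:

[CODE]
/* parallel_sum.c - minimal libdispatch sketch; the same C builds for x86_64 and arm64.
 * Compile on macOS with: clang parallel_sum.c -o parallel_sum
 */
#include <dispatch/dispatch.h>
#include <stdio.h>

#define CHUNKS 8
#define CHUNK_SIZE 1000000

static long long partial[CHUNKS];   /* one slot per chunk, so no locking needed */

int main(void) {
    /* dispatch_apply waits until every chunk has run; GCD spreads them across cores */
    dispatch_apply(CHUNKS,
                   dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                   ^(size_t chunk) {
        long long local = 0;
        for (long long i = 0; i < CHUNK_SIZE; i++)
            local += (long long)chunk * CHUNK_SIZE + i;   /* stand-in for real work */
        partial[chunk] = local;
    });

    long long total = 0;
    for (int c = 0; c < CHUNKS; c++) total += partial[c];
    printf("total = %lld\n", total);
    return 0;
}
[/CODE]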
 
Last edited:
  • Like
Reactions: 09872738

iindigo

macrumors 6502a
Jul 22, 2002
772
43
San Francisco, CA
Depends. Stuff like AVX512 isn't on most Intel CPUs, so most applications should already have support for non-AVX512 CPUs.

On top of this, x86 is not just Intel, it's also AMD, which adds another level of variability to available instructions. Any company worth its salt either has alternative implementations for instruction-dependent optimizations/features or puts them behind an availability check.
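
For the curious, here's a rough sketch of what that availability check can look like on the x86 side (hypothetical function names; real products usually wrap this in their own dispatch tables, and __builtin_cpu_supports is a clang/gcc builtin that only exists for x86 targets):

[CODE]
/* cpu_dispatch.c - hypothetical runtime dispatch between a vectorized path and a
 * generic path, using the clang/gcc x86-only __builtin_cpu_supports() builtin.
 */
#include <stdio.h>

static void blur_generic(void) { puts("generic C path"); }

#if defined(__x86_64__) || defined(__i386__)
static void blur_avx2(void)    { puts("AVX2 path"); }   /* stand-in for a real AVX2 kernel */
#endif

/* Pick the best implementation for whatever CPU we actually landed on. */
static void (*select_blur(void))(void) {
#if defined(__x86_64__) || defined(__i386__)
    if (__builtin_cpu_supports("avx2"))
        return blur_avx2;        /* Intel or AMD chip that reports AVX2 */
#endif
    return blur_generic;         /* older x86, or a non-x86 build such as arm64 */
}

int main(void) {
    select_blur()();
    return 0;
}
[/CODE]

The point is that the fast path gets chosen at runtime rather than assumed at compile time, so an AMD chip, an older Intel chip, or an ARM build all still get a working (if slower) code path.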

My guess is that gigantic codebases like Adobe CC will run natively on AS pretty easily sans optimizations, which will allow companies to publish Universal 2 binaries quickly, with optimizations being added over the following year via updates.
 

Ryan P

macrumors 6502
Aug 6, 2010
362
236
I suppose there are tasks where the 12-core 5,1 is too slow, the i9 iMac too hot and noisy, etc. They may sell more 16-inch MacBook Pros in the interim, although those must get pretty hot as well when worked hard.

A maxed-out 16" MacBook Pro could not meet my workload. It was fast, but the fans would go nuts... and it would start to throttle down. The Mac Pro is far faster under load. I was able to recoup the cost of the Mac Pro in 3 days.
 

teagls

macrumors regular
May 16, 2013
202
101
As I suggest above, many high-end applications leverage the extra instructions that Intel provides (their vector instructions come to mind) to make their applications go faster, and these instructions don't exist on ARM to the best of my knowledge. ARM is a completely different architectural approach from Intel's CPU line, and whilst the standard compilers will happily take your code and compile equivalent instructions for your target CPU, depending upon how folk have implemented stuff like AVX, support could require some effort to re-evaluate.

A good example of this is FFmpeg. Some of it is written in assembly to heavily optimize for x86. There is an iOS version, but I don't know how optimized it is. My guess is that the x86 build is probably more optimized, since it has a larger install base. FFmpeg is very widely used!
 

iindigo

macrumors 6502a
Jul 22, 2002
772
43
San Francisco, CA
A good example of this is FFmpeg. Some of it is written in assembly to heavily optimize for x86. There is an iOS version, but I don't know how optimized it is. My guess is that the x86 build is probably more optimized, since it has a larger install base. FFmpeg is very widely used!

FFMPEG has been building on other ARM platforms for a decade+ though, with the two most obvious places being Android devices and Raspberry Pi's. x86 is likely still a bit better optimized, but I'd bet the ARM build is plenty serviceable.
 

teagls

macrumors regular
May 16, 2013
202
101
FFMPEG has been building on other ARM platforms for a decade+ though, with the two most obvious places being Android devices and Raspberry Pi's. x86 is likely still a bit better optimized, but I'd bet the ARM build is plenty serviceable.

I'm aware. While it's been developed for a while, that doesn't mean it's optimized to the teeth like the x86 build. My major worry is that lots of these large open source frameworks, like OpenCV, already lack features on macOS due to things like the languishing of OpenCL.

The developers who maintain those frameworks aren't going to rush out and buy an Apple ARM Mac and start porting everything over. Heck, OpenCV doesn't even support Metal. macOS benefited from the open source community through x86 because of Linux and Windows, but it's only going to get worse.

This article by Linus Torvalds sums it up well.

 

pasamio

macrumors 6502
Jan 22, 2020
356
297
FFMPEG has been building on other ARM platforms for a decade+ though, with the two most obvious places being Android devices and Raspberry Pi's. x86 is likely still a bit better optimized, but I'd bet the ARM build is plenty serviceable.

It's been over a decade, but in a former role one of the teams decided they wanted to use ffmpeg to do video transcoding on some Solaris boxes running SPARC chips. The SPARC chips supported either 32 or 64 threads, so they excelled at the web server role they were tasked with but fell flat on transcoding performance. They got it to build and run on Solaris (no small feat), but the performance with those SPARC chips just wasn't there. I think in the end we bought a Windows server and a commercial software package that handled what they were trying to do.

Just because you can get the thing to compile on the target platform, it doesn't mean it's fit for purpose when you take away those architecture specific optimisations.
 

Ethosik

Contributor
Oct 21, 2009
8,143
7,120
Even if ARM Macs have impressive synthetic benchmarks, it won't get us very far if app developers aren't interested enough to devote sufficient coding resources. I could easily see a future where Apple's homegrown apps are blazing fast on new silicon, but not much else. For starters, what about Adobe? Will they cobble together a port just good enough to keep their Mac audience feeding them subscription income, or will they dig deep?
Yep agreed. This is why I am getting a Mac Pro in a few months. If the transition goes poorly, I still have a powerful Intel system. With proper cooling and expansion. I’m so sick of my i9 iMac. They never should have put it in an iMac with that poor cooling.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
...
More than just compilers, the move to Intel killed support for OS9 apps or "Classic" apps and the apps that hadn't been moved to MacOS yet. MacOS X 10.4 Tiger still included support for the Classic environment which meant those apps that hadn't made the transition to MacOS X on PPC had no pathway to Intel.

OS9 was on its way out whether Apple shifted to x86/Intel or not. There was still a 64-bit change coming regardless. And the iPhone/iPad were coming regardless, which would have expanded the Mac OS X kernel focus over to spanning another platform. Jobs wasn't going to put gobs of effort into keeping an OS9 'container' hobbling along versus putting effort into the "next big thing".


10.4 meant there had already been four iterations in which to get your Cocoa or Carbon port done. 10.5 didn't come until 2007 (when the Mac OS X transition started in 2001). Six years and you still hadn't gotten your port done. Apple was going to wait even longer for folks to get their act together? Really?


The move from OS 9 to MacOS X also meant either rewriting your application into Cocoa or porting it to run with Carbon. Carbon itself had only existed since the year 2000 and MacOS X itself was released in 2001 with a rather rocky start. The Intel transition, announced in 2005, caught many off guard including Metrowerks who ended up giving up entirely having sold their Intel compilers earlier that same year. That move forced those who hadn't moved over to Apple's developer tooling to make the jump.

Metrowerks caught off guard how? In 1999, Motorola (Semiconductor) bought them. Trying to postulate that the maker of PPC chips and a member of the PPC alliance didn't know that Apple was jumping off the bandwagon? By 2003, Motorola had spun out their semiconductor business into a separate company, Freescale. As Freescale didn't make x86 products, where was the synergy? That is primarily why those compilers got sold off. Metrowerks didn't have a free-standing, growth business anymore.

With the appearance of Mac OS X, Apple started giving away Xcode for free (Project Builder early on, but branded Xcode starting in 2003; the major compiler for Mac OS X was gcc). That was Metrowerks' major problem. By 2005 there had been basically four full years of that eroding their business. Xcode 3.0 brought DTrace and Instruments support to the base level of Mac OS X, and the foreshadowing of that was probably already evident in 2005. Add to that the Interface Builder (with Cocoa) abilities that Xcode grew over time.

gcc on x86 was too competitive for Metrowerks, and Intel's compilers are even better at optimization. That is another reason it got dropped. There were other deeply entrenched players in the x86-for-embedded-systems compiler space (e.g., Green Hills). [And Metrowerks' foray into Windows development brought them up against Microsoft's tools. They were picking lots of battles with deep-pocketed players. It probably looked attractive to do, but any major missteps and there would be a large price to pay.]


MPW was a 'slow mover' on the 68K-to-PPC transition, which allowed CodeWarrior to get some traction. There was no "slow mover" on the Apple side when the next hardware transition came. Or the OS transition (OS 9 -> OS X).



As you point out that with Catalina, Apple already effectively cleaned house. Carbon never got 64-bit support and was killed long ago and Apple have been progressively deprecating "32-bit only" API's for a while now. Catalina's 64-bit mandate took the hit for removing the older applications that hadn't been updated to support 64-bit even with over a decade's opportunity. For those not doing something leveraging specific Intel CPU features, the transition is smooth but with all of these larger code bases moving and testing takes time.

Again... Apple was going to hold onto the OS9 container longer even without an x86 transition because... why? Rigidly wedded to the past forever is not their policy.
 
  • Like
Reactions: codehead1

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
.....I could easily see a future where Apple's homegrown apps are blazing fast on new silicon, but not much else. For starters, what about Adobe? Will they cobble together a port just good enough to keep their Mac audience feeding them subscription income, or will they dig deep?

Adobe? Unlikely to simply "mail it in". Three major factors.

1. Adobe has some apps in the App Store. The likelihood that Mac App Store applications can lollygag without doing a port and not run into friction with Apple is very slim. Apple isn't going to heavily promote "stuck in the mud" apps at all. Just the opposite: they are highly likely to select the competitors of "stuck in the mud" apps to heap more "pain" on those slow movers.


The cherry on top of that is Apple herding iOS/iPadOS apps into the Mac App Store. The ability to get visibility is going to drop further as that flotsam of apps rushes in. App Store apps that have any decent visibility now in the Mac App Store are in a suicidal zone if they aren't trying to get ported sooner rather than later.


2. Adobe apps share libraries for a subset of functions, and Adobe has growing iOS and iPadOS apps. It isn't like Adobe doesn't already have folks building foundational libraries with Apple ARM targets. This goes for all of the players that have a substantive number of both Mac and iOS apps that do substantive value-add "work". For much of the non-GUI aspects of the apps, this largely boils down to "same stuff, different day" with respect to complexity.

The same code isn't going to manage to run spectacularly faster, but it shouldn't run slower either. For the Adobe apps that weren't blazingly fast before, why would the ARM instruction set change that? It isn't going to change the algorithms explicitly present in the code now.

Adobe may have small sections that are tuned for Intel Quick Sync specifics that will probably need some tweaks, but these are likely relatively minor parts of the apps.



3. Things like Metal are unlikely to be faster for Apple's apps than for 3rd party apps if Metal is being used correctly. (Don't do things like "if Apple GPU, then it must implicitly be an iOS or iPadOS usage, so take the more limited path"; that's a bad assumption injected into the code.)

There will be some libraries that Adobe will use that aren't Apple's so there will be differences there, but the folks who have written macOS X only apps that have been keeping up with the changes Apple has pointed at for the last 3-4 WWDC conferences shouldn't be left out of most of the possible performance increases.

Apple hasn't built a large stack of "internal only, don't use" APIs here. Nor are they mostly unavailable for the x86 macOS version. Improvements made following the optimization illustrations from WWDC 2018 and 2019 would work well on both 'sides' of macOS 11.



That doesn't mean Adobe is going to "dig deep" and radically throw out most of their current code. All that needs to be done here is a simple port to what is mostly just a "newer version" of the OS the code already runs on.




All that said, this doesn't do anything for applications that are "dead in the water", whether from economics (not enough revenue flow) or from not enough resources assigned (maybe money is around, but there are multiple fronts to fight on and the Mac isn't a growth target). We're going to see a large number of languishing apps that were really languishing anyway; it was just harder to see because no other changes were happening. 32-bit was a "mark and sweep garbage collection" trigger. macOS 11 probably will be one also, on both sides (x86 and arm64). More than a few developers are going to do "almost nothing" because they were doing that before arm64 came along. That won't be a change for them, but some folks will try to attribute it to arm64.
 

pasamio

macrumors 6502
Jan 22, 2020
356
297
Again... Apple was going to hold onto the OS9 container longer even without an x86 transition because... why? Rigidly wedded to the past forever is not their policy.

I don't disagree with Apple's policy of moving on and pushing forward; however, my point was that the shift from PPC to Intel was disruptive on a number of levels beyond a "compiler shift", which also downplays the extra libraries that those tools provided. I think we'll have to agree to disagree on Metrowerks being surprised; I don't think they'd have made the same decision, but we will also never know the answer there. I do agree, though, that the move to Windows seemed like an over-extension.

I feel Apple have learned from that last move, proactively deprecated support, and been relatively open about communicating the future. As you point out, Apple now have better control of their own development tooling compared to the last transition, where there was competition in this space. I'm of the camp that the 64-bit requirement for Catalina was long expected; with folk having been given the option of compiling 64-bit binaries for over a decade, anything left on 32-bit at this point was almost going out of its way to avoid it. I think making Catalina the cutting point was the push to clean out the 32-bit apps to simplify Rosetta on ARM and get people used to the idea that an app that might not have been updated for a decade is not going to work moving forward.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I don't disagree with Apple's policy of moving on and pushing forward; however, my point was that the shift from PPC to Intel was disruptive on a number of levels beyond a "compiler shift", which also downplays the extra libraries that those tools provided.

There were things in the PPC -> x86 shift that mattered and that the compilers couldn't fix automagically: the endian shift (big versus little) and some vectorization code (SSE, SSE2, ... Intel's alphabet soup of 'yet another try' at vectorizing). Sloppy type usage (an int treated as a pointer) was an issue too, though that was really a 32-bit -> 64-bit issue.
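
Quick illustration of both of those hazards, in a contrived snippet (obviously not from anyone's shipping code):

[CODE]
/* portability_hazards.c - the two classic traps mentioned above */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    /* 1. "int treated as pointer": fine on 32-bit, truncates on 64-bit (LP64) */
    int value = 42;
    int *p = &value;
    /* int stored = (int)p;             BAD: pointer is 64-bit, int is 32-bit   */
    intptr_t stored = (intptr_t)p;   /* OK: intptr_t is sized to hold a pointer */
    printf("pointer round-trips: %d\n", *(int *)stored);

    /* 2. endian dependence: reinterpreting raw bytes gives different answers
     * on big-endian PPC than on little-endian x86 (or ARM) */
    unsigned char bytes[4] = {0x12, 0x34, 0x56, 0x78};
    uint32_t naive;
    memcpy(&naive, bytes, 4);                       /* host byte order: differs by CPU */
    uint32_t portable = ((uint32_t)bytes[0] << 24) |
                        ((uint32_t)bytes[1] << 16) |
                        ((uint32_t)bytes[2] << 8)  |
                         (uint32_t)bytes[3];        /* always 0x12345678 */
    printf("naive=0x%08x portable=0x%08x\n", naive, portable);
    return 0;
}
[/CODE]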


But Metrowerks' proprietary library wasn't it. Apple had their libraries under control. Mac OS X had always been built against x86 from the beginning (like NeXTStep was). Apple had an x86 track on Mac OS X from the start of 'retiring' OS9.


I think we'll have to agree to disagree on Metrowerks being surprised, I don't think they'd have made the same decision but we will also never know the answer there.

It wasn't Metrowerks' decision to make. It was Freescale's. Your "agree to disagree" is hooked on the premise that Metrowerks was a free-standing company. It was not at that point. Holding onto an x86 compiler was not what Freescale needed most in 2004-5.



A newsgroup thread from the time period.

"...

The Metrowerks website now seems to indicate that Metrowerks has been absorbed into Freescale as a "team" and no longer exists as a separate company:

http://www.metrowerks.com/MW/About/default.htm

> Freescale Semiconductor’s Metrowerks organization is a silicon enablement team that helps customers experience and fully leverage the performance of Freescale products. The organization’s embedded development leadership, technology and talent are focused on driving success for Freescale and its customers.

"

and later in that thread.

"...
It occurs to me that Nokia also saw the writing on the wall, and presumably being Codewarrior users themselves, bought the x86 compilation stuff so that it wouldn't sink along with Metrowerks (or at least with the MW desktop tools dept). ..."


Clinging to the x86 compiler and passing up the money that Nokia was willing to hand over would have helped Freescale sell more chips how? You are handwaving around that very relevant question with "agree to disagree" arm flapping. Metrowerks' moves from 1999-2003 set the groundwork for that x86 compiler 'bail out' sale to Nokia. Apple's processor move was incidental.



I do agree though that the move to Windows seemed like an over extension.

Which was a major issue for a company that didn't have "print money" kinds of margins and had other major investments to get returns on. It was owned by a semiconductor fabrication company that needed billion-dollar fabs to be competitive. Holding onto that x86 compiler wasn't going to be a license to print money.



I feel Apple have learned from that last move, proactively deprecated support, and been relatively open about communicating the future. As you point out, Apple now have better control of their own development tooling compared to the last transition, where there was competition in this space. I'm of the camp that the 64-bit requirement for Catalina was long expected; with folk having been given the option of compiling 64-bit binaries for over a decade, anything left on 32-bit at this point was almost going out of its way to avoid it.

Apple killed off 32-bit on iOS long before it finished killing it off on the macOS side. Getting rid of 32-bit was not just part of the ARM move. iOS was on ARM then and now; it hasn't gone anywhere, and 32-bit is dead there. Apple killed off booting 32-bit macOS kernels a long time ago also. Apple saw both 32-bit ARM and 32-bit x86 as legacy 'baggage' and unloaded them.

The major theme is that Apple keeps things more streamlined over time. Yes, the transition is coming, but making the software stack more streamlined is just something Apple is going to do even when there is no impending major hardware change coming.


I think making Catalina the cutting point was the push to clean out the 32-bit apps to simplify Rosetta on ARM and get people used to the idea that an app that might not have been updated for a decade is not going to work moving forward.

Dumping 32-bit ARM from iOS helped make Rosetta more simple how? The *BIGGER* issue was that Apple canned 32-bit ARM apps years before.



There was no actively maintained 32-bit library target to get to on the ARM side, even with ARM code. So what would a 32-bit x86 emulator even naturally hook into? (A 32->64 thunk would technically be possible, but that misses the forest for the trees as to the huge mindset mismatch there.)


That Rosetta also got simpler was a happy side-effect of something Apple had already done.
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
I'm of the camp that the 64-bit requirement for Catalina was long expected; with folk having been given the option of compiling 64-bit binaries for over a decade, anything left on 32-bit at this point was almost going out of its way to avoid it. I think making Catalina the cutting point was the push to clean out the 32-bit apps to simplify Rosetta on ARM and get people used to the idea that an app that might not have been updated for a decade is not going to work moving forward.

Apple established what it takes to get to a 64 bit application in 2007 with Leopard. 64 bit compilation has been the default in Xcode for at least a decade.

Anyone who hadn't ported to 64 bit yet has literally been sitting on their butts since 2006 or had already decided to abandon the Mac and was just coasting. A lot of the software I've heard about that didn't make it over was still Carbon, which is just completely inexcusable at this point.

One big aspect of the cut is that the 32-bit Intel libraries could be dropped and no longer maintained. But the ARM transition is clearly involved as well. It simplifies what Apple needs to do for ARM, and they don't have to worry about doing something like maintaining Carbon under Rosetta.
 

iindigo

macrumors 6502a
Jul 22, 2002
772
43
San Francisco, CA
Anyone who hadn't ported to 64 bit yet has literally been sitting on their butts since 2006 or had already decided to abandon the Mac and was just coasting. A lot of the software I've heard about that didn't make it over was still Carbon, which is just completely inexcusable at this point.

Yeah, pretty much. These devs are trying to position their software as one-time investments that they can toss over the wall and use as cash conduits indefinitely, but that doesn't really work, generally speaking. The level of backwards compatibility found on Windows is an anomaly — even in the Linux world, if you try to compile a codebase from 10 years ago under a distro from 2020, you're going to have a bad time. Things change. Software is an ongoing investment.
 

Blair Paulsen

macrumors regular
Jun 22, 2016
211
157
San Diego, CA USA
Over the last few years, pushing a lot of pixels in most Adobe apps without plenty of CUDA cores was painfully slow. The 7,1 MP went to market with some bomber AMD BTO graphics cards for the brute force needed to roughly match a workstation with one high end NVIDIA board (ie 2080Ti or Titan).
I am under the impression that Metal is the key to getting performance metrics on par with CUDA. I know that won't happen overnight, but it needs to happen.

If someone can delineate the evolving situation in more detail please share. The engineers where I work are highly dependent on CUDA (mostly on Linux, some Windows), for them it's gotta be NVIDIA.
 

teagls

macrumors regular
May 16, 2013
202
101
Over the last few years, pushing a lot of pixels in most Adobe apps without plenty of CUDA cores was painfully slow. The 7,1 MP went to market with some bomber AMD BTO graphics cards for the brute force needed to roughly match a workstation with one high end NVIDIA board (ie 2080Ti or Titan).
I am under the impression that Metal is the key to getting performance metrics on par with CUDA. I know that won't happen overnight, but it needs to happen.

If someone can delineate the evolving situation in more detail please share. The engineers where I work are highly dependent on CUDA (mostly on Linux, some Windows), for them it's gotta be NVIDIA.

What Nvidia does with CUDA is actually now Apple's strategy: Nvidia designs the hardware and the software. You get absolute performance guaranteed across a wide range of Nvidia GPU hardware. Nvidia has very good engineers who maintain a plethora of GPU-optimized frameworks for any task you could imagine. The depth and breadth of Nvidia's CUDA-optimized frameworks is staggering!

AMD/Apple is an amalgamation of hardware and software between two different companies with two different agendas. For everything to work, Apple needs to develop some killer GPUs on par with Nvidia/AMD whilst building out a strong software ecosystem for GPU compute. To be honest they lack both right now, and they have a ton of catching up to do.
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
What Nvidia does with CUDA is actually now Apple's strategy: Nvidia designs the hardware and the software. You get absolute performance guaranteed across a wide range of Nvidia GPU hardware. Nvidia has very good engineers who maintain a plethora of GPU-optimized frameworks for any task you could imagine. The depth and breadth of Nvidia's CUDA-optimized frameworks is staggering!

AMD/Apple is an amalgamation of hardware and software between two different companies with two different agendas. For everything to work, Apple needs to develop some killer GPUs on par with Nvidia/AMD whilst building out a strong software ecosystem for GPU compute. To be honest they lack both right now, and they have a ton of catching up to do.

I think AMD is now on notice, but if they can deliver with their next generation GPUs they’ll probably continue to be a part of the Mac.

I would guess Apple's first desktop SoCs will max out at around 10 teraflops of GPU performance. If AMD can beat that, Apple isn't going to have a huge reason to do their own discrete GPUs.

The hole in the WWDC announcements is that stuff like shared memory probably won’t work with a discrete GPU. So whatever their plan is for the Mac Pro or eGPU... we haven’t seen it yet and it wasn’t at WWDC.
 
  • Like
Reactions: whfsdude

ssgbryan

macrumors 65816
Jul 18, 2002
1,488
1,420
I think AMD is now on notice, but if they can deliver with their next generation GPUs they’ll probably continue to be a part of the Mac.

I would guess Apple's first desktop SoCs will max out at around 10 teraflops of GPU performance. If AMD can beat that, Apple isn't going to have a huge reason to do their own discrete GPUs.

The hole in the WWDC announcements is that stuff like shared memory probably won’t work with a discrete GPU. So whatever their plan is for the Mac Pro or eGPU... we haven’t seen it yet and it wasn’t at WWDC.

According to one of the slides, Apple is also dumping AMD for graphics.
 