
ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
As for developer effort, for 95% of situations the extra effort to develop an Apple Silicon version of your software over an x86 version is literally zero. You check a little checkbox and the compiler will make an AArch64 version. That's it. You may need to add a line or two to a build script if you're not using Xcode.

There is no checkbox, at least in Xcode 12. When building a Mac app there are two options:

1) Build a debug executable for the Mac you are running Xcode on.

2) Build an archive for any Mac (which creates a fat binary with both Intel and Apple Silicon code).

macOS has had fat binary support since day one, of course (NeXTSTEP, the OS it was based on, supported multiple CPU architectures), and Xcode has supported building ARM and Intel binaries from the same source code since Apple started working on the development tools for the iPhone.
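Since both slices of a fat binary are compiled from the same source, any architecture-specific code has to be selected at compile time. A minimal sketch (my example, not from the post) using Clang's predefined architecture macros:

```cpp
// arch_report.cpp - one source file that compiles into both slices of a
// universal (fat) binary; Clang predefines exactly one of these macros
// per architecture slice.
#include <iostream>

int main() {
#if defined(__aarch64__)
    std::cout << "Running the Apple Silicon (arm64) slice\n";
#elif defined(__x86_64__)
    std::cout << "Running the Intel (x86_64) slice\n";
#else
    std::cout << "Unknown architecture\n";
#endif
    return 0;
}
```

Outside Xcode, the "line or two in a build script" amounts to passing both architectures to the compiler driver, e.g. `clang++ -arch arm64 -arch x86_64 arch_report.cpp -o arch_report`, which produces a universal binary in one step.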
 
  • Like
Reactions: KeithBN and grandM

3rik

macrumors newbie
Apr 27, 2021
24
19
As others have said, it's really important to note what the M1 is.
It's the first gen of the lowest-end Apple Silicon chip, made for Apple's lowest-end Macs. And it's essentially a quad-core chip; the 4 low-power cores could be counted as a 5th core, as together they perform similarly to one high-power core.

The biggest obstacle for Intel and AMD is power consumption (and thermal output, which is a direct result of that). With desktops they have more room to play, as battery life isn't an issue and there's much more room for heatsinks, fans and airflow, but in laptops it's a big problem. The M1 is as performant as it is despite having a relatively unimpressive clock speed (chips with similar single-core performance boost up to 4.9-5.3 GHz to get there, e.g. Intel 11th gen) and drawing a quarter of the power. This gives Apple a whole load of headroom to increase clock speeds and add cores before reaching the thermal output and power draw of current high-performance AMD and Intel laptop chips. Add node improvements, and assuming Apple keeps its recent track record of ~20% single-core performance improvements year on year, I don't doubt that Apple can at the very least keep up and probably continue to give Intel and AMD a run for their money.

My prediction is that 5 years from now, ARM-based laptops (not just from Apple) are all over the place. Chances are Apple keeps the performance lead on ARM chips that it has had for ages with iPad and iPhone chips, though. However, I don't expect desktops, especially gaming desktops, to shift for a lot longer than that.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
There is no checkbox, at least in Xcode 12. When building a Mac app there are two options:

1) Build a debug executable for the Mac you are running Xcode on.

2) Build an archive for any Mac (which creates a fat binary with both Intel and Apple Silicon code).

macOS has had fat binary support since day one, of course (NeXTSTEP, the OS it was based on, supported multiple CPU architectures), and Xcode has supported building ARM and Intel binaries from the same source code since Apple started working on the development tools for the iPhone.

Sure, okay - it's a drop-down menu, not a checkbox:
[Screenshot: Xcode's Architectures build setting drop-down]

But since it's a boolean toggle anyway, it might as well be a checkbox.
 
  • Like
Reactions: KeithBN

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
Pointers exist in the exact same way in C++ though. I mean, C++ also has smart pointers with the fancy-schmancy std::shared_ptr and all that jazz, and instead of allocating space for your structs with malloc you can use new and delete, but C++ still has pointer arithmetic and everything C has, so I struggle to imagine someone who can deal with pointers in C++ but panics when they see a C pointer. Maybe that's just me though :)
I agree with you. In addition to pointers, C++ adds references, universal references, move semantics, closures, etc. Then there is the standard library.

OTOH at least you don't need as many macros.
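To make the contrast concrete, here's a minimal sketch (my example, not from the thread) of the same heap allocation done C-style and with modern C++ ownership types:

```cpp
#include <cstdlib>
#include <memory>
#include <utility>

struct Point { double x, y; };

int main() {
    // C-style: manual allocation and raw pointer arithmetic, still legal C++.
    Point* p = static_cast<Point*>(std::malloc(2 * sizeof(Point)));
    (p + 1)->x = 3.0;                  // pointer arithmetic, exactly as in C
    std::free(p);

    // Modern C++: ownership lives in the type; no manual free, no leaks.
    auto q = std::make_unique<Point>(Point{1.0, 2.0});
    auto r = std::move(q);             // move semantics: q is now empty
    r->y += 1.0;
    return 0;
}
```

The top half (minus the cast) compiles unchanged as C; the bottom half is what the extra C++ machinery buys you.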
 

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
If it's just this "check the box to make an Apple Silicon version", why is there still so much software on the Mac without a version that runs natively, 1.5 years into the transition? It's not like it's abandonware, and the devs always seem to say something like they have a "dependency" that needs updating first.

Production Apple Silicon Macs first shipped less than a year ago. The developer kits were shipped just over a year ago.
 
  • Like
Reactions: KeithBN

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
The point is you have to opt out of building for Apple Silicon; cross-compilation is the default setting.

Ah right. That only goes for newer projects or projects built against the newer SDKs, though. If you have a legacy codebase and you set your SDK to not target the latest, it's opt-in. You may also be using a non-Xcode workflow.

But valid point :)
 

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
Ah right. That only goes for newer projects or projects built against the newer SDKs, though. If you have a legacy codebase and you set your SDK to not target the latest, it's opt-in. You may also be using a non-Xcode workflow.

But valid point :)
True, if you had binary dependencies you would need to opt out of any arch you did not have the libraries for. I did this once to opt out of support for the A6 in the iPhone 5 because I had binary dependencies.
 

grandM

macrumors 68000
Oct 14, 2013
1,520
302
If you're developing specifically for Apple Silicon, rather than for the CPU/GPU-agnostic APIs provided by Apple, you're probably doing development wrong, in this current century.
Quality software is made natively. There may be business arguments to be made not to do so.
 

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
The STL also gives you a hash table (unordered_map) so you don't have to deal with the God-awful POSIX hash table you get with hcreate or create your own :p
True, but C++ developers had to wait until the C++11 standard to get unordered_map; with the original STL implementation you had to roll your own or incorporate someone else's non-standard implementation.
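For anyone who hasn't had the pleasure, a minimal sketch (my example; the key and value are made up) of what's being compared here:

```cpp
#include <cstdint>
#include <iostream>
#include <search.h>        // POSIX hcreate/hsearch/hdestroy
#include <string>
#include <unordered_map>

int main() {
    // POSIX: one global table per process, char* keys, no way to delete entries.
    hcreate(16);
    ENTRY item{const_cast<char*>("answer"), reinterpret_cast<void*>(42)};
    hsearch(item, ENTER);
    ENTRY query{const_cast<char*>("answer"), nullptr};
    if (ENTRY* found = hsearch(query, FIND))
        std::cout << "POSIX: " << reinterpret_cast<std::intptr_t>(found->data) << "\n";
    hdestroy();

    // C++11: as many typed tables as you like, value semantics, RAII.
    std::unordered_map<std::string, int> table{{"answer", 42}};
    std::cout << "C++11: " << table["answer"] << "\n";
    return 0;
}
```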
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
True, but C++ developers had to wait until the C++11 standard to get unordered_map; with the original STL implementation you had to roll your own or incorporate someone else's non-standard implementation.
What, really? I didn't write any C++ until after C++14 was a thing, so I wasn't in that world for this change. Did they not even have the red-black-tree-based map back then?
 

k27

macrumors 6502
Jan 23, 2018
330
419
Europe
But there are no 8 Firestorm cores in the M1 yet. The M1 is currently 4 Firestorm (big) + 4 Icestorm (LITTLE) cores.
The M1 GPU is around a GTX 1050/Ti; of course current-gen NVIDIA laptop cards beat the M1.

The M1X should have 8 Firestorm cores and more GPU cores.
But there is no 5 nm Intel or AMD.

CPU XYZ from AMD and Intel should have 5 nm...

;)

You have to compare what is there.
And the current M1 is slower than corresponding Windows laptops in applications that use multiple cores and/or a powerful GPU.
In a German forum, someone switched to Windows because the M1 is too slow for his workflow. The M1 just stutters, while the Ryzen 3 laptop with a current Nvidia GPU still runs relatively fine. He uses 4K, RAW and I think even 8K (with multiple filters, colour grading, etc.).
But he also said that he can see himself getting a MacBook Pro again when Apple Silicon is more powerful.
 

ADGrant

macrumors 68000
Mar 26, 2018
1,689
1,059
What, really? I didn't write any C++ until after C++14 was a thing, so I wasn't in that world for this change. Did they not even have the red-black-tree-based map back then?
They had the tree-based std::map. Prior to C++98, though, we were using third-party libraries like RogueWave, which did have support for hash dictionaries. RogueWave was bundled with the Solaris C++ compilers, which slowed down migration to C++98 and the STL.
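For contrast, a minimal sketch (my example, needs C++17 for the structured bindings) of what the tree buys you that the hash table doesn't:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <unordered_map>

int main() {
    // std::map: red-black tree underneath, so iteration is always in key order.
    std::map<std::string, int> tree{{"b", 2}, {"a", 1}, {"c", 3}};
    for (const auto& [key, value] : tree)      // prints a=1 b=2 c=3
        std::cout << key << "=" << value << " ";
    std::cout << "\n";

    // std::unordered_map: hash buckets, O(1) average lookup, no defined order.
    std::unordered_map<std::string, int> hashed{{"b", 2}, {"a", 1}, {"c", 3}};
    for (const auto& [key, value] : hashed)    // iteration order is unspecified
        std::cout << key << "=" << value << " ";
    std::cout << "\n";
    return 0;
}
```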
 
  • Like
Reactions: casperes1996

grandM

macrumors 68000
Oct 14, 2013
1,520
302
If it's just this "check the box to make an Apple Silicon version", why is there still so much software on the Mac without a version that runs natively, 1.5 years into the transition? It's not like it's abandonware, and the devs always seem to say something like they have a "dependency" that needs updating first.
If the software isn't native, they must integrate native code in their JS app. I once heard this is a challenge. Moreover, they need a native dev on board, which they most likely don't have. So they sit and wait for an API which may never come. Imagine that.
 

grandM

macrumors 68000
Oct 14, 2013
1,520
302
In terms of performance, Apple has to catch up (M1X, M2), not vice versa. A current Ryzen in a laptop is sometimes significantly faster in corresponding applications because of its 8 cores.
And the integrated hardware encoding in the M1 is super fast but very poor in terms of quality and file size. That's why I don't use it on my M1 (I use x264 and x265 on the CPU).
But hardware encoders are usually not good. An exception could be Nvidia NVENC.

However, due to the snooping that Apple wants to introduce, my interest in new Apple hardware has already dropped considerably.
Now take a look at performance per watt.
 
  • Like
Reactions: KeithBN

hagjohn

macrumors 68000
Aug 27, 2006
1,867
3,709
Pennsylvania
No, I'm not worried about Apple Silicon's future. Apple was able to give us a fast SiP with low temperatures. I think the future is bright, but we will know more after the next SiP or two comes out, to see if Apple has a real handle on yearly updates (I think they do) and features.

Getting rid of a dysfunctional Intel, which hasn't done anything exciting in CPUs in a long time (even AMD is starting to drive x86 now) while its timetables have slipped year after year, was a must for Apple to grow. There are some drawbacks, but I think Windows will be going more and more ARM in the next few years (as long as ARM chips can perform, and Apple has proved they can).
 

grandM

macrumors 68000
Oct 14, 2013
1,520
302
Having to code for specific hardware and a specific OS is coming to an end anyway, isn't it? It seems the future of high-power applications and console gaming will be letting servers do the hard work.
I don't know. There's a strong tendency in that direction at the moment. Security, energy and resource concerns, etc. might cause a relapse.
 