
v0n

macrumors regular
Mar 1, 2009
106
60
I’m not sure I understand the question. What is “proprietary CPU tech?” I mean, is your definition of “proprietary … requiring platform specific coding” “not Intel?”

The problem SGI, Motorola, and some of these others had is that they didn’t keep up with Intel, or they DID surpass Intel (e.g., DEC) but they couldn’t sell at a price that made it worthwhile to switch.

Apple has had ridiculous success with Arm over the years, and the whole industry is heading toward Arm, so I don’t think there’s anything to worry about.
Interesting take. And yes. I mean "not Intel" (waves very wide circle pattern with hand) - not "x86" - not what 95% of the personal computer market wants in their machines.

It's interesting that you see the "industry heading toward Arm".

Personally, I'm not convinced they have 'it' with Silicon. Don't get me wrong - the M1 is lovely and endearing, I have an M1 Mac mini right in front of me and I'm happy to be part of this weird journey into Apple's proprietary **** (again). But at the same time I don't think it ends well. And I'm not sure it is what we all think it is. I'm even going to go as far as to say - I don't think this game is played fair.
I think Apple's been preparing for this transition for at least two or three macOS cycles. I think part of the stunning success of this random iPad chip with bottlenecks everywhere may be the result of a long process of artificially slowing down older machines on the Intel architecture and violently killing off genuine dGPU acceleration just so those 'pro' M1 perks in very specific apps look really good.
I don't think it's a coincidence that third-party drivers and CUDA have been 'murdered' in shady circumstances. I don't think it's a coincidence that Big Sur runs like **** on Intels. With enough time, a persistent bunch of curious anoraks on internet forums might even figure out some really odd things happening under the bonnet. Like 'legacy' CPUs running in a state of constant thermal throttling while doing basic things on Big Sur. And just how much better benchmarks can become if you take SMC control away from macOS (Hackintosh users are probably squinting their eyes at all those weird benchmark outcomes they just cannot replicate). And just how rarely this whole "oh my god, this $700 Mac mini beats my $10,000 iMac Pro" malarkey crops up when you use genuinely CPU-hungry tasks OUTSIDE of Apple's carefully selected programs - even simple tasks like converting something in ffmpeg using all cores with an ARM-optimised binary. But that's just my weird outlook on it.
I'm not particularly fussed about Apple going into Silicon; I'm more annoyed by the lack of software support so many months later. No one is coding for it. Even big corps are struggling. That means YEARS of misery for us editors.
 
Last edited:

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Interesting take. And yes. I mean "not Intel" (waves very wide circle pattern with hand) - not "x86" - not what 95% of the personal computer market wants in their machines.

It's interesting that you see the "industry heading toward Arm".

Personally, I'm not convinced they have 'it' with Silicon. Don't get me wrong - the M1 is lovely and endearing, I have an M1 Mac mini right in front of me and I'm happy to be part of this weird journey into Apple's proprietary **** (again). But at the same time I don't think it ends well. And I'm not sure it is what we all think it is. I'm even going to go as far as to say - I don't think this game is played fair.
I think Apple's been preparing for this transition for at least two or three macOS cycles. I think part of the stunning success of this random iPad chip with bottlenecks everywhere may be the result of a long process of artificially slowing down older machines on the Intel architecture and violently killing off genuine dGPU acceleration just so those 'pro' M1 perks in very specific apps look really good.
I don't think it's a coincidence that third-party drivers and CUDA have been 'murdered' in shady circumstances. I don't think it's a coincidence that Big Sur runs like **** on Intels. With enough time, a persistent bunch of curious anoraks on internet forums might even figure out some really odd things happening under the bonnet. Like 'legacy' CPUs running in a state of constant thermal throttling while doing basic things on Big Sur. And just how much better benchmarks can become if you take SMC control away from macOS (Hackintosh users are probably squinting their eyes at all those weird benchmark outcomes they just cannot replicate). And just how rarely this whole "oh my god, this $700 Mac mini beats my $10,000 iMac Pro" malarkey crops up when you use genuinely CPU-hungry tasks OUTSIDE of Apple's carefully selected programs - even simple tasks like converting something in ffmpeg using all cores with an ARM-optimised binary. But that's just my weird outlook on it.
I'm not particularly fussed about Apple going into Silicon; I'm more annoyed by the lack of software support so many months later. No one is coding for it. Even big corps are struggling. That means YEARS of misery for us editors.


That’s a lot of conspiracy theories, but we know the IPC, the memory bandwidth, the issue width, the power consumption, etc. for M1. Those aren’t fake. And tons of third-party apps run MUCH faster on M1 than on any other machine burning 3x the power. M1 is a very good chip. It was predictable, though - Apple’s A-series chips told us what M1 could do, and not having to deal with x86 garbage (and I say that as someone who invented some of x86-64’s garbage) and being able to control the compiler and the OS gives them an added advantage over Qualcomm- and Nvidia-based Arm systems.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,229
Interesting take. And yes. I mean "not Intel" (waves very wide circle pattern with hand) - not "x86" - not what 95% of the personal computer market wants in their machines.

It's interesting that you see the "industry heading toward Arm".

Personally, I'm not convinced they have 'it' with Silicon. Don't get me wrong - the M1 is lovely and endearing, I have an M1 Mac mini right in front of me and I'm happy to be part of this weird journey into Apple's proprietary **** (again). But at the same time I don't think it ends well. And I'm not sure it is what we all think it is. I'm even going to go as far as to say - I don't think this game is played fair.
I think Apple's been preparing for this transition for at least two or three macOS cycles. I think part of the stunning success of this random iPad chip with bottlenecks everywhere may be the result of a long process of artificially slowing down older machines on the Intel architecture and violently killing off genuine dGPU acceleration just so those 'pro' M1 perks in very specific apps look really good.
I don't think it's a coincidence that third-party drivers and CUDA have been 'murdered' in shady circumstances. I don't think it's a coincidence that Big Sur runs like **** on Intels. With enough time, a persistent bunch of curious anoraks on internet forums might even figure out some really odd things happening under the bonnet. Like 'legacy' CPUs running in a state of constant thermal throttling while doing basic things on Big Sur. And just how much better benchmarks can become if you take SMC control away from macOS (Hackintosh users are probably squinting their eyes at all those weird benchmark outcomes they just cannot replicate). And just how rarely this whole "oh my god, this $700 Mac mini beats my $10,000 iMac Pro" malarkey crops up when you use genuinely CPU-hungry tasks OUTSIDE of Apple's carefully selected programs - even simple tasks like converting something in ffmpeg using all cores with an ARM-optimised binary. But that's just my weird outlook on it.
I'm not particularly fussed about Apple going into Silicon; I'm more annoyed by the lack of software support so many months later. No one is coding for it. Even big corps are struggling. That means YEARS of misery for us editors.

Sigh ... again, no, the architecture is not Apple-proprietary; they’re using Arm. I’m not sure you understand the meaning of proprietary. They are using a proprietary uarch, but user apps don’t care about that except for how fast it is - and yes, the M1 is quite fast.

No, Apple didn’t deliberately cripple their previous Intel Macs or GPUs to make the M1 look good. Intel screwed up their process royally and their designs faltered, failing to deliver on their roadmaps for years. This was true for both PC and Mac. Sadly, during those same years AMD GPUs weren’t as good as their Nvidia counterparts, especially in the power department, which matters because Apple prioritizes thin and light designs. Thin and light designs require power efficiency - for battery life in a laptop and for thermals in every other kind of Mac save the Mac Pro. Neither Intel nor AMD could deliver on that in those years. AMD’s recent designs can, but Apple has, at the moment, a better design.

We know Apple has a better design because third-party reviewers have tested it, and its M1 CPU and GPU are the best in their weight class.

Software porting for corner cases takes time - software that utilizes ... how to put this charitably ... the quirkiness of x86 will need to be rewritten. For most software it’s a simple recompile. It’s been 8 months and many programs and plugins have indeed been ported. It’ll take time for others, and yes, some unmaintained legacy code may never be ported.
 

v0n

macrumors regular
Mar 1, 2009
106
60
That’s a lot of conspiracy theories, but we know the IPC, the memory bandwidth, the issue width, the power consumption, etc. for M1. Those aren’t fake. And tons of third-party apps run MUCH faster on M1 than on any other machine burning 3x the power. M1 is a very good chip. It was predictable, though - Apple’s A-series chips told us what M1 could do, and not having to deal with x86 garbage (and I say that as someone who invented some of x86-64’s garbage) and being able to control the compiler and the OS gives them an added advantage over Qualcomm- and Nvidia-based Arm systems.
That's why I don't like discussing this stuff, because everyone immediately sums it up as "nice conspiracy". It isn't, really. You can run the tests yourself or browse those artificial benchmarks, Geekbench etc., and very quickly discover that once it's benched on macOS outside of Apple's control - be it Hackintoshes or similar - the M1 doesn't actually do that much better than 3-year-old i5s. Which is still OK and fine for what the M1 is.

The M1 IS a very good chip. For what it was designed for. It's just... what we are doing, what internet reviewers are doing, is painting a bit of an... unrealistic legend around a very basic ARM chip. Let's stop with the stupid "beats $10k Pro machine" headlines. It doesn't. Not unless all you do on it is run Geekbench all day. It's a cool chip. But just an iPad chip. And it's not like people browsed the net on a 2018 i5 and thought - "oh my god, this is so slow, I just wish Apple would switch to some exotic architecture and bolloxed it up a bit".
The tradeoff of this unrequested revolution at the moment is a number of particularly nasty artificial limitations and drawbacks. Current Silicon has memory limits, crippled buses, features are artificially stripped, and the system itself is also completely non-upgradable and irreparable. And we lost a lot of really good **** for Pros - like CUDA, like external dGPUs, like multiple screens - in the process of climbing this hill.
If we are also going to lose native pro software titles, this will pretty much be the SGI scenario. It will be the same "so what if it looks pretty and runs benchmarks faster - WTF do you use it for" we've seen way too many times before.
Let's enjoy our 13" MacBook "Pro"s, Mac minis, small iMacs - you know, the "Chromebooks" of the Apple world - but let's not go overboard with the praise. Just yet. Let's keep it realistic and down to earth?
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,229
That's why I don't like discussing this stuff, because everyone immediately sums it up as "nice conspiracy". It isn't, really. You can run the tests yourself or browse those artificial benchmarks, Geekbench etc., and very quickly discover that once it's benched on macOS outside of Apple's control - be it Hackintoshes or similar - the M1 doesn't actually do that much better than 3-year-old i5s. Which is still OK and fine for what the M1 is.

The M1 IS a very good chip. For what it was designed for. It's just... what we are doing, what internet reviewers are doing, is painting a bit of an... unrealistic legend around a very basic ARM chip. Let's stop with the stupid "beats $10k Pro machine" headlines. It doesn't. Not unless all you do on it is run Geekbench all day. It's a cool chip. But just an iPad chip. And it's not like people browsed the net on a 2018 i5 and thought - "oh my god, this is so slow, I just wish Apple would switch to some exotic architecture and bolloxed it up a bit".
The tradeoff of this unrequested revolution at the moment is a number of particularly nasty artificial limitations and drawbacks. Current Silicon has memory limits, crippled buses, features are artificially stripped, and the system itself is also completely non-upgradable and irreparable. And we lost a lot of really good **** for Pros - like CUDA, like external dGPUs, like multiple screens - in the process of climbing this hill.
If we are also going to lose native pro software titles, this will pretty much be the SGI scenario. It will be the same "so what if it looks pretty and runs benchmarks faster - WTF do you use it for" we've seen way too many times before.
Let's enjoy our 13" MacBook "Pro"s, Mac minis, small iMacs - you know, the "Chromebooks" of the Apple world - but let's not go overboard with the praise. Just yet. Let's keep it realistic and down to earth?

No, *this* is the problem with discussing such things on the internet. People believe silly conspiracy theories not because they have any validity but because they don’t understand the subject matter and the conspiracy theory validates their world view. They then get defensive when called out, because most people would rather believe a validating lie than the truth, and confronting them with evidence only reinforces the lie. Let’s see if you’re one of the few who can overcome this pathological but all-too-human trait.

No, Apple did not cripple macOS for Intel chips. We can indeed run the tests and they either don’t show a difference or ... if they do, not for the reasons you put forth. The reason Hackintoshes can *sometimes* outperform their similarly specced OEM builds is almost always larger, bulkier cases and more powerful cooling. With increased thermal mass and cooling the chips can run hotter for longer ... thus less throttling. This is especially crucial for Intel chips (and especially the recent ones), which have very poor thermals.

When people talk about the M1 beating powerful workstations they are referring to single-threaded and lightly threaded applications (i.e. most applications people run). Yes, i5s from several years ago can indeed beat the M1 in heavily multithreaded applications ... but only when power and thermals are no object, as they draw much, much more power to attain their performance. That’s true for every chip on the planet. More powerful M1s with more cores are on the way. These are only the base models, with the lowest power draw.

We effectively lost CUDA years and years ago. I know, I do CUDA development and that sucks, but that would’ve been true with or without the M1. *Most* of the other limitations are not actually limitations for the models of Mac they’re replacing (*some* are indeed due to the M1). Those machines were also limited. Again, more powerful M1s are coming.

EDIT:


In your opinion - do you think this time around ('cause Apple's been down that road once before and almost collapsed in the process before Jobs switched them to Intel at the last minute) it is going to be any different for Apple and their silicon?

Your original post was so flawed that I missed this, but this is wrong too. You've got the history and causality all screwed up. The G3 and G4 iMacs are what helped save Apple from bankruptcy and were some of the best-selling computers of their era. The switch to Intel was *8 years* after Jobs came back and was mostly because IBM no longer had any interest in continuing the PPC roadmap for PCs, especially not for laptops. They just wanted to design server/HPC chips. The G5 was too power hungry for laptops and IBM didn't really want to design a version that had better thermals. It was not because Apple was still struggling - heck, it was 4 years after the huge success of the iPod! At the time, some analysts even predicted that the switch would be catastrophically bad for Apple given the advantages of the G-series chips, and Intel in particular was seen as underperforming relative to AMD in x86 at that point (Core was only just coming out).
 
Last edited:
  • Like
Reactions: jdb8167

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
That's why I don't like discussing this stuff, because everyone immediately sums it up as "nice conspiracy". It isn't, really. You can run the tests yourself or browse those artificial benchmarks, Geekbench etc., and very quickly discover that once it's benched on macOS outside of Apple's control - be it Hackintoshes or similar - the M1 doesn't actually do that much better than 3-year-old i5s. Which is still OK and fine for what the M1 is.

The M1 IS a very good chip. For what it was designed for. It's just... what we are doing, what internet reviewers are doing, is painting a bit of an... unrealistic legend around a very basic ARM chip. Let's stop with the stupid "beats $10k Pro machine" headlines. It doesn't. Not unless all you do on it is run Geekbench all day. It's a cool chip. But just an iPad chip. And it's not like people browsed the net on a 2018 i5 and thought - "oh my god, this is so slow, I just wish Apple would switch to some exotic architecture and bolloxed it up a bit".
The tradeoff of this unrequested revolution at the moment is a number of particularly nasty artificial limitations and drawbacks. Current Silicon has memory limits, crippled buses, features are artificially stripped, and the system itself is also completely non-upgradable and irreparable. And we lost a lot of really good **** for Pros - like CUDA, like external dGPUs, like multiple screens - in the process of climbing this hill.
If we are also going to lose native pro software titles, this will pretty much be the SGI scenario. It will be the same "so what if it looks pretty and runs benchmarks faster - WTF do you use it for" we've seen way too many times before.
Let's enjoy our 13" MacBook "Pro"s, Mac minis, small iMacs - you know, the "Chromebooks" of the Apple world - but let's not go overboard with the praise. Just yet. Let's keep it realistic and down to earth?

But it DOES very well against 3 year old M5s. Of course I can take any chip and run it at 200W and beat it. What’s that tell us? Nothing.
 
  • Like
Reactions: dmccloud

Joelist

macrumors 6502
Jan 28, 2014
463
373
Illinois
Neither the M1 nor the A-series chips it is a superset of are "basic ARM chips". Firestorm and Icestorm are the current iterations of Apple's in-house microarchitecture. They use (and expand on a lot) the ARM ISA but the microarchitecture is unique and is not in any way a derivative of Cortex. For years it has been far and away the fastest SoC microarchitecture out there, and now it is proving to be pretty much the same for laptops and desktops.

As cmaier said, we know the characteristics of the M1. And it is superb, whether some want to admit it or not. It is ridiculously efficient because it does a LOT more work per clock cycle than anything else out there. To beat it in single thread you need extremely high-end Core i9s / Threadrippers with 5x the power draw and heat. It even performs extremely well multithreaded, only stumbling a little when you have very high thread counts (and to be honest, above a certain level multithreaded scenarios become corner cases).

M1 is the Apple equivalent of Intel Core Y in terms of the role it plays, which is for fanless and small form factor lower end systems. The M1X (or M2, or whatever it's called) will scale up performance core counts, GPU cores, cache and controllers for the higher-end use cases it is supposed to support (like the multiple screens I've really never felt the need for).

eGPU is another corner case and again scaling up the GPU core counts addresses the use case better than screwing with the base architecture.

Let's see. macOS is NOT crippled on Intel Macs. I run Big Sur on my Intel MBP 16 and it flies, nice and fast, and does everything well.

And as to CUDA ... CUDA is just Nvidia's attempt at achieving vendor lock-in. The Apple Silicon SoC architecture removes the need for CUDA, with its interoperating CPU, GPU and ML blocks using the same memory with no copying.
 

Joelist

macrumors 6502
Jan 28, 2014
463
373
Illinois
But it DOES very well against 3 year old M5s. Of course I can take any chip and run it at 200W and beat it. What’s that tell us? Nothing.
I gather someone overclocked an i5 to some ridiculous level and beat an M1 with it? Shades of me in my misspent past clocking a Pentium 4 up to something like 6.3 GHz and beating a dual-core Athlon II with it. Sure, it was completely unstable and throttled after about ten seconds, but I beat it :D
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,229
And as to CUDA ... CUDA is just Nvidia's attempt at achieving vendor lock-in. The Apple Silicon SoC architecture removes the need for CUDA, with its interoperating CPU, GPU and ML blocks using the same memory with no copying.

Yes and no. Yes, Nvidia achieves vendor lock-in (which they like ... so does Apple), but it is also the nicest, best supported/documented GPU compute language, and it isn’t close. They would claim, like Apple, that control over the hardware and software stack gives them the greatest flexibility to add features and capabilities, which, especially when the language and GPU compute were young and growing, was a key aspect of its success. There’s a reason why ML and HPC are so CUDA-heavy, and it isn’t just lock-in.

This is not a diss on metal or Apple’s GPU/NPU/ML-CPU hardware. Both are great and on the desktop we’re only seeing the beginnings. So exciting times ahead.

They use (and expand on a lot) the ARM ISA but the microarchitecture is unique and is not in any way a derivative of Cortex.

Apple’s expansion to the ARM ISA is for matrix multiplication/machine learning and the instructions are not public. I believe they have been reverse engineered but they’re only supposed to be accessible through Apple’s APIs. But yes the uarch is wholly Apple designed and is indeed great. Unsure if you’re referencing me here, but for myself I was pushing back against the notion that this was some unique proprietary Apple ISA chip where developers somehow don’t know how to code or optimize for its instructions (the implication of the poster I responded to). It isn’t, it runs standard ARM code. It’s the uarch that really differentiates it.
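To illustrate what "accessible through Apple's APIs" looks like in practice, here's a minimal sketch (my own illustration, not anything from Apple's documentation of the matrix unit): a tiny single-precision matrix multiply done in C through the Accelerate framework's BLAS interface. Whether a call like this ends up on the undocumented matrix hardware is an Apple implementation detail; the code only ever talks to the public API.

/* Minimal sketch: 2x2 single-precision matrix multiply via Accelerate's BLAS.
   Build on macOS with: clang matmul.c -framework Accelerate -o matmul */
#include <Accelerate/Accelerate.h>
#include <stdio.h>

int main(void) {
    float a[4] = {1, 2, 3, 4};   /* 2x2 matrix A, row-major */
    float b[4] = {5, 6, 7, 8};   /* 2x2 matrix B, row-major */
    float c[4] = {0};            /* result C */

    /* C = 1.0 * A * B + 0.0 * C */
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2, 1.0f, a, 2, b, 2, 0.0f, c, 2);

    printf("%.0f %.0f\n%.0f %.0f\n", c[0], c[1], c[2], c[3]);
    return 0;
}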
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,522
19,679
And because most of the miraculous speed advantages of Silicon at this stage are driven by software optimisation, not hardware advances, the moment it has to go back to a layer that needs physical memory and raw power, it's all out of the window.

If you need a lot of physical memory, sure, that's not what these chips are designed for. In regard to raw power, however: in any general-purpose workload the M1 performs exceedingly well. It's not software, it's hardware. Apple simply has better branch predictors, more execution units, lower latencies, larger out-of-order execution windows.

And obviously, the software has to adapt to the new hardware technologies of the ARM chips if you want to take advantage of the chip's capabilities efficiently. It's not any different for any other ARM or x86 CPU. Need vector processing? Well, then you have to design your algorithms with platform technologies in mind. This is not unique to the M1. You have the same dilemma with technologies like AVX-512 on the x86 platform or with things like mesh shaders on the GPU.
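As a concrete (and deliberately tiny) sketch of what "design your algorithms with platform technologies in mind" means at the lowest level, here is the same element-wise add written in C against NEON on ARM and AVX on x86, with a scalar fallback. This is my own illustration - I'm using plain AVX rather than AVX-512 to keep it short, and in a case this simple a compiler's auto-vectorizer would often do the work for you anyway:

/* Element-wise float add: one algorithm, two platform-specific vector paths. */
#include <stddef.h>

#if defined(__ARM_NEON)
#include <arm_neon.h>
#elif defined(__AVX__)
#include <immintrin.h>
#endif

void add_floats(const float *a, const float *b, float *out, size_t n) {
    size_t i = 0;
#if defined(__ARM_NEON)
    for (; i + 4 <= n; i += 4) {          /* 4 floats per 128-bit NEON register */
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(out + i, vaddq_f32(va, vb));
    }
#elif defined(__AVX__)
    for (; i + 8 <= n; i += 8) {          /* 8 floats per 256-bit AVX register */
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
#endif
    for (; i < n; i++)                    /* scalar tail / fallback */
        out[i] = a[i] + b[i];
}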

That of course doesn't mean the idea is bad, or the architecture is bad; it's just very limited in this generation, because it's still just a 'ported' iPad chip that's been severely crippled - in its current form it can't give you more memory or more IO, half of TB3 is removed, USB4 is not really that, etc., etc.

Well, this crippled chip still performs as well as chips using much more power, so I am not sure what you are trying to say here. Its capabilities are adequate for the products it is designed for. The RAM limitation is a pragmatic choice to reduce costs and improve manufacturing volume. It has two Thunderbolt controllers, just like any Intel MacBook Pro out there, with the main difference that Apple's controllers are properly isolated and less vulnerable. It's more than enough for a MacBook Air, not so much for a 16" MacBook Pro — which is why Apple does not use M1 chips in the prosumer machines.



I think Apple's been preparing for this transition for at least two or three macOS cycles. I think part of the stunning success of this random iPad chip with bottlenecks everywhere may be the result of a long process of artificially slowing down older machines on the Intel architecture and violently killing off genuine dGPU acceleration just so those 'pro' M1 perks in very specific apps look really good.

This is where it gets a bit odd. Intel CPUs perform just as well in Apple machines running Apple's OS as the same CPUs in machines of different branding. What is this artificial slowing down you are talking about?


I don't think it's a coincidence that Big Sur runs like **** on Intels. With enough time, a persistent bunch of curious anoraks on internet forums might even figure out some really odd things happening under the bonnet. Like 'legacy' CPUs running in a state of constant thermal throttling while doing basic things on Big Sur.

I have two Intel machines at home that run Big Sur. One of them is my main work driver. It doesn't thermally throttle and overall performs just as you'd expect an i9 with those specs to perform. Nor have I seen any mentions of widespread issues with Intel CPUs and Big Sur. So again, not sure what you are talking about.


I don't think it's a coincidence that third-party drivers and CUDA have been 'murdered' in shady circumstances.

I definitely agree that you are right on this one, even if I would word it differently. It is very clear that Apple removed Nvidia from their platform because they want users to use Metal. Currently, Apple's GPU technology outperforms AMD or Nvidia by a factor of 2-3 at the same power usage, but their tech is sufficiently different from mainstream GPUs. The chances of devs successfully adopting Apple's superior GPU technology increase if a parasitic technology like CUDA is not available. Sure, it makes things more annoying short-term, but it's a win for Mac users long-term.

That's why I don't like discussing this stuff, because everyone immediately sums it up as "nice conspiracy". It isn't, really. You can run the tests yourself or browse those artificial benchmarks, Geekbench etc., and very quickly discover that once it's benched on macOS outside of Apple's control - be it Hackintoshes or similar - the M1 doesn't actually do that much better than 3-year-old i5s. Which is still OK and fine for what the M1 is.

Here is a random i5-10600K (a six-core Intel desktop CPU with a 125W TDP) Hackintosh I found on Geekbench:


Doesn't look that good compared to the M1... or let's have a look at a 3-year-old mobile i5 (Kaby Lake i5-7440HQ):

https://browser.geekbench.com/v5/cpu/search?utf8=✓&q=7440HQ+macOS

But wait, why not take something newer? Like the Tiger Lake? Here is a Tiger Lake (i7-1165G7) Hackintosh:

https://browser.geekbench.com/v5/cpu/search?utf8=✓&q=1165G7+macOS

So yeah, I am still a bit confused what you are trying to say here.

The M1 IS a very good chip. For what it was designed for. It's just... what we are doing, what internet reviewers are doing, is painting a bit of an... unrealistic legend around a very basic ARM chip. Let's stop with the stupid "beats $10k Pro machine" headlines. It doesn't. Not unless all you do on it is run Geekbench all day. It's a cool chip. But just an iPad chip. And it's not like people browsed the net on a 2018 i5 and thought - "oh my god, this is so slow, I just wish Apple would switch to some exotic architecture and bolloxed it up a bit".

It's faster at building software than my 8-core Intel i9 that uses 4-5 times more power, but sure... Besides, I am not sure why you would call the most ubiquitous CPU architecture on the planet "exotic".

If we are also going to lose native pro software titles, this will pretty much be the SGI scenario. It will be the same "so what if it looks pretty and runs benchmarks faster - WTF do you use it for" we've seen way too many times before.

Depends on your usage domain, I suppose. Of course, if you are relying on certain proprietary software and the developer of that software is not interested in serving the new Apple platform, then you are out of luck. For the rest of us, who are using widely available tools and need a very fast portable machine, Apple Silicon is literally a game changer. Already the M1 runs my data processing pipelines 20% faster than a much larger Intel machine. Upcoming Apple chips with more cores and higher power consumption will likely make it 100% faster. It's a big deal for me because it means I can get much more done much more quickly.

Now - I have a question for @cmaier - since he's experienced enough to remember giants like Cray or SGI and worked for Sun Microsystems. In the entire history of personal computers, workstations and desktops, do we have a good example of a computing giant that dived as deep into proprietary CPU tech as Apple did this time, including software that requires platform-specific coding across their entire range, and didn't end up with a Chapter 11 black eye in the long run? In your opinion - do you think this time around ('cause Apple's been down that road once before and almost collapsed in the process before Jobs switched them to Intel at the last minute) it is going to be any different for Apple and their silicon?

Yes, Microsoft. They did quite well.
 
Last edited:

dmccloud

macrumors 68040
Sep 7, 2009
3,146
1,902
Anchorage, AK
I worked at AMD for 10 years, designing a bunch of CPUs. (Plus a couple of other places - Sun, and Exponential Technology, which, if you don’t know about it, should be very interesting to folks interested in Apple)
Didn't Apple either acquire Exponential outright or bring in some of their talent? I know Srouji came from Intel, so they know how to put together a competent chip design team.
 

Joelist

macrumors 6502
Jan 28, 2014
463
373
Illinois
Apple raided Intel a lot, especially Intel Israel, which was the group that developed the Core and Core 2 microarchitectures.
 

Joelist

macrumors 6502
Jan 28, 2014
463
373
Illinois
Yes and no. Yes, Nvidia achieves vendor lock-in (which they like ... so does Apple), but it is also the nicest, best supported/documented GPU compute language, and it isn’t close. They would claim, like Apple, that control over the hardware and software stack gives them the greatest flexibility to add features and capabilities, which, especially when the language and GPU compute were young and growing, was a key aspect of its success. There’s a reason why ML and HPC are so CUDA-heavy, and it isn’t just lock-in.

This is not a diss on metal or Apple’s GPU/NPU/ML-CPU hardware. Both are great and on the desktop we’re only seeing the beginnings. So exciting times ahead.



Apple’s expansion to the ARM ISA is for matrix multiplication/machine learning and the instructions are not public. I believe they have been reverse engineered but they’re only supposed to be accessible through Apple’s APIs. But yes the uarch is wholly Apple designed and is indeed great. Unsure if you’re referencing me here, but for myself I was pushing back against the notion that this was some unique proprietary Apple ISA chip where developers somehow don’t know how to code or optimize for its instructions (the implication of the poster I responded to). It isn’t, it runs standard ARM code. It’s the uarch that really differentiates it.
Hi Dave!

I made the remark about the microarchitecture in response to the other poster, who was insinuating that Apple Silicon is just Cortex. I agree that Apple Silicon uses the same ISA - all of the ISA is there; Apple's version is just a superset of it.

The beauty of Apple Silicon is that you don't need a special language for it - Objective-C, Swift, C++, even all the Visual Studio-supported languages can be used, as the proper compilers are available.

My CUDA remark was just pointing out that the design of the M- and A-series SoCs, with their CPU, GPU, ML and other processors existing as blocks in a common memory architecture, basically removes the need for CUDA, which at its core is having a GPU do CPU work (GPGPU). In a world where you have a separate CPU and GPU (especially discrete GPUs) there was a logic behind it. In the Apple Silicon world it doesn't gain anything, because all of the needed resources are right there in the same space.
 
  • Like
Reactions: crazy dave

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
But it DOES very well against 3 year old M5s. Of course I can take any chip and run it at 200W and beat it. What’s that tell us? Nothing.
M5 - a bad line of mobile wannabe chips, yuck.

As for your second question, it tells me I can, and do, have a much faster PC than an M1 PC. That means something to me...
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
M1 is the Apple equivalent of Intel Core Y in terms of the role it plays, which is for fanless and small form factor lower end systems.
So you and others say, but they put it in an iMac and a MacBook Pro, which are none of those things. So is Apple designing it only for small form factor and fanless? No, definitely not - they designed it precisely for what they use it for (to fit all those roles).
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
I gather someone overclocked an i5 to some ridiculous level and beat an M1 with it? Shades of me in my misspent past clocking a Pentium 4 up to something like 6.3 GHz and beating a dual-core Athlon II with it. Sure, it was completely unstable and throttled after about ten seconds, but I beat it :D
Actually, a non-overclocked Intel Core i5-11600KF beats an M1 in multicore benchmarks by a good margin (and that isn't the only one).
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
So you and others say, but they put it in an iMac and a MacBook Pro, which are none of those things. So is Apple designing it only for small form factor and fanless? No, definitely not - they designed it precisely for what they use it for (to fit all those roles).
Describing the M1 as a Y-series Intel CPU is definitely wrong. But I don't think Intel or AMD have a SoC/CPU that fits in the same slot as the M1 does in Apple's lineup. Fast single-threaded performance with reasonable multi-threaded performance, at less than 20 W peak load. Basically, the M1's single-thread performance at such low power is something that has never existed before, so there can be no 1:1 analog.
 
  • Like
Reactions: Joelist

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
But I don't think Intel or AMD have a SoC/CPU that fits in the same slot as the M1 does in Apple's lineup.
I agree, they don't, even though they may try it in various devices, but they do have CPUs that fit all of those roles and more.

As for single threaded performance, that just doesn't do anything for me.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
So you and others say, but they put it in an iMac and a MacBook Pro, which are none of those things. So is Apple designing it only for small form factor and fanless? No, definitely not - they designed it precisely for what they use it for (to fit all those roles).

It’s designed for entry-level consumer devices. Nothing more and nothing less. Apple does not offer a wide range of SKUs that you can mix and match to your preferences. They design their chips to fit the products they want to build and sell. For this iteration, they have decided that a single chip is sufficient for a wide range of devices, from a passively cooled ultraportable to a compact all-in-one kitchen computer. This is a business decision, not a technical one. If they wanted, they could have offered a more refined range of SKUs. They decided against it.
 
  • Like
Reactions: Joelist and jdb8167

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
That 2021 Rocket Lake CPU outperforms the M1 by a small margin only (under 20%), which is particularly sad considering it is a six-core desktop 125W part (vs. a 15W 4-core M1).
The key, for me, is the "outperforms", not the 125W part. Did you know that a 125W part costs about 30 cents to run for a solid 24 hours here? Or $9 a month if running full out? I'm supposed to worry about that?
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
It’s designed for entry-level consumer devices. Nothing more and nothing less. Apple does not offer a wide range of SKUs that you can mix and match to your preferences. They design their chips to fit the products they want to build and sell. For this iteration, they have decided that a single chip is sufficient for a wide range of devices, from a passively cooled ultraportable to a compact all-in-one kitchen computer. This is a business decision, not a technical one. If they wanted, they could have offered a more refined range of SKUs. They decided against it.
That's pretty much what I said, in more complicated terms. :) I know they could do differently, but they didn't, yet.
 

v0n

macrumors regular
Mar 1, 2009
106
60
Mhmm... yes, those weird Apple-only obsessions - device anorexia at all costs, performance per watt to the point of destroying its entire third-party software support and app market, thinness to the point of not being able to fit proper fans (or better-than-potato cameras, or space for a USB plug), thinness at the cost of not having any height adjustment on the workstation monitor. You know - Apple stuff.

What did our Sun/AMD chip designer get himself suspended for?
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
The key, for me, is the "outperforms", not the 125W part. Did you know that a 125W part costs about 30 cents to run for a solid 24 hours here? Or $9 a month if running full out? I'm supposed to worry about that?

Well, the key for me is that I can put it in my messenger bag and use it on the go. Can’t really do that with a Rocket Lake :)
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Mhmm... yes, those weird Apple-only obsessions - device anorexia at all costs, performance per watt to the point of destroying its entire third-party software support and app market, thinness to the point of not being able to fit proper fans (or better-than-potato cameras, or space for a USB plug), thinness at the cost of not having any height adjustment on the workstation monitor. You know - Apple stuff.

Well, different folks have different fetishes. Some have an unhealthy obsession with building more efficient, elegant hardware. Others prefer to crank up the power and sell the resulting stovetop as a “revolutionary new product”. Pick your poison.
 