
mrkek

macrumors newbie
Jan 9, 2021
7
1
I guess you've never tried to uninstall CrossOver. There are a number of macOS programs that hide files in various directories, and dragging the app to the bin doesn't fully remove them.
Just install AppCleaner.
Drag the app onto it; it will find all the associated files and folders, and you can remove them all.
Simple!
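For the curious, this is roughly the kind of search such a cleaner performs under the hood. Here's a minimal Swift sketch; the bundle identifier, app name and folder list are illustrative assumptions, not AppCleaner's actual logic, and it only prints candidates rather than deleting anything.

import Foundation

// Look for leftovers in the usual per-user Library locations whose names
// mention the app. The identifier below is an example, not CrossOver's real one.
let needles = ["com.example.crossover", "CrossOver"]

let home = FileManager.default.homeDirectoryForCurrentUser
let libraryDirs = [
    "Library/Application Support",
    "Library/Caches",
    "Library/Preferences",
    "Library/Saved Application State",
].map { home.appendingPathComponent($0) }

for dir in libraryDirs {
    guard let items = try? FileManager.default.contentsOfDirectory(atPath: dir.path) else { continue }
    for item in items where needles.contains(where: { item.localizedCaseInsensitiveContains($0) }) {
        print(dir.appendingPathComponent(item).path)   // candidate leftover; deletion is up to you
    }
}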
 

Captain Trips

macrumors 68000
Jun 13, 2020
1,860
6,355
I agree with others who say they're excited about Apple Silicon and the M-series SoCs in particular, and in general about the innovation this can bring to the computer industry as a whole.

Especially if the other major players can up their game in addition to Apple. More widespread innovation and competition will help us all, regardless of what platform(s) we use.
 

David Hassholehoff

macrumors regular
Jul 26, 2020
122
90
The beach
Enjoy Dependency Hell!

And sh!tty power management performance on notebook computers!

And that world-class end user documentation quality that Linux is so well known for!
Well, to be fair, dependency hell became a non-issue once the various package managers were developed.
My issues with GNU/systemd today are, well, you can guess. And the inconsistent documentation.
So Mac/BSD is a good combination: Mac for desktop/laptop, and BSD wherever a GUI is a waste of resources.
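(As a toy illustration of why package managers made dependency hell mostly go away: the resolver computes an install order in which every package comes after its dependencies. A minimal Swift sketch, with made-up package names and no version or cycle handling:)

// Toy model: each package lists the packages it depends on.
let dependencies: [String: [String]] = [
    "wine": ["libX11", "freetype"],
    "freetype": ["zlib"],
    "libX11": [],
    "zlib": [],
]

// Depth-first topological sort: a package is appended only after
// everything it depends on has already been placed in the order.
func installOrder(for root: String) -> [String] {
    var visited = Set<String>()
    var order: [String] = []
    func visit(_ pkg: String) {
        guard visited.insert(pkg).inserted else { return }
        for dep in dependencies[pkg] ?? [] { visit(dep) }
        order.append(pkg)
    }
    visit(root)
    return order
}

print(installOrder(for: "wine"))  // ["libX11", "zlib", "freetype", "wine"]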
 

Maconplasma

Cancelled
Sep 15, 2020
2,489
2,215
Especially if the other major players can up their game in addition to Apple. More widespread innovation and competition will help us all, regardless of what platform(s) we use.
You're oversimplifying this. Apple has spent years creating these processors. A company (Intel, Samsung, etc.) doesn't just wake up the next morning and say "We're gonna do what Apple does," because it takes years of innovation, testing and OS optimization around those processors to reach performance that rivals Apple Silicon. While Intel has been busy making processors with new names that still run insanely hot and drain batteries, Apple has been busy building its next-generation OS and the M1, a chip with dedicated subsystems that handle many of the operations and the security of its computers. That takes years of innovation. Intel has spent too much time "marketing" rather than innovating, so I don't see them ever catching up as Apple continues to innovate further with the Mac.
 

Captain Trips

macrumors 68000
Jun 13, 2020
1,860
6,355
You're oversimplifying this. Apple has spent years creating these processors. A company (Intel, Samsung, etc.) doesn't just wake up the next morning and say "We're gonna do what Apple does," because it takes years of innovation, testing and OS optimization around those processors to reach performance that rivals Apple Silicon. While Intel has been busy making processors with new names that still run insanely hot and drain batteries, Apple has been busy building its next-generation OS and the M1, a chip with dedicated subsystems that handle many of the operations and the security of its computers. That takes years of innovation. Intel has spent too much time "marketing" rather than innovating, so I don't see them ever catching up as Apple continues to innovate further with the Mac.

No, I am not oversimplifying this. Not in any aspect.

I am well aware that Apple has a large and well-earned edge in the processor / SoC world. They invested wisely, did the research, hired and retained good people, and let them do their thing.

What I am saying is that I agree that competition and innovation among the various tech companies is good. And there are more areas of innovation than SoC / CPU / GPU.

And I absolutely want to see Apple keep innovating and staying at or near the top, because I love the Apple devices I use now and have used in the past, and I want to continue using the things Apple will make in the future.
 

Captain Trips

macrumors 68000
Jun 13, 2020
1,860
6,355
Well, to be fair, dependency hell became a non-issue once the various package managers were developed.
My issues with GNU/systemd today are, well, you can guess. And the inconsistent documentation.
So Mac/BSD is a good combination: Mac for desktop/laptop, and BSD wherever a GUI is a waste of resources.
This is why I switched over to a G3 iBook with Mac OS X in the early 2000s. I had been using Linux exclusively, and was happy with it, but I really liked the BSD foundation & command line access, along with the polished (in my opinion) Mac OS X GUI.
 

Maconplasma

Cancelled
Sep 15, 2020
2,489
2,215
No, I am not oversimplifying this. Not in any aspect.

I am well aware that Apple has a large and well-earned edge in the processor / SoC world. They invested wisely, did the research, hired and retained good people, and let them do their thing.

What I am saying is that I agree that competition and innovation among the various tech companies is good. And there are more areas of innovation than SoC / CPU / GPU.

And I absolutely want to see Apple keep innovating and staying at or near the top, because I love the Apple devices I use now and have used in the past, and I want to continue using the things Apple will make in the future.
That's not what I was replying to in your post. I was referring to you saying that Apple's M1 chips will create competition and push other companies to do better. It doesn't work that way, and that's why you're oversimplifying things. Companies like Intel and AMD have a very large customer base that they may never lose to Apple. That reason alone is enough to stall innovation at Intel, AMD and Samsung. It's also the same reason Lenovo has never innovated, and never will, on those tired ThinkPads. They are ugly and boring, with thick bezels, poor thermals, build-quality issues and bad battery life. Lenovo knows it has an audience of lemmings who will keep buying ThinkPads even though they will never get any better.
 

Captain Trips

macrumors 68000
Jun 13, 2020
1,860
6,355
That's not what I was replying to in your post. I was referring to you saying that Apple's M1 chips will create competition and push other companies to do better. It doesn't work that way, and that's why you're oversimplifying things. Companies like Intel and AMD have a very large customer base that they may never lose to Apple. That reason alone is enough to stall innovation at Intel, AMD and Samsung. It's also the same reason Lenovo has never innovated, and never will, on those tired ThinkPads. They are ugly and boring, with thick bezels, poor thermals, build-quality issues and bad battery life. Lenovo knows it has an audience of lemmings who will keep buying ThinkPads even though they will never get any better.
We will have to agree to disagree.

That isn't to say I disagree with the points you are making; far from it. I do, however, disagree strongly with your interpretation of what I am saying.
 

dmccloud

macrumors 68040
Sep 7, 2009
3,142
1,899
Anchorage, AK
We will have to agree to disagree.

That isn't to say I disagree with the points you are making; far from it. I do, however, disagree strongly with your interpretation of what I am saying.

There are a few key reasons why people are taking issue with your arguments regarding competition in the market. Even in the best-case scenario, Intel and AMD are a minimum of two years out from releasing new CPUs that could truly compete with the M1. However, Apple will already be on its M2 or M3 SoC by that point, setting the bar even higher. In essence, both Intel and (to a lesser extent) AMD will be aiming at a moving target regardless of whether they stick with x86 or try to create their own ARM-based processors.

One of the key limitations of the x86 architecture is that machine-level instructions can vary in length, which means the decode units basically have to check every point in the byte stream to determine whether a new instruction has begun. This is the biggest reason AMD has stated that four decoders is the upper limit for x86: any more and the complexity increases exponentially. The M1 has three major advantages over x86 in this area. First, instructions are a fixed length, so the decoders know exactly where each instruction starts instead of testing and rejecting every byte position that doesn't begin a new instruction. Second, Apple went wide with the decoders in the M1 (8 vs. 4), so the M1 can decode twice as many instructions per clock cycle as current x86 CPUs. Third, Apple has embraced out-of-order execution with the M1, which means the core can "park" an instruction whose inputs aren't ready yet and keep working on later, independent instructions. It is also widely accepted that the M1 is a very scalable (and very wide, superscalar) design, which means it would be relatively simple for Apple to release a desktop-class chip for the iMac and Mac Pro with anywhere between 8 and 32 performance cores and an even wider set of decoders.
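A toy way to see why fixed-width decode parallelizes so naturally (the encodings below are invented for the example, not real ARM or x86 ones): with a fixed 4-byte width every decoder can jump straight to byte offset 4*i on its own, while with variable lengths the start of instruction i is only known after every earlier instruction has been sized.

// Invented byte stream and encodings, purely for illustration.
let code: [UInt8] = [0x10, 0, 0, 0,  0x21, 7,  0x10, 0, 0, 0,  0x21, 9]

// Fixed 4-byte instructions: all start offsets are known up front,
// so eight decoders can each grab their own slot in parallel.
func fixedWidthStarts(count: Int) -> [Int] {
    return (0..<count).map { $0 * 4 }
}

// Variable-length instructions: the length depends on the opcode, so the
// stream has to be walked sequentially (or decoded speculatively) to find
// where each instruction begins.
func variableLengthStarts(_ bytes: [UInt8]) -> [Int] {
    var starts: [Int] = []
    var offset = 0
    while offset < bytes.count {
        starts.append(offset)
        // Toy rule: opcode 0x10 is a 4-byte instruction, anything else is 2 bytes.
        offset += (bytes[offset] == 0x10) ? 4 : 2
    }
    return starts
}

print(fixedWidthStarts(count: 3))   // [0, 4, 8]
print(variableLengthStarts(code))   // [0, 4, 6, 10], found one after another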

Here is the situation both AMD and Intel are facing: x86 is really reaching the end of its usefulness (its lineage goes back to the 1970s), and decades of grafting new features onto the original x86 ISA and architecture have created an extremely complex CPU that is a hodgepodge of old and new technologies. Meanwhile, Apple licensed the ARM ISA but builds its own silicon: the chips run the ARM instruction set, while the actual processing cores are designed to Apple's own specifications. This is a big departure from Qualcomm's or Samsung's (Exynos) ARM development, since they license both the ISA and the core designs from ARM and are consequently limited by the available core designs. This is why the M1 runs Windows on ARM significantly faster than the SQ1/SQ2 used in Microsoft's Surface Pro X. Apple's M1 GPU is also more powerful than even Intel's new Xe iGPUs used in the 11th-generation Core series, and it shares the same scalability and design philosophy as the M1's CPU cores.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
One of the key limitations of the x86 architecture is that machine-level instructions can vary in length, which means the decode units basically have to check every point in the byte stream to determine whether a new instruction has begun. This is the biggest reason AMD has stated that four decoders is the upper limit for x86: any more and the complexity increases exponentially. The M1 has three major advantages over x86 in this area. First, instructions are a fixed length, so the decoders know exactly where each instruction starts instead of testing and rejecting every byte position that doesn't begin a new instruction. Second, Apple went wide with the decoders in the M1 (8 vs. 4), so the M1 can decode twice as many instructions per clock cycle as current x86 CPUs. Third, Apple has embraced out-of-order execution with the M1, which means the core can "park" an instruction whose inputs aren't ready yet and keep working on later, independent instructions. It is also widely accepted that the M1 is a very scalable (and very wide, superscalar) design, which means it would be relatively simple for Apple to release a desktop-class chip for the iMac and Mac Pro with anywhere between 8 and 32 performance cores and an even wider set of decoders.

I think you might be simplifying these issues somewhat. While decoding variable-length instructions is definitely less straightforward, it doesn't have to be as big a drawback as you make it sound. In practice, the decoder looks at an entire chunk of code at once and analyses the relevant bit patterns to detect where one instruction ends and the next begins; there are very fast hardware solutions to this problem. Not to mention that modern CPUs cache decoder output, so in hot loops you don't need to do any decoding at all. Out-of-order execution is not exclusive to the M1 either: x86 CPUs have been superscalar and out-of-order for well over two decades, which is the main reason behind their speed. And out-of-order execution is a tricky beast in itself, as instructions often depend on the results of other instructions; it's not like you can just execute an arbitrary number of instructions ahead of time, you have to wait until the required data becomes available. The existing number of instruction decoders in modern x86 CPUs (4 to 5, experts seem to disagree) is most likely a pragmatic limit beyond which the architects didn't see any meaningful performance increase.

I think it's still a bit of a puzzle how Apple's CPUs are so fast given their relatively low clock. In the end, it's probably a combination of very large caches, excellent branch prediction, smart prefetching and some sort of secret sauce that lets them extract more instruction-level parallelism from the code. But I doubt this has much to do with ARM vs. x86; it's more that Apple's engineers have found solutions that Intel and AMD have missed. And of course, let's not forget that mainstream CPU design has for a long time been about making chips run as fast as possible (high frequencies), where Apple took a different route. But it's not like they can arbitrarily scale up the width of their architecture either: at some point you won't be able to feed all those execution units anymore.
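To make the instruction-level-parallelism point a bit more concrete, here's a small Swift sketch (my own illustration, nothing Apple-specific): the first loop is one long dependency chain, so extra decoders and ALUs can't help, while the second splits the same sum into independent chains that a wide out-of-order core can run side by side. (Build without aggressive optimization if you try it, since the compiler may vectorize the simple loop on its own.)

let data = (1...1_000_000).map { Double($0) }

// One long dependency chain: every addition needs the previous result,
// so the width of the core barely matters here.
func serialSum(_ xs: [Double]) -> Double {
    var s = 0.0
    for x in xs { s += x }
    return s
}

// Four independent accumulators: these additions don't depend on each other,
// so an out-of-order core can keep several ALUs busy at once.
func unrolledSum(_ xs: [Double]) -> Double {
    var s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0
    var i = 0
    while i + 3 < xs.count {
        s0 += xs[i]; s1 += xs[i + 1]; s2 += xs[i + 2]; s3 += xs[i + 3]
        i += 4
    }
    while i < xs.count { s0 += xs[i]; i += 1 }
    return (s0 + s1) + (s2 + s3)
}

print(serialSum(data), unrolledSum(data))  // same value, very different ILP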

Here is the situation both AMD and Intel are facing: x86 is really reaching the end of its usefulness (its lineage goes back to the 1970s), and decades of grafting new features onto the original x86 ISA and architecture have created an extremely complex CPU that is a hodgepodge of old and new technologies. Meanwhile, Apple licensed the ARM ISA but builds its own silicon: the chips run the ARM instruction set, while the actual processing cores are designed to Apple's own specifications. This is a big departure from Qualcomm's or Samsung's (Exynos) ARM development, since they license both the ISA and the core designs from ARM and are consequently limited by the available core designs. This is why the M1 runs Windows on ARM significantly faster than the SQ1/SQ2 used in Microsoft's Surface Pro X. Apple's M1 GPU is also more powerful than even Intel's new Xe iGPUs used in the 11th-generation Core series, and it shares the same scalability and design philosophy as the M1's CPU cores.

There is no doubt that x86 is a very old and cumbersome ISA. And while many experts argue that the ISA doesn't matter much, I think it makes perfect sense that it would be easier to design a high-performance architecture for a modern, designed-from-scratch ISA such as AArch64, one built on decades of best practice. Still, x86 CPUs are as fast as ever; it's just that the current state-of-the-art implementations are not energy efficient. Or at least, we thought they were as fast as they could be, until Apple came along :)

As to the GPU, Apple has a huge advantage because they use a much more efficient TBDR approach to rendering. No matter how efficient Intel or others get with their GPUs, no optimization is better than simply doing less work altogether.
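A crude way to picture what "doing less work" buys (a toy model of a single tile, not how any real GPU is implemented): with three opaque layers drawn back to front over the same pixels, an immediate-mode renderer shades every layer, while a tile-based deferred renderer resolves visibility first and shades each pixel once.

// Toy model: a 32x32 tile covered by three opaque full-tile "layers",
// each with a depth (smaller = closer to the camera).
struct Layer { let depth: Double; let color: Int }

let tileWidth = 32, tileHeight = 32
let layers = [Layer(depth: 0.9, color: 1),   // drawn first (farthest)
              Layer(depth: 0.5, color: 2),
              Layer(depth: 0.1, color: 3)]   // drawn last (closest)

// Immediate-mode style: shade every fragment of every layer as it arrives.
var immediateShades = 0
for _ in layers { immediateShades += tileWidth * tileHeight }

// TBDR style: per pixel, resolve which layer is visible first,
// then run the (expensive) shading only for that survivor.
var deferredShades = 0
for _ in 0..<(tileWidth * tileHeight) {
    let visible = layers.min(by: { $0.depth < $1.depth })!  // hidden-surface removal
    _ = visible.color                                       // shade only this fragment
    deferredShades += 1
}

print(immediateShades, deferredShades)  // 3072 vs 1024 shader invocations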


 

neinjohn

macrumors regular
Nov 9, 2020
107
70
Things may flip over the next 5 years.

Supposedly a lot of developers jumped ship or got tired of Apple from 2015 onwards, while it was stripping things out to pave the way for Apple Silicon. If Apple carries the key design decisions of the M1 over to all the other M chips to come, it can provide a stable base for developers.

Meanwhile, the PC world could get super, super messy and fragmented if everybody gets their hardware "strategy" on point, because we would go from a simple Intel-Nvidia duopoly (you get an Intel CPU, and if you need a little extra juice on the graphics side you add an Nvidia GPU) to Intel, Nvidia, AMD, Qualcomm and maybe Microsoft all dipping into CPUs, GPUs, AI, FPGA-vs-ASIC accelerators, decoders and security, all tightly integrated for a performance boost, with different approaches to the same goal, proprietary/unique APIs and even different ISAs. Google is also rumoured to be developing its own SoCs for servers, Android and Chromebooks.

On Apple vs. AMD/Intel, I found this comment from hollyjester on Reddit pretty good:

Hi, doctoral student in computer architecture here. This article did a surprisingly good job of explaining the M1’s success which can be summarized in the following equation:
Fast Decode + Big Ass ROB + Shared Memory Hierarchy = Success
You show that equation to any student of an advanced architecture course, and they'll be able to tell you that following it will give you a fast computer. In that sense, Apple's SoC isn't novel. That isn't a knock against Apple; actually manufacturing this chip is a huge achievement and a tough engineering challenge. Instead, what I'm trying to dispel is the notion that Apple has all the talent and Intel and AMD have unimaginative engineers. Because what the article doesn't go into, and what I haven't heard mentioned often, is the piece that is left out of a purely technical explanation of the M1: cost.
60% of industrial architecture decision making is cost. And the fact that Apple sells to itself and not to a broad, varied market means that it can do outrageous things that Intel, AMD, or Samsung cannot. It's not that the engineers at these companies don't realize they could do these things; it's that those things don't make sense given their constraints. Below is a snippet (with a few edits) where I tried to explain this to someone when the M1 first came out. Sorry if it is a little out of context.

Yeah, Intel/AMD do different designs for different markets, but the way they do that is a hierarchical approach. You start with a shared ISA across the full spectrum, then you have microarchitectural implementations that are separated broadly by market, and finally the actual bins that correspond to the different tiers of final products. Each point in the hierarchy has a different cost associated with features:
- The ISA (e.g. x86) will go across everything (i.e. servers, laptops, mobile). Here you might add instructions like AVX, since their support is easily included/excluded per design. You probably won't do something like give up a bit of the opcode for conditional execution if you know it will only benefit server chips. If you were to do something like that, the profit expected on server sales due to increased performance would need to justify it.
- The microarchitecture (e.g. Skylake) is where most of the cost-benefit decisions will be. Like the article pointed out, here is where your ISA might limit your ability to scale up. If your instructions are complicated to decode (as x86's are), then the circuitry added per additional unit of issue width is greater than for a simpler ISA (like Arm). Another example might be the total number of cores the microarchitecture supports. Does increasing the max core count for an i9 justify the additional cost for the i3? Because even if you don't use the full core count in the i3, the circuitry to support it still exists in each core IP. All these decisions are closely tied to the third bullet: binning.
- Binning will greatly impact the decisions on the previous two bullets, because this is where physically manufacturing the chip comes into play. Let's say you added a wider decoder: how does that affect your lowest bin, where the TDP is distinctly lower? Or, more likely, how does it affect the max clock frequency you can reliably run the silicon at? If only the highest bin supports your target frequency, and you cannot reliably produce that bin on the wafer, then you end up with a greater lower-tier volume whose max frequency is limited by the highest tier (which you don't have a lot of to sell anyway!). Another thing Apple did was bump up the cache size. When a cache isn't reliably fabricated, that is a common reason a die goes to a lower bin. If you can't produce the full cache size on most of your yield, very few parts will get the benefit of your beefy cache. And let's say in the end you are willing to accept lower yield to have this amazing high-end part that you will charge a lot for: will the lower-end market be saturated by the increased volume, or will it absorb that volume so you can recoup costs?
There is no easy way to address all these issues, which is why companies like Intel/AMD are forced to be more conservative than Apple, which has a much smaller market and fewer bins. Apple's total vertical integration also plays a role. Since Apple controls the software stack and most apps running on a Mac use its APIs, it can be sure programs are compiled to take advantage of hardware features. And since it makes an SoC with lots of accelerators, not just a CPU, it can predict the characteristics of the workload running on the CPU and be sure it is getting the most utilization out of its cores.

I would add that, in how it makes and manages its money, Apple's model is very different from Intel's or AMD's. Apple sells directly to the consumer, so its cheapest M1 machines go for $699/$999 in the Apple Store; if big retail takes a 30% margin on its products, that works out to roughly $500/$700 on volume buys. Of course there are a lot of other costs for components, operations, etc., but the cash-in/cash-out picture is very different from other competitors, which may make their fat gains on servers or have to deal with Dell/HP/Lenovo/Asus/AIB/Sapphire, etc.
 