They're also not driven by instruction set fanboyism. The instruction set is an implementation detail, not a deciding factor. They want the fastest possible chips, at the best power efficiency, at the best prices. Intel does not do well on those metrics right now compared to the competition. Four years ago they did, but not these days.

oh, boy, instruction set fanboyism.

I only mentioned x86 because that is the relevant context for the two-player-space point I made above.
 
Apple can see through AMD's sales pitch. They know exactly where AMD stands (Apple is a tech company too). And this is the second-place x86 CPU maker, and there are only two places in this space.
....

Why would Apple listen?

Errr,

One: not having to fix firmware because the CPU asks for 20-70%, maybe 100%, more power than the TDP spec Apple was given for the CPU.

https://www.macrumors.com/2018/07/24/throttling-fix-2018-macbook-pro-improvements/
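For scale, here's a minimal sketch of what those overshoot percentages mean in watts, assuming a 45 W TDP part (the class of chip in the MacBook Pro story linked above; the base figure is my assumption, not something from the post):

```c
/* Rough numbers for what "20-70%, maybe 100%, over TDP" means in watts.
   The 45 W base is an assumption (the TDP class of the 2018 MBP i9). */
#include <stdio.h>

int main(void) {
    const double tdp = 45.0;
    const double over[] = { 0.20, 0.70, 1.00 };
    for (int i = 0; i < 3; i++)
        printf("+%3.0f%% over TDP -> %.1f W\n",
               over[i] * 100.0, tdp * (1.0 + over[i]));
    return 0;  /* prints 54.0 W, 76.5 W, 90.0 W */
}
```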

The current stuff can be made to work, but it isn't what Apple wants. They are looking for parts not stuck on the same process for the last 3-4 years. (That was AMD's GPUs for a while, but not at the moment.)


Two: sky-high pricing. Intel was pitching the W-3175X at one point for $8K; it came out at $3K. Was Intel blowing smoke to confuse AMD, or smoking large stacks of legal weed over in Santa Clara? It is kind of tough to tell.
Apple doesn't like paying more than they have to. (E.g., the "FirePro" GPUs in the Mac Pro 2013 sold for less than what AMD was selling discrete FirePros for. Not as cheap as mainstream, but less.)




Haven't seen first in a long time.

Apple isn't manically occupied with first-place hot-rod status. Breadth of solutions probably matters at least as much.
Something competitive would probably work for them. AMD doesn't have to completely bury Intel; they just need to be in the same range.



Why would Apple wanna be like, you know what CPU-Wannabe-Usain-Bolt-always-second-place company? I'll pick you?

Apple puts no CPU stickers on their computers at all. Their tech specs pages don't even tell you the processor model number. If Yoyodyne Propulsion Systems made a viable CPU (by Apple's criteria) for the Mac Pro, they'd probably use it. They're not caught in a fanboy loop.
 
You are not just criticising AMD. What you are writing is incorrect, because of your "second-row vendor" outlook on AMD.

Want to know why the server market is switching to EPYC? Because it is a good product, and in some cases miles better than Intel's. And the ROADMAP AMD has is miles better than Intel's. That is the MAIN REASON the server market is switching to AMD.

Then stop bashing AMD based on your perception of AMD, because you are looking like a clueless person.

I think I can bash AMD if I want to. And it doesn't mean I am a fanboy, or pro-Intel.
 
I think what everybody, especially @namethisfile, forgets in the discussion about AMD is that with Zen 2 and the chiplet architecture, they can make specific, custom chips just for Apple.

An APU based on four CPU chiplets and two GPU chiplets, in a BGA package, for the iMac Pro? No problem. And what's best, this design ticks every box of HSA design.

I think I can bash AMD if I want to. And it doesn't mean I am a fanboy, or pro-Intel.
Sure, you can bash AMD if you want to. But if there is no reason for bashing, you will look ridiculously clueless.

It's up to you to speak with knowledge, or to speak without it and look clueless.

AMD CPUs have always been a niche product. That's the truth. Just look at the sales.

For the past 5 months AMD CPUs have been outselling Intel 2:1 ;). Yeah, niche.

AMD simply didn't have good products until Zen. Right now they do. That's why they are selling. End of story.
 
I think what everybody, especially @namethisfile, forgets in the discussion about AMD is that with Zen 2 and the chiplet architecture, they can make specific, custom chips just for Apple.

An APU based on four CPU chiplets and two GPU chiplets, in a BGA package, for the iMac Pro? No problem. And what's best, this design ticks every box of HSA design.

And that sounds suspiciously like the arrangement they already seem to have around Radeons. Weird versions of Polaris and Vega keep popping up on Macs.

I wonder if they could even throw in a few Apple ARM cores in addition to the x86 ones... That might close the deal with Apple. Apple could move stuff they've had on the T2 onto the CPU.
 
Only in iOS systems. We probably need to wait till 2020, after they put their own processor in their MacBooks.
Not according to some. One even implied the OS is irrelevant. There's even the belief ARM is ready to go today. Yet despite all of the advantages ARM is supposed to offer today, we don't see any ARM-based Macs. Why is that?
 
And that sounds suspiciously like the arrangement they already seem to have around Radeons. Weird versions of Polaris and Vega keep popping up on Macs.

I wonder if they could even throw in a few Apple ARM cores in addition to the x86 ones... That might close the deal with Apple. Apple could move stuff they've had on the T2 onto the CPU.
For power management and operating the internals of the iMac? Yep, it might be possible. That ARM chip could have direct control over the system-on-package, for security reasons.

And, at the same time, a recipe for potential reliability disasters...
 
Not according to some. One even implied the OS is irrelevant. There's even the belief ARM is ready to go today. Yet despite all of the advantages ARM is supposed to offer today, we don't see any ARM-based Macs. Why is that?

You DO know that ARM is an instruction set architecture, not a processor, right?

The A series isn't any sort of standard processor these days.
 
It's possible. I'm just skeptical of the Intel GPU hype because that's been going on for a while now.

But that was when they were trying to share the same die as the CPU. I think going forward they have seen that multi-die packages are going to get more common; lots of aspects of their December architecture day pointed at that. When they are no longer solely in "how do we split the transistor budget between the CPU and GPU" mode, dedicated GPU dies become a strategic path they'd have to take much more seriously.

The flawed idea they clung to for far too long was that x86 cores made good GPGPU cores. Errr, no. They aren't on that path anymore, and haven't been for a couple of years now by all indications from the Gen11 details they released. Gen11 isn't the complete story, but it is a clear waypoint.



There's also the possibility that Apple jumps in with discrete GPUs. That might end their relationship with AMD, but it's possible. If they keep moving away from PCIe, they could just build a big GPU and throw it right onto their workstation boards.

There is also the possibility that a meteor will hit the Apple spaceship. I suspect there could be some smack talk about how Apple could do a discrete GPU, but that is about as detached from reality as Intel's x86 GPGPU play. There is even less volume there than there would be for a Mac "A-series" CPU. As pointed out, their whole GPU strategy has been shared memory (not just a flat address space, but shared, integrated memory). It is doable if you throw gobs of money at it, but where is the return on investment (ROI)? It is utterly vacant there. Apple is hemming and hawing about making Mac Pro systems because of volume issues; even more expensive components aren't going to help.


I don't think Apple even needs to make a final decision on the mid or high end until after they ship the first ARM Macs. I just suspect it won't take much effort for them to jump into mid-end portables/MacBook Pros, and they won't be able to resist the allure of owning all the hardware on the mid end.

That will probably turn into another blow-gobs-of-money decision, like the Apple Car.
 
It is MacRumors, so some folks will complain that the sky is blue.

How big? Just about as large as it was for the Mac Pro 2013. If the default is an AMD GPU, the Nvidia fanboys aren't going to be happy; there would be some slight shift in the complaint set if there were an Nvidia GPU. It isn't a generic off-the-shelf GPU replaceable by any random board, so the folks who are complaining about not buying stuff off the shelf would still be complaining.

Corsair doesn't have Thunderbolt, but I think it does some video-port redirection. Is Apple going to hide the "loopback" Rube Goldberg solution inside the case? Probably not. So that would be even less of an "off the shelf" GPU card. [There are some other vendors that use an MXM GPU card, but again, that will have folks moaning that it isn't a broad standard slot.]
The Mac Pro 2013 put video out and TB on the same buses (even HDMI), and they cut the PCIe slots, plus no SATA (with only one PCIe SSD).

More than one disk is needed, plus maybe room for cheaper 2.5" SATA SSDs. Some people may even want one 3.5" bay for, say, a BIG HDD:
a 6TB HDD at $200, vs. a 4TB SSD at $700, vs. a 5.2TB PCIe drive at $4.8K.
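For what it's worth, a quick dollars-per-terabyte sketch of those figures (reading the garbled PCIe price in the original as $4.8K, which is a guess):

```c
/* Dollars per terabyte from the figures above. The $4.8K PCIe price is a
   guess at the original's "48K"; the HDD and SSD prices are from the post. */
#include <stdio.h>

int main(void) {
    printf("6 TB HDD @ $200     -> $%.0f/TB\n", 200.0 / 6.0);   /* ~$33/TB  */
    printf("4 TB SSD @ $700     -> $%.0f/TB\n", 700.0 / 4.0);   /* ~$175/TB */
    printf("5.2 TB PCIe @ $4.8K -> $%.0f/TB\n", 4800.0 / 5.2);  /* ~$923/TB */
    return 0;
}
```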
 
But that was when they were trying to share the same die as the CPU. I think going forward they have seen that multi-die packages are going to get more common; lots of aspects of their December architecture day pointed at that. When they are no longer solely in "how do we split the transistor budget between the CPU and GPU" mode, dedicated GPU dies become a strategic path they'd have to take much more seriously.

Wasn't this Knights Corner? I just feel like we've been here before.

The flawed idea they clung to for far too long was that x86 cores made good GPGPU cores. Errr, no. They aren't on that path anymore, and haven't been for a couple of years now by all indications from the Gen11 details they released. Gen11 isn't the complete story, but it is a clear waypoint.

Well, I'll certainly keep an eye on what they're doing. But I think it would have to be a heck of a release to pry Apple away from AMD.

(Not being x86-based probably explains how this is different from Knights Corner.)

There is also the possibility that a meteor will hit the Apple spaceship. I suspect there could be some smack talk about how Apple could do a discrete GPU, but that is about as detached from reality as Intel's x86 GPGPU play. There is even less volume there than there would be for a Mac "A-series" CPU. As pointed out, their whole GPU strategy has been shared memory (not just a flat address space, but shared, integrated memory). It is doable if you throw gobs of money at it, but where is the return on investment (ROI)? It is utterly vacant there. Apple is hemming and hawing about making Mac Pro systems because of volume issues; even more expensive components aren't going to help.

Is any of the new stuff Intel announced using a shared memory space? I was really disappointed when their Vega-equipped i7 didn't use a shared memory space. That would have been a slam dunk for Apple.

That will probably turn into another blow-gobs-of-money decision, like the Apple Car.

I know you're joking, but it's not entirely irrelevant that they would possibly need a bunch of other ARM chip sizes for whatever car/VR/AR headset they're working on. Given where they are going, I think it's not impossible that they'd want an internal strategy for producing A-series chips at many sizes.

And if they ever start building their own in-house servers for iCloud/whatever...
 
You DO know that ARM is an instruction set architecture, not a processor, right?

The A series isn't any sort of standard processor these days.
Even if the A series is different from a standard ARM processor, it is still bound by the typical ARM "problems" which make this architecture low-power.

The architecture is designed to work at very low amperages. I remember David Kanter talking about this issue with Apple engineers, and they told him that essentially the designs do not allow more than 40 amps, which is ridiculously low for desktop usage. Threadripper, the 32-core monster, requires 200 amps on the desktop. For this very reason, we may never see an ARM CPU in a workstation. In spaces where you can make a lot of cores at low amperages with relatively high IPC, sure. But we are years from that point.
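As a sanity check on those current figures: package power is roughly core voltage times current, so a minimal sketch with assumed voltages puts the two designs about 200 W apart:

```c
/* Back-of-the-envelope: package power ~= core voltage x current draw.
   Both voltages below are illustrative assumptions, not published specs. */
#include <stdio.h>

static double package_watts(double vcore, double amps) {
    return vcore * amps;
}

int main(void) {
    /* ~40 W at an assumed ~1.0 V: the claimed A-series current ceiling. */
    printf("40 A  @ 1.0 V -> %.0f W\n", package_watts(1.0, 40.0));
    /* ~240 W at an assumed ~1.2 V: roughly 32-core Threadripper territory. */
    printf("200 A @ 1.2 V -> %.0f W\n", package_watts(1.2, 200.0));
    return 0;
}
```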

And the fact that a 16-core ARM server is 5 times slower than a simple 16-core Threadripper server shows the mountain ARM has to climb as a whole. Apple CPUs are not "that" much faster than standard A76 cores. Unfortunately.
 
For the past 5 months AMD CPUs have been outselling Intel 2:1 ;). Yeah, niche.
Context and links?

OEM sales, DIY sales, servers, gamers, first-tier vendors?

It really means nothing if gamers building systems buy AMD 2:1, but 99% of the HPE/HP/Dell/Lenovo/Supermicro pre-built systems have "Intel Inside" badges.
 
Even if the A series is different from a standard ARM processor, it is still bound by the typical ARM "problems" which make this architecture low-power.

The architecture is designed to work at very low amperages. I remember David Kanter talking about this issue with Apple engineers, and they told him that essentially the designs do not allow more than 40 amps, which is ridiculously low for desktop usage. Threadripper, the 32-core monster, requires 200 amps on the desktop. For this very reason, we may never see an ARM CPU in a workstation. In spaces where you can make a lot of cores at low amperages with relatively high IPC, sure. But we are years from that point.

And the fact that a 16-core ARM server is 5 times slower than a simple 16-core Threadripper server shows the mountain ARM has to climb as a whole. Apple CPUs are not "that" much faster than standard A76 cores. Unfortunately.

I'm totally with you on that. But what I'm saying is I don't think the instruction set is the cause, any more than x86 ensures a CPU is fast; there are plenty of slow x86 CPUs.

With that in mind, even at the scales we're talking about, the entire portable line, including the MacBook Pros, is still within striking distance.

And Apple is one of the few in a position to climb that hill. But the challenges are why we're talking about AMD. And the inverse has been hampering Intel: Intel is designed for the high-power end of the scale, and has tons of trouble scaling down.
 
It really means nothing if gamers building systems buy AMD 2:1, but 99% of the HPE/HP/Dell/Lenovo/Supermicro pre-built systems have "Intel Inside" badges.
It means a lot, for a very simple reason: mindshare. AMD does not have mindshare yet, and Intel has all of it. Both companies are playing the long game here.

Gaming and DIY are the markets most reactive to the situation; the pre-built market is the least reactive. Cause and effect. Two different perspectives, all based on mindshare.

The only thing stopping AMD from pushing more hardware is money. They simply don't have the money yet to fund enough support to push the hardware down the throats of consumers like Intel does. In time, it will change.

And hopefully we will have a healthy 50:50 split between the two companies in the market.

I'm totally with you on that. But what I'm saying is I don't think the instruction set is the cause, any more than x86 ensures a CPU is fast; there are plenty of slow x86 CPUs.

With that in mind, even at the scales we're talking about, the entire portable line, including the MacBook Pros, is still within striking distance.

And Apple is one of the few in a position to climb that hill. But the challenges are why we're talking about AMD. And the inverse has been hampering Intel: Intel is designed for the high-power end of the scale, and has tons of trouble scaling down.
Instruction sets limit software. Circuit design limits the hardware that runs the software.

Which limitation is more important, in the grand scheme of things?

Who cares that your software has limits, if your hardware can push 200 amps and not be limited? Software can mature over time.
Would you care if your software had no limits, but your hardware had a blatant wall you cannot go over? Hardware cannot mature over time in any other way than... buying a new version of said hardware.
 
Instruction sets limit software. Circuit design limits the hardware that runs the software.

And that's where we get to vague implications about the ARM instruction set no one has really explained. It's been implied several times in the thread that there is some fault in the ARM instruction set keeping it from scaling. But that's not an opinion I've seen in industry, and it doesn't align with other work being done on ARM.

I've poked around and I can't find anyone talking about issues in the ARM instruction set. The most I can find is issues with the canonical ARM CPU implementation sold by ARM that would prevent that specific chip from scaling. But none of those issues are with the instruction set. And Apple's custom implementation, from teardowns and interviews, is clearly moving past the canonical implementation.

Edit: The other problem I find in these arguments is they insist that ARM should not be able to sustain these levels of performance, while the reality is that ARM is sustaining them. Ars Technica's review of the A12X digs into this pretty well.

https://arstechnica.com/gadgets/2018/11/2018-ipad-pro-review-whats-a-computer/4/#h5

So am I supposed to believe that the ARM instruction set has some fault that prevents it from getting into higher-end CPU territory, with absolutely no justification, or am I supposed to believe my own lying eyes?

Who cares that your software has limits, if your hardware can push 200 amps and not be limited? Software can mature over time.
Would you care if your software had no limits, but your hardware had a blatant wall you cannot go over? Hardware cannot mature over time in any other way than... buying a new version of said hardware.

But again, Intel has the reverse problem. They're doing well on the workstation end, but the smaller you go, the worse things get. The limit is my 15" laptop; that's the boundary Intel is unable to contend with well.

To flip someone's argument around, if Intel chips were so great and fast, we'd all be running them in our phones right now. We aren't.
 
It means a lot, for a very simple reason: mindshare.
So, you don't have any context or links to support your claim. Not surprised.
To flip someone's argument around, if Intel chips were so great and fast, we'd all be running them in our phones right now. We aren't.
My espresso machine runs a Linux OS - my guess would be on a low-end 32-bit ARM processor. It simply doesn't need a Xeon.

[quit now to avoid an automotive analogy ;)] If dual turbo 32-valve DOHC V-8s were so great and fast, all the soccer moms would be running them in their minivans right now. They aren't.

Use the right tool for the job. That doesn't mean that there is anything wrong with the other tools.
 
My espresso machine runs a Linux OS - my guess would be on a low-end 32-bit ARM processor. It simply doesn't need a Xeon.

That's actually exactly my point. I don't think any of this is a technology or instruction set problem, it's a business problem.

It's not worth Intel's time and money to compete in the low power space.

It's not worth any ARM vendor's time and money (yet) to compete in the high-end space.

It doesn't have anything to do with ARM or x86 instructions. Intel would have to front cash to get into the low end, and someone would have to front enough cash on the ARM side to get into the high end. Apple might be crazy enough to do that, or they might stick with x86.

So far we've been in a place of "if it ain't broke, don't fix it." What's changing is Intel is clearly broken.
 
I suppose, like with the Intel boxes, they could just throw ARM boards into a Mac mini case. The developer Intel boxes were literally built out of generic parts, without even special Apple firmware. It would be a bit more effort to do a dedicated board run for the developer boxes, but I guess Apple has the resources and the will to do it.

I had just assumed they wouldn't want to do any production runs specifically for developers. But maybe they could even just stuff their existing iPad Pro boards into a different case. The iPad Pro has USB-C now, which means they could probably re-use the board without including an internal display.

Don't forget about the development systems for the new Space Grey Mac mini...
 
And that's where we get to vague implications about the ARM instruction set no one has really explained. It's been implied several times in the thread that there is some fault in the ARM instruction set keeping it from scaling. But that's not an opinion I've seen in industry, and it doesn't align with other work being done on ARM.

I've poked around and I can't find anyone talking about issues in the ARM instruction set. The most I can find is issues with the canonical ARM CPU implementation sold by ARM that would prevent that specific chip from scaling. But none of those issues are with the instruction set. And Apple's custom implementation, from teardowns and interviews, is clearly moving past the canonical implementation.

Edit: The other problem I find in these arguments is they insist that ARM should not be able to sustain these levels of performance, while the reality is that ARM is sustaining them. Ars Technica's review of the A12X digs into this pretty well.

https://arstechnica.com/gadgets/2018/11/2018-ipad-pro-review-whats-a-computer/4/#h5

So am I supposed to believe that the ARM instruction set has some fault that prevents it from getting into higher-end CPU territory, with absolutely no justification, or am I supposed to believe my own lying eyes?



But again, Intel has the reverse problem. They're doing well on the workstation end, but the smaller you go, the worse things get. The limit is my 15" laptop; that's the boundary Intel is unable to contend with well.

To flip someone's argument around, if Intel chips were so great and fast, we'd all be running them in our phones right now. We aren't.
An instruction set can be designed around power efficiency, and that can result in a CPU design that hits a wall from the diminishing returns of the design: power, area, clocks; caches, front end, back end of the CPU. The instruction set is part of the silicon design. I'm not saying that Apple will not hit certain frequency ranges. But I think everybody forgets that frequency is just part of the equation. ARM instructions can do 50% of the work that x86_64 instructions can do in one cycle.
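To make one narrow version of that claim concrete: x86-64 allows memory operands, while AArch64 is a load/store architecture, so a single read-modify-write is one instruction on one and three on the other. A sketch (the codegen in the comments is typical gcc/clang -O2 output, shown for illustration, not a claim about any specific chip):

```c
/* One narrow, concrete example of per-instruction "work": incrementing a
   value in memory. x86-64 can fold the load/add/store into a single
   instruction; AArch64, being a load/store architecture, needs three.
   (Real throughput depends on micro-ops and issue width, not on
   instruction count alone.) */
void bump(long *counter) {
    *counter += 1;
    /* x86-64:   addq $1, (%rdi)
       AArch64:  ldr x1, [x0]
                 add x1, x1, #1
                 str x1, [x0]  */
}
```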
So, you don't have any context or links to support your claim. Not surprised.
I do have sources and links. Why don't you find them yourself, though? Mindfactory sales, for example. Or Motley Fool and Seeking Alpha analysts discussing the market share of AMD and Intel CPUs?

Why do you have to be lazy and never do research yourself?

It's not worth Intel's time and money to compete in the low power space.

It's not worth any ARM vendor's time and money (yet) to compete in the high-end space.
They wanted to compete, and failed miserably. Why?

Because the x86_64 instruction set requires a certain CPU design that is not as efficient as ARM in low-power thermal envelopes. Intel CPUs are faster than ARM if we compare 7W vs. 7W thermal envelopes, in anything that is not Geekbench. If we compare them at 3W vs. 3W, ARM will run circles around Intel CPUs. Why? Because ARM designs at 3W will clock much higher than Intel's.

Compared at 7W vs. 7W, you will see the difference in IPC, clocks, and the sheer ability of x86_64 to do more than ARM each cycle.
 
An instruction set can be designed around power efficiency, and that can result in a CPU design that hits a wall from the diminishing returns of the design: power, area, clocks; caches, front end, back end of the CPU. The instruction set is part of the silicon design. I'm not saying that Apple will not hit certain frequency ranges. But I think everybody forgets that frequency is just part of the equation.

That’s kind of the same vague thing you said the first time around, with more vagueness. You’re describing all the parts of a CPU, yes, but still not being specific.

koyoot said:
ARM instructions can do 50% of the work that x86_64 instructions can do in one cycle

Again: vague, vague, vague. Which instruction? Is this a RISC vs. CISC argument that RISC can't scale? Is this a pipelining argument? Hyper-threading? Vectorized math?
 