If it takes another year or two for Apple to get a desktop ARM ready, it doesn't make sense to switch now. If they do switch now, they've probably given up on the idea of moving to ARM at all.

I don't think there is any way for Apple to do a workstation ARM CPU without losing a ton of money or passing the extra expense onto Mac Pro customers.

Designing a chip, even one based on a consumer chip, is not cheap. The Xeon is not cheap to produce. But Intel makes it up in volume. Apple does not have enough volume to make up the cost.

I think we'll be stuck on a dual-architecture platform. Not as big a deal as it sounds; everyone else is doing it. Apple doesn't have to use only one single architecture.

Even a desktop version seems iffy. Does Apple sell enough iMacs? Do they sell enough MacBook Pros? We know they likely sell enough MacBook Airs and MacBooks to justify it, and those machines would likely use the same architecture from the iPad anyway.

The only exception would be if they somehow figured out a way to use the exact same architecture up and down the line. Do something like a 4-8 CPU system somehow using the existing CPUs. Maybe then with the iMac and Pro lines combined into a single desktop CPU it would make sense.
 
Do something like a 4-8 CPU system somehow using the existing CPUs. Maybe then with the iMac and Pro lines combined into a single desktop CPU it would make sense.
Pretty much this, yes. I'm not expecting anything else from ARM based Macs. All they need is a bump in single-core performance, maximize cores per CPU, and put a bunch of CPUs into a single machine.
 
I'm not sure anyone spotted this.

The modularity of the Zen 2 design (IO die plus Zen core chiplets) allows AMD to further increase the IP capabilities of their Semi-Custom division. Which is 100% perfect for Apple's needs, if they want specific, semi-custom designs for their computers (smaller, more compact packages in a BGA form factor, etc.).

The fact that the northbridge is separate can allow AMD to design anything custom for Apple, if they want it, and add it to the IO die.
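As a toy illustration of why the separate IO die matters for semi-custom work, here's a minimal Python sketch. Everything in it is hypothetical (the part names, the IP blocks, the core counts); it just shows that the CPU chiplets can be reused unchanged while only the IO die picks up customer-specific blocks.

```python
# Toy model of chiplet-style packaging. All names/numbers are made up
# for illustration; this is not any actual AMD or Apple part definition.
from dataclasses import dataclass, field

@dataclass
class Chiplet:
    name: str
    cores: int

@dataclass
class IODie:
    name: str
    blocks: list = field(default_factory=list)  # memory, PCIe, custom IP...

@dataclass
class Package:
    io: IODie
    chiplets: list

# Stock desktop part: generic IO die plus two 8-core core chiplets.
stock = Package(IODie("client-io", ["ddr4", "pcie4"]),
                [Chiplet("ccd0", 8), Chiplet("ccd1", 8)])

# Hypothetical semi-custom variant: the core chiplets are reused as-is;
# only the IO die gains the customer-specific blocks.
custom = Package(IODie("custom-io", ["ddr4", "pcie4", "security-enclave"]),
                 [Chiplet("ccd0", 8), Chiplet("ccd1", 8)])

print(sum(c.cores for c in custom.chiplets), "cores; extra IO blocks:",
      set(custom.io.blocks) - set(stock.io.blocks))
```

The point of the sketch: the expensive, hard-to-design piece (the core chiplet) is shared across every product, and the customization lives in the cheaper IO die.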
 
And let's face it, the MacPro is a professional machine. I get it, a lot of home users would like such a machine at home, despite the fact they don't need it. I don't need one at home anymore either, but that wouldn't stop me from buying one even if it's just to answer mail and browse the web.

How about you let us decide what we need, mmmkay?

I run a Mac Pro because it is the ONLY computer Apple sells that can run the software I use in my hobby (3D Art). Most of this is at the hobbyist level, ZBrush being the only expensive part of my graphic pipeline.

Even hobbyist-level software is driven by core counts and RAM, as well as the need for a high-end video card. An iMac or Mac Mini is a non-starter, much less a laptop.
 
Pretty much this, yes. I'm not expecting anything else from ARM based Macs. All they need is a bump in single-core performance, maximize cores per CPU, and put a bunch of CPUs into a single machine.

Ehhhhhh but what you're talking about is still a custom design. And the other problem with this theory is workstation CPUs have additional capabilities not present in the generic desktop CPUs. Even having multiple CPUs alone requires a lot of custom design work. You can't just slap the 4 existing CPUs in a box and have it magically work. Which is why I ultimately left this idea at the bottom of the post because I'm skeptical it would work. At least in the next 5-10 years.

You're still stuck in a position where the CPU that goes in the Mac Pro can't be the same CPU they ship on an iPad. And why bother? AMD already makes what Apple needs. With AMD, Apple could make higher profits on the Mac Pro because they're not sinking money into custom design work. The only cost is having two architectures, which again, is not a huge deal.

None of the ARM-on-Mac-Pro theory makes sense, beyond some vague idea that all the things need to be ARM.
 
How about you let us decide what we need, mmmkay?
I'm not deciding anything. Buy whatever you want for your hobby. Just don't complain stuff is too expensive. I've spent a lot of money on audio and video as a hobby. Does one really need a $100k projector? $70k for a pair of speakers? $30k on an audio processor, and so on and so on? I doubt it, but it's a hobby so I don't mind. If I think it's too expensive, I'm not buying it, or I start collecting stamps as a hobby.

The MacPro is meant for professionals, not for hobbyists. That doesn't mean it can't be used for a hobby, even though it comes with a pro price tag. I have no private use case for a $9k Intel CPU, but if I think my hobby requires it, I pay the price, or buy something cheaper if I think it's too expensive. Years ago I looked at some serious telescopes for astrophotography, as a hobby, but decided I'm not going to throw >$50k at a telescope for the little time I'd actually use it. I stick to something cheaper and smaller, and if I want that >20" RC then that's the price it is. We all have to make decisions.
You can't just slap the 4 existing CPUs in a box and have it magically work.
I didn't say that. Of course it requires work, and it will require an ARM desktop CPU, but it won't just be a single CPU. It will be something with sufficient single-core performance, the max number of cores possible on a single CPU, and as many CPUs as possible. I never said they will use a single CPU, let alone one from an iPad, as a replacement for an Intel/AMD solution. Again: new CPU, max cores, as many as possible. That's what it has to be in the long run. It's not like people are starting with this now; it's been going on for years and works well in scientific research (crypto in particular), but single-core is a huge bottleneck and that's what future ARM CPUs have to fix.
 
I didn't say that. Of course it requires work, and it will require an ARM desktop CPU, but it won't just be a single CPU. It will be something with sufficient single-core performance, the max number of cores possible on a single CPU, and as many CPUs as possible. I never said they will use a single CPU, let alone one from an iPad, as a replacement for an Intel/AMD solution. Again: new CPU, max cores, as many as possible. That's what it has to be in the long run. It's not like people are starting with this now; it's been going on for years and works well in scientific research (crypto in particular), but single-core is a huge bottleneck and that's what future ARM CPUs have to fix.

Right, but a multiple-CPU design would be custom for the Mac Pro, which takes us back to where we started.... It would be hard for Apple to make money on a Mac Pro design when they have to pick up the cost of a custom CPU design for the Mac Pro all by themselves.

If they can work out some sort of infinity fabric'y thing for CPUs, it might be more workable that they could use commodity CPUs. But if Apple has to design a CPU just for the Mac Pro I don't see how they make their money or keep making the Mac Pro.

And again, there is absolutely no reason to move the Mac Pro to an ARM CPU. So with all the downsides why move the Mac Pro?
 
These are the leftovers of the full dies (parts not passing the binning process), I guess. 7nm must still have yield issues.
I must say that, marketing-wise, AMD seems to have all the bloody morons there. What a nightmare their naming options are.
They either copy competitors' naming schemes (sure, it's so much easier for us all) or come up with nonsense ones.
Radeon VII (seven). Seven for 7nm? Wow, that's creative, and tied to the process node (sort of). Should the original Vega have been Radeon XIV? Roman, really? Does it look better? Well, everyone likes different stuff.
There was already the Vega 10, 56, 64, 20, whatever... issue. We're all supposed to know better, to know whether the number refers to CU count or revision. But then there's also Pro Vega 20/16, not to mention 3/8/11... Come on, is this to confuse the unaware? Wasn't there a better way to keep things clear and separate one thing from the other?
And Zen (1)/+/2/3 that seem shifted as the 1xxx/2xxx/3xxx/4xxx models. What gives? It seems it was made up as they went, with no prior thought given as to what would at least seem to match.
Don't get me wrong, finally they have something to show. But to me, this should have been better thought out.
OK, that's off my chest now.
Maybe that's just me, I'm being picky.
Let's move on...

Will this GPU be the 3080 (again, on par with the nVidia 2080, since it seems to be in their crosshairs) and the full Vega 20 will become the 3090? Or will they name it 3080 Ti just for kicks? :)
Or 3060/3070/3080 for the 56/60/64 CU if/when available.
Maybe these will be the 2 top GPU options for the mMP, and the MI50/60 the optional compute (custom) boards.
 
But if Apple has to design a CPU just for the Mac Pro I don't see how they make their money or keep making the Mac Pro.
Not just for the MacPro, their whole lineup, mobile and desktop. Others will follow, Linux, MS with Windows. Current architectures are dead unless they find new tricks for manufacturing. Everyone is still using them of course, that's what keeps the whole thing alive. The progress Intel and AMD have made over the past years is painful to look at. We need something new in the long run.

I must say that, marketing-wise, AMD seems to have all the bloody morons there.
The keynote was a mess. Su was cringeworthy to watch. I don't like Huang, but at least he knows how to put on a keynote for the masses. AMD was like "hey we're great, we have CPUs and GPUs, here's a new GPU and new CPUs, look what the GPU can do, games, GPU, CPU, genetics, GPU, CPU+GPU, games ...". I've seen first-semester students give better presentations. In the end, it's the hardware that matters, but they should learn how to present it properly and focus on one thing at a time. Would have loved to see some more in-depth scientific computing from them (Huang showed this at Gamescom). I've said this before: more effort and resources from AMD, please.

If AMD's new GPU is what's in the new Mac Pro, there is going to be a whole lot of complaining.
MI50/60 for the MacPro. They need to compete with Quadro/V100.
 
By that logic, AMD users are not professionals.

Your words, not mine.

Don't try so hard, man. Even pros who use AMD - like me - can see the amazing things nVidia has accomplished that can make our work faster, better, more enjoyable.
 
These are the leftovers of the full dies (parts not passing the binning process), I guess. 7nm must still have yield issues.
I must say that, marketing-wise, AMD seems to have all the bloody morons there. What a nightmare their naming options are.

This doesn't point to yield or naming problems at all if the primary intention here is just a "one off" product set for Vega at 7nm.

The name isn't an issue if there will only be exactly one of these 7nm Vegas brought to the consumer side. So they don't need a 7xx series or something with multiple model numbers if there is just one.

Vega 56, Vega 64, Vega VII

until they are all retired later, probably this time next year (maybe earlier if things get ahead of schedule).

I don't think there is a huge yield issue here. What they need is more affordable cards. The MI50 and MI60 will sell, but that is a limited-volume market due to the prices. At some point that will likely either plateau or even go backwards. The lower-priced Vega VII will help fill out the volume when the MI versions peak out. There is probably a threshold of wafer starts they need to hit to have 'priority' in the queue at TSMC and in packaging (the combo of GPU and HBMv2 assembly). The MIs have fat profits but not volume.

If the MI cards happen to completely miss expectations then perhaps they would need a "Vega VII Plus" later in the year, but I suspect they pretty much have that as a "Plan C" option. [Plan B being to drop the effective prices on the MI series a bit to generate some more demand.] Similar if Navi has some roll-out hiccups later in the year at larger sizes. The other major issue is the uptake of the software associated with the MI cards. If that stalls then the MI card sales will stall.


We'll see when the final stats come out, but if the Vega VII is being run at higher clocks and higher TDP then this isn't binning and selling rejects. (Current specs put the clocks higher. The TDP is probably up also.) That's binning stuff for peakier, 'hot rod' loads. Typically those aren't rejects. Certainly there will be a few 1-4 CU failure 'rejects' thrown into the pile, but that probably won't be the core basis of the binning. Same for Infinity Link failure 'rejects': some, but not core volume.


The MIs will be binned for more sustained-load characteristics. (That can also pick up DisplayPort subsystem bugs and parts that don't clock all the way up. The Vega VII clocks could have been the targets, but the MI parts are much more stable at the MIs' rates.)
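To make the two binning profiles concrete, here's a toy Python sketch: one bin keys off sustained clock stability and working CU count (MI-style), the other off peak clocks (consumer 'hot rod' style). Every threshold and number is invented for illustration; this is not AMD's actual binning process.

```python
# Toy die-binning sketch. Thresholds/numbers are made up, illustrative only.
import random

def bin_die(peak_mhz, sustained_mhz, dead_cus):
    if dead_cus > 4:
        return "scrap"
    if sustained_mhz >= 1700 and dead_cus == 0:
        return "MI60-class"        # full die, stable under sustained load
    if sustained_mhz >= 1500:
        return "MI50-class"        # up to 4 CUs fused off, still stable
    if peak_mhz >= 1750:
        return "consumer hot-rod"  # clocks high in bursts, weaker sustained
    return "scrap"

random.seed(1)
for _ in range(8):
    peak = random.gauss(1780, 60)        # brief boost clock the die can hit
    sustained = random.gauss(1560, 90)   # clock it holds under load
    dead = random.choice([0, 0, 0, 2, 4, 6])
    print(f"peak={peak:4.0f} sustained={sustained:4.0f} dead_CUs={dead} -> "
          f"{bin_die(peak, sustained, dead)}")
```

Same silicon, different sort criteria: the dies that hold clocks all day go to the data-center cards, while the ones that spike high but sag under sustained load make fine consumer parts.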


They either copy competitors' naming schemes (sure, it's so much easier for us all) or come up with nonsense ones.

If this isn't a broad 'line up', then that may come with Navi, which is the point of their 2019 offerings. Just not until the second half of the year rather than the first.


Radeon VII (seven). Seven for 7nm? Wow, that's creative, and tied to the process node (sort of).

More than several months ago they said they weren't doing anything in the consumer space for Vega 7nm. I think they are primarily jumping into a pricing umbrella 'hole' that Nvidia is holding open for them. Much of the Nvidia RTX effort was thrown at non-traditional graphics (not at generalized computation or the traditional graphics computation pipeline). That opened a window to stay close with a placeholder for a substantially longer period of 2019. That's fine.

If Nvidia had done something to completely leave the 7nm Vega in the dust (on pricing and performance) I suspect AMD may not have done a Vega VII. AMD would have rolled out with more affordable MI cards and avoided the consumer side for additional volume. So Vega 7nm would have been MI Instinct only, exactly as they were forecasting a year ago.

The other factor is the bottom falling out of the crypto-mining craze. There is going to be a larger number of used top-end cards floating around out there. AMD needs a stopgap product that is incrementally faster than all that lower-priced used stuff relatively soon. [At this point it is far more painful for them to float the Vega 56/64 for 9-10 months of 2019.] The Vega VII volume should help the MI cards' margins because they can spread the costs out over a wider base.


P.S. To loop this back closer to the topic of the Mac Pro..... I have doubts this makes much of a difference to Apple's plans. They have been so "asleep at the wheel", with Rip van Winkle tactics, that AMD doing a 'shift' on Vega 7nm has a very good chance of going over Apple's head if this was done in May/June 2018 - Nov 2018 planning/execution by AMD. If Apple was asking for Vega 7nm for the Mac Pro / iMac Pro and AMD was resisting them a bit then it will work out, but Apple's ability to keep up with fluid changes is highly suspect.
Preliminary info on the Vega VII suggests it is the same as the MI60 but with video output, no PCIe 4, and no Infinity Link, while the Vega VII seems to have a custom xGMI bridge inferior to the one on the MI60, or at least coming later as the MI60 market cools (driven by FP64 performance, a small niche).

No, the current specs match the MI50, not the MI60. Specs for the MI50:

"...
GPU Specifications
GPU Architecture: Vega20
Lithography: TSMC 7nm FinFET
Stream Processors: 3840
Compute Units: 60 ..."
https://www.amd.com/en/products/professional-graphics/instinct-mi50


For the MI series, the "50" and "60" are not the number of CUs. 50 is just a smaller (implicitly more affordable) number than 60. It isn't coupled to the GPU core count.

The MI60 only has 4 more CUs. It has a bucketload more HBMv2 RAM (32GB versus 16GB). That's a big difference in terms of cost (harder to make, in addition to more capacity) and on large-dataset workloads (more data kept local).

What AMD is doing is taking the more affordable MI50 and selling it at an even more affordable price as the Vega VII, by lopping off some functionality (to make it more unattractive for machine-room deployments) and up-clocking it a bit to run in a "hot rod" state by default.
 
Not just for the MacPro, their whole lineup, mobile and desktop.

I don't think that's strictly true. The MacBook Air and the MacBook could probably use the exact same chip that's in the iPad. They don't have to redesign much for those two lines, except maybe throw on a Thunderbolt controller. And by the time they get there, the iPad might already have Thunderbolt anyway.

For the basic Macs, they could literally just take an iPad, throw on a trackpad and keyboard, pre-install macOS, and be done. No custom design specific to the Mac.

Others will follow, Linux, MS with Windows. Current architectures are dead unless they find new tricks for manufacturing.

None of that helps Apple. The question is how does Apple handle the additional R&D costs of doing their own chips on low volume products. The Mac Pro and iMac Pro are their lowest volume products and the hardest to make up that cost for.

Which again, if Apple has to now sell the Mac Pro at a loss because it's sinking money into custom chip design for the Mac Pro, how is the transition worth it?

Workstation chips are custom designs. Any work that Apple does for the MacBook Pro or the iMac helps with the Mac Pro, but it doesn't get them out of the Mac Pro CPUs having to be a different design and a different line on the fab.

It doesn't even really matter if it's faster. If Apple is losing money on every Mac Pro because it has a custom chip that they had to spend money on to design, Apple will stop selling the Mac Pro.
 
By that logic, AMD users are not professionals.
Let's put it this way: if you need computation power for scientific work, there's no way around Nvidia. Look at all the data centers and cloud providers around the world. I spoke to Oracle in December; they have several data centers around the world for commercial and government applications, as well as what they call edge points of presence. Want to make a guess how many AMD GPUs are in their systems? Hint: starts with "No" and ends with "ne". I have no doubt they have test systems somewhere (I can ask them later this month), but they're not using them in their products. Everything is Nvidia based, mostly V100s, and HGX-2 is coming this quarter. CPU wise... Intel, mostly. They have recently added the AMD 7551 though (2x for 64 cores, with 2x 25GbE interfaces to their network). Like it or not, Nvidia is still the gold standard for the industry and it's up to AMD to change that by providing hardware, software and services at Nvidia's level or even better.
 
Not entirely correct. Radeon 7 supports PCI-e 4.

Got a reference? Just because the core chip die supports it doesn't mean AMD is going to turn it on. Anandtech could have goofed in their reporting (that would be unusual), but it is listed as off there. There is going to be a huge gap between the Vega VII and MI50 in price. AMD is going to point to something (and this is another point upon which they can do some binning).
I don't think that's strictly true. The MacBook Air and the MacBook could probably use the exact same chip that's in the iPad. They don't have to redesign much for those two lines, except maybe throw on a Thunderbolt controller. And by the time they get there, the iPad might already have Thunderbolt anyway.

Doubtful that Thunderbolt is coming to the iPad Pro. Pragmatically, TB means more drivers for a wider set of equipment and also more storage solutions. The Type-C port on the iPad Pro doesn't particularly support an external drive (not even a USB drive yet). So Apple probably isn't bringing random software that needs to load kernel extensions for PCIe card hookups to iOS and the iPad Pro any time soon either.

When they have fleshed out just plain USB 3.1 Gen 2, then maybe they might bring Thunderbolt to the iPad Pro.

The MacBook is a corner case. Honestly, IMHO I can see that just plain flipping over to iOS. They really don't need it much on the macOS side. The new MBA is lightweight enough for almost everyone and has a Retina screen. The MacBook is in "princess and the pea" mode of diversity coverage.


iOS getting an "iBook" is still the best fit for these "Apple going to use ARM in a laptop" rumors.


For the iMac, new Mini, and Mac Pro it doesn't make any sense at all. Even the MBP is an extremely dubious rumor.
 
How about you let us decide what we need, mmmkay?

I run a Mac Pro because it is the ONLY computer Apple sells that can run the software I use in my hobby (3D Art). Most of this is at the hobbyist level, ZBrush being the only expensive part of my graphic pipeline.

Even hobbyist-level software is driven by core counts and RAM, as well as the need for a high-end video card. An iMac or Mac Mini is a non-starter, much less a laptop.
I've never bought a 'home user' Mac in 25 years: not a Performa, iMac, or Mini (though, that said, the 6-core i7 looks very nice indeed and would be perfectly good for most of what I do). Do I need to spend that much? Nope. But these machines stay respectably performing for longer, even without upgrades (which I have done on the four more recent machines). I went from a PM 6100/66 in '95 to a G3/266 tower in '98 to a MDD Dual in '02, to my MPs (1.1 brand new in '06, flashed 4.1 in 2013). If Apple doesn't launch an MP I want to buy at a price I wish to pay, I'll either go to a Mini (assuming it gets updated at some point this year....!!) or for a hack.
Oh, while I'm here, my (utterly guessed) 'predictions'....
AMD CPUs: nope, Apple will be sticking with Xeons. If it were me in charge of Apple's Mac line, I'd have an AMD project running in a quiet corner of Apple Park (in the same way that 'Marklar' was running for some years before the Intel switch); a Plan B is always good....but I'm not, obviously. If/when AMD can do everything Apple wants it to do, they might actually think about shifting, perhaps. If nothing else, to keep Intel on its toes....Nor am I convinced that Apple's going to jump to its own chips. Unless, and until, they can get Intel code running at least as well on those A-chips as the real thing (as with the PPC and Intel transitions) they're not going to make that jump. I think they're more likely in the short term to use their own chips like the T2 as a performance-enhancing & security feature, and a differentiator from every other Intel box.
Speaking of Intel....an option as GPU supplier, perhaps? Nvidia....wish they would, doubt they will.
PCIe slots: hopefully they've got the hint that people would really, really like them....or at least some new proprietary standard that can actually be swapped out. If they at least design the thing so that it's easier for Apple to update, end-user upgrade options will be much better. From a factory/stock-control point of view, I'd have thought modular machines are rather easier than locked-in ones. OK, I might be wrong about this, but for the MacBook Pros, absolutely every possible configuration must be kept in reasonably close stock (for the UK, that'll be in Cork), because if you want your new machine in a week or two (and that seems to be as long as the estimated delivery gets), you don't want to be told it's on a boat from China, or that they haven't made it yet. Whereas with BTO configurations on a machine where everything isn't soldered in, even an iMac/iMac Pro where everything's behind the screen (including that nice safe exposed power supply!), swapping out parts is a rather easier option....
Overall, expecting something midway between the most pessimistic forecasts/guesses and the most optimistic- not as open to the end user as the cheesegrater, but not a Xeon-specced Mac Mini or trashcan MK2 either.
 
Again, I think the problem is the fact that Apple will very soon start to move to proprietary ARM, which is a wise choice in so many respects, especially given how fast the Ax chips are developing.
No, Apple has stated that no desktop will switch to ARM (not yet), but that they will include ARM IP in desktops.

So if they plan to switch to AMD, then they will do so for every machine, MacPro, iMac, MacMini, MBP, etc. Probably not all at once, but step by step. If they stick to their usual updates, they'll probably have this done within a year. And then what, switch to ARM a while later? So AMD would be a temporary "fix"?

An Apple switch to AMD (consider also that AMD could provide tailor-made CPUs/APUs to Apple, including Apple's own IP, and lock-downs) won't come as a block. Even with GPUs, Apple took many years to fully ditch nVidia (selling notebooks on AMD and iMacs with nVidia in 2011, then vice versa in 2012, and from 2014 everything on AMD GPUs).

Consider that Apple hired (poached, or whatever) Jim Keller from AMD to develop the first A-series SoC for the iPhones, then released Keller so he could help develop Zen and Polaris (Apple became one of AMD's top investors then). Doesn't all this back and forth tell you something?
I think Apple has a not-so-secret roadmap to enable a 2nd CPU vendor in the Mac line, helping AMD, and also AMD helping Apple. Don't be surprised if sooner or later Apple buys AMD (as controlling capital, not a merger) and ditches Intel after AMD provides "tweaked" APUs/CPUs: while based on x86-64, they could include special Apple-only instructions, banning Hackintoshes from the Mac ecosystem forever.

This is just a plausible (pray for it to be unlikely) scenario.

But likely we will see some semi-custom AMD silicon for Apple, maybe not locking down macOS but deeply embedding Apple features into the CPU (security enclave, peripheral management, etc.).
 
I find an Apple switch to AMD more believable than them switching to nVidia for default graphics.

But Intel has provided pretty solid graphics with their Iris solution for MacBooks; I can't see them ditching that overnight.

AMD is increasing cores/clocks and shrinking dies much quicker than Intel is, but Intel has better single-thread performance. Time will tell if this is a strategy that Intel can keep up with.

I'm just now starting to use an eGPU with my 2013 (Rx 580/8GB) and so far, so good... I can see eGPU options (via "sidecars") with TB3+ become more real world going forward.
 
The Roman numeral seven in the Radeon VII name is a direct reference to Vega II...

"V" for "Vega" & "II" for "two"...

This is the GPU that we get until mid-2019 & Navi is introduced...

Then Navi will take us to 2020, at which point Arcturus will be the new hotness...
 
I find an Apple switch to AMD more believable than them switching to nVidia for default graphics.

But Intel has provided pretty solid graphics with their Iris solution for MacBooks; I can't see them ditching that overnight.

I don't think it is the graphics that is "saving" Intel. The priority order in which AMD is coming after Intel is server, workstation, HEDT, mainstream, and then laptops. Probably somewhere in the 65-80% range of the Macs Apple sells are in the laptop space. AMD has competed at higher TDPs because they haven't been able to get a significant jump on Intel. Not sure their 7nm move is going to reach that trailing end of the AMD line-up before Intel gets themselves uncorked from their 14nm logjam. Intel is on track to work that list in about the opposite order (low TDP first and then move up).

Where Intel is hurting is in the new mainstream 6-8 core parts, where they are in a zone that the 14nm products were not intended to go.

If Intel can't get their fabrication together then 2019 may be the end of the road for Intel in "new start" Apple Mac projects. If AMD can get to the point where they are shipping a whole line-up that is a better fit for the direction Apple is going, then they'll have a much better chance. Some of Intel's 'win' is that they "do it all" and right now have a more balanced laptop edge. They don't have something knockout in the 4-7W range, but then AMD doesn't either. (That's one reason why the ARM stuff keeps popping up.)

If Intel's discrete GPU is decent then they'll probably slip in as the alternative to AMD. Nvidia could find themselves in 3rd place (well, technically they already are if you toss iGPUs into what Apple buys and deploys now). Nvidia's problem isn't their silicon. It is strategic misalignments (and outright positions opposite to Apple's), software methodology, and "good partner" metrics that are probably mainly keeping them out. If Intel gets a decent dGPU up and running... Nvidia will just be that much further behind, since Intel doesn't have those problems.


AMD is increasing cores/clocks and shrinking dies much quicker than Intel is, but Intel has better single-thread performance. Time will tell if this is a strategy that Intel can keep up with.

Just cranking clocks isn't going to make it in the vast majority of Mac designs if that means more TDP. [Yeah, AMD is somewhat doing that in the GPU space, but Apple probably isn't happy about that. That is a contributing factor to the Mac Pro 2013 being in a corner. If AMD solidly executes their low-midrange 7nm GPU product roll-out without tripping up any Mac ship dates, they'd be digging themselves out of a hole more so than pulling away from the pack.]

Intel's "better single thread" hasn't particularly been a 'strategy'. It is more so what they are left with. Intel hasn't been forgoing other stuff to keep single thread higher. That is simply just the place where they were furtherest ahead before they popped the clutch and mostly stalled. That isn't a strategy. They didn't actively plan to do that.

Intel's dGPU for Apple would probably show up first as a GPU die in a chip package. I think AMD is going to lose any "huge" edge in laptop graphics quicker if Intel doesn't shoot themselves in the foot.


I'm just now starting to use an eGPU with my 2013 (Rx 580/8GB) and so far, so good... I can see eGPU options (via "sidecars") with TB3+ become more real world going forward.

For the rest of the Mac line-up there is more synergy, so eGPU use probably will grow (not to the point where most laptops have them, but significantly more than use them now). For the Mac Pro, Apple would be missing a number of folks if that was the only option.
The Roman numeral seven in the Radeon VII name is a direct reference to Vega II...

"V" for "Vega" & "II" for "two"...

AMD is probably going for the double entendre. It is probably both 7 as in 7nm and "Vega the second".
If it is a singular, 'stop gap' product, all the more so. They don't have to keep the 'V' going forward, nor move to II or III after dropping the V. It won't be a new product branding framework. It would be just something catchy to give this product a name.


This is the GPU that we get until mid-2019 & Navi is introduced...

Technically, Navi may not replace this. Or at least not the instance of the next micro-architecture that is queued up to ship mid-2019. There are a number of indicators that AMD is going to move off of the "one architecture for the whole product line" approach. There's a pretty good chance that what is the foundation of Navi will get a substantive tweak to be turned into the next "data center" targeted "big" GPU for AMD that is the real replacement.

The pressure to put some custom stuff specifically for a "Neural Engine" into the follow-on to Vega II is going to be high. That probably doesn't make sense in the mainstream market (and isn't in Navi). AMD isn't going to start completely over, but the internal bandwidth, memory controller(s), functional unit emphasis, display subsystems, and interconnect focus are probably going to be different.

Navi will be faster than Polaris, which will allow it to cover more ground for folks using the mainstream apps they have now, but it probably isn't going to be a winner in the upper end of the HPC/ML/data center compute space.

Vega VII will probably be around for at least a year.


Then Navi will take us to 2020, at which point Arcturus will be the new hotness...

AMD is going to have new codenames every year for something.
 
Got a reference? Just because the core chip die supports it doesn't mean AMD is going to turn it on. Anandtech could have goofed in their reporting (that would be unusual), but it is listed as off there. There is going to be a huge gap between the Vega VII and MI50 in price. AMD is going to point to something (and this is another point upon which they can do some binning).

You are definitely right, I am sorry. I rushed my answer :oops:. It made sense to me, since they plan to implement PCIe 4 on their new motherboards and since the VII and MI60 have basically the same chip. After all, the price differentiation is a good reason to neuter the consumer card.

No, Apple has stated that no desktop will switch to ARM (not yet), but that they will include ARM IP in desktops.

I wouldn't be sure about that. Apple has said and denied many things and changed its mind within a few years.
For sure the next 2-3 years will see many CISC chips installed alongside ARM RISC ones, but I am pretty sure Apple will move to proprietary APUs on their notebook line, at least. ARM chips are less power hungry, offer higher IPC gains year over year, and give Apple full control over the instruction set; they just make sense, since Apple has been investing a lot to develop the Ax/Tx/Sx/Wx line, and from a company perspective it would not make sense not to leverage that investment and percolate it into the Mac line.

Besides, the ARM chips are making their way aggressively even in the server market. It's a siege! :D

https://www.nextplatform.com/2019/01/08/huawei-jumps-into-the-arm-server-chip-fray/
 
Besides, the ARM chips are making their way aggressively even in the server market. It's a siege!
ARM is big in the server and mobile markets due to its efficiency. The challenge of reaching current (actually 2017) x86 IPC is targeted for 2020; to do this, ARM either has to give up some power-saving constraints or develop a new approach to scaling up thread processing without burning more watts. Not easy; let's see.

Note that the RISC/CISC differentiation is vanishing between ARM and x86. Intel has adopted parts of RISC with its use of micro-ops, and ARM has grown increasingly complex with each new Cortex version (and less efficient; that's why you see mobile devices mix the Cortex-A53 with the A72: the former is efficient, the latter is faster, and you can't get both that easily).
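To illustrate what "micro-ops" means here, a toy Python sketch of how a decoder might crack a CISC-style read-modify-write instruction into RISC-like internal operations. Purely illustrative; real x86 decoders and their uop encodings are far more involved.

```python
# Toy CISC -> micro-op "cracking". Simplified for illustration only.
CRACK_TABLE = {
    # "add [mem], reg" (read-modify-write) becomes load + ALU op + store
    ("add", "mem", "reg"): ["load t0, [mem]",
                            "add t0, t0, reg",
                            "store [mem], t0"],
    # register-register ops are already RISC-like: a single micro-op
    ("add", "reg", "reg"): ["add dst, src1, src2"],
}

def decode(opcode, dst_kind, src_kind):
    """Return the micro-op sequence for one architectural instruction."""
    return CRACK_TABLE[(opcode, dst_kind, src_kind)]

print(decode("add", "mem", "reg"))  # 3 micro-ops behind 1 x86 instruction
print(decode("add", "reg", "reg"))  # 1 micro-op: effectively RISC inside
```

The back-end of the core only ever sees the simple micro-ops, which is why "x86 is CISC, ARM is RISC" says less and less about the actual hardware.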

The real challenge for ARM is not to reach some past x86 generation's IPC; it is to beat the current generation of x86. The same way ARM is evolving, x86 is too. A major evolution is planned for the 2020-22 timeframe, when Intel plans to ditch the native implementation of all legacy x86 instructions (but keep it emulated) in favor of a new (presumably WISC) instruction set, much faster and more efficient. WISC, in theory, well implemented (it would be a very expensive engineering effort given its intrinsic complexity), should deliver both groundbreaking efficiency and faster single-thread performance than ever seen before. OK, that's just theory; previous WISC developments failed at some point (like Itanium), and a new Mac or PC based on this all-new WISC architecture would need its compilers rewritten, and some of its high-level source code too, to take advantage of WISC. There are also new developments needed to properly integrate non-volatile RAM into the Von Neumann architecture, or to just replace main mass storage and switch everything possible to the new non-volatile RAM (call it MRAM, Optane DIMMs, etc.); the latter implies a whole new concept of compute device, one bridging between Von Neumann and a pure neural network (hyper-parallelism).

Meanwhile, the true future of the PC as the main compute device is also in question, as the proposed Heterogeneous System Architecture could mean that in the future you'll have a mobile smart device like an iPad, a phablet, or a VR headset with relatively low-power processors, linked to an external box (or cluster) providing extra processing power when required to do some heavy task like AI, rendering, transcoding, etc. I think Apple is also quietly betting in this direction (right now you can buy a bunch of Mac minis and stack them so their transcoding or compile power can be delegated to a MacBook or iMac; google xGrid).
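A minimal sketch of that offload pattern, in Python. A process pool stands in for the external box or Mac mini stack; a real xGrid-style setup would dispatch the same jobs over the network, and `transcode` here is just a placeholder for any heavy task.

```python
# Sketch of front-end device + compute-cluster offload. The process pool
# is a local stand-in for remote workers (e.g. a stack of Mac minis).
from concurrent.futures import ProcessPoolExecutor

def transcode(clip: str) -> str:
    # Placeholder for a heavy job: transcode, render, AI inference...
    return f"{clip}: done"

if __name__ == "__main__":
    clips = [f"clip{i}.mov" for i in range(8)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(transcode, clips):  # jobs farmed out in parallel
            print(result)
```

The lightweight device only orchestrates; the heavy lifting happens wherever the workers live.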

At this point the future of the x86 architecture, whether for Apple or the PC, is as uncertain as can be. I can't draw it now, but I'll sure be excited to see it arrive.
 