Yeah... about that.

Let's review software development cycles, shall we?

Most software is on an 18-24 month design cycle.

Let's assume that every software house dumps its current Mac development process so it can restart the moment it gets its hands on an ARM Mac. (That won't happen, but I am giving the best-case scenario. Software houses will conduct an analysis to see whether the uptick in sales justifies the development costs, and every last one of them will remember the hoops they had to jump through and factor in the possibility that Apple may sabotage them like it did with Carbon-64.)

Version 1 software is anywhere from 18-24 months out (delivery dates: late 2020 to late 2021). This will be a straight port with no new features. Have fun being a beta tester for every piece of software you "upgrade" to.

Version 2 - the first truly native version, with new features (delivery dates: mid 2021 to late 2023). All for a shrinking market.

Sure, a lot of the apps are already on iOS. They are fundamentally running on the same source code, just compiled for different targets (which Xcode and other SDEs coincidentally already do). All modern Macs (except the iMac 5K and Mac Pro) already have code running on RISC: the T2 coprocessor. I'm not claiming it happens overnight or even over the next few years, but the groundwork has been laid.
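To make the "same source, different targets" point concrete, here is a minimal Swift sketch (my own toy example, not from any shipping app) of one file that builds unchanged for Intel or ARM; the compiler keeps whichever branch matches the architecture you target:

```swift
// universal.swift - one source file; the build target decides which
// branch survives compilation, with no per-platform fork of the code.
#if arch(x86_64)
let target = "Intel (x86_64)"
#elseif arch(arm64)
let target = "ARM (arm64)"
#else
let target = "some other architecture"
#endif

print("Compiled for \(target)")
```

Build it with `swiftc universal.swift` on a Mac, or hand `swiftc` a different `-target` triple to cross-compile; Xcode does the equivalent automatically when you pick a device target.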

Besides, I am already a beta tester; just look at the state of Adobe Creative Cloud :p
 
That did happen for Nvidia GPUs in 2008 MacBook Pros, but since then the following has also happened:
  • The 2011 MacBook Pro GPU failures that resulted in an extended warranty campaign were AMD GPUs.
  • The 2013 Mac Pro GPU failures that resulted in an extended warranty campaign were AMD GPUs.
So if they are upset about Nvidia meltdowns from 10 years ago, wouldn't it follow that they'd also be upset about the two AMD meltdowns, both of which are more recent?
And no Nvidia GPUs between those lines failed in Apple computers, eh?
 
One reason for Apple to pick AMD over Intel: semi-custom designs, a very competitive roadmap, and reliability, compared to Intel.

And the latest speculation says that AMD will have more 7 nm capacity available to them at TSMC than they had from GlobalFoundries under the WSA. Rightly so, because Rome is one hell of a design.

That requires a short and highly selective memory. Since AMD's CPU lines emerged from their most recent dark age, how many iterations have they gone through again? TWO. That's the minimum required to draw a line. Let's not get too confident in our extrapolations from that. Rumors say the next iteration will be good, but until we see it, well, I'm not counting on anything. And on the GPU side, they haven't exactly delivered lately, have they?

Intel certainly has its problems, but of those two companies, it's the one that has been able to push out improved products reliably for more than two years. Some might quibble about just how improved they are, but AMD went limp for years before finally just catching up to Intel. I'd be worried that could return at any moment. As soon as Intel figures out whatever problems they are facing on 10 nm, they have the resources to simply bury AMD.
 
And no Nvidia GPUs between those lines failed in Apple computers, eh?
You should tell us which ones failed, not just post a snarky question with a vague innuendo.

(And "failure" should be widespread failures, particularly those that trigger an extended warranty campaign. A few random failures aren't significant.)
 
Have you seen the A series running at equivalent wattage with non-passive cooling systems?

How is that "slap a bigger cooling system on it" and "run the clocks higher" working for AMD versus Nvidia? *cough* Extremely likely Apple has the same non-linear power envelope for a given design instance as everybody else has. All these design implementations involve making trade-offs , throughput versus power efficiency versus high clock / high power. Nothing particularly wins on all dimensions at the same time.
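For what it's worth, the standard first-order model behind that trade-off (a textbook approximation, not anyone's actual silicon numbers) is the CMOS dynamic power equation:

```latex
% Dynamic (switching) power of CMOS logic: activity factor alpha,
% switched capacitance C, supply voltage V, clock frequency f.
P_{\mathrm{dyn}} \approx \alpha \, C \, V^{2} f
```

Since holding a higher clock f generally requires raising V as well, power climbs far faster than linearly as you push frequency, which is exactly why "run the clocks higher" gets expensive for AMD, Nvidia, and Apple alike.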
 
You should tell us which ones failed, not just post a snarky question with a vague innuendo.

(And "failure" should be widespread failures, particularly those that trigger an extended warranty campaign. A few random failures aren't significant.)
GeForce 9600M GT - GPUs failed.
GeForce GT 330M - GPUs failed.
GeForce 8600M GT - GPUs failed.
GeForce GT 650M from Retina MBPs (funnily enough, non-Retina models were unaffected; no idea why) - GPUs failed.

What do those GPUs have in common? Oh yeah. They are Nvidia GPUs. All of those models were failing, and required replacement of the motherboards.

How is that "slap a bigger cooling system on it" and "run the clocks higher" working for AMD versus Nvidia? *cough* Extremely likely Apple has the same non-linear power envelope for a given design instance as everybody else has. All these design implementations involve making trade-offs , throughput versus power efficiency versus high clock / high power. Nothing particularly wins on all dimensions at the same time.
Have you actually used AMD GPUs for compute before posting something like this? Or are you basing your opinion only on gaming benchmarks, like 95% of this forum?

To answer fuchsdh's question: an Apple SoC can run at a "relatively" normal voltage. But the current available to it is on the low side. We are talking about 3, maybe 4 amps at 1.0 V on the iPhone, and maybe 6 amps for the iPad.

Now compare that to the desktop: the Core i9-9900K has an unrestrained power limit of 140-150 W of draw, if the power delivery on the motherboard is sufficient, at 1.00 V.

That means you have to feed the CPU roughly 140 amps of current in this scenario!
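That figure is just Ohm's law applied to the numbers above (taking the quoted 1.0 V and 140 W as ballpark figures, not measurements):

```latex
% Desktop CPU: current needed at 1.0 V to sustain 140 W
I = \frac{P}{V} = \frac{140\ \mathrm{W}}{1.0\ \mathrm{V}} = 140\ \mathrm{A}
% iPhone SoC, for comparison: roughly 4 A at 1.0 V
P \approx 4\ \mathrm{A} \times 1.0\ \mathrm{V} = 4\ \mathrm{W}
```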

Can Apple design their SoCs to handle this kind of power? They can. But will it actually be efficient? They will hit a wall, and they will most likely see diminishing returns with the ARM architecture.

There is a very good reason why ARM in the server/data-center space is still... baby toys.
 
It'll be very interesting to see how Apple will allow us to upgrade the machines. I don't see how any "Pro" would buy an overpriced machine that you have to bring to an Apple Store if you want to, say, swap the boot drive.

Hmm, it's worse than I thought. Apple has admitted that the T2 security chip DOES prevent 'some' third-party repairs, but they are being very, very cagey about which repairs they limit; you can bet it includes the motherboard.
So in effect Apple will now fully control the lifespan of your computer; it's repairable by them until they deem it vintage.

Story’s here:

http://9to5mac.com/2018/11/12/apple-t2-third-party-repairs/

Might want to think about this before spending several thousand on a Mac Pro. I don't think it would stop me getting a Mac mini, though. But you will have to make sure you back up!
 
GeForce 8600M GT - GPUs failed.
No credit.

This is the 2008 MacBook Pro model that ActionableMango mentioned. This was clearly a major screwup, due to the change to RoHS solder. I had my Dell Latitude fail about an hour and a half after leaving San Francisco for Hong Kong due to this chip. (Fortunately, I was carrying a second laptop as a backup.)

GeForce 9600M GT - GPUs failed.
Since the solder issue was one that showed up after many, many heat cycles, some of the follow-on 9600M chips had the same problem. Nowhere near as serious as the 8600M problem.

GeForce GT 330M - GPUs failed.
GeForce GT 650M from Retina MBPs (funnily enough, non-Retina models were unaffected; no idea why) - GPUs failed.
Can you cite links to show that these failures were common? I saw some random reports of failures, but I didn't see anything like the huge numbers of 8600M failures.

(Also interesting is that the 8600M failures hit every company using that chip. If 330M/650M failures are Apple-only, then it could be an Apple motherboard design issue, rather than an Nvidia flaw.)

And, we're still in the cluster-fork of the disastrous faux ATI FirePro GPUs in the MP6,1.
 
No credit.

This is the 2008 MacBook Pro model that ActionableMango mentioned. This was clearly a major screwup, due to the change to RoHS solder. I had my Dell Latitude fail about an hour and a half after leaving San Francisco for Hong Kong due to this chip. (Fortunately, I was carrying a second laptop as a backup.)


Since the solder issue was one that showed up after many, many heat cycles, some of the follow-on 9600M chips had the same problem. Nowhere near as serious as the 8600M problem.


Can you cite links to show that these failures were common? I saw some random reports of failures, but I didn't see anything like the huge numbers of 8600M failures.

(Also interesting is that the 8600M failures hit every company using that chip. If 330M/650M failures are Apple-only, then it could be an Apple motherboard design issue, rather than an Nvidia flaw.)

And, we're still in the cluster-fork of the disastrous faux ATI FirePro GPUs in the MP6,1.

The 330M was. Mine failed, and the Apple Store had to repair it for free despite it being way out of warranty, because it was faulty at the point of original purchase. They had a big repair programme for those chips.
But I’m sure they had issues with AMD as well??
 
Sure, a lot of the apps are already on iOS. They are fundamentally running on the same source code, just compiled for different targets (which Xcode and other SDEs coincidentally already do). All modern Macs (except the iMac 5K and Mac Pro) already have code running on RISC: the T2 coprocessor. I'm not claiming it happens overnight or even over the next few years, but the groundwork has been laid.

Besides, I am already a beta tester; just look at the state of Adobe Creative Cloud :p


Some of us run real applications on our computers - none of my mission-critical apps are on iOS.

I lived through the PPC-to-Intel transition - I am NOT going through that goat rope again.
 
Hmm, it's worse than I thought. Apple has admitted that the T2 security chip DOES prevent 'some' third-party repairs, but they are being very, very cagey about which repairs they limit; you can bet it includes the motherboard.
So in effect Apple will now fully control the lifespan of your computer; it's repairable by them until they deem it vintage.

Story’s here:

http://9to5mac.com/2018/11/12/apple-t2-third-party-repairs/

Might want to think about this before spending several thousand on a Mac Pro. I don't think it would stop me getting a Mac mini, though. But you will have to make sure you back up!

As someone who hasn't had any hardware issues with desktop Macs beyond bad third-party RAM, I don't find it a major concern. I'd be more concerned about the laptops, as they take much more of a beating.
 
...and continue to have issues with ATI GPUs.

In one part you claim it is related to Apple's design, not Nvidia's, and in another you suggest it's because of rubbish AMD GPU design.

Hmmmm. Interesting ;).

If you say AMD GPUs, say the FirePro GPUs in the Mac Pro. There are no reports of GPU failures in iMacs, nor of the Polaris GPUs in MBPs failing.
Since the solder issue was one that showed up after many, many heat cycles, some of the follow-on 9600M chips had the same problem. Nowhere near as serious as the 8600M problem.
Or Nvidia deliberately delivered faulty GPUs to Apple. Who knows? Maybe that was the reason why the two parties parted ways?

;)
 
In one part you claim it is related to Apple's design, not Nvidia's, and in another you suggest it's because of rubbish AMD GPU design.
Another failure in reading comprehension.

I said "If 330M/650M failures are Apple-only, then it could be an Apple motherboard design issue, rather than an Nvidia flaw".

In other words - if 330M/650M GPUs are failing everywhere, it's likely to be an Nvidia issue. If the 330M/650M are only failing on Apple motherboards, then it could be an Apple issue. (The 8600M GPUs failed everywhere - an Nvidia issue.)

But almost everyone will agree that the implementation of the faux FirePro GPUs on the MP6,1 is a rubbish design - without specifically putting the blame on ATI for "rubbish" chips (I never used that word - why are you so defensive?) or on Apple for "rubbish" system design and thermals.
 
There is nothing technical about why Apple keeps AMD GPUs in their Macs. AMD is simply more willing to cater to Apple's demands than Nvidia is. Nvidia's current business focus doesn't really need Apple to be part of it.

Whenever Intel comes out with their new discrete graphics solution sometime in 2020, they are probably going to lobby Apple hard for it, even if they barely make any profit out of it.
 
Is that a clever way of discounting failures in MBPs with Pitcairn, Hawaii, Fiji, Tonga and other chips?

Hi Aiden! I like your avatar... and it may well be true that the Mac Pro will be dead anyway. I don't think it will be spectacular; just look at the new Mac mini - Apple made it hard for people to upgrade simple memory. I sure hope that it won't be "Mac Pro 2006-2019". But with my 6-core 2010/2012 and a Metal card, I am set for a few more years. Really, I think we are seeing the end of the Mac in terms of upgradability.
 
Hmm, it's worse than I thought. Apple has admitted that the T2 security chip DOES prevent 'some' third-party repairs, but they are being very, very cagey about which repairs they limit; you can bet it includes the motherboard.
So in effect Apple will now fully control the lifespan of your computer; it's repairable by them until they deem it vintage.

Story’s here:

http://9to5mac.com/2018/11/12/apple-t2-third-party-repairs/

Might want to think about this before spending several thousand on a Mac Pro. I don't think it would stop me getting a Mac mini, though. But you will have to make sure you back up!
If the Mac Pro has a T2 chip or anything close to it, we can kiss goodbye to any hope of internal modularity. I don't mind hardware control when the device's concept calls for it, such as on an iPod or an Apple Watch, where it is single-purpose and self-contained. I question taking the same approach on a professional computer, where versatility is a feature. And on the iMac Pro and 2018 MBP, the T2 has been creating unnecessary hardware issues like kernel panics as well.

Seriously, how does Apple hope to move the future of the Mac forward with this mentality? If ARM Macs don't deliver a dramatic increase in performance per watt over x86, then who is going to buy them? And even devices that do have power, like the recent iPad Pro, are obviously hindered by running a sandboxed OS with a restricted file system. Please, Apple, can you unstick yourself before asking us for money every other quarter?
 
https://support.apple.com/kb/index?...support&type=organic&page=search&locale=en_US

"Your search returned no matches."​


...and continue to have issues with ATI GPUs.

Apple has removed the repair programme from their website, as the machines are no longer covered for the fault. It was a well-known issue, and they did have a repair programme.

http://discussions.apple.com/thread/3191083
If the Mac Pro has a T2 chip or anything close to it, we can kiss goodbye to any hope of internal modularity. I don't mind hardware control when the device's concept calls for it, such as on an iPod or an Apple Watch, where it is single-purpose and self-contained. I question taking the same approach on a professional computer, where versatility is a feature. And on the iMac Pro and 2018 MBP, the T2 has been creating unnecessary hardware issues like kernel panics as well.

Seriously, how does Apple hope to move the future of the Mac forward with this mentality? If ARM Macs don't deliver a dramatic increase in performance per watt over x86, then who is going to buy them? And even devices that do have power, like the recent iPad Pro, are obviously hindered by running a sandboxed OS with a restricted file system. Please, Apple, can you unstick yourself before asking us for money every other quarter?

That's the thing: I'm thinking their idea of modular is bolt-on units, not simply replacing the graphics card. We will have to see.
Apple has fought a huge war for years against others trying to repair their computers or iPhones, and it seems like the T2 is their way to stop it. Planned obsolescence in a multi-thousand-dollar machine?
 
Is that a clever way of discounting failures in MBPs with Pitcairn, Hawaii, Fiji, Tonga and other chips?

And the iMac's X1600, HD 4670, and HD 6970 had a lot of failures too...
We have all been through GPU failures with MPs, iMacs and MBPs.

For example:
"Apple has determined that some AMD Radeon HD 6970M video cards used in 27-inch iMac computers with 3.1GHz quad-core Intel Core i5 or 3.4GHz quad-core Intel Core i7 processors may fail, causing the computer’s display to appear distorted, white or blue with vertical lines, or to turn black. iMac computers with affected video cards were sold between May 2011 and October 2012."
https://9to5mac.com/2013/08/16/appl...-replacement-program-for-some-mid-2011-imacs/
 
Which puts Apple more than slightly ahead of the competition on AR. And the primary development system for iPhone X apps is... Macs.

Actually, the software I'm talking about comes from a Windows shop (they're a VR studio). They don't use Macs to develop their iPhone apps, and they don't distribute via the App Store. The iPhone, ARKit, and iOS are a dumb pipe to a face scanner, and their main point in explaining it was: "this technology will be everywhere very soon, and you'll be able to use any cheap phone to do it; you're not going to need an iPhone for this very soon."

To bring this back to the Mac Pro: if Apple doesn't put an empty slot in the next Mac Pro, then they'd have a problem in the VR space. However, of the two, AR is a substantially bigger market than VR. Even for AR it would still be better if there were an empty slot, but it isn't necessarily the end of the world if there isn't. Anything "even remotely near a future Vega 64" should be able to fit in a next Mac Pro without any great difficulty. (There is a Vega 64 in the iMac Pro, so a Mac Pro with a wider thermal envelope shouldn't be a problem at all.)

A Mac Pro with a single Vega 64 will be yet another Mac Pro with old and laughably underpowered graphics. That's not a good thing.

Being good at AR but mediocre at VR - what would be the point of that? Why would you buy a Mac Pro to do something that a mobile device can do? The way you talk about the Mac Pro - as some frankenbeast with soldered graphics plus PCI slots, but only for non-display cards - that's just another step on the "nuts & gum, together at last" 2013 / iMac Pro road.
 
That's the thing: I'm thinking their idea of modular is bolt-on units, not simply replacing the graphics card. We will have to see.
Apple has fought a huge war for years against others trying to repair their computers or iPhones, and it seems like the T2 is their way to stop it. Planned obsolescence in a multi-thousand-dollar machine?
I just think the void has gone on way too long. If they are offering a sharp paradigm shift, where a new approach to computing can deliver increased productivity without the constraints of current PCs (including Macs), they need to show it. Now.

This goes back to before the 2017 round table; we all wondered why they didn't just walk out of the workstation and pro laptop business silently. What they are offering and what they say they will offer are so disjoint, with bad value both as an investment and as a tool, and they keep raising prices while giving quite little. For instance, I honestly think it may have been a better idea to ask all mouse and keyboard users to leave so Apple could start from scratch for the swiping generation, just like how they shoved FCP7 users toward FCPX and axed Shake and Aperture. No need to worry about the x86 transition, interfacing analogy differences, an open file system, anything. Hell, even disposable devices on a subscription model, just hardware to run their "Services" on. They have already got the aluminium recycling part dealt with.
 
Another failure in reading comprehension.

I said "If 330M/650M failures are Apple-only, then it could be an Apple motherboard design issue, rather than an Nvidia flaw".

In other words - if 330M/650M GPUs are failing everywhere, it's likely to be an Nvidia issue. If the 330M/650M are only failing on Apple motherboards, then it could be an Apple issue. (The 8600M GPUs failed everywhere - an Nvidia issue.)

But almost everyone will agree that the implementation of the faux FirePro GPUs on the MP6,1 is a rubbish design - without specifically putting the blame on ATI for "rubbish" chips (I never used that word - why are you so defensive?) or on Apple for "rubbish" system design and thermals.
You call me a "failure in reading comprehension" and yet you omitted the winky face at the end of that paragraph? ;)
 
I heard that these compiler things can take your source code and spit out binaries for different systems. If only Adobe, Microsoft, et al. could do stuff like that, maybe they could port programs to run on different hardware. I guess they would have to have compiled code - not assembler - and with foresight structure it for porting to different systems, which would probably be beyond the state of current technology. Oh well, we can dream of the future.
The sarcasm is strong in this one.....
 
That did happen for Nvidia GPUs in 2008 MacBook Pros, but since then the following has also happened:
  • The 2011 MacBook Pro GPU failures that resulted in an extended warranty campaign were AMD GPUs.
  • The 2013 Mac Pro GPU failures that resulted in an extended warranty campaign were AMD GPUs.
So if they are upset about Nvidia meltdowns from 10 years ago, wouldn't it follow that they'd also be upset about the two AMD meltdowns, both of which are more recent?


Were those Apple designs or AMD? If I recall correctly, the 2010 MBP pruned some ventilation off the case, and 2011 was the first iteration after that pruning. Plus, like the 2013 Mac Pro, the CPU and GPU share a unified thermal system. They are hooked to each other thermally. The GPU is downstream from the CPU on the way to the fans, so the thermal channel is thoroughly heated before it gets to the GPU.


[iFixit teardown photo - Step 9]

iFixit Step 9, above. Step 10 says "Holy thermal paste! Time will tell if the gobs of thermal paste applied to the CPU and GPU will cause overheating issues down the road".

Additionally, it is not so much being upset at Nvidia because of the failure, but because of Nvidia's unwillingness to pay. Apple doesn't want to eat the cost if they don't have to. There is a difference between Nvidia/AMD getting the system design requirement specs wrong and Nvidia/AMD not wanting to pay when they screwed up. Periodically, people and vendors make mistakes. The latter, though, can get you put on a "don't call or work with" list. Messing with Apple's money isn't going to get you on their "Christmas card" list. (Same with the scheme of suing every major cellphone maker out there to make some extra cash: messing with Apple's money.)


In contrast, if AMD paid for the fixes themselves (when it was mostly their screw-up), or let Apple pay pretty close to "at cost" to fix their screw-up, then they'd have a different kind of relationship.
 