I noticed last night that my nMP can't just run the standard AMD Windows drivers. If you try to install them, they usually won't install.

Typically, all workstation graphics cards (FirePro, Quadro) use specialized drivers. That's one of the main differences from the desktop PC versions.

If what MVC said is true and you can't use the standard AMD FirePro drivers from AMD's website when running the MP in Windows, then you lose the specialized drivers. Last time I looked there were 4-7 different Windows drivers depending on what your use case was.
 
So much incorrect information in this thread, I'll attempt to address the worst offenders.

And those old AMD GPUs in the Mac Pro are much faster in Final Cut Pro X than the Nvidia GPUs from the same segment.

If Grenada were in the Mac Pro, it would wipe out the Titan X in the same application, purely due to the Asynchronous Compute that FCPX uses.

GM204 came out a little after the new Mac Pro. Apple decided to stick with a GPU from 2011 instead. You're delusional if you think the 7970 can compete with a GTX 980 in terms of raw power, let alone perf per watt.

Are you absolutely 100% sure that FCPX is using AMD's asynchronous compute? It's unclear how that feature will actually help in a workload where you're processing entire video frames as quickly as possible. I'm pretty sure async compute helps when you have a small amount of compute work that can be done in parallel with a bunch of graphics work etc (i.e. a typical game scenario).

I get the feeling that you're drawing these conclusions without any real evidence; instead you're looking at AMD's spec sheet and suggesting it must be because they have feature X or Y. There's obviously no question that Apple chose AMD for the new Mac Pro, but perhaps the reason is simply that AMD had zero hardware products with Apple at the time and this was their first product in nearly 2 years, i.e. the start of the pendulum swinging back in the other direction.

It means that the R9 280X is around 2 times faster (at the moment) in FCPX than the GTX 970.

And the R9 280X uses the old Tahiti core that's in the nMP's D700.

Congrats to NVIDIA for updating the drivers for FCPX. The problem is that the GTX 980 Ti is still eaten in FCPX by a dual Tahiti config, regardless of whether we are talking about a cMP, a Hackintosh or a nMP.

Oh no, a single card is struggling to keep up with a dual GPU config on a system with massively fast storage (which is arguably making more of an impact on FCPX). Maybe if we had a fair comparison with 2 GM204s or 2 GM200s the results would look very different.

Can you link specific results that show the R9 280X being twice as fast as a GTX 970 with the latest web drivers? I haven't seen anything like that.

P.S. Guys, don't say that AMD GPUs are hot. The iMac 5K's GPU would go to 105 degrees regardless of whether there were a 125W AMD GPU or an Nvidia one inside. It's the design of the computer that defines how hot the GPU gets. The brand of the GPU has nothing to do with it.

And here's the icing on the cake that really suggests you don't understand what you're talking about. A GM204 or GM206 would not be running at 105 degrees, even when driving a 5K display. The Maxwell architecture is twice as power efficient as the Kepler generation, and that was still way better than AMD's architecture (and AMD hasn't improved at all in this area over the last few years). A more power efficient GPU will run cooler than one from a 250W product that's been squeezed into the riMac's power budget.
 
No, a single R9 280X is 2 times faster than a single GTX 970. Look at the benchmarks; you can find them on Hackintosh forums. They are there.

And here's the icing on the cake that really suggests you don't understand what you're talking about. A GM204 or GM206 would not be running at 105 degrees, even when driving a 5K display. The Maxwell architecture is twice as power efficient as the Kepler generation, and that was still way better than AMD's architecture (and AMD hasn't improved at all in this area over the last few years). A more power efficient GPU will run cooler than one from a 250W product that's been squeezed into the riMac's power budget.
I'm sorry, but you are assuming that the M295X is constantly idling at 105 degrees. No, it isn't. It goes up to 105 under load. In the same circumstances an Nvidia GPU would also reach that temperature in that computer design and within that power envelope, or it would simply downclock itself to stay within the power envelope. That's how they achieve the mythical power efficiency.

I will not argue about the rest, you have your opinion.
 
No, a single R9 280X is 2 times faster than a single GTX 970. Look at the benchmarks; you can find them on Hackintosh forums. They are there.

Right, my point is that I have been looking, and can't find examples of what you're talking about. If you know of specific examples, can you please post links? Thanks.

The 2 GPU thing was in response to your "GTX 980 Ti is eaten by Dual Tahiti config" which I took to mean the 2 D700s in the new Mac Pro, is that not what you were talking about?
 
No, a dual D500 config, which is based on the Tahiti chip. The D700s do even better (not by a huge margin, but better) in this small benchmark.

I will not promote a Hackintosh forum on a Mac forum.
 
every single site / repair guide talks of how easy & friendly the mac pro is to work on compared to anything else apple is making.. (as well as easier than cmp)

Ar? Replacing a CPU / GPU / HDD... is easier on the nMP than the cMP??? Are you sure?????
 
So much incorrect information in this thread, I'll attempt to address the worst offenders.
Let's see.
Are you absolutely 100% sure that FCPX is using AMD's asynchronous compute? It's unclear how that feature will actually help in a workload where you're processing entire video frames as quickly as possible. I'm pretty sure async compute helps when you have a small amount of compute work that can be done in parallel with a bunch of graphics work etc (i.e. a typical game scenario).
What exactly is different between 'typical game scenario' and 'processing entire video frames', such that games can work in parallel and video can't?
Video processing is one of the fields that benefits from massive parallelism, like, say, a server farm. When I look into my list of apps, I have this little companion app called 'Compressor', which lets you build your own server farm for FCPX. That's what I call massive parallelism. So why the heck shouldn't FCPX make use of the hardware parallelism that AMD hardware supports?
Even Microsoft understands the signs of the times, and DX12 supports multiple processing queues. Lo and behold, AMD cards are getting ahead of NVidia cards in DX12 benchmarks. Please read http://arstechnica.com/gaming/2015/...ly-win-for-amd-and-disappointment-for-nvidia/.
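
Just to make the "massive parallelism" point concrete: in GPU video processing it normally means one kernel launched over every pixel of a frame. Here's a minimal, purely hypothetical OpenCL sketch (nothing like FCPX's real code; the 'invert' kernel is just a placeholder) showing a single enqueue covering an entire fake 1080p frame:

Code:
/* Hypothetical sketch, not FCPX code: the "massive parallelism" of GPU
 * video processing is one kernel instance launched over every pixel of
 * a frame -- here a trivial invert over a fake 1080p RGBA frame.        */
#include <stdio.h>
#include <stdlib.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

static const char *src =
    "__kernel void invert(__global uchar *bytes) {\n"
    "    size_t i = get_global_id(0);\n"
    "    bytes[i] = (uchar)(255 - bytes[i]);\n"
    "}\n";

int main(void) {
    size_t w = 1920, h = 1080, nbytes = w * h * 4;   /* RGBA */
    unsigned char *frame = calloc(nbytes, 1);        /* stand-in for a decoded frame */

    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "invert", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                nbytes, frame, NULL);
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);

    /* One enqueue covers the whole frame: every byte is its own work-item. */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &nbytes, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, nbytes, frame, 0, NULL, NULL);
    printf("first byte after invert: %d\n", frame[0]);

    clReleaseMemObject(buf); clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx); free(frame);
    return 0;
}

Every byte of the frame becomes its own work-item, so a single kernel launch already saturates the shader cores without any extra queues.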
 
you using amazon as an example of what i said shows that you're just reaching.
you can buy those things at walmart.. you can buy them at a grocery store..


it's 2015.. not 1970.
that a screw was rare and named 'security' 40 years ago doesn't mean much today..
50 years ago, the torx head was a 'security' screwhead.. times change.. people realized torx head offers plenty more advantages beyond security and the type has become standard.

(sidenote- in the same way i don't think you know what 'user friendly' means.. i also don't think you know what 'standard' means)

torx has another version of the screw that's been used as their security head since the 90s(?).. it's non-standard and you don't walk into any old store to buy the tools to use them..
likewise, and i don't understand why you don't recognize this point-- apple themselves have designed a security screw.. it's used on nearly every one of their products.. not on mac pros


get off whatever it is you're on.. every single site / repair guide talks of how easy & friendly the mac pro is to work on compared to anything else apple is making.. (as well as easier than cmp)
why are you continuing to fight against that?
why so hard to realize or admit to yourself "hmm. maybe it isn't so hard to work on" ??
instead, you're continuing to argue that this thing is on lockdown by the evil overseers.
it's just nuts.



ridiculous.. bubba said in #28:
"Content creators want the latest and greatest Quadros for CUDA and the Mercury engine"

it's either a straight lie.
or he's sipping on your koolaid is all.
all your 'proof' about gpu this&that comes from gaming benchmarks.. you should really make that more clear because it could be seen as misleading the public as to why they should consider buying gpus from you.


lol.
if you had any real evidence, you'd spray it in a heartbeat.. quit kidding yourself.


real rich dude.
you've been told countless times by members here how rude you are.. do you just ignore those posts?




when the nmp came out many people were furious. Why? Because apple put a freaking access latch on the shell and made it incredibly easy for people to get inside.. /s

what apple did to the mini is irrelevant unless you're talking about all the other apple products as well.
mac pro
get off whatever it is you're on.. every single site / repair guide talks of how easy & friendly the mac pro is to work on compared to anything else apple is making.. (as well as easier than cmp)
why are you continuing to fight against that?
why so hard to realize or admit to yourself "hmm. maybe it isn't so hard to work on" ??
instead, you're continuing to argue that this thing is on lockdown by the evil overseers.
it's just nuts.
Ar? Replacing a CPU / GPU / HDD... is easier on the nMP than the cMP??? Are you sure?????

Two completely different issues here. I completely agree that Torx screws and drivers are quite easy to come by these days, and the Torx screws on the nMP are not really an obstacle to opening it up. Sure, most people don't have a Torx driver lying around like they might have a Phillips or flathead, but one can easily be had with a trip to the hardware store. Problem solved.

That being said, what is there to do once you have cracked the case on a nMP other than to sightsee? Unless you are buying expensive, proprietary component pulls off eBay ($2000 D700 anyone?) you won't be able to make many meaningful changes. In every measurable sense the nMP has been "locked down" relative to the cMP. I don't begrudge anyone who has a nMP and likes it since it is a capable enough machine, but let's at least call a spade a spade.
 
Ar? Replacing a CPU / GPU / HDD... is easier on the nMP than the cMP??? Are you sure?????
well the drive is easier for sure. the ram is probably equally easy though fewer steps since you don't remove the board.. direct access to the sticks themselves.
cpu- not sure if it's easier than cmp. but it doesn't look too difficult on nmp.
gpu is harder on nmp as you'll have to apply paste upon a swap..


I really don't understand the resistance here.. the thing is (relatively) easy to work on using standard tools.

I've repeated myself a dozen times and I'd rather not do 13.. ifixit has user comments so if you insist I'm wrong and feel the need to argue about it, go argue with them instead.

it's user serviceable without gymnastics.. it's obvious to me and it's obvious to service techs.


mvc- I get it that your goal is to make me look like an ass to people here and you've succeeded. not because of your arguments.. simply because I keep arguing with you about this. I'm over it now.

(I'll still argue you about stuff. just not this) ;)

 
What exactly is different between 'typical game scenario' and 'processing entire video frames', such that games can work in parallel and video can't?
Video processing is one of the fields that benefits from massive parallelism, like, say, a server farm. When I look into my list of apps, I have this little companion app called 'Compressor', which lets you build your own server farm for FCPX. That's what I call massive parallelism. So why the heck shouldn't FCPX make use of the hardware parallelism that AMD hardware supports?
Even Microsoft understands the signs of the times, and DX12 supports multiple processing queues. Lo and behold, AMD cards are getting ahead of NVidia cards in DX12 benchmarks. Please read http://arstechnica.com/gaming/2015/...ly-win-for-amd-and-disappointment-for-nvidia/.

I'm talking about two different workloads:

- Process an entire video frame via a pixel shader or CL kernel. Implicit use of the parallelism of the shader cores, but still one instance of the shader/kernel is being used.

- Run a small compute task in the background while you're rendering the current frame of a game, e.g. physics simulation for the next frame or something like that. This requires partitioning of the shader cores, so that a small number (perhaps even 1) is being used for a compute task in conjunction with the massively parallel processing of the game's frame. This is what async compute is good for, because the small compute tasks that often can't make use of the entire GPU can be run asynchronously and in parallel with the rest of the work.

My point is that I don't understand how doing well in the second case (which AMD clearly does) helps with video processing, where you just want to run one shader/kernel on the entire video frame then move onto the next one.
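
To make the distinction concrete, here's a minimal, purely hypothetical OpenCL sketch (not FCPX or any real app; the kernel names and sizes are made up) of the second case: a frame-sized job on one command queue and a tiny job on a second queue. Whether the two actually overlap on the shader cores is up to the driver and the hardware's async compute support, which is exactly the point being argued:

Code:
/* Hypothetical sketch, not any real app's code: two OpenCL command queues
 * on one GPU, one fed a frame-sized job and one fed a tiny job. Whether
 * they actually overlap on the shader cores is up to the driver and the
 * hardware -- that's the kind of concurrency async compute is there for. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

static const char *src =
  "__kernel void frame_job(__global float *a){ size_t i = get_global_id(0); a[i] = a[i] * 0.5f + 0.5f; }\n"
  "__kernel void tiny_job(__global float *b){ size_t i = get_global_id(0); b[i] = b[i] + 1.0f; }\n";

int main(void) {
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);

    /* Two independent queues on the same device. */
    cl_command_queue q_frame = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_command_queue q_tiny  = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k_frame = clCreateKernel(prog, "frame_job", NULL);
    cl_kernel k_tiny  = clCreateKernel(prog, "tiny_job",  NULL);

    size_t n_frame = 1920 * 1080, n_tiny = 1024;
    cl_mem a = clCreateBuffer(ctx, CL_MEM_READ_WRITE, n_frame * sizeof(float), NULL, NULL);
    cl_mem b = clCreateBuffer(ctx, CL_MEM_READ_WRITE, n_tiny  * sizeof(float), NULL, NULL);
    clSetKernelArg(k_frame, 0, sizeof(cl_mem), &a);
    clSetKernelArg(k_tiny,  0, sizeof(cl_mem), &b);

    /* Submit both; the runtime may (or may not) run them concurrently. */
    clEnqueueNDRangeKernel(q_frame, k_frame, 1, NULL, &n_frame, NULL, 0, NULL, NULL);
    clEnqueueNDRangeKernel(q_tiny,  k_tiny,  1, NULL, &n_tiny,  NULL, 0, NULL, NULL);
    clFinish(q_frame);
    clFinish(q_tiny);
    printf("both queues finished\n");

    clReleaseMemObject(a); clReleaseMemObject(b);
    clReleaseKernel(k_frame); clReleaseKernel(k_tiny); clReleaseProgram(prog);
    clReleaseCommandQueue(q_frame); clReleaseCommandQueue(q_tiny); clReleaseContext(ctx);
    return 0;
}

For the first workload you'd only ever need q_frame; the second queue only buys you something if the hardware can genuinely run both at once.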
 
Obviously, I'm not talking about PowerVR in its current form. What I'm saying is that I believe there's a good possibility that Apple is working on more powerful iterations of PowerVR that could at least start off in lower end Macs.

apple doesn't iterate off of the architectures they are licensing. Apple's 64-bit ARM implementations came after ARM put down the guidelines for what 64-bit was. Apple's implementation is unique, but they aren't out there doing 'rogue' architecture work. That defeats the major upside of licensing an arch where there is joint funding of the R&D: share the core architecture R&D cost and differentiate.

I don't know of anything in the PowerVR core architecture that is trying to match desktop (and high-end laptop) GPUs. It's the same reason Apple isn't trying to hit desktop/server performance (real desktop, not "race to the bottom" priced Celeron/Pentium performance levels) with its ARM implementations. It is far more critical that Apple has a top-end phone / ultra-thin tablet GPU than a laptop/desktop one. That's why they stay ahead in the phone space. The ARM A57 reference design was kind of skewed toward the non-phone space, and Qualcomm and others who implemented it directly are having problems (e.g. the Qualcomm 810).


Knowing how much Apple likes thin devices and how hot AMD GPUs are, something has to give.

Apple put Intel GPUs in just about all of the Mac laptops. PowerVR wasn't necessary, and still isn't going forward. Apple hasn't even gotten to the top-end Gen 6 (Skylake) iGPUs yet. Intel has squeezed both AMD and Nvidia out of the 21.5" iMac lineup, and is pretty close to doing the same thing to the whole MBP lineup. Where is the motivation for a "hail mary" PowerVR move? There isn't one.

When AMD iterates to the next fab process node it won't be a major issue, given the space/volume and power allotments of the Mac Pro. Both Nvidia and AMD have been impeded over the last year or so; Nvidia didn't get stopped as much as AMD did. (Over its last few iterations AMD had bet on smaller implementations on the newest process for an edge. They lost when new process nodes slowed down, and they will get some of that back when things move again.)

I think what Apple may have overlooked is how large a block of irrational fanboys there is in both camps ("I'll never buy an AMD GPU." "I'll never buy an Nvidia GPU."). Not the majority of users, but enough that if there are going to be 2-3 year gaps between major Mac Pro upgrades, it might be reasonable to slide an offering based on an embedded version of the "losing" bake-off design into those quiet periods.


PowerVR has made significant improvements since it first appeared in an iPhone. Now, Apple is even opening up the AppleTV to apps and games. I don't think it will be long before a version of the AppleTV could challenge Xboxes and PlayStations in console gaming. Then, how long before PowerVR GPUs begin appearing in Macs?

The AppleTV isn't a Mac Pro, or even in the upper iMac range. It can't even do 4K, let alone 5K or multiple 4K/5K monitors.
There are some x86 implementations from Intel that have PowerVR graphics. So far Apple hasn't touched any of them with a 10-foot pole, and there is exceedingly little indication that is going to change any time soon.

A PowerVR GPU decoupled from the CPU is completely disconnected from reality. The only in-house PowerVR work that Apple does is with it coupled to ARM. Apple's ARM+GPU SoCs need OS X like they need another hole in the head: they already have an overwhelmingly successful OS on that SoC, and Apple in no way, shape or form needs another. What they need for OS X is something that is different and better in some way.
 
I'm talking about two different workloads:

- Process an entire video frame via a pixel shader or CL kernel. Implicit use of the parallelism of the shader cores, but still one instance of the shader/kernel is being used.

- Run a small compute task in the background while you're rendering the current frame of a game, e.g. physics simulation for the next frame or something like that. This requires partitioning of the shader cores, so that a small number (perhaps even 1) is being used for a compute task in conjunction with the massively parallel processing of the game's frame. This is what async compute is good for, because the small compute tasks that often can't make use of the entire GPU can be run asynchronously and in parallel with the rest of the work.

My point is that I don't understand how doing well in the second case (which AMD clearly does) helps with video processing, where you just want to run one shader/kernel on the entire video frame then move onto the next one.

This is correct. I work professionally and do R&D in GPU computing, and I have extensive knowledge of OpenCL, so I will say this: if I had to choose the best vendor to work on, it would be in this order of GPUs: Intel, Nvidia, AMD. Even though Nvidia is described as having poor OpenCL performance, at least the kernels run. On the Mac, AMD is honestly a piece of ****, especially on the new Mac Pros, and while I hate to say this, unfortunately it is true: OpenCL code that executes flawlessly on Intel and Nvidia devices crashes and burns on AMD. It really is a shame. Support, I believe, is a key part of what makes a vendor good, and at least Nvidia still provides support. This is a much wider industry problem too. See the link I added; it's a direct letter to Tim Cook from the LuxRender team about the poor OpenCL support.

http://www.blendernation.com/2015/05/04/an-open-letter-to-tim-cook/
 

It's a step, but the driver support for AMD on the Mac is still piss poor compared to what it could be. The real shocker is that it took all that time and effort to even do something. It's absolutely ridiculous considering Apple crusaded on OpenCL. We've been testing our OpenCL code on the El Capitan betas with AMD GPUs from WWDC up to the GM, and there are still lots of unresolved issues. This is reflected in Metal as well. Honestly, a GPU is only as good as the support and drivers developed for it, since so much of it relies on those things. Developers can't wait around being hopeful for these kinds of fixes; it's too costly and too time consuming.
 
What I am saying is that I would rather have hardware that has, well... hardware support for features, not only software support. Everything that benefits the Nvidia architecture comes from software they made themselves: CUDA, an HSA-style architecture that will be built through CUDA, and the performance in games and other things that comes from software drivers. That is because Nvidia made an architecture that is simple to understand and program, but that by itself is not very capable.

AMD is on the other side. Asynchronous Compute - hardware support. HSA - hardware support. Virtualization of applications - hardware level. The thing is: software is not using all of these capabilities. Yet. The future is long, however, and the world of low-level APIs has changed a lot in developers' minds.

P.S. If I am a fan of any brand, then yes, I am an Nvidia fan. I've always had an Nvidia GPU in my computer. But as you can see, I am also a fan of engineering and of the capabilities of electronic hardware, even if they are not yet used to 100%. And that is what I appreciate much more than any brand in the world.
Ah okay. Seems like AMD has potential... We have to let them grow...
 
apple doesn't iterate off of the architectures they are licensing. Apple's 64-bit ARM implementations came after ARM put down the guidelines for what 64-bit was. Apple's implementation is unique, but they aren't out there doing 'rogue' architecture work. That defeats the major upside of licensing an arch where there is joint funding of the R&D: share the core architecture R&D cost and differentiate.

I don't know of anything in the PowerVR core architecture that is trying to match desktop (and high-end laptop) GPUs. It's the same reason Apple isn't trying to hit desktop/server performance (real desktop, not "race to the bottom" priced Celeron/Pentium performance levels) with its ARM implementations. It is far more critical that Apple has a top-end phone / ultra-thin tablet GPU than a laptop/desktop one. That's why they stay ahead in the phone space. The ARM A57 reference design was kind of skewed toward the non-phone space, and Qualcomm and others who implemented it directly are having problems (e.g. the Qualcomm 810).




Apple put Intel GPUs in just about all of the Mac laptops. PowerVR wasn't necessary, and still isn't going forward. Apple hasn't even gotten to the top-end Gen 6 (Skylake) iGPUs yet. Intel has squeezed both AMD and Nvidia out of the 21.5" iMac lineup, and is pretty close to doing the same thing to the whole MBP lineup. Where is the motivation for a "hail mary" PowerVR move? There isn't one.

When AMD iterates to the next fab process node it won't be a major issue, given the space/volume and power allotments of the Mac Pro. Both Nvidia and AMD have been impeded over the last year or so; Nvidia didn't get stopped as much as AMD did. (Over its last few iterations AMD had bet on smaller implementations on the newest process for an edge. They lost when new process nodes slowed down, and they will get some of that back when things move again.)

I think what Apple may have overlooked is how large a block of irrational fanboys there is in both camps ("I'll never buy an AMD GPU." "I'll never buy an Nvidia GPU."). Not the majority of users, but enough that if there are going to be 2-3 year gaps between major Mac Pro upgrades, it might be reasonable to slide an offering based on an embedded version of the "losing" bake-off design into those quiet periods.




The AppleTV isn't a Mac Pro, or even in the upper iMac range. It can't even do 4K, let alone 5K or multiple 4K/5K monitors.
There are some x86 implementations from Intel that have PowerVR graphics. So far Apple hasn't touched any of them with a 10-foot pole, and there is exceedingly little indication that is going to change any time soon.

A PowerVR GPU decoupled from the CPU is completely disconnected from reality. The only in-house PowerVR work that Apple does is with it coupled to ARM. Apple's ARM+GPU SoCs need OS X like they need another hole in the head: they already have an overwhelmingly successful OS on that SoC, and Apple in no way, shape or form needs another. What they need for OS X is something that is different and better in some way.

Again, you keep on looking at what the current PowerVR can do. I think that the previous AppleTV wasn't even doing 1080p.

Neither you nor I know what Apple R&D is up to. The motivation that you are looking for, I believe exists in Apple's desire to own the whole widget. They could have easily gone with any number of off the shelf ARM CPUs for their original iPhone but they went and designed their own SoC.

Note: Sorry, have to put discussion on hold... Heading to vacation for week and a half...
 
So, about ACE.

It's only GCN 1.1/1.2 cards that have lots of ACE blocks (8 iirc); GCN 1.0 cards have just 2 such units iirc.

So this "ACE magic" can only be utilized in 27 inch rimacs with Tonga chips, not even great nMP.
 
Lo and behold, AMD cards are getting ahead of NVidia cards in DX12 benchmarks. Please read http://arstechnica.com/gaming/2015/...ly-win-for-amd-and-disappointment-for-nvidia/.

Financially speaking, AMD is in a lot of trouble so I really hope this pans out for them. Everyone would benefit, even Nvidia fans, due to continued competition. I myself have switched many times based on which company seemed to be the right price/performance at the time.

I remember Diamond, Matrox, 3DFX, Canopus, and more. We're down to 2 companies now, and I don't want to see that fall to just 1. If anything, I'd like for Nvidia to continue, for AMD to catch back up in a big way, and for Intel to join the fray with discrete, powerful GPUs. But perhaps that's unrealistic and there is simply no room in the market for 3 contenders.
 
They gave up on CUDA, and now they are quietly distancing themselves from AMD's only remaining advantage, OpenCL.

I think this is immensely speculative and assumes Apple is happy with Nvidia's Metal drivers and performance, and that AMD isn't seen as having better Metal drivers.
 

I would think you would know by now that a lot of high-tech employees switch between tech companies. It's pretty common.

If what MVC said is true and you can't use the standard AMD FirePro drivers from AMD's website when running the MP in Windows, then you lose the specialized drivers. Last time I looked there were 4-7 different Windows drivers depending on what your use case was.

The reason I emphasized that all workstation cards use specialized drivers is that the use of specialized drivers alone isn't enough to single out a particular reason why a company would use them. They all do for workstation graphics.

If I recall correctly, it's the other way around:

Apple's FirePro D700 has the same device ID as a PC HD 7970.
Real FirePros have a unique device ID.

If I were talking to another person and mentioned a PC HD 7970, they would most likely assume it's a PCIe expansion card. If I said FirePro D700, they would know it's from an Apple computer, not to be confused with a 7970 PCIe card; neither of which can simply be swapped into the other desktop.
 
mvc- I get it that your goal is to make me look like an ass to people here and you've succeeded. not because of your arguments.. simply because I keep arguing with you about this.

Don't worry, I will automatically mentally ignore all the "interesting" debate.

I was actually interested in the nMP. The main reason I didn't buy it when it was introduced is the lack of HDMI 2.0. In fact, when it was just released, I was quite happy with the price for what I wanted (hex core, D700, 32GB RAM, 1TB SSD). Of course, it's seriously overpriced now.

A few months after the nMP release, I started to upgrade my cMP (CPU, GPU, SATA 3 card, USB 3.0 card, SSD... etc.) rather than wait for Apple to upgrade. And then I realised that I really love a computer that I can upgrade myself (either internally or externally).

So I started to think about what my next computer should be. I seriously considered a Hackintosh, but I tend to avoid it. It's not that I want to avoid the hard work (in fact, I quite enjoy spending time "fixing stuff"), but more that I feel I shouldn't install OS X on a non-Mac computer.

In fact, I did read through the iFixit guide a long time ago, because I thought I might upgrade the CPU myself later on (if I ended up choosing the nMP route). It's definitely doable, but I wouldn't call the whole process "user friendly", and it's definitely not as easy as on the cMP. It takes them 27 steps before they can actually remove the CPU, with plenty of cautions, thin wires and small screws to deal with, and a few different tools required, etc. I read that after I upgraded my W3520 to the W3690, and the very first things in my mind were "why did Apple make this so complicated now?" and "without PCIe slots, how can the nMP keep itself up to date?"

This is why I overreacted a bit when I saw you say that upgrading the nMP is easier than the cMP; it had nothing to do with your debate. I apologise for that.

As I always say in this forum, the nMP is a nice machine. However, the more I think about it, the less I want to buy it now. Hopefully Apple will release a really impressive nMP (or a NEW nMP) before my 4,1 dies. My machine runs 24/7 and is often stressed to 100% for many hours. I really want my Mac to have a decent cooling system, and I still prefer to have everything "internal" if possible.
 
Financially speaking, AMD is in a lot of trouble so I really hope this pans out for them. Everyone would benefit, even Nvidia fans, due to continued competition. I myself have switched many times based on which company seemed to be the right price/performance at the time.

I remember Diamond, Matrox, 3DFX, Canopus, and more. We're down to 2 companies now, and I don't want to see that fall to just 1. If anything, I'd like for Nvidia to continue, for AMD to catch back up in a big way, and for Intel to join the fray with discrete, powerful GPUs. But perhaps that's unrealistic and there is simply no room in the market for 3 contenders.

IMO, and this is bound to attract vehement denial from the GPU enthusiasts here, the dGPU is either going the way of the co-processor or the dedicated audio card.
The former is extinct, and the latter exists only for audio professionals or hobbyist users who absolutely have to have 7.1 surround on their computers.

So I'd agree that there's no room for three contenders in the rapidly shrinking dGPU market. Intel will eventually take up the lion's share of the x86 segment, with AMD possibly surviving with lower-cost alternatives. nVidia could become like Matrox or something like that: a small niche market they will continue to serve, but if they lose the consumer market, it remains to be seen whether that will sustain them in the long run.
 
For those who are crying that Nvidia has been ditched from the Mac lineup so far: don't worry. There is a lot of technical stuff in Metal that can turn things around for Nvidia in Apple computers.

In essence, Metal looks like it was designed directly for Nvidia hardware. But maybe that will become much more apparent in the future.

From what I have gathered, the picture is this: Apple does not want to shut any hardware door on themselves. It may very well be that Nvidia will return sooner than anyone thinks.
 