There are some interesting rumors out there today about an Nvidia GTX 990M, which would be a cut-down mobile variant of GM200 (aka GTX 980 Ti/Titan X).
Cut down how? Simply clocked down, or with some functional units disabled as well? For example:
D310: GM204 cut down, 4 GB VRAM (aka GTX 970)
D510: GM204 full, 8 GB VRAM (aka GTX 980)
D710: GM200 cut down, 12 GB VRAM (aka GTX 980 Ti)
Mobile variants that support 12 GB of VRAM probably aren't necessary. There simply isn't room in most laptops for the maximum number of VRAM chips.
If Apple is looking for "Pro" Windows drivers to go along with the offering, then these are probably of more interest than the mobile-specific versions.
http://www.anandtech.com/show/9516/...-m4000-video-cards-designworks-software-suite
Apple may be in an uncomfortable position with AMD's recent Fiji chip, which is limited to 4 GB VRAM. They would have to choose between the slower Hawaii with more VRAM, or the faster Fiji with less VRAM.
If the working-set size of the targeted workload is >6 GB, then Hawaii isn't necessarily slower. Likewise, if the GPU can copy and compute at the same time against faster memory, the 4 GB part isn't necessarily slower either. "But games are hardcoded to a maximum cache of textures" isn't necessarily an OS X constraint, especially for apps that primarily use the OS X core graphics frameworks. In that subset, all that is necessary is for the core frameworks to be updated to the new methodology; not all of the applications on top.
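The "copy and compute at the same time" point can be illustrated with a toy pipelining model. Everything below is a hypothetical sketch: the bandwidth figures, chunk size, and working-set size are made-up assumptions, not measurements of Fiji, Hawaii, or any real card.

```python
# Toy model: streaming a working set larger than VRAM, comparing a
# serial copy-then-compute loop against double-buffered overlap.
# All numbers are illustrative assumptions, not real GPU specs.

def streamed_time(working_set_gb, chunk_gb, copy_gbps, compute_gbps, overlap):
    """Seconds to process the working set in chunks."""
    chunks = int(working_set_gb / chunk_gb)
    t_copy = chunk_gb / copy_gbps        # seconds to transfer one chunk
    t_compute = chunk_gb / compute_gbps  # seconds to process one chunk
    if overlap:
        # Double buffering: while one chunk is processed, the next is
        # copied in. After the pipeline fills, the slower stage dominates.
        return t_copy + (chunks - 1) * max(t_copy, t_compute) + t_compute
    # Serial: each chunk is fully copied before it is processed.
    return chunks * (t_copy + t_compute)

# Hypothetical 4 GB card streaming an 8 GB working set in 1 GB chunks,
# with a 12 GB/s transfer link and a 16 GB/s effective processing rate.
serial = streamed_time(8, 1, copy_gbps=12, compute_gbps=16, overlap=False)
overlapped = streamed_time(8, 1, copy_gbps=12, compute_gbps=16, overlap=True)
print(f"serial: {serial:.3f}s  overlapped: {overlapped:.3f}s")
```

With these made-up numbers, overlapping hides most of the compute time behind the transfers, so the smaller-VRAM card pays far less than the naive "copy everything, then work" penalty would suggest. The benefit shrinks as one stage becomes much slower than the other.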
There is no reason Apple couldn't offer both a "high cache" workload configuration and a max-RAM-I/O configuration. It isn't as simplistic as "bigger everything" marketing, but it is doable.
Just because Apple has been going all AMD in recent years doesn't mean they won't switch back and forth, depending on various competitive factors.
Multiple dimensions of factors likely decide the design bake-offs; not simply maximum benchmark scores in popular Windows games, and not simply price.
It may help Nvidia if Apple walks away from Khronos/OpenGL/OpenCL and goes primarily Metal (with OS X-specific core graphics layered on top). Nvidia's 'war' on OpenCL hasn't helped them win Apple design bake-offs. If Nvidia is willing to put maximum effort into Metal, they may even out their competitive performance on at least that dimension of the evaluation criteria.
Don't forget though that Intel is a competitor here. In the Mac product space there are three GPU vendors that Apple works with: Intel, AMD, and Nvidia. The player with the most business right now across the whole Mac lineup is Intel; not AMD or Nvidia. In the narrow Mac Pro space, Intel isn't a major player yet.
Perhaps some supporting evidence might be the big gains in performance for Maxwell seen in the most recent driver updates, as measured by Barefeats.
Not necessarily so. If this is primarily a "f you" aimed at maximizing the number of folks holding onto previous-generation Mac Pros and filling them with Nvidia cards running CUDA-optimized code, then it isn't going to help them win "bonus" points at future Apple design evaluations. (i.e., maximize Nvidia revenue short term and step on Apple's toes/processes wherever possible.)
If this is primarily about keeping the general driver code base competitive, with these releases mostly a side effect of work being done anyway, then it has more general upside. It would indicate that Nvidia is still trying to compete for design wins. [i.e., current desktop drivers become the foundation for future mobile drivers and design bake-off prospects across a broader range of Mac products than just the Mac Pro.]