True Innovation Is Creatively Using What You Do Have Or Can Readily Acquire
Most of my hammers, saws, drills, wrenches, screwdrivers and other household tools aren't significantly different from those my father owned when I was born more than 60 years ago. But what I use them to build, or otherwise work on, is not set in stone. I strongly suspect that my father, who died when I was a child, never would have imagined that I would use many of those same tools to modify and build computers.
Discrete GPU updates are independent of CPUs. TSMC provides the silicon to both AMD and Nvidia, whereas Intel has its own fabs. The interesting thing is that both AMD and Nvidia have been on a 28nm process for a few years now. While TSMC has 20nm in production, it seems the only silicon coming out of those fabs is Apple's A8 SoC. Maybe Apple has purchased all of TSMC's 20nm production for the foreseeable future? ... . Only when AMD has access to TSMC's 20nm silicon will we see a clear and truly next-gen GPU option in the nMP.
We shouldn't expect enhancements to tools like those listed above, or substantial improvements to computer parts like CPUs and GPUs, to always proceed at a steady pace, because variables such as (but not limited to) market power may influence who gets what, and when. But being in what could be characterized as a "rut," i.e., unable to get what it wants most, wouldn't stop me from innovating, nor should it stop any other business from innovating with what it has available. So I applaud Nvidia for creatively innovating with what it has. My guess is that there's a lot of performance-enhancing capacity in what's available here and now; it just isn't being used creatively. Moreover, a crisis, such as not having access to TSMC's 20nm silicon, may be exactly the circumstance that spurs the most innovation. Thus, to Nvidia, its partners, its customers and insightful onlookers, being forced to keep producing on 28nm may have been a blessing in disguise.
I'm not seeing much in the way of workstation application support for GPUs outside of the video/3D world today, and while that's an important area, it covers only a small fraction of workstation users.
If and when GPUs do become relevant for a large proportion of the workstation market, I suspect it will be long after the current generation of GPUs is obsolete.
In the media and entertainment sectors, programmers have, e.g., incorporated CUDA into animation, modeling, rendering, color correction, grain management, compositing, finishing, effects editing, encoding and digital distribution, on-air graphics, on-set, review and stereo tools, and simulation applications. And while programmers in the media and entertainment industries are prolific in incorporating GPGPU computing into their software, programmers in many other large fields have done the same.
According to Nvidia [
http://www.nvidia.com/content/tesla/pdf/gpu-apps-catalog-mar14-digital-fnl-hr.pdf ], GPU accelerated applications have revolutionized the High Performance Computing (HPC) industry. There are over two hundred applications, across a wide range of fields, already optimized for CUDA. In addition to media and entertainment, these fields include the following:
1) Programmers have, e.g., incorporated CUDA into applications used in research at institutions of higher education and in HPC supercomputing, across chemistry, biology and physics. These include molecular dynamics, quantum chemistry, materials science, visualization and docking, bioinformatics, and numerical analytics applications;
2) Programmers of defense and intelligence applications have, e.g., incorporated CUDA to provide faster geospatial visualization and analysis, multi-machine distributed object store providing SQL style query capability, advanced geospatial query capability, heatmap generation, and distributed rasterization services;
3) Programmers of computational finance applications have, e.g., incorporated CUDA to provide faster real-time hedging, valuation, derivative pricing and risk management (such as catastrophic risk modeling for earthquakes, hurricanes, terrorism, and infectious diseases), visual big-data exploration, insight tools, and regulatory compliance and enterprise-wide risk transparency packages;
4) Programmers of manufacturing CAD and CAE applications have, e.g., incorporated CUDA to enhance computational fluid dynamics, computational structural mechanics analyses, computer aided design, and electronic design automation;
5) Programmers of weather and climate forecasting applications have, e.g., incorporated CUDA to enhance weather graphics, and research and prediction with regional and global atmospheric and ocean modeling; and
6) Programmers of oil and gas industry applications have, e.g., incorporated CUDA to enhance seismic processing and interpretation and reservoir modeling.
Thus, GPGPU computing is already relevant to many workstation users across various industries. The GPGPU ball is in the software creators' side of the court - will we see a great swing or a horrible miss?
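To make concrete what "incorporating CUDA" means at the code level, here's a minimal sketch of the data-parallel offload pattern those applications are built on - copy data to the GPU, run thousands of threads over it, copy the result back. This is a hypothetical illustration, not code from any of the packages listed above:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Hypothetical example: scale an array on the GPU. Each thread
// handles one element - the basic data-parallel pattern behind
// GPGPU-accelerated applications.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main(void) {
    const int n = 1 << 20;               // one million elements
    size_t bytes = n * sizeof(float);

    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) host[i] = 1.0f;

    float *dev;
    cudaMalloc(&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);
    printf("host[0] = %f\n", host[0]);   // every element doubled

    cudaFree(dev);
    free(host);
    return 0;
}
```

The point is that any workload that can be expressed this way - same operation, huge numbers of independent elements - is a candidate for the kind of acceleration the fields above are already exploiting.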
I agree with you about GPU software optimization, but if software vendors want to start seeing any performance improvements in their apps, they are going to have to adopt it sooner rather than later.
... .
Absolutely correct.
... .
Meanwhile, AMD and Nvidia are stuck in a rut, trying to innovate on the same three-year-old process. As a result, AMD's latest Hawaii and Tonga cores are incremental improvements over the Tahiti cores (used in the 2013 nMP). Nvidia's latest GPUs are showing better results, but still nothing like the previous doubling of performance you'd expect from a new GPU benefiting from a process shrink.
From AnandTech's opening paragraph on the GTX980 the other day...
The bottom line for nMP refresh options... really only consists of an incremental improvement in GPUs from AMD, which are reported to run 15 degrees hotter than a Tahiti (meaning even more downclocking - possibly negating any performance gains), or perhaps, and this would be most interesting, a switch to Nvidia's latest GPU. Although the latter is probably more in the hands of Nvidia and its willingness to spin a custom design for the nMP at lower margins than it's used to... .
Nvidia has innovated itself out of a rut. Nvidia (with the new GTXs) is achieving parity with AMD in OpenCL performance. [
http://www.brightsideofnews.com/2014/09/18/geforce-gtx-980-review-performance-lower-power/ ]. It's not that I never expected Nvidia to achieve parity; it's just that I wasn't expecting it so soon and under these circumstances. Bad times and competitiveness can motivate those who prefer not to sit in the corner saying, "Woe is me."
So I see a perfect reason for Apple to introduce a Haswell MP - if only to get Maxwell CUDA cards in it.
As to the notion that Haswell-E doesn't offer a meaningful improvement in performance over Ivy Bridge-E, I'd argue it makes for a bigger improvement than Ivy Bridge-E did over Sandy Bridge-E in most CPU-intensive tasks. No, it's not the 20-30% we used to see (outside of certain specialized areas like AVX), but it's not meaningless.
I also believe, in the absence of Apple switching to CUDA cards for the next MP, that Apple should introduce a Haswell MP, because the added performance of Haswell, albeit not massive, may be enough for the many people who don't own the 2013 MP - a vastly larger market than the one composed of current 2013 MacPro owners - or, at the very least, Apple should tell us now what its plans are. Apple could creatively innovate with Haswell. Also, when I spend that amount of money on a system, I want one that I can easily upgrade myself, like the cMP. Otherwise, I will continue to refrain from purchasing a new MacPro because I have no idea what Apple's MacPro roadmap is. Apple's recent handling of its plans for the MacPro has left a bad taste in my mouth - too much secrecy, then dropping the dual-CPU option. Apple shouldn't revel in surprises when it comes to purchasers who use its products to make money.