They are sticking with LGA2011 for Ivy. I don't see why they'd bother changing chipsets given the potential for more problems when it's already set back a year due to the initial Sandy Bridge hiccups.
The move from Sandy Bridge to Ivy Bridge is part of the same tick/tock design cycle. That's why the names are similar. Intel doesn't change socket designs inside of that cycle for any CPU (Atom, Core i, Xeon, etc.).
For the server-oriented chips they don't change the core chipset inside the cycle. Partially, that is to avoid the risk of bugs, as pointed out above. Partially, that is because the only "small form factor" boards in server space are blades, which can get I/O off other cards. Otherwise, workstation and server boards aren't pressed for space, so if some design needs a discrete controller it is just added.
Not sure what you mean by leaving it all to AMD/NVidia. GPUs tend to handle very specific instruction types. Intel's only response has been to the Tesla cards, which are a much smaller market.
Chuckle. Intel is the largest player in graphics by revenue. Intel's primary response has been to move GPUs into the CPU package and give that functional area an increasingly large share of the transistor budget.
If you look at the path, the x86 core count has been frozen at 4 in the mainstream and mobile designs. Most of the additional transistors from the shrink to Ivy Bridge went to the GPU.
Likewise, Thunderbolt is deeply tied into having a GPU on the motherboard.
The new Xeon Phi response to one of the last refuges that discrete GPUs have retreated to, dedicated GPGPU, is a supplementary flanking move while the primary attack incrementally steamrolls them out of the default PC monitor-driving business.
AMD is on the same track of fusing classic CPU cores into the same package as GPU cores.
The vast majority of x86-compatible CPU packages shipped this year and next will have GPUs in them. Even Xeons have them in the E3 line (which is why Thunderbolt would be extremely straightforward to implement, since it is 100% aligned with the strategy).
The open question is when that process will subsume the more server-oriented CPU packages. It seems to be another couple of iterations away.
Hi
All of that sounds exactly like the reason that Apple just decided not to bother with Mac Pro development after the 2009 Nehalem model.
Err, they did do the 2010 model. It is rather dubious to point the finger at Apple though. All the other major workstation and server vendors did the same thing. Largely firmware changes, PCI-e card upgrades, and "tick" (shrink update) CPU package changes were all that came in 2010 across the board.
Looking back, the R&D for the 2006 Mac Pro 1.1 introduction was set in motion before Steve Jobs had his apocalyptic health diagnosis. R&D for the 2009 Mac Pro was undertaken with Steve still in the CEO's chair. But by March 2009 Tim Cook was running the show.
Largely false. Tim Cook was appointed head of the Mac division several years before taking over as CEO. Back in 2004:
".... The division also makes way for a "Macintosh" division to be headed by Timothy Cook, current head of Apple's worldwide sales and operations, according to Reuters. ... "
https://www.macrumors.com/2004/05/19/apple-creates-new-ipod-division/
Even if you want to position Jobs as the single person approving every smallest detail at Apple, it would have been Cook's job to orchestrate getting Mac Pro updates approved. Sorry, the whole move to Intel, the increased commonality of components across Macs (simplifying logistics), maximizing R&D return on investment by focusing on a fixed set of products, etc. all have Tim Cook's fingerprints on them.
If anything, it is more likely that the Mac Pro is alive now because Steve Jobs is gone. The push was to almost double the number of laptop models with new ones that are ultra thin and based on flash storage. If late this fall we end up with three 13" MacBook models on the market, two 15" models, an 11", and still-lagging desktop upgrades, that was most likely Jobs trying to see something "new" rather than yet another "box with slots" (of which I'm sure he already had a huge variety; he nuked a whole slew of them when he took over as Apple CEO the second time).
- and the 2009 Mac Pro introductory pricing (+$1000) shows a new mind-set.
Yeah... more that the iMac needed a larger price zone to move to larger bundled screens.
- Tim Cook running the show, and Steve focussed totally on iDevices and iCloud.
So it was Cook on stage saying the redesigned, SSD-centric MBA was the future of MacBooks? Not.
Looking at it from the outside, in terms of OpenCL etc, are Intel rolling over to leave it all to ATI/nVidia?
Since Ivy Bridge GPUs support OpenCL right now, I'm not sure what you are referring to. For example, the current MBAs support OpenCL.
It is version 1.1 (and not the bleeding-edge 1.2), but it is likely next year's Haswell will have 1.2.
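If you want to check what a given machine's integrated GPU actually reports, here's a minimal sketch in C against the standard OpenCL host API. The device and version strings it prints depend entirely on the installed driver; the file name and build line are just examples (on OS X: cc list_cl.c -framework OpenCL).

```c
/* Minimal sketch: enumerate OpenCL devices and print the OpenCL version
 * string each one reports (e.g. "OpenCL 1.1 ..."). Uses only the standard
 * OpenCL 1.x host API. */
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif
#include <stdio.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    if (clGetPlatformIDs(8, platforms, &nplat) != CL_SUCCESS)
        return 1;

    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devices[16];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &ndev) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < ndev; d++) {
            char name[256] = "", version[256] = "";
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION, sizeof(version), version, NULL);
            printf("%-40s %s\n", name, version);
        }
    }
    return 0;
}
```

On an Ivy Bridge MBA you'd expect the HD 4000 entry to show an "OpenCL 1.1" version string from the current driver.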
Or does Haswell/Broadwell incorporate some sort of roadmap to use the next generation of on-die IGPs to do something useful in terms of OCL/parallel co-processing?
This has already happened on the mainstream/laptop Core i models. Haswell appears to have capped the x86 core count at 4 for the mainstream basic design. So yes, most likely GPUs will see a significant share of the additional transistor budget.
Broadwell (and the associated shrink) will probably also focus more on non-x86-core functionality than on pumping up x86 core count, for the mainstream line also.
Server class, it appears so far that Haswell won't see GPUs show up in the E5 1600 class models. There is no GPU in the diagrams that have leaked so far:
http://vr-zone.com/articles/intel-haswell-ep-platform-detailed/16419-2.html
They could, but the boost would be rather small compared to a dedicated discrete GPU. With the E5's larger number of PCI-e v3.0 lanes it is still probably going to be somewhat more effective to just attach a dedicated component. Perhaps after Broadwell, when the transistor budget is bigger and the x86 core count is already in the double-digit range across much of the lineup, there would be room on an E5 1600 die with more limited x86 cores for a max-sized (for that generation) integrated GPU. They'd have to wait for a socket change for that though (for the DisplayPort output).
What seems like a likely evolutionary path would be for the new Xeon Phi models to evolve from having PCI-e v3.0 connections to the CPU package to having a QPI connection to the CPU package (and perhaps a move to a shared memory model). However, it is likely Intel will wait to see if the Phi gets traction in the market before going that route.
Intel is attacking the Tesla/FirePro GPGPU cards with Xeon Phi-based cards. A dual E5 workstation with 80 PCI-e lanes (40 per socket) could conceptually put one of each into the same box (i.e., it could have 3 fully x16 electrical PCI-e slots) if some multi-kilowatt power supply were added. That's why they aren't in a hurry to put it into the CPU package.
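Rough back-of-the-envelope sketch of that lane budget, assuming the publicly stated 40 PCI-e v3.0 lanes per E5 socket; the slot split is purely illustrative, not any particular board layout:

```c
/* Back-of-the-envelope PCI-e lane budget for a hypothetical dual-socket
 * E5 workstation. Assumes 40 PCI-e v3.0 lanes per socket; the slot split
 * is an illustration, not a real board design. */
#include <stdio.h>

int main(void) {
    int sockets = 2;
    int lanes_per_socket = 40;                  /* E5-2600 class */
    int total = sockets * lanes_per_socket;     /* 80 lanes */

    int x16_slots = 3;                          /* e.g. discrete GPU + Xeon Phi + spare */
    int used = x16_slots * 16;                  /* 48 lanes */

    printf("total lanes   : %d\n", total);
    printf("3 x16 slots   : %d\n", used);
    printf("left for misc : %d\n", total - used);  /* storage, NIC, etc. */
    return 0;
}
```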