Well, that is probably because they don't think anyone would pay $700 to go from 3.33GHz to 3.46GHz, or Intel isn't offering W3690s.
Yes, that is a substantial part of it too. Parts at the right price points so the margins don't change. However, the longer the stretch they run with 3500 series parts, the more that portion of the Mac Pro market will shrink. If it shrinks too small they may decide that they just don't need to pursue it anymore. Especially if the iMacs soak up the vast majority of the "slack" there.
Given the increasing core counts per die and the performance possible from these chips, I really do see this as the direction the mainstream (lower-end) workstation segment is going.
The problem is that this isn't likely to happen at the desktop level. From Sandy Bridge to Ivy Bridge to Haswell, the mainstream processors have stopped at 4 cores.
That is largely because they now have to compete with the GPU for additional transistors, and Intel is not relegating the GPU to the "extra left over" transistor budget anymore.
" ... Actual graphics performance was perceived as an afterthought, and the unused area (so called white space) in the chipset was available for the GPU, but rarely a millimeter more. The imperative was to provide a free graphics solution, without adding to the cost of the platform. The investment in software and drivers was similarly limited, leading to products that were good for multi-media but wholly inappropriate for 3D graphics and games. ... "
http://www.realworldtech.com/page.cfm?ArticleID=RWT042212225031
That was the old Intel approach. The new one puts the GPU cores on more equal footing. With the GPU is also going to come increased memory I/O demand, and alleviating that is going to soak up some die space too.
Even without the GPU budget pressures, the mainstream market is still hooked on "bigger GHz is better". That is yet another reason to avoid bumping the core count: keep the cap at 4 and clock those cores even higher.
All of that points to the likelihood that even at Broadwell the mainstream line will still be capped at 4 cores, and that other functionality will move onto the mainstream die. Not just more x86 cores for "core count for core count's" sake.
I just don't see where the iMac is going to prune off the 6-8 core folks.
The only chips still in the complex core count race are the E5 and E7. Even the E5 1600 is a good candidate for a smallish GPU that is more finely tuned for OpenCL work than the Ivy Bridge GPU cores are. At Haswell, Intel could merge some GPU into at least the 1600s. The 2600s don't need it.
If Intel goes the route where the 1600s are primarily pumped up with complex cores but leaves the PCIe lane count limited, then I can see Apple walking away.
Software's current lack of proliferation in terms of true n-core multi-threaded suites/applications doesn't lend itself to DP systems leading the workstation market in sales either.
The problem in the OS X space is that Apple has been bending over backwards to jumpstart this: Grand Central Dispatch, OpenCL, etc. That 3-4 years after introduction developers still aren't leveraging these in their software is deeply at odds with where Apple wants to go.
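For what it's worth, here is a minimal libdispatch sketch (my own illustration, not anything from Apple's docs) of the kind of GCD usage that scales past 4 cores: dispatch_apply fans a loop out across however many cores the machine has, so the same binary soaks up 4, 6, or 12 cores with no code changes.

[CODE]
/* Hypothetical example: fan work out over all cores with GCD. */
#include <stdio.h>
#include <dispatch/dispatch.h>

#define CHUNKS 8

int main(void)
{
    double results[CHUNKS];
    double *out = results;  /* blocks capture locals by value, so hand
                               the block a pointer it can write through */

    /* GCD sizes its worker pool to the machine's core count automatically. */
    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    /* dispatch_apply runs the iterations concurrently and waits for all. */
    dispatch_apply(CHUNKS, q, ^(size_t i) {
        double acc = 0.0;
        for (long j = 1; j <= 10000000; j++)   /* stand-in for real work */
            acc += 1.0 / (double)(j + (long)i);
        out[i] = acc;
    });

    for (size_t i = 0; i < CHUNKS; i++)
        printf("chunk %zu -> %f\n", i, out[i]);
    return 0;
}
[/CODE]

The parallelism is expressed once and the runtime scales it to the hardware; the hex+ core boxes only pay off if developers actually write code this way.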
If Apple can't jump-start widespread software development to leverage more than 4 cores, then neither the single nor the dual package E5 has traction. Either the software starts to scale or the whole thing is going to go.
It is also not just a single app issue. I believe leveraging virtualization is going to become more prevalent in the desktop workstation market. If you put multiple machines in a single box, it is much easier to soak up more than 6 cores once you set those machines to working on multiple things at once, each with deadlines.
The issue is that the iMac could be extended to cover part of the single CPU package space. It is not likely to be extended to cover the previous generation's dual package one, let alone the current or future (2 × 10 cores) one. The only 20+ core options in a single package are the MIC offerings.
The single processor systems, particularly hex core or larger on a single die, make for a better cost/performance ratio without being significantly bottlenecked.
Only if the software workload doesn't scale. The cost/performance ratio only looks better because the software (if primarily just running one application) is what is jacked up.
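To put rough numbers on that (my figures, purely illustrative): Amdahl's law gives speedup = 1 / ((1 - p) + p/n) for a parallel fraction p on n cores.

[CODE]
/* Back-of-envelope Amdahl's law: why extra cores look like a bad deal
 * when the software doesn't scale. Figures are illustrative only. */
#include <stdio.h>

static double amdahl(double p, int n)   /* p = parallel fraction */
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    int cores[] = { 4, 6, 12 };
    for (int i = 0; i < 3; i++)
        printf("%2d cores: %.2fx at p=0.40, %.2fx at p=0.95\n",
               cores[i], amdahl(0.40, cores[i]), amdahl(0.95, cores[i]));
    return 0;
}
[/CODE]

At p=0.40 a 12-core box is only about 10% faster than a 4-core one (1.58x vs 1.43x), so the bigger package only earns its price when the workload is mostly parallel.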
Yeah software lags behind hardware..... but just how long can that be used as an excuse? That's akin to saying the current Mac Pro bump is OK because lagging is excusable.
The bigger element of the problem is getting affordable >4 core development platforms to more developers. OpenCL helps with that: developers can build up more parallel chops on the much more widely available GPGPU hardware, testing their code base there. That work can then be re-routed to x86 cores if there is a glut of them around on a high core count box.
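A sketch of what I mean (hypothetical code, error checking omitted for brevity; note the header path differs by platform, on OS X it is <OpenCL/opencl.h>): the same kernel source a developer writes against a cheap GPU can be re-routed to the x86 cores just by asking for a different device type.

[CODE]
/* Develop a kernel against whatever GPU is handy, then re-route the
 * exact same source to the CPU cores by changing one flag. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *v) {"
    "    size_t i = get_global_id(0);"
    "    v[i] *= 2.0f;"
    "}";

int main(void)
{
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);

    /* Prefer the GPU for development... */
    if (clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) != CL_SUCCESS)
        /* ...or re-route the identical kernel to the x86 cores. */
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    float data[1024];
    for (int i = 0; i < 1024; i++) data[i] = (float)i;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof data, data, NULL);
    clSetKernelArg(k, 0, sizeof buf, &buf);

    size_t n = 1024;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);
    printf("data[10] = %f\n", data[10]);   /* expect 20.0 */

    clReleaseMemObject(buf);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}
[/CODE]

As far as I know, Apple's OpenCL implementation exposes the CPU as a device too, so that re-routing is exactly what a high core count box buys you.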