True. However, dual sockets would also increase the available PCIe lanes (to 80 when using the E5).
Imagine an MP with 80 GB/s of available throughput -- five PCIe 3.0 x16 slots. That would be an exciting addition to the workstation market. Apple has taken a huge step back by equipping this [!future!] product with less than the present standard (the nMP maxes out at about 38 GB/s with all TB2 and PCIe combined).
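For what it's worth, here's the back-of-envelope math behind that 80 GB/s figure as a quick Python sketch. The ~1 GB/s-per-lane number is just the usual PCIe 3.0 approximation (8 GT/s with 128b/130b encoding, per direction), not a measured figure:

# Rough PCIe 3.0 aggregate bandwidth, per direction.
# 8 GT/s per lane with 128b/130b encoding ~= 0.985 GB/s per lane.
GBPS_PER_LANE = 8 * (128 / 130) / 8

lanes_dual_e5 = 80           # 40 lanes per E5 socket x 2 sockets
slots = lanes_dual_e5 // 16  # enough for five x16 slots

print(f"{slots} x16 slots, ~{lanes_dual_e5 * GBPS_PER_LANE:.0f} GB/s aggregate")
# -> 5 x16 slots, ~79 GB/s aggregate (i.e. roughly the 80 GB/s quoted above)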
That's actually an extremely good point I hadn't considered deeply enough. I probably wouldn't have used the phrase "step back" but it ends up meaning the same thing I guess.
So far I haven't given up hoping there will be a dual-socket solution offered at some point (even though I still believe 12 cores at >3 GHz each is enough for almost anything a workstation is likely to be tasked with) - but I had been looking at it merely from the POV of core count and compute horsepower.
Yup, good point, thanks!
I am talking about preview rendering - interactive artist work. The only thing that speeds this up is more and/or faster CPU cores.
This is generally true for 3D content, but we're not there yet. If you only need a few billion polygons, a gig or two of textures, inexpensive lighting models, and no motion or lens effects, then 12 fast cores produce near-instant feedback (0.1s to 5s). The problem starts when you try to edit interactively with FX like SSS, volumetrics, depth of field (DOF), motion blur, etc., combined with expensive illumination models and FX like radiosity, reflection blurring, shadow mapping, etc. - and of course very complex surface shading models too.
And that problem isn't solvable yet. Currently, with 12 cores at 3.2GHz, scenes with those attributes can take anywhere from 1min to several hours for an interactive render to show nicely - let alone complete. So, what is it a good doctor will tell you? If doing that causes you a problem... then don't do that.
And indeed this is the solution everyone I've worked with (and that's literally hundreds!) employs. I know some may think it silly for someone else to tell them how to work, but if no one ever did there would be chaos! People would be pounding nails with wrenches and wearing their clothes backwards - the world would actually fall apart.

.. No, really.
Just think how many 3GHz cores it would take to get a 1min (12-core) interactive render down under 5sec. I count 144 cores at 3GHz to get there, and 72 cores at 3GHz will get you there in 10sec. We're really not there yet. So in almost all cases it's wireframe in the gooey all around, especially when it can't actually be fast enough anyway. To check things which need full render feedback, we're mostly still at the point where we need to press the render button, and when those renders run longer than about 1min or so it's always more advantageous to do them on a separate networked machine controlled by a render manager.
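For anyone who wants the arithmetic spelled out, here's a quick Python sketch of that core-count estimate. It assumes perfectly linear scaling with core count at a fixed clock, which real renderers only approximate:

# Core counts needed to hit a target interactive render time,
# assuming render time scales linearly with core count.
import math

baseline_cores = 12
baseline_seconds = 60  # the 1min interactive render above, at 3GHz

def cores_needed(target_seconds):
    return math.ceil(baseline_cores * baseline_seconds / target_seconds)

for target in (10, 5):
    print(f"{target}s target -> ~{cores_needed(target)} cores at 3GHz")
# -> 10s target -> ~72 cores at 3GHz
# ->  5s target -> ~144 cores at 3GHz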
Interactive rendering is great for scene and surface setup on simple content, but there's a point at which it's rendered useless (pardon the word-play). You're right in saying that, generally, only more cores will help, but the number needed to actually make a difference is currently beyond single desktop workstations. And we're not there yet.
It is getting closer though. Users have reported that the Intel Xeon Phi, at about $4k, is between 3 and 15 times as fast as an 8 to 12-core 2.8 to 3GHz dual-CPU system, depending on the software, the API, etc. And the much faster "Knights Landing" 14nm-process chips are right around the corner. I guess with two or more of those in a machine or on TB2 cables, we can finally begin to accept interactive preview engines as more viable solutions? Currently it's interesting when it works, but most studio and professional people don't/can't base their workflow(s) on it yet. I wanna see some real-world Phi examples though.
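Just to put those reported numbers back into the 1min render example above - a purely illustrative sketch, where the 3x to 15x range is the user-reported figure, not a benchmark of mine:

# What the reported Phi speedups would mean for the 1min interactive render.
baseline_seconds = 60

for speedup in (3, 15):
    print(f"{speedup}x faster -> ~{baseline_seconds / speedup:.0f}s per interactive update")
# -> 3x faster  -> ~20s per interactive update
# -> 15x faster -> ~4s per interactive update

Even the best-case figure only just reaches the ~5s feedback range mentioned earlier, which is why I'd still call it promising rather than proven.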