I'm slow but I get it eventually... Wait, was that meant sarcastically too? Damn, now I'm lost in that unknown 4%. :p

:D

:D Like we have discussed before... it seems that this forum is stuck in a rewind/repeat loop. Hopefully we either find out more concrete information or this new Mac Pro is released soon, because otherwise I fear that this forum will implode due to a rip in the space-time continuum.
 
By all reports the Mach kernel they have struggles with more cores,

Mach or XNU? Mach was designed practically from day zero to deal with NUMA and multiple cores, back when it was a CMU research project. The problems kick in with the merged Unix/BSD 'glue' that is intertwined with it and layered on top.

I remember reading an article about how it doesn't scale well.

This one?

https://forums.macrumors.com/threads/1601942/


Really it would be more about making what they already have work:

https://developer.apple.com/library/mac/releasenotes/Performance/RN-AffinityAPI/

Extending L2/L3 cache affinity to memory affinity isn't a huge leap.



Regardless, Intel also struggles to add cores without significantly jacking up the price. Xeons are crazy expensive,

Not really. Intel isn't struggling. However, "get more, pay more" is relatively normal economics. If you want the bleeding-edge number of cores, then pay bleeding-edge prices. More cores take up more die space. Lower wafer yield (due to a bigger die) leads to higher prices. When Intel gets to the next process shrink, either the price goes down or the base clock is bumped for the same number of cores.

There are $200-500 Xeon CPUs.
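The die-size/yield argument above can be put in numbers. A toy sketch with entirely made-up wafer cost, usable area, and defect-density figures (not Intel's actual numbers), using the standard Poisson yield approximation:

```python
# Illustrative only: a toy Poisson defect-density yield model showing why a
# bigger die (more cores) costs disproportionately more per working chip.
import math

WAFER_COST = 5000.0      # assumed cost of one processed 300 mm wafer, USD
WAFER_AREA = 70000.0     # assumed usable wafer area, mm^2
DEFECTS_PER_MM2 = 0.002  # assumed defect density

def cost_per_good_die(die_area_mm2):
    """Cost of one working die = wafer cost / (dies per wafer * yield)."""
    dies_per_wafer = WAFER_AREA / die_area_mm2
    yield_fraction = math.exp(-DEFECTS_PER_MM2 * die_area_mm2)  # Poisson model
    return WAFER_COST / (dies_per_wafer * yield_fraction)

for cores, area in [(4, 160), (6, 240), (12, 480)]:
    print(f"{cores:2d} cores, {area} mm^2: ${cost_per_good_die(area):7.2f} per good die")
```

With these assumed numbers, tripling the die area roughly sextuples the cost per good die: you pay once for the extra area and again for the yield loss on that larger area.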


it's cheaper to add parallelization through GPUs rather than CPUs, which the industry has recognized.

It is cheaper for many computations because those GPU cores are only particularly good at straight-line calculation (not weaving through branches in logic). They are smaller as a result, and so deliver more "bang for the buck" of die-space utilization.

Intel is doing something similar with the Xeon Phi. It is first-generation, so the prices are higher, but after another couple of iterations it will likely show up as a member of a dual-CPU package, or, longer term, get woven into the CPU package itself (a flavor of integrated GPGPU/graphics).
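The branching point can be illustrated without a GPU at all. A minimal sketch of the predication style GPU kernels favor, rewriting a branchy per-element loop as straight-line arithmetic (plain Python standing in for SIMD lanes):

```python
# Why GPUs prefer arithmetic to branching: on SIMD hardware every lane in a
# group executes both sides of a divergent branch, so kernels are often
# written "branch-free" with a select/predication pattern instead of if/else.
def branchy(xs):
    out = []
    for x in xs:
        if x < 0:          # divergent branch: lanes may disagree on direction
            out.append(-x)
        else:
            out.append(x * 2)
    return out

def branch_free(xs):
    # Predicated form: compute both results, blend them with a 0/1 mask.
    return [(x < 0) * (-x) + (x >= 0) * (x * 2) for x in xs]

data = [-3, -1, 0, 2, 5]
assert branchy(data) == branch_free(data)  # both give [3, 1, 0, 4, 10]
```

Same answers either way; the predicated form trades a little redundant arithmetic for the absence of control flow, which is exactly the trade small GPU cores are built for.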



The idea these days is to find ways to put your code on the massively parallel and cheaper GPU.

That's already happened for code bases that aren't layered in deep inertia and have a high need for computational horsepower. The trend the new Mac Pro is likely tracking is making access to memory between the CPU and GPGPU more uniform: not separate piles of memory, but something closer to NUMA. If Apple is completely ignoring NUMA, they are vastly missing the boat.





So Apple is just correctly following the industry trend. If software could take advantage of it, it would be fun to put some more GPU out in a Thunderbolt cage.

GPUs in a Thunderbolt cage are not an industry trend. That is not particularly what Thunderbolt was designed for. It can be done and it will "work" (faster than what we currently have), but that is not an optimal design point.

That would be interesting because the bandwidth should be adequate for pure compute tasks.

If folks are moaning and groaning about a non-optimal QPI-supported memory layout (the "why Apple dropped dual" article above), then Thunderbolt can't possibly be the answer.
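A rough back-of-envelope makes the bandwidth gap concrete. These are assumed theoretical peaks for the era (~15.75 GB/s for PCIe 3.0 x16, a single 10 Gbit/s Thunderbolt v1 channel); real-world figures will be lower:

```python
# Back-of-envelope comparison (assumed peak figures): moving a 2 GB working
# set to a GPU over PCIe 3.0 x16 versus one Thunderbolt (v1) channel.
PCIE3_X16_GBPS = 15.75   # ~15.75 GB/s theoretical peak for PCIe 3.0 x16
TB1_GBPS = 10 / 8        # 10 Gbit/s Thunderbolt channel = 1.25 GB/s

working_set_gb = 2.0
pcie_seconds = working_set_gb / PCIE3_X16_GBPS
tb_seconds = working_set_gb / TB1_GBPS

print(f"PCIe 3.0 x16: {pcie_seconds * 1000:6.1f} ms")
print(f"Thunderbolt:  {tb_seconds * 1000:6.1f} ms")
```

Roughly an order of magnitude apart, which is why the "adequate for pure compute" caveat matters: the transfer cost only amortizes when the kernel runs long on data that stays resident on the external GPU.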
 
Yep. GPU compute would be great if software could take advantage of it.

i don't follow the maxwell development and/or rumors but someone did say they're dropping silverlight (thankfully too.. that's what mainly prevented me from considering maxwell lately.. i didn't like the idea of installing X software in order to use Y software.. didn't seem too refined or well thought out)

anyway-- do you have any info regarding maxwell 3 and gpu? or are they as tight lipped as apple? ;)
 
Not really. Intel isn't struggling. However, "get more, pay more" is relatively normal economics. If you want the bleeding-edge number of cores, then pay bleeding-edge prices. More cores take up more die space. Lower wafer yield (due to a bigger die) leads to higher prices. When Intel gets to the next process shrink, either the price goes down or the base clock is bumped for the same number of cores.

There are $200-500 Xeon CPUs.

FWIW, the "$200" Xeons are an entirely different socket. The real Xeons (socket 2011) start at around $500 for anything decent, like hex-core. The 12 core will be $1500+.

Intel's definitely not struggling. They have no competition, so they can price it at whatever they want.
 
i don't follow the maxwell development and/or rumors but someone did say they're dropping silverlight (thankfully too.. that's what mainly prevented me from considering maxwell lately.. i didn't like the idea of installing X software in order to use Y software.. didn't seem too refined or well thought out)

anyway-- do you have any info regarding maxwell 3 and gpu? or are they as tight lipped as apple? ;)

Silverlight? Assuming you're referring to Next Limit's Maxwell and Microsoft's long-obsolete Flash competitor, Maxwell has never required it in any form. That's such a bizarre connection. They have nothing to do with each other.

Maxwell's devs have always maintained that they're looking at GPU and will utilize it when the tradeoffs are more sensible. In v3 the multilight editor will leverage it, for instance.
 
Silverlight? Assuming you're referring to Next Limit's Maxwell and Microsoft's long-obsolete Flash competitor, Maxwell has never required it in any form. That's such a bizarre connection. They have nothing to do with each other.

haha. really? geez.. maybe i'm thinking about something entirely different than maxwell? (fwiw- when i tried it before, it was the sketchup plugin and not studio.. maybe that has something to do with it?) dunno, i'll search around to see why i had silverlight installed at one time (with all of its popups etc :rolleyes:).. i'm pretty sure it was the maxwell (plugin) which required it but i could be wrong..

Maxwell's devs have always maintained that they're looking at GPU and will utilize it when the tradeoffs are more sensible. In v3 the multilight editor will leverage it, for instance.

i guess it's going to be interesting to see what they do with v3 regarding gpu acceleration.. i know there's another render engine out there that has been placed solely on the gpu(s) so they're either experimenting to the fullest or have actually found ways to improve with gpu alone.. i guess we'll just see how it goes over the next year or two..



[EDIT] ok.. so maybe it was just for the maxwell sketchup plugin..

In contrast to the Maxwell Render Suite plugins, Maxwell for SketchUp contains its own render engine, and has been expressly designed for use without the need of a full Maxwell Render Suite installation.
[...]
This plugin requires a minimum of Microsoft Silverlight 3, but works best with Microsoft Silverlight 4. On 9 December, 2011, Microsoft released Silverlight 5, and we have found that this breaks the plugin in various ways, which range from flickering on the Windows OS, to a complete failure to load on Mac OSX. In order to run the plugin, it will be necessary to make sure you are not running this new version of Microsoft Silverlight. Below are instructions on how to ensure that your machine is running Silverlight 4.

...and i assumed it was for all of maxwell.. my bad
 
FWIW, the "$200" Xeons are an entirely different socket. The real Xeons (socket 2011) start at around $500 for anything decent, like hex-core.

The 4-core models are no less real than the 6-core ones. $294 right now:
http://ark.intel.com/products/64621...-E5-1620-10M-Cache-3_60-GHz-0_0-GTs-Intel-QPI

$198 right now:
http://ark.intel.com/products/64592...E5-2603-10M-Cache-1_80-GHz-6_40-GTs-Intel-QPI

They exist and are very much Socket 2011. That was the point. The Xeon product line as a whole comes at a very wide variety of price points. Even when limited to the Xeon E5 subset, they come at a fairly wide variety of price points.


The 12 core will be $1500+.

Actually up over $1,800, and probably closer to $2,000.

Intel's definitely not struggling. They have no competition, so they can price it at whatever they want.

There is competition. Frankly, AMD has been playing the "core count" game more heavily than Intel has. It hasn't really gotten them very far. Not having PCIe v3.0 is getting AMD kicked out of high-end HPC designs. Stuck on dual memory controllers... again, not helping. Architecture makes a difference, not just the micro-architecture inside the cores.
 
No you haven't. If you read his post carefully, he mentioned upgrading a 5,1 to a dual 12-core (ie, 24-core). That won't be possible without tearing the logic board completely out of the case and replacing it with something new.

And at that point, why not just build a Hack?

Oh, I see now.
 
I thought the 12 cores were.. $2650 (2.4), and $2950 (2.7) .. U think apple get them for 2K?

If you dig deeper into the cpu-world article and the other echoes working off the same data, it turns out those are some third-party vendor's pre-order prices, not Intel's. In the initial launch window I would not be shocked at all if Intel is holding back (or even can't fill) those 12-core models from component suppliers. In turn, those suppliers will goose those items higher for the "absolutely must have 24 cores right now no matter what the cost" market. Kind of the opposite reason component suppliers didn't initially have any E5 1620 at launch, though. But like in 2012, at launch the whole E5 v2 product line will probably not be at full-volume distribution.

Even for Intel, those prices are more than a little gouging. In last year's lineup there were roughly $200 deltas between entries at the top.

http://www.cpu-world.com/news_2012/2012030701_Intel_rolls_out_Xeon_E5-1600_and_E5-2600_CPUs.html

This new one has $300 deltas.

http://www.cpu-world.com/news_2013/2013080801_More_details_on_Intel_Xeon_E5-2600_v2_lineup.html

$300 is about 15% of $2,000. To jump two spots is a 30% jump. That's kind of ridiculous because performance improvements don't match that at all. It doesn't take very many jumps for that to spiral out of making sense, except for those whose problems are still bigger than 20-24 cores even with the upgrades (i.e., the desperate).
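Checking that arithmetic with the approximate figures quoted here:

```python
# Approximate figures from the thread: ~$300 steps between adjacent SKUs
# near the top of a ~$2,000 stack.
top_price = 2000.0   # rough price near the top of the E5 v2 lineup
step = 300.0         # assumed delta between adjacent top-end SKUs

one_step_pct = step / top_price * 100       # one SKU position up
two_steps_pct = 2 * step / top_price * 100  # two positions up
print(f"one SKU step: {one_step_pct:.0f}%, two steps: {two_steps_pct:.0f}%")
```

A ~15% price step per position, against single-digit per-SKU performance gains, is where the "spiral out of making sense" point comes from.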
 
Single CPU or dual CPU, I don't think it matters much. We all know the new Mac Pro is going to be insanely powerful either way you look at it. Personally, I can't wait!
 
Bottom line....

Bottom line.... we have no choice; most will buy this "pro" model etc. because if Apple keeps up their pace it will be a decade till the next Mac Pro.
 
The 4-core models are no less real than the 6-core ones. $294 right now:
http://ark.intel.com/products/64621...-E5-1620-10M-Cache-3_60-GHz-0_0-GTs-Intel-QPI

$198 right now:
http://ark.intel.com/products/64592...E5-2603-10M-Cache-1_80-GHz-6_40-GTs-Intel-QPI

They exist and are very much Socket 2011. That was the point. The Xeon product line as a whole comes at a very wide variety of price points. Even when limited to the Xeon E5 subset, they come at a fairly wide variety of price points.




Actually up over $1,800, and probably closer to $2,000.



There is competition. Frankly, AMD has been playing the "core count" game more heavily than Intel has. It hasn't really gotten them very far. Not having PCIe v3.0 is getting AMD kicked out of high-end HPC designs. Stuck on dual memory controllers... again, not helping. Architecture makes a difference, not just the micro-architecture inside the cores.

I think you are nitpicking just to argue. I specifically said hex-core, not a non-HT quad or an HT quad barely in the $200 range.

$1500+, notice the plus...

Seriously, AMD? You really are nitpicking.
 
I just realized this...in their video on the website, it shows the 'processor' and explains that there are 12-core options, but they are only referring to a single-CPU Ivy Bridge chip. From what it looks like...Apple will not be offering a dual-CPU model.

12-core single chip will be the top option.

Looks like Apple dropped the ball on this one.

Been living under a rock?
 
Is this Apple engineers testing the 6-core model, or is it spoofed?
http://browser.primatelabs.com/geekbench2/2085841

It's questionable because that's the Sandy Bridge clock speed for the 1660, not the expected 3.6 GHz of the Ivy Bridge version. But the RAM speed is right....

----------

To that I add the question: how easy is it to fake all those fields in a geekbench result?
 