
AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
I want a new Mac Pro to transform into TRANSFORMERS!
Autobots! Start CODING!
Autobots! Render!
I'm not sure that you really want the old hardware in the MP6,1 for that.

You might end up with a Chevrolet Corvair (unsafe at any speed) - transformed into a Pinto (a good car, but early ones had an issue with high-speed rear-end collisions).
 
  • Like
Reactions: pat500000

Zarniwoop

macrumors 65816
Aug 12, 2009
1,038
760
West coast, Finland
Oh, and that "more cores for a higher price" is BS. The 2630 v2 is a 6-core and listed at $616. The 2630 v4 is a 10-core and listed at $667. The 2697 v2 (mentioned above) is a 12-core, the 2695 v4 is an 18-core, both around that $2600 mark. We're getting 4-6 more cores for the same price compared to v2.

True, I didn't notice those among the mass of SKUs. Base clocks did go down, though, from 2.6 GHz (v2) to 2.4 GHz (v3) to 2.2 GHz (v4), so those new cores really need to count.
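For what it's worth, the list prices and clocks quoted above can be turned into a quick price-per-core check; here is a minimal Python sketch using only the figures already mentioned in the thread (base clocks only; turbo behaviour is ignored):

[CODE]
# Quick check of the list prices and base clocks quoted above
# (figures are the ones posted in this thread).
xeons = {
    "E5-2630 v2": {"cores": 6,  "base_ghz": 2.6, "price_usd": 616},
    "E5-2630 v4": {"cores": 10, "base_ghz": 2.2, "price_usd": 667},
}

for name, s in xeons.items():
    per_core = s["price_usd"] / s["cores"]
    aggregate = s["cores"] * s["base_ghz"]   # naive "total base GHz"
    print(f"{name}: ${per_core:.0f}/core, {aggregate:.1f} aggregate base GHz")

# E5-2630 v2: $103/core, 15.6 aggregate base GHz
# E5-2630 v4: $67/core, 22.0 aggregate base GHz
[/CODE]

So on paper the v4 part is cheaper per core and has more aggregate base clock, provided the workload can actually use the extra cores.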
More competition isn't really going to help if what's being done is actually just really freaking hard, and even achieving these small improvements is tremendously costly.

Competition is needed to get movement in pricing. So, if the industry has hit a wall, at least they could compete on price. Intel has a monopoly on x86-64 at the moment, and all their products are pretty expensive.

UPDATE: I'm not alone in my opinion. Another quote from the Anandtech review: "We have said it before: this market desperately needs some competition if we want a new generation to bring more exciting improvements in performance-per-dollar metrics."
As of today, only a few apps benefit from more than six CPU cores; most Adobe software, for example, does not.

Adobe Premiere Pro, tested 08/2015 on Windows.
https://www.pugetsystems.com/labs/articles/Adobe-Premiere-Pro-CC-Multi-Core-Performance-698/

Adobe Photoshop, tested 04/2015
https://www.pugetsystems.com/labs/articles/Adobe-Photoshop-CC-Multi-Core-Performance-625/

The user really has to know whether they need more than six cores (12 logical) before paying top dollar for the extra cores. Some Photoshop filters can even use 12, so if that specific filter is run for many hours every day, it might count. But usually not.
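To make the "more than six cores" question concrete, Amdahl's law gives a rough upper bound on what extra cores can buy. A minimal sketch, with purely illustrative parallel fractions (these are not figures from the Puget Systems tests linked above):

[CODE]
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the fraction
# of the workload that actually runs in parallel. The p values below are
# illustrative assumptions only.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.75, 0.95):
    s6, s12 = amdahl_speedup(p, 6), amdahl_speedup(p, 12)
    print(f"p={p:.2f}: 6 cores -> {s6:.2f}x, 12 cores -> {s12:.2f}x "
          f"(extra gain from doubling cores: {s12 / s6:.2f}x)")

# p=0.75: 6 cores -> 2.67x, 12 cores -> 3.20x (extra gain from doubling cores: 1.20x)
# p=0.95: 6 cores -> 4.80x, 12 cores -> 7.74x (extra gain from doubling cores: 1.61x)
[/CODE]

Unless the filter or export is very well parallelised, doubling the core count buys far less than 2x, which is why the extra cores are hard to justify for most Adobe workloads.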

3D rendering, on the other hand, is usually well threaded and can use all the CPU power there is. But as there have been debates about this here before, serious rendering happens in the cloud.
 

Zarniwoop

macrumors 65816
Aug 12, 2009
1,038
760
West coast, Finland
So, to conclude all my posts: if the IT industry wants to move forward, it has to happen with specialized co-processors. CPUs in general will move more and more to low power, and co-processors such as DSPs (speech recognition, complex virtual sound production), ISPs (augmented reality and face recognition), GPUs and GPGPU, and the M7 to M9 co-processors on iOS devices will take on the work... and who knows what else. My guess is that there will be added security; for instance, virus scanners or encryption/decryption could run on those co-processors. A new file system is needed for added security.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
At least it seems there is a possibility that OS X will soon have native support for Btrfs.

About CPU development, it's true the von Neumann architecture is getting exhausted; we could have a 60 GHz CPU right now, but the electricity bill would be prohibitive. As for specialized co-processors, the concept being developed is the dynamically programmable (or, more exactly, configurable) FPGA. The problem with current FPGA tech is that programming it is either cumbersome (Verilog) or you sacrifice FPGA efficiency on a von Neumann-like implementation (as with OpenCL). But I feel the industry is working to solve this challenge, and the future is hybrid CPU/FPGA as the final iteration of the von Neumann architecture.
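For reference, the OpenCL route mentioned above looks roughly like this. A minimal pyopencl sketch (it assumes the pyopencl package and a working OpenCL runtime are installed; whether the device behind it is a GPU or an FPGA depends entirely on the vendor toolchain):

[CODE]
# Minimal OpenCL host program via pyopencl: the same kernel source can be
# compiled for a GPU or, with a vendor SDK, for an FPGA -- at the cost of
# the efficiency loss mentioned above. Assumes pyopencl and an OpenCL
# driver are installed.
import numpy as np
import pyopencl as cl

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
[/CODE]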
So, to conclude all my posts: if the IT industry wants to move forward, it has to happen with specialized co-processors. CPUs in general will move more and more to low power, and co-processors such as DSPs (speech recognition, complex virtual sound production), ISPs (augmented reality and face recognition), GPUs and GPGPU, and the M7 to M9 co-processors on iOS devices will take on the work... and who knows what else. My guess is that there will be added security; for instance, virus scanners or encryption/decryption could run on those co-processors. A new file system is needed for added security.
 

wallysb01

macrumors 68000
Jun 30, 2011
1,589
809
True, I didn't notice those among the mass of SKUs. Base clocks did go down, though, from 2.6 GHz (v2) to 2.4 GHz (v3) to 2.2 GHz (v4), so those new cores really need to count.

The max turbo actually went up, though: 3.5 GHz (v2) to 3.6 GHz (v4). So if you're using the same number of cores, you're getting the same GHz.

Competition is needed to get movement in pricing. So, if the industry has hit a wall, at least they could compete on price. Intel has a monopoly on x86-64 at the moment, and all their products are pretty expensive.

UPDATE: I'm not alone in my opinion. Another quote from the Anandtech review: "We have said it before: this market desperately needs some competition if we want a new generation to bring more exciting improvements in performance-per-dollar metrics."

Sure, I'm not disagreeing that we need competition. What I'm saying, however, is that there is probably a reason we don't have any competition. And the reason is that what Intel is doing has become very, very difficult.

As of today, only a few apps benefit from more than six CPU cores; most Adobe software, for example, does not.

Adobe Premiere Pro, tested 08/2015 on Windows.
https://www.pugetsystems.com/labs/articles/Adobe-Premiere-Pro-CC-Multi-Core-Performance-698/

Adobe Photoshop, tested 04/2015
https://www.pugetsystems.com/labs/articles/Adobe-Photoshop-CC-Multi-Core-Performance-625/

The user really has to know whether they need more than six cores (12 logical) before paying top dollar for the extra cores. Some Photoshop filters can even use 12, so if that specific filter is run for many hours every day, it might count. But usually not.

3D rendering, on the other hand, is usually well threaded and can use all the CPU power there is. But as there have been debates about this here before, serious rendering happens in the cloud.

Sure, the user has to know what they need, and these huge 20-40+ core machines are not for your typical Photoshop user, even if they call themselves a "PRO!".
 

ManuelGomes

macrumors 68000
Original poster
Dec 4, 2014
1,617
354
Aveiro, Portugal
Super Xeon is MIA. :)
You got the i7-6950X 10-core though :)
Still not on ARK, though.
I wonder if we'll get to see a fully unlocked BDW-EP Xeon with 24 cores?!

Bets are up: will we see a Late 2016 nMP? Hardly Early, probably not Mid and maybe Late.
Just kidding, let's not get started here!! :)
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
Super Xeon is MIA. :)
You got the i7-6950X 10-core though :)
Still not on ARK, though.
I wonder if we'll get to see a fully unlocked BDW-EP Xeon with 24 cores?!

Bets are up: will we see a Late 2016 nMP? Hardly Early, probably not Mid and maybe Late.
Just kidding, let's not get started here!! :)

I bet on a WWDC announcement of the Late 2016 nMP, available sometime in Q3.
 

tomvos

macrumors 6502
Jul 7, 2005
345
119
In the Nexus.
I tend to agree - if there's going to be a 2016 Mac Pro, (pre-)announcing it at MacWorld SF makes sense.

Since the MP is most likely an insignificant contributor to Apple's bottom line - there's little chance of an Osborne Effect ( https://en.wikipedia.org/wiki/Osborne_effect ).

It's likely that the Osborne effect is already happening. Most people know the system is quite outdated and that the next system will be released soonish. I would assume that the only people who buy an nMP at the moment are the ones forced to do so by business requirements. Everyone else seems to be waiting for the new release.
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
It's likely that the Osborne effect is already happening. Most people know the system is quite outdated and that the next system will be released soonish. I would assume that the only people who buy an nMP at the moment are the ones forced to do so by business requirements. Everyone else seems to be waiting for the new release.
That's not an "Osborne Effect". Osborne went bankrupt because people stopped buying the current product and waited for the pre-announced one.

Apple has watch-band sales to keep the lights on....
 
  • Like
Reactions: MacsRgr8 and Mago

tuxon86

macrumors 65816
May 22, 2012
1,321
477
Interesting:
http://www.anandtech.com/show/10219/nvidia-announces-quadro-m5500-details-professional-vr-plans
A 2048-CUDA-core GPU with 4.7 TFLOPs of compute power.

A single Fiji in the S9300 x2 has the same TDP and 6.9 TFLOPs. That's around 45% more compute power in the same thermal envelope...

One more thing: https://forum.beyond3d.com/threads/nvidia-pascal-speculation-thread.55552/page-49#post-1904636

The m5500 is a laptop GPU while your AMD counterpart is a server GPU... Not the same target...
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
The m5500 is a laptop GPU while your AMD counterpart is a server GPU... Not the same target...
That's precisely what I found most notable: a "server" part being more FLOPS/watt efficient.

Of course, I'll check again once production Pascal and Polaris can be benchmarked in controlled, fair setups.

What I see is AMD trying to keep up with Nvidia (the market leader for a long time) and comparing its next products with previous Nvidia products (despite being just launched, the M5500 isn't based on Pascal-related tech; it's just the previous Maxwell cores, perhaps on a smaller process).

Correct me if I'm wrong about the Nvidia M5500.

Actually, I don't care that much.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
P.S. I personally tested an AMD Fury Nano and was disappointed by its double-precision performance. On single precision it's very good, but like most Nvidia Maxwell cards it's weak at double-precision compute. For gaming it's OK, but on servers, where more of the compute load is double precision, it's not good (actually, a Xeon D beats it on double-precision compute, and that's nothing to be proud of; a Xeon Phi reaches 1 TFLOP double precision and some Nvidia Quadros approach 3 TFLOPs).

Of course, everything depends on what you are doing; there are some algorithms where, no matter how much you trick them into running on the GPGPU, the GPGPU simply can't handle the complexity well enough to be a useful offload.
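One way to sanity-check CPU-versus-GPU double-precision claims like the ones above is to compare theoretical peaks. A minimal sketch with placeholder figures (the unit counts, clocks and per-cycle FLOP rates below are illustrative assumptions, not the specs of the parts discussed):

[CODE]
# Theoretical peak FLOPS = units * clock * FLOPs per unit per cycle.
# All figures below are illustrative placeholders, not vendor specs.
def peak_gflops(units: int, clock_ghz: float, flops_per_cycle: float) -> float:
    return units * clock_ghz * flops_per_cycle

# A hypothetical 8-core CPU with FMA units doing 16 DP FLOPs/core/cycle ...
cpu_dp = peak_gflops(units=8, clock_ghz=2.0, flops_per_cycle=16)
# ... versus a hypothetical GPU whose FP64 rate is 1/16 of its FP32 rate.
gpu_sp = peak_gflops(units=4096, clock_ghz=1.0, flops_per_cycle=2)
gpu_dp = gpu_sp / 16

print(f"CPU peak FP64: {cpu_dp:.0f} GFLOPS")   # 256 GFLOPS
print(f"GPU peak FP32: {gpu_sp:.0f} GFLOPS")   # 8192 GFLOPS
print(f"GPU peak FP64: {gpu_dp:.0f} GFLOPS")   # 512 GFLOPS
[/CODE]

With a heavily cut-down FP64 rate, a GPU that looks unbeatable on single precision can end up in the same ballpark as a modest CPU on double precision, which is the situation described above.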
 

tuxon86

macrumors 65816
May 22, 2012
1,321
477
That's precisely what I found most notable: a "server" part being more FLOPS/watt efficient.

Of course, I'll check again once production Pascal and Polaris can be benchmarked in controlled, fair setups.

What I see is AMD trying to keep up with Nvidia (the market leader for a long time) and comparing its next products with previous Nvidia products (despite being just launched, the M5500 isn't based on Pascal-related tech; it's just the previous Maxwell cores, perhaps on a smaller process).

Correct me if I'm wrong about the Nvidia M5500.

Actually, I don't care that much.

Of course you don't care, that isn't really your goal here.
Come back to us when AMD is able to shrink that monster to an MXM board without making compromises, and then we'll talk. Until then, keep on posting the same crap.

edit: I thought I was responding to koyoot, but in any case the point stands.
 

Stacc

macrumors 6502a
Jun 22, 2005
888
353
Nvidia is rumored to be talking about Pascal today at GTC. One particular technology that would be useful in the Mac Pro is NVLink. Essentially this is a high-bandwidth connection between two or more GPUs, enabling faster multi-GPU compute. The other thing it does is reduce the bandwidth required between the CPU and the GPUs. Thus, two graphics cards would require only 16x PCIe, freeing the other 24 lanes for things like three Thunderbolt 3 ports (4 lanes each) and two SSDs (4 lanes each). This solves the problem of trying to map out the bandwidth of the current configuration of the Mac Pro, where 32 of the 40 PCIe lanes are taken up by graphics cards, leaving Thunderbolt and the SSD to fight for the remaining 8 lanes.
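The lane arithmetic in that paragraph can be written out explicitly. A small sketch of the 40-lane budget under the allocations described above (a speculative configuration, not a confirmed Apple design):

[CODE]
# 40 PCIe lanes per Broadwell-EP socket, allocated as described in the post.
TOTAL_LANES = 40

current_mp = {"GPU 1": 16, "GPU 2": 16}   # leaves TB and SSD to share the rest
nvlink_mp = {
    "GPUs (shared x16, linked via NVLink)": 16,
    "Thunderbolt 3 ports (3 x 4)": 12,
    "SSDs (2 x 4)": 8,
}

for name, cfg in (("current nMP layout", current_mp), ("NVLink concept", nvlink_mp)):
    used = sum(cfg.values())
    print(f"{name}: {used} lanes used, {TOTAL_LANES - used} spare")

# current nMP layout: 32 lanes used, 8 spare
# NVLink concept: 36 lanes used, 4 spare
[/CODE]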

This seems like a very Apple solution in that they can leverage their custom motherboard and graphics cards to make a solution where other workstation manufacturers will likely stick to commodity hardware with traditional PCIe graphics cards. Of course NVLink is being marketed at HPC which means it will likely be very expensive. There is also the problem that Apple and Nvidia don't seem to be on good terms.

But a Mac Pro with dual Pascal cards, RAID 0 SSDs with 4 TB of capacity, and six Thunderbolt 3 ports is sure fun to think about.
 

tomvos

macrumors 6502
Jul 7, 2005
345
119
In the Nexus.
That's not an "Osborne Effect". Osborne went bankrupt because people stopped buying the current product and waited for the pre-announced one.

Technically you're right; Apple did not pre-announce the nMP. However, in the case of the nMP, companies like Intel did announce Thunderbolt 3 and new Xeons, AMD already released lots of newer GPUs, and Samsung released better SSDs. So, technically, Apple did not pre-announce the nMP, but all the suppliers did it for Apple instead... which I would regard as the same mechanism as the Osborne effect, and likely to have an impact on sales.
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
Technically you're right; Apple did not pre-announce the nMP. However, in the case of the nMP, companies like Intel did announce Thunderbolt 3 and new Xeons, AMD already released lots of newer GPUs, and Samsung released better SSDs. So, technically, Apple did not pre-announce the nMP, but all the suppliers did it for Apple instead... which I would regard as the same mechanism as the Osborne effect, and likely to have an impact on sales.
It will be an Osborne Effect only if Apple goes bankrupt due to delays in upgrading the MP6,1.
 
  • Like
Reactions: tomvos

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
The m5500 is a laptop GPU while your AMD counterpart is a server GPU... Not the same target...
All that matters here is perf/watt. A laptop part is not able to compete with a server part in the same thermal envelope when it comes to raw compute power.
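Taking the figures quoted earlier in the thread (4.7 TFLOPs for the M5500, 6.9 TFLOPs for one Fiji in the S9300 x2, at roughly the same TDP), the perf/watt gap is simply the ratio of the two numbers; a quick check:

[CODE]
# With roughly equal TDP, the FLOPS/watt ratio reduces to the FLOPS ratio.
# Figures are the ones quoted in this thread, not independently verified.
m5500_tflops = 4.7   # Quadro M5500 (as quoted)
fiji_tflops = 6.9    # one Fiji GPU in the S9300 x2 (as quoted)

advantage = fiji_tflops / m5500_tflops - 1.0
print(f"Fiji advantage at equal TDP: {advantage:.0%}")   # ~47%
[/CODE]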
That's precisely what I found most notable: a "server" part being more FLOPS/watt efficient.

Of course, I'll check again once production Pascal and Polaris can be benchmarked in controlled, fair setups.

What I see is AMD trying to keep up with Nvidia (the market leader for a long time) and comparing its next products with previous Nvidia products (despite being just launched, the M5500 isn't based on Pascal-related tech; it's just the previous Maxwell cores, perhaps on a smaller process).

Correct me if I'm wrong about the Nvidia M5500.

Actually, I don't care that much.
I have said this every time and can repeat myself: people believe that a 4 TFLOPs Nvidia card is faster than a 4 TFLOPs GPU from any other vendor. AMD lost the mindshare war a few years ago. Now the tide is changing. Finally.
 