
tuxon86

macrumors 65816
May 22, 2012
1,321
477
Ok, do you enjoy measuring the framerate, and not focusing on the game itself? ;)

Framerate impacts your game. A low framerate means that you lose some of the visual information. This is especially true in FPS games, or even in PvP World of Warcraft.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Framerate impacts your game. A low framerate means that you lose some of the visual information. This is especially true in FPS games, or even in PvP World of Warcraft.
I experience it ;). It just baffles me still.

All I need is 60FPS, nothing more. But that is just me and my point of view.
 

Stacc

macrumors 6502a
Jun 22, 2005
888
353
I game on my 2011 MacBook Pro. Let's just say I'm happy if 1. the game runs and 2. I get more than 30 FPS at any resolution...

The importance of maintaining 60 FPS varies depending on the game. For instance, running StarCraft at 30 FPS isn't too big of a deal, but if you are playing Counter-Strike at 30 FPS you are losing the ability to correct your aim as quickly.
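To put rough numbers on that (a back-of-the-envelope sketch, not a measurement): at a given fps, the newest visual feedback on screen is up to one frame interval old, so halving the framerate doubles how long you wait to see a correction land.

[CODE]
// Frame interval in milliseconds at a given framerate.
func frameTimeMs(fps: Double) -> Double {
    return 1000.0 / fps
}

print(frameTimeMs(fps: 30))  // ~33.3 ms between visual updates
print(frameTimeMs(fps: 60))  // ~16.7 ms -- half the wait to see your aim correction
[/CODE]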
 

tuxon86

macrumors 65816
May 22, 2012
1,321
477
I experience it ;). It just baffles me still.

All I need is 60FPS, nothing more. But that is just me and my point of view.

Also, having a more powerful GPU means that you can get plenty of benefit even if your goal is 60 FPS, such as being able to game at a higher resolution or with better physics and effects. I know car analogies are bad, but a car with an engine able to reach 300 km/h gives a nicer ride than one whose engine can only take it to 110 km/h, when your goal is to cruise at 100 km/h.
 

Zarniwoop

macrumors 65816
Aug 12, 2009
1,038
760
West coast, Finland
Flicker fusion frequency is highly variable between individuals and depends heavily on the display device and the content being displayed (e.g. dark/light, moving/still). Where do these 43 fps come from?

Study this:
http://accidentalscientist.com/2014...e-better-at-60fps-and-the-uncanny-valley.html

Monitor refresh rate has to match the fps, or otherwise it does not look right. That's why 60 Hz/fps has become a standard... not because it's the best or most efficient rate, but because it is based on the US electricity grid: the NTSC TV system was based on the mains frequency. Here in Europe it is 50 Hz for electricity and 50 Hz for TV, but thanks to the PC world, 60 Hz has landed in Europe as well. So now we have two competing systems, and neither is the most efficient for fps.

But finally some geniuses have invented adaptive sync, which automatically matches the monitor's refresh rate to the fps... and it feels natural, no matter if it is 43 or 60 or something in between.
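A minimal sketch of why that matters, with invented numbers: under a fixed 60 Hz scanout, a frame that misses the 16.7 ms deadline waits for the next refresh tick, while an adaptive-sync display refreshes whenever the frame is ready.

[CODE]
// Fixed 60 Hz vsync vs. adaptive sync, with an invented 23 ms render time (~43 fps).
let renderTimeMs = 23.0
let fixedTickMs = 1000.0 / 60.0  // 16.7 ms per 60 Hz refresh

// Fixed refresh: the finished frame waits for the next whole tick.
let fixedPresentMs = (renderTimeMs / fixedTickMs).rounded(.up) * fixedTickMs
print(fixedPresentMs)  // 33.3 ms -- effectively 30 fps, visible as judder

// Adaptive sync: the display refreshes as soon as the frame is done.
print(renderTimeMs)    // 23.0 ms -- a steady ~43 fps feels smooth
[/CODE]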
 
Last edited:
  • Like
Reactions: ManuelGomes

tomvos

macrumors 6502
Jul 7, 2005
345
119
In the Nexus.
I wouldn't be surprised if soon we'll have a desktop running two OSes on two different CPUs in one computer simultaneously - not just for security purposes, but for many other services.

This is already reality. The marketing label for this is Intel AMT. The CPU is an ARC. But I guess this is not what you had in mind. ;-)
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
What sort of impact do AVX and mixed workloads have on tasks like video/photo editing?

If you're using circa-2010 binaries, then none. There is no AVX code in the program (let alone AVX2). ;-)
Lightroom's Camera Raw capabilities have a SIMD option. It will look something like this in your Lr config:

" ... Camera Raw SIMD optimization: SSE2,AVX,AVX2 ...."

Tends to pop up on conversions. FCPX also leverages AVX on certain tasks. If you're running background batch conversions while concurrently running an app with GUI tasks, then you have a mixed workload. The benchmarks many folks obsess over tend to only cover "do one thing at a time" contexts. So yeah, if you throw all of the cores a 95% workload of the exact same type of conversion (so all go into AVX subsections), then there isn't as much of a benefit.

Are conversion loads also being shifted to GPGPU? Yes. There are some apps that look for the "lowest common denominator" solution. All Intel and AMD CPUs since the "Sandy Bridge" era have AVX. So yeah, the pre-2011-era computers don't (all previous Mac Pros included), but even the folks sticking to "just" 5-year-old computers are mostly in the class of having AVX at this point. The bigger adoption issue is folks using 4+ year old software (and/or software vendors whose coders are firmly committed to 5+ year old architectures).
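For anyone curious whether their own Mac has AVX/AVX2, a quick sketch: Intel Macs expose the CPU feature flags through sysctl, with AVX listed under machdep.cpu.features and AVX2 under machdep.cpu.leaf7_features.

[CODE]
import Darwin

// Read a sysctl string value (e.g. the CPU feature-flag list) by name.
func sysctlString(_ name: String) -> String? {
    var size = 0
    guard sysctlbyname(name, nil, &size, nil, 0) == 0, size > 0 else { return nil }
    var buffer = [CChar](repeating: 0, count: size)
    guard sysctlbyname(name, &buffer, &size, nil, 0) == 0 else { return nil }
    return String(cString: buffer)
}

let flags = (sysctlString("machdep.cpu.features") ?? "") + " " +
            (sysctlString("machdep.cpu.leaf7_features") ?? "")
print("AVX:  \(flags.contains("AVX"))")   // note: "AVX2" also contains "AVX"
print("AVX2: \(flags.contains("AVX2"))")
[/CODE]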



Yeah, I am sure they have been waiting on TB3.

Waiting on the CPU and GPU and TB all to move forward is a dubious strategy. It's going to lead to long, extended refresh cycles that will cause communication problems long term (given Apple's communication policies).



They could make a big splash if they released a redesigned macbook pro, an updated mac pro and a retina display.

Unless the new hardware exposes new APIs (e.g., HiDPI screens with graphics API adjustments, new coding APIs, etc.) it makes no sense to "hoard" these for WWDC. The major focus of WWDC is going to be the next iteration of OS X, iOS, watchOS, and tvOS. That should be enough to pack 1-1.5 hours. They don't need to drag out the dog-and-pony show for too long.

Hmm... Interesting. Never knew much of the history of it.
Why couldn't Intel "choke" AMD too?

They have (no AMD x86+GPU wins), but at least AMD can compete and lose. Nvidia wouldn't even get invited to the dance at all.

The number of discrete GPU cards in Apple's Mac designs is down to about zero; it is zero if we're talking non-custom designs. The design approach Apple takes is largely based on what they do for laptops and iOS devices (fixed, with Apple's design looped into the implementation). So being king of the discrete GPU card market isn't much of any leverage.



If I'm not mistaken... the latest USB is 3.1, yes? Would we expect that as well?

Thunderbolt v3 pragmatically subsumes USB 3.1: if TB v3 is present you get a USB 3.1 implementation "for free".
USB 3.1 gen 1 is pragmatically just USB 3.0, and the chipset for the E5 v3-v4 options basically includes it. So Apple could also deliver some classic Type-A USB sockets at "3.1" speed.
... Or at least, that we could have a choice. But what about their OpenCL only vision? ...

"Their ... vision"? Their == Apple? Apple doesn't have an OpenCL-only vision; there is this thing called Metal. Apple's vision is not to be locked into someone else's restrictive proprietary standard without an extremely good rationale behind it. CUDA doesn't particularly have one.


I personally don't think Nvidia is in any trouble. If you are building a good (gaming) machine, it's an Nvidia that fills up the case. Check out any new builds on YouTube, check out any recommended builds on the internet, it's Nvidia all over.

The herd tends to buy what the rest of the herd is buying. Long term they will have issues if they can't buy enough fab slot volume to get discounts on leading-edge processes. They won't implode overnight, but top dog of a stagnating market isn't going to lead to a bright future on the markets or in long-term revenue.


They are also working on chips to build into cars...

When Tesla goes off and hires folks to do their own .....

http://9to5mac.com/2016/02/29/tesla-apple-chip/

doesn't really point to Nvidia having major traction. They aren't "out of the running", but there is no "break out" here either.
 
  • Like
Reactions: Mago and pat500000

Stacc

macrumors 6502a
Jun 22, 2005
888
353
Unless the new hardware exposes new APIs (e.g., HiDPI screens with graphics API adjustments, new coding APIs, etc.) it makes no sense to "hoard" these for WWDC. The major focus of WWDC is going to be the next iteration of OS X, iOS, watchOS, and tvOS. That should be enough to pack 1-1.5 hours. They don't need to drag out the dog-and-pony show for too long.

True, it doesn't make sense to hoard them, but in this case the release window would likely be within a month before or after WWDC anyway. For the Mac Pro, Broadwell-E is rumored to be available in early June, with Polaris in July. For the MacBook Pro, the quad-core Iris CPU has not appeared in any shipping products yet, so June may be a reasonable timeline for that to show up.

Apple does talk about hardware at WWDC from time to time when it makes sense. The 2012 announcement of the Retina MacBook Pro and the 2013 announcement of the Mac Pro come to mind. If they released a new Mac Pro, MacBook Pro, and external display, it would be interesting enough for developers (who all use Macs) to show.
 

Zarniwoop

macrumors 65816
Aug 12, 2009
1,038
760
West coast, Finland
Ok, do you enjoy measuring the framerate, and not focusing on the game itself? ;)
The iPad Pro is already using adaptive sync. Apple promoted it because it saves energy and the battery lasts longer. This is true: when you can run at a lower FPS, the GPU has to work less and consumes less energy. What Apple didn't mention is that adaptive sync can also compensate for the shortcomings of the GPU... if it cannot provide 60 fps for some scenes, the screen will drop hertz and the user won't notice any stuttering.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
These are the most likely future MP 7,1 CPUs:
mp71cpu.png

How much of a premium will Apple charge for going from 4 cores to 14? $1,106 is the delta; they're unlikely to offer an 8-core option.
I didn't include the E5-2697v4 (18c @ 145W, the actual successor to the 12-core E5-2697v2 in the nMP 6,1) since it raises the TDP by 15 W (and costs $2,700); with a few improvements to the PSU and cooling, the trash can could handle it.
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
If you look at what I wrote above, a new filesystem is mandatory to converge two OSes running on the same chip. Maybe only one of the CPUs has access to the file system and data between them is shared via a RAM disk? HSA is, in any case, a method to accomplish this.

Uhhhh... I don't think a new filesystem will help two OSes run from the same disk at all.

Also, OpenGL running on Metal makes no sense. That would be slower. OpenGL right now runs without anything between it and the GPU. Putting Metal between it and the GPU, and then adding all the translation on top, will just make it slower, not faster.

The speed issues that come with OpenGL are just a result of OpenGL. If OpenGL could have been fixed to run as fast as Metal, they wouldn't have done it. Putting OpenGL on top of Metal is like trying to make a horse go faster by strapping it to a race car to pull around.

The Nvidia drivers are also not going legacy only. That's totally insane. Apple didn't add Metal support for Nvidia just for funsies.
 

Stacc

macrumors 6502a
Jun 22, 2005
888
353
These are the most likely future MP 7,1 CPUs: View attachment 624621
How much of a premium will Apple charge for going from 4 cores to 14? $1,106 is the delta; they're unlikely to offer an 8-core option.
I didn't include the E5-2697v4 (18c @ 145W, the actual successor to the 12-core E5-2697v2 in the nMP 6,1) since it raises the TDP by 15 W (and costs $2,700); with a few improvements to the PSU and cooling, the trash can could handle it.

Remember that most of the options in the current Mac Pro are from the 1600 series, which are much cheaper. These haven't been released yet for Xeon v4.
 

developer13245

macrumors 6502a
Nov 15, 2012
771
1,004
These are the most likely future MP 7,1 CPUs: ...

... with a few improvements to the PSU and cooling, the trash can could handle it.

This is the invalid assumption made by a lot of people, and it points to the main flaw in the nMP:
the COMMON thermal core now REQUIRES thermal COMPATIBILITY in a way that never existed before (GPUs can't just have their own cooling fan anymore, can they??)

So even a 5 W power increase in the CPU can potentially "cook" another component that uses the same thermal core. What if some future next gen of GPU chips has lower max temp specs than the 6,1's? They would fry when placed in the same design with current CPU etc. components. Gee, wonder why there are NO graphics card upgrades for the nMP???

Sure, the thermal core could be made larger, but that sets off a retooling chain reaction which increases product-line manufacturing costs. Not gonna happen; remember, it's a business.

It does not make sense to change the diameter of the cylinder by a fractional inch for every revision. Think about it, but please include a reasonable consideration of the laws of thermodynamics and manufacturing while doing so.

Face it: End of trash can.
 

filmak

macrumors 65816
Jun 21, 2012
1,418
777
between earth and heaven
Case a) The ARM could handle disk I/O, de-/encryption, USB, DSP/ISP, and sleep-mode activity. It could run only in kernel mode. The x86-64 could run apps, the UI, and act as band master of the GPGPU. It would be possible to switch the roles according to need. You could reboot either one and it wouldn't crash the other.
Case b) Integrating the iOS and macOS worlds. Connect an iPad Pro to a Mac Pro with TB and they could start to take each other's roles. You could have a split screen where you share documents from both devices as if they were one. Imagine a Mac mini that works as a dock for an iPad Pro. This would also make a new kind of hybrid laptop possible.

Thank you, it's interesting...
But right now we can't get even a single OS polished and free of bugs; I can't imagine having two of them running, given the current situation...
 

Zarniwoop

macrumors 65816
Aug 12, 2009
1,038
760
West coast, Finland
These are the most likely future MP 7,1 CPUs: View attachment 624621
How much of a premium will Apple charge for going from 4 cores to 14? $1,106 is the delta; they're unlikely to offer an 8-core option.
I didn't include the E5-2697v4 (18c @ 145W, the actual successor to the 12-core E5-2697v2 in the nMP 6,1) since it raises the TDP by 15 W (and costs $2,700); with a few improvements to the PSU and cooling, the trash can could handle it.

Apple has used the 1600 series for the 4-, 6-, and 8-core models. Here's a Haswell comparison, 2600 vs 1600:

Haswell E5-2643v3, 6(12) cores, 3.4 GHz, turbo 3.7 GHz, USD $1552
Haswell E5-1650v3, 6(12) cores, 3.5 GHz, turbo 3.8 GHz, USD $583

Haswell E5-2667v3, 8(16) cores, 3.2 GHz, turbo 3.6 GHz, USD $2057
Haswell E5-1680v3, 8(16) cores, 3.2 GHz, turbo 3.8 GHz, USD $1723

So the E5-1600 v4 series should follow the pattern of running slightly higher clocks than the 2600 series, at least on turbo. And the price difference is huge! Some rumors mention that there should be a high-clocked 4-core version in the 1600 series. Let's see...

Also, OpenGL running on Metal makes no sense. That would be slower. OpenGL right now runs without anything between it and the GPU. Putting Metal between it and the GPU, and then adding all the translation on top, will just make it slower, not faster.

The speed issues that come with OpenGL are just a result of OpenGL. If OpenGL could have been fixed to run as fast as Metal, they wouldn't have done it. Putting OpenGL on top of Metal is like trying to make a horse go faster by strapping it to a race car to pull around.

The Nvidia drivers are also not going legacy only. That's totally insane. Apple didn't add Metal support for Nvidia just for funsies.

There are a couple of reasons why OpenGL should go on top of Metal:
1) Money. Currently all three GPU vendors write their drivers for TWO systems: Metal and OpenGL. This two-driver setup costs hundreds of thousands... maybe millions. And it slows down development on both, because Mac driver programmers are hard to find.
2) The three vendors write very different quality of code. I've followed some Apple developer forums since 2010, and what people have complained about most is: a) Apple's OpenGL is like a train track; there's no freedom to go right or left. The one Truth works, and anything else is broken. The same goes for OpenCL. This means code moved from Windows OpenGL to the Mac usually runs slower. b) Nvidia's drivers are slower on the Mac than on Windows, while AMD's drivers are more on par between platforms; Nvidia requires more specific tweaking.
3) Metal works like a GPU to a programmer. It is nearly as if you were programming the GPU itself; there is very little overhead.
4) Putting OpenGL on top of Metal would remove all OpenGL driver writing for the three GPU vendors (see the sketch below this list). A lot of money saved!!
5) The speed penalty doesn't have to be big.
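To make point 4 concrete, here is a toy sketch of the translation idea. The GL-style entry point and mode enum are invented for illustration; the Metal types and the drawPrimitives call are real Metal API. The expensive part of a real shim is mapping GL state to pipeline state objects; the draw call itself maps almost one-to-one:

[CODE]
import Metal

// Invented GL-style draw mode standing in for GLenum values.
enum GLMode { case triangles, lines }

// A hypothetical glDrawArrays-like wrapper recorded onto a real Metal
// render command encoder.
func glDrawArraysShim(_ encoder: MTLRenderCommandEncoder,
                      mode: GLMode, first: Int, count: Int) {
    let primitive: MTLPrimitiveType = (mode == .triangles) ? .triangle : .line
    encoder.drawPrimitives(type: primitive,
                           vertexStart: first,
                           vertexCount: count)
}
[/CODE]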
Thank you, it's interesting...
But right now we can't get even a single OS polished and free of bugs; I can't imagine having two of them running, given the current situation...
True. If Apple renames OS X to macOS 11, El Capitan could remain the main OS for the next year, getting updates up to .11, and there would be at least one year of public beta testing for macOS.
 
Last edited:

JesperA

macrumors 6502a
Feb 10, 2012
691
1,079
Sweden
So even a 5 W power increase in the CPU can potentially "cook" another component that uses the same thermal core. What if some future next gen of GPU chips has lower max temp specs than the 6,1's?
Don't know if I agree with this:

1) Most GPUs have a higher max temp than CPUs, so even if the CPU is close to its Tcase it would not "cook" another component on the thermal core, especially not the GPUs.

2) Every computer on this planet has a lot of thermal sensors that drive the fan-control curve, and in the case of the nMP, which has a common core for all its components, the fan will adapt to all thermal events. There is almost no risk that any component in the system will run hotter than its designed max temp; the fan in the Mac Pro will simply track the thermal sensor of whichever component is closest to its max allowed temperature (see the sketch at the end of this post).

3) If the hypothesized future GPU with a lower max temp than the current 6,1's is put into the 6,1 or a future Mac Pro (with the same thermal core), its max allowed temp would have to be at least 15-20 degrees Celsius lower to fall below the max Tcase of any available Xeon CPU. In the last 15 years I have not seen a GPU with a lower max temp than a Xeon, so I doubt this will happen.

4) The max temp of the W9000 (D700) in the Mac Pro is over 90 degrees Celsius, the max temp for the Xeons in the Mac Pro is less than 80 degrees, and yet the D700s are not cooking the Xeons when the D700s are at full load and the Xeons are idling. The system, hardware and software, is simply designed to cool everything below the components' specced max temps.

5) If you increase the CPU by 5 W, you still will not be close to the heat output of the GPUs in the system, so I don't know why you are so worried that a CPU would cook other components; you should be more worried about the GPUs cooking other components before a CPU would even come close to doing it.

6) The difference in ambient temperature in my room from winter to summer affects the temperature of the thermal core and the components more than any 5 W CPU increase ever would, and my Mac Pro has not exploded yet; I have not even seen it throttle any of its components yet.


Why would your hypothesized future nMP be designed in such a way that it could not handle its components?
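A minimal sketch of the sensor-driven behavior described in point 2. The component names and limits below are invented for illustration, not the nMP's real sensor map; the idea is just that fan duty tracks whichever component has used the most of its own thermal headroom:

[CODE]
struct Sensor {
    let name: String
    let tempC: Double     // current reading
    let maxTempC: Double  // component's specced limit
}

// Fan duty in 0...1, driven by the component closest to its own limit.
func fanDuty(for sensors: [Sensor], minDuty: Double = 0.2) -> Double {
    let worstFraction = sensors.map { $0.tempC / $0.maxTempC }.max() ?? 0
    return min(1.0, max(minDuty, worstFraction))
}

let readings = [
    Sensor(name: "CPU",   tempC: 55, maxTempC: 78),  // idling Xeon
    Sensor(name: "GPU A", tempC: 88, maxTempC: 95),  // D700 under load
    Sensor(name: "GPU B", tempC: 86, maxTempC: 95),
]
print(fanDuty(for: readings))  // ~0.93 -- the loaded GPUs set the pace
[/CODE]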
 

lowendlinux

macrumors 603
Sep 24, 2014
5,460
6,788
Germany
Don't know if I agree with this:

1) Most GPUs have a higher max temp than CPUs, so even if the CPU is close to its Tcase it would not "cook" another component on the thermal core, especially not the GPUs.

2) Every computer on this planet has a lot of thermal sensors that drive the fan-control curve, and in the case of the nMP, which has a common core for all its components, the fan will adapt to all thermal events. There is almost no risk that any component in the system will run hotter than its designed max temp; the fan in the Mac Pro will simply track the thermal sensor of whichever component is closest to its max allowed temperature (see the sketch at the end of this post).

3) If the hypothesized future GPU with a lower max temp than the current 6,1's is put into the 6,1 or a future Mac Pro (with the same thermal core), its max allowed temp would have to be at least 15-20 degrees Celsius lower to fall below the max Tcase of any available Xeon CPU. In the last 15 years I have not seen a GPU with a lower max temp than a Xeon, so I doubt this will happen.

4) The max temp of the W9000 (D700) in the Mac Pro is over 90 degrees Celsius, the max temp for the Xeons in the Mac Pro is less than 80 degrees, and yet the D700s are not cooking the Xeons when the D700s are at full load and the Xeons are idling. The system, hardware and software, is simply designed to cool everything below the components' specced max temps.

5) If you increase the CPU by 5 W, you still will not be close to the heat output of the GPUs in the system, so I don't know why you are so worried that a CPU would cook other components; you should be more worried about the GPUs cooking other components before a CPU would even come close to doing it.

6) The difference in ambient temperature in my room from winter to summer affects the temperature of the thermal core and the components more than any 5 W CPU increase ever would, and my Mac Pro has not exploded yet; I have not even seen it throttle any of its components yet.


Why would your hypothesized future nMP be designed in such a way that it could not handle its components?
The problem he was referring to was probably heat soak. We don't know how many total watts the thermal core can dissipate before we reach heat soak. Since this is Apple, I'd bet that the current 12-core and D700s are just behind that line.
 
  • Like
Reactions: tuxon86

JesperA

macrumors 6502a
Feb 10, 2012
691
1,079
Sweden
The problem he was referring to was probably heat soak. We don't know how many total watts the thermal core can dissipate before we reach heat soak. Since this is Apple, I'd bet that the current 12-core and D700s are just behind that line.
Yes, heat soak is included in my "calculations"; the thermal sensors will negate heat soak. If the GPUs are at full load, running at 90 degrees each and putting over 200 W of heat into the thermal core, and that heat spreads to the case of the idling CPU (in this example) and pushes its Tcase close to the max temp, the thermal sensors will simply notice and increase the fan speed to keep all the components within their temperature ranges.

I have been rendering on the GPUs while the CPU was only under low to moderate load; the temps on the GPUs have always been 10-20 degrees higher than the temp of the CPU, so I don't think heat soak is an issue.
 
  • Like
Reactions: Mago

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
This is the invalid assumption made by a lot of people, and it points to the main flaw in the nMP:
the COMMON thermal core now REQUIRES thermal COMPATIBILITY in a way that never existed before (GPUs can't just have their own cooling fan anymore, can they??)

So even a 5 W power increase in the CPU can potentially "cook" another component that uses the same thermal core. What if some future next gen of GPU chips has lower max temp specs than the 6,1's? They would fry when placed in the same design with current CPU etc. components. Gee, wonder why there are NO graphics card upgrades for the nMP???

Sure, the thermal core could be made larger, but that sets off a retooling chain reaction which increases product-line manufacturing costs. Not gonna happen; remember, it's a business.

It does not make sense to change the diameter of the cylinder by a fractional inch for every revision. Think about it, but please include a reasonable consideration of the laws of thermodynamics and manufacturing while doing so.

Face it: End of trash can.
Thermodynamics is not something you learn reading Popular Mechanics.

First, your assumption doesn't consider that there is an active component (the fan) blowing air that carries heat away immediately.

Second, in the case of no fan (a passive heatsink), the temperature at adjacent components will not be the same as at the CPU, but the mean temperature along the heatsink area. Explained in a way you can understand: if a 135 W CPU heats a big pan, does the whole pan reach the same temperature as the CPU? No. Depending on the thermal conductivity of the pan's material, the heat concentrates close to the CPU if conductivity is low, and spreads more evenly if conductivity is high; the pan's actual temperature also depends on the area exposed to cooler material or vacuum (heat dissipation to a vacuum is the most difficult case and represents the material's heat-transfer limit).

So no matter whether the CPU is 145 W and the GPUs are 110 W, as long as the fan is active and the cooler's TDP isn't exceeded, all components will be cooled as required.

In case the thermal delta among components is a problem, because it requires the heatsink to shed heat quickly, what Apple needs to do is semi-isolate sections of the thermal core, delaying heat normalization along the core and giving the air time to cool the critical components before the heat from the other components reaches their section of the core.

The nMP thermal core is a near-perfect solution, better even than liquid cooling (when no extreme temperatures are involved, as on a few CPUs from AMD).

To improve the thermal core, Apple can refine the inner fin design to cool the higher-TDP components more quickly, upgrade the fan (the cheapest option), or switch from aluminum to copper if the TDP delta is too extreme.
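A back-of-the-envelope version of the aluminum-versus-copper point, using Fourier's law (dT = q*L / (k*A)). The wattage and geometry numbers are assumed for illustration, not measured nMP values:

[CODE]
// Steady-state temperature drop across a conducting path: dT = q*L/(k*A).
func conductionDeltaT(watts: Double, pathLengthM: Double,
                      conductivityWmK: Double, crossSectionM2: Double) -> Double {
    return watts * pathLengthM / (conductivityWmK * crossSectionM2)
}

let q = 130.0      // W flowing from the CPU into the core (assumed)
let length = 0.05  // m of conduction path (assumed)
let area = 0.002   // m^2 of cross-section (assumed)

print(conductionDeltaT(watts: q, pathLengthM: length, conductivityWmK: 205, crossSectionM2: area))
// ~15.9 K across aluminum
print(conductionDeltaT(watts: q, pathLengthM: length, conductivityWmK: 400, crossSectionM2: area))
// ~8.1 K across copper -- same geometry, roughly half the gradient
[/CODE]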

Your arguments don't have an engineering basis, but seem influenced by the nMP haters.
 

Bubba Satori

Suspended
Feb 15, 2008
4,726
3,756
B'ham

Not fair.
Those are computer companies.
The latest technology as soon as it's available.
A full range of models and peripherals.
No "Pro " computers with 450 watt power supplies.
No rebranded consumer GPUs.
No corporate silence on roadmaps.
No matching watch bands.
No A tax.
Not fair.
 