> Do you feel such an expectation is unreasonable?
i suppose one could argue that.. sure.
winning that argument, on the other hand....
good luck in court :thumbsup:
> Do you feel such an expectation is unreasonable?
> Apple goes to great lengths to have generic ads, and hides technical details (try to find which 8-core CPU is in the MP6,1 on an Apple website).
i don't disagree with you.. just couldn't come up with a better analogy. sorry.
there's a point i'm trying to make, but maybe i'll just refrain until clearer words come to me.
> Perhaps I missed it but I don't see an answer to my question in your response.
what happens when it sometimes runs at 3.2GHz? should apple be able to charge you more, since it's going faster than advertised?
because maybe i should quit allowing apple to monitor my workflow.. next thing i know, they'll be sending a bill for those cpu spikes
(i kid, i kid.. i don't really feel like arguing with you about this)
> If they advertise a system as operating at x frequency, which they do, then it is not unreasonable for a consumer to expect it to operate at x frequency in continuous operation.
Apple goes to great lengths to have generic ads and hides technical details (try to find which 8-core CPU is in the MP6,1 on an Apple website). A lawsuit would be very difficult, since Apple makes few concrete claims.
> I agree completely for a workstation or server.
If they advertise a system as operating at x frequency, which they do, then it is not unreasonable for a consumer to expect it to operate at x frequency in continuous operation.
> nope. didn't miss anything.
Perhaps I missed it, but I don't see an answer to my question in your response.
I'm really worried this is the excuse Apple needs to completely exit the pro market.
Watch them say "because no one wants these kinds of machines anymore," not "because we built the wrong machine for the job."
I could see them undoing a lot of the damage by:
1. Ditching AMD for Nvidia: taking more of a hit on the upfront component cost, but saving on the AppleCare bill while getting better-performing, cooler, more energy-efficient products.
2. Reintroducing the "Power Mac" brand as a line of cMP-like towers with modern parts, or partnering with IBM on Mac OS X workstations, so they can keep those "ugly"-looking towers away from all the pretty iToys.
3. Saving face by keeping the nMP as the "prosumer" line: for people who like the Mac Mini and need a bit more oomph, but don't need a massive, upgradeable workhorse machine.
Not that it'll ever happen, sadly.
Marketing is important, as buyers make purchasing decisions based on what they're told a product can do. If a product fails to meet its marketed capabilities, then the manufacturer is being misleading.
However, I believe the issue extends beyond base versus turbo speed. What people are referring to is the system's inability to maintain even the base speed for extended periods: not because the components are incapable of doing so, but because the packaging of the components causes thermal issues that force the decreased performance.
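To put rough numbers on what sustained throttling costs, here is a back-of-the-envelope sketch in Swift. Every figure in it is invented for illustration, not measured from any Mac:

```swift
import Foundation

// Back-of-the-envelope: effective clock when a CPU duty-cycles between its
// advertised base clock and a thermally throttled clock.
// ALL numbers below are hypothetical, chosen only to illustrate the math.
let baseClockGHz = 3.0       // advertised base frequency
let throttledClockGHz = 2.4  // frequency the machine drops to when hot
let fractionThrottled = 0.4  // share of a long job spent throttled

// Time-weighted average clock over a sustained workload.
let effectiveClockGHz = (1.0 - fractionThrottled) * baseClockGHz +
                        fractionThrottled * throttledClockGHz

let shortfall = 1.0 - effectiveClockGHz / baseClockGHz
print(String(format: "Effective clock: %.2f GHz (%.0f%% below base)",
             effectiveClockGHz, shortfall * 100))
// -> Effective clock: 2.76 GHz (8% below base)
```

Even a modest fraction of time spent throttled pulls the effective clock below the advertised base, which is the heart of the complaint.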
> I could see them undoing a lot of the damage by:
> 1. Ditching AMD for Nvidia: taking more of a hit on the upfront component cost, but saving on the AppleCare bill while getting better-performing, cooler, more energy-efficient products.
No, I haven't said that; I was simply impressed by how well Apple's iPhone is doing with its custom CPU. All I'm saying is that they might benefit from doing the same thing with their workstations.
> Correct me if I'm misunderstanding the benchmarks out there, but hasn't Maxwell gotten pretty good with OpenCL? It seems like AMD still has a slight edge in the majority of benchmarks, but Nvidia appears to be in the hunt.
I see much more of an advantage for consumers in Apple staying with AMD in the long run. Ever since Apple stayed with AMD cards, we have seen much more OpenCL development in many professional applications. Do we really want to see most applications tied to the proprietary CUDA platform, with no option besides Nvidia? Consumers need a second choice, and we are seeing increased development from AMD that creates more competition in the graphics card field.
OK, here we go:
https://forums.macrumors.com/thread...rottling-heat-and-performance.1815601/page-40
If you don't want to read 40+ pages, I'll sum up.
The 5K iMac, when fitted with the better GPU (I use the term loosely), routinely runs at 105C.
The fan comes on and the CPU and GPU frequencies plunge until it gets a little cooler; then they go back up, bouncing on the thermal limit. Apple's true level of hubris becomes apparent when you see that in Windows the fan can go faster and keep the machine cooler. So, in order to maintain thinness and quiet, they let the internals boil themselves to death. (105C is higher than boiling, in fact.)
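The bouncing described here is classic hysteresis. Below is a toy Swift simulation of that feedback loop; this is not Apple's actual fan/throttle logic, and every constant is invented purely to show the pattern:

```swift
import Foundation

// Toy model of "bouncing on the thermal limit": heat up at full clocks,
// hit the limit, throttle until slightly cooler, clock back up, repeat.
// NOT Apple's real SMC logic; all constants are made up for illustration.
let thermalLimitC = 105.0  // throttling kicks in here
let recoveryC = 100.0      // full clocks resume here
let heatPerTickC = 1.5     // net heating rate at full clocks (made up)
let coolPerTickC = 1.0     // net cooling rate while throttled (made up)

var temperatureC = 90.0
var throttled = false

for tick in 0..<40 {
    if throttled {
        temperatureC -= coolPerTickC
        if temperatureC <= recoveryC { throttled = false }    // clocks go back up
    } else {
        temperatureC += heatPerTickC
        if temperatureC >= thermalLimitC { throttled = true } // frequencies plunge
    }
    print(String(format: "t=%02d  %.1fC  %@", tick, temperatureC,
                 throttled ? "THROTTLED" : "full clocks"))
}
```

Run it and the temperature saws back and forth between the two thresholds, which is exactly the oscillation the thread's monitoring logs show.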
Note the many posts by me over the last year or so predicting the early demise of said machines, and note that my predictions have started coming true (see recent repair shop posts about fixing 100+ logic boards for Apple).
The machine desperately needed quiet, cool Nvidia cards. Apple got the bargain-basement AMD space heaters instead and let these poor machines run at high clocks and fry themselves to death. 105C as a standard operating temperature is suicide, and Apple knew it.
Read the thread; several times people spoke with Apple engineers who said 105C wasn't possible, only to be corrected. Expect brown areas over the GPU on that lovely 5K screen; there is no way to stop radiated heat.
> I have already proposed something. Compare performance in Final Cut Pro X between AMD and Nvidia Maxwell GPUs. You will see why Apple went with AMD GPUs as the go-to solution.
Correct me if I'm misunderstanding the benchmarks out there, but hasn't Maxwell gotten pretty good with OpenCL? It seems like AMD still has a slight edge in the majority of benchmarks, but Nvidia appears to be in the hunt.
Competition is good, so I'd like to see AMD stay relevant, but I'd rather see Apple go with the best technology, whatever that is.
Is there any sense in Apple buying AMD outright?
Is there any sense in Apple developing its own Mac GPU based on their ARM work?
As the Mac transforms more and more into appliance computing, open standards become less and less important. (not commenting here on whether that's good or bad, but that seems to be the direction the industry is moving in... the Mac is going to follow the iPhone/iPad, not the other way around)
I have already proposed something. Compare performance in Final Cut Pro X between AMD and Nvidia Maxwell GPUs. You will see why Apple went with AMD GPUs as the go-to solution.
People don't understand that one of the reasons Apple ditched Nvidia is that Apple does not want application performance optimized through drivers. That is why they developed a low-level API, primitive in comparison to the other options, to get rid of performance optimization in drivers and put it in the app itself. Application performance is what matters most here. With low-level APIs, drivers have absolutely no control over an application's performance; all optimization is done in the application itself. The API tells the app what to do, and there is no room for error correction in drivers. It's all done by applications. That is a good thing, regardless of what people here will try to say: it gives the application full control over the hardware and all of its features, with asynchronous compute as the most important feature.
What is the context here? Nvidia GPUs fly when they have great drivers, but Ashes of the Singularity and Nvidia's own developer guide have shown that context switching between graphics and compute on Maxwell GPUs brings a decrease in performance. Not to mention that their GPUs have only one asynchronous compute engine.
Before low-level APIs, there was a real possibility that GPUs would be underused, not fully utilized (look at DX11 on AMD GPUs...). Now they can be.
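For readers unfamiliar with what "optimization in the application itself" looks like in practice, here is a minimal Metal compute sketch in Swift. It isn't from the post; it simply shows the app compiling its own pipeline and scheduling its own command buffers, with no driver-level tuning layer in between:

```swift
import Metal

// A trivial compute kernel the app ships and compiles itself.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void scale(device float *data [[buffer(0)]],
                  uint id [[thread_position_in_grid]]) {
    data[id] *= 2.0;
}
"""

guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("No Metal device available")
}

// The app builds its pipeline state up front; there is no driver-side
// shader replacement or background recompilation in this model.
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "scale")!)

// The app also records and commits GPU work itself; scheduling decisions
// (including overlapping compute with graphics, where the hardware allows)
// belong to the application, not the driver.
let count = 1024
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: [])!
let commands = queue.makeCommandBuffer()!
let encoder = commands.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreadgroups(MTLSize(width: count / 64, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commands.commit()
commands.waitUntilCompleted()
```

Everything between pipeline creation and commit is explicit application code, which is the point being argued: with this style of API, a vendor cannot "fix" a slow app from the driver side.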
And I am writing all of this for the Xth time in this thread.
P.S. People on the hackintosh scene have already done direct comparisons between Maxwell and GCN GPUs.
Not trying to start anything here, but regarding the possible lawsuit: come on.
It's reasonable for us to want maximum performance all the time, but you also need to account for the particular machine and the use it's designed for.
Also, nowhere do you see speed specs for the GPUs, and for the CPUs you see no SKUs, just the advertised Intel processor speeds (Intel being the focus here), single-core and turbo. They never claim full speed 100% of the time; in fact, no one else does either, so that claim won't stick.
Would you also sue Intel because you can't keep all cores working at full turbo speed? If you don't know your stuff, it might not be readily evident that full turbo applies only when few cores are at work.
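For anyone unclear on how turbo scales with active core count, here is an illustrative Swift snippet. The frequency table is hypothetical, not a real Intel SKU's bin table:

```swift
// Why "turbo speed" does not mean "all cores, all the time":
// Turbo Boost grants higher frequency bins only while few cores are active
// and power/thermal headroom allows. The numbers below are INVENTED.
let baseClockGHz = 3.0

// Max turbo frequency by number of active cores (hypothetical bins).
let turboCeilingGHz: [Int: Double] = [
    1: 3.9, 2: 3.9, 3: 3.7, 4: 3.6,
    5: 3.4, 6: 3.3, 7: 3.2, 8: 3.1,
]

for cores in 1...8 {
    let ceiling = turboCeilingGHz[cores] ?? baseClockGHz
    print("\(cores) active core(s): up to \(ceiling) GHz")
}
// An all-core load tops out well below the headline single-core turbo
// figure, and sustained thermals can pull it down further, toward base.
```

So the advertised turbo number is a best-case ceiling for light-threaded work, not a promise for an eight-core render.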
We need to be realistic here, know what we're talking about, and not get heated up over nothing.
I'm not saying we should just take it without questioning, but we should do it with proper knowledge.