Status
Not open for further replies.
That customized APU is for an Apple TV/console competitor from Apple, with VR capabilities.
 
That customized APU is for an Apple TV/console competitor from Apple, with VR capabilities.
I disagree. While those custom APUs usually end up in game consoles and may end up in some VR-related devices, that won't be the case at Apple. The Apple TV will stay on Ax chips, and the same goes for VR: since Apple develops those chips in-house, it doesn't need AMD's help. That's not the case for Mac hardware, which needs amd64 compatibility, so Apple needs either Intel or AMD. Intel doesn't sell customized CPUs, so AMD is the only available vendor.
 
The question is this: will an Apple TV with an Ax chip provide enough performance for four people using VR at the same time?
 
The question is this: will an Apple TV with an Ax chip provide enough performance for four people using VR at the same time?
That makes no sense. Why would four people share one APU for VR at the same time? As far as I know, each VR display is attached to a single GPU; groups sharing a VR app or environment are linked over the network, the same way every multi-user game works today.

The iPad Pro's A9X chip is actually capable enough for most VR content, whether on a solution like the HTC Vive (two dedicated displays) or something like Google Cardboard (one optically divided display).

Unless you are thinking of a setup like that $30,000 gaming rig from Linus for four simultaneous gamers...
 
There is also virtualization on the GPU, Mago.

Why share the same hardware? Imagine a Top Gear VR experience, something that attracts the whole family at the same time. VR is everything, not only gaming.
 
Thank you :)

That is completely and utterly inaccurate. ECC RAM is not keyed differently at all; the only difference between ECC and regular memory is some extra circuitry. The only question is whether the CPU supports it, which it unfortunately does not. DDR3 is DDR3, DDR4 is DDR4; there is no physical difference in keying between normal, ECC and registered memory.
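For context on what that "extra circuitry" actually does: ECC modules store check bits alongside each data word and correct single-bit flips in hardware (real DDR ECC uses a 72-bit SECDED code over 64 data bits). Here is a toy sketch of the underlying idea using the classic Hamming(7,4) code; everything below is illustrative, not how any real DIMM is wired, and the function names are my own:

```python
def encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # parity bits at positions 1, 2, 4

def correct(cw):
    """Recover the 4 data bits, repairing at most one flipped bit."""
    c = list(cw)
    # Each parity check covers the positions whose 1-based index has the
    # corresponding bit set; together the checks form a syndrome that
    # points directly at the flipped position (0 means no error).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

cw = encode([1, 0, 1, 1])
cw[3] ^= 1               # simulate a single-bit memory error
print(correct(cw))       # [1, 0, 1, 1] -- the flip is repaired
```

The keying argument stands: none of this changes the module's physical notch, it just adds check bits and logic.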
 
There is also virtualization on the GPU, Mago.

Why share the same hardware? Imagine a Top Gear VR experience, something that attracts the whole family at the same time. VR is everything, not only gaming.
It seems you don't understand how VR works.

And ultimately it makes no sense for Apple to deploy VR on OSX (amd64) when OSX isn't as tuned for gaming as iOS. Furthermore, that would make Apple depend on AMD to get into the VR business, whereas going it alone they could get there earlier.

For VR content you don't need centralized rendering (unless everybody is viewing the same object from the same perspective). VR is, par excellence, a distributed compute task; the only thing the users share is the content, and that only requires improved connectivity.
 
That is completely and utterly inaccurate. ECC RAM is not keyed differently at all; the only difference between ECC and regular memory is some extra circuitry. The only question is whether the CPU supports it, which it unfortunately does not. DDR3 is DDR3, DDR4 is DDR4; there is no physical difference in keying between normal, ECC and registered memory.
Actually I was asking whether Broadwell-EP, which will use DDR4, can use non-ECC memory. Maybe I worded it so that it wasn't obvious.
So, in a word: is it possible to use non-ECC memory in a Mac Pro? :p

Mago, let's just wait and see what happens. It may very well be that Apple will abandon this market completely, or that nobody will predict what they come up with in the end.
 
Actually I was asking whether Broadwell-EP, which will use DDR4, can use non-ECC memory. Maybe I worded it so that it wasn't obvious.
So, in a word: is it possible to use non-ECC memory in a Mac Pro? :p

All good, my apologies for not reading the details further.
Having said that though, I kind of doubt that it will. The big Xeons we see in these machines have always been ECC- or registered-only. The E3-range Xeons support desktop memory these days, which is a fairly recent development, but I think Intel uses that as a way to create segmentation in the market....

I did a little research on this and found this chart; going by its showing RDIMM/LRDIMM, it looks like it will not support desktop memory...

http://cdn.wccftech.com/wp-content/...-_Purely-Platform_Brickland-EX-Comparison.jpg

Actually, a semi-unrelated question: I recall previously that OSX had a 32-core limit built into it.... Is that still the case? Won't these E5 22-core chips break that limit once HT is included in the core count? Or is the limit 32 physical cores?
 
That is completely and utterly inaccurate. ECC RAM is not keyed differently at all; the only difference between ECC and regular memory is some extra circuitry. The only question is whether the CPU supports it, which it unfortunately does not. DDR3 is DDR3, DDR4 is DDR4; there is no physical difference in keying between normal, ECC and registered memory.

I have both right here at my desk... The notch on the ECC stick doesn't line up with the non-ECC one.
 
Ok, so I will tell you exactly what I will do with my Mac Pro:

Video editing, graphic design, text editing, webmastering, Swift coding, and gaming: Heroes of the Storm, Hearthstone, Overwatch.

Do I need ECC for this?
The answer is pretty simple: no.

Can I get rid of ECC and go for a larger pool of non-ECC RAM?
 
This relaxed/economic position about expensive components is somewhat digestible, but don't excuse him for not using a reliable approach.

ECC RAM is not only useful for keeping servers up; more importantly, it prevents bit rot. When you work with original data, you want it unaltered over time.

Workstations also tend to run unattended for long stretches, so like servers they benefit from avoiding memory errors.

Storage is also prone to bit rot and other kinds of corruption.

ECC may not be necessary for gaming, word processing and many of the tasks koyoot cited, but if you're running a long number-crunching algorithm looking for a small singularity and your RAM is corrupted in a way you can't detect, you could end up concluding that the sun hides under the sea rather than beyond the horizon.

Ok, that's the situation for mortals like us, but did anyone know about mission-critical servers like IBM's Z3s? Not only do they have ECC, they are also the only machines available with RAID memory banks: you can hot-swap a DIMM without corruption or stopping processing. Imagine being able to do that.
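To make the "corruption you can't detect" point concrete, here is a minimal Python sketch (my own illustration, not from the thread) showing how much damage a single flipped bit can do to a double-precision number, depending on where it lands:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Return x with one bit of its IEEE-754 float64 representation flipped."""
    (raw,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", raw ^ (1 << bit)))
    return y

# A flip in the low mantissa bits is a tiny, silent perturbation...
print(flip_bit(1.0, 0))   # 1.0000000000000002
# ...but the same single-bit event in the exponent field is catastrophic:
print(flip_bit(1.0, 52))  # 0.5  (lowest exponent bit)
print(flip_bit(1.0, 61))  # 2**-512, i.e. about 7e-155
```

Without ECC, nothing tells you which of those two cases you got; with it, either flip is corrected (or at least detected) before your computation ever sees it.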
 
This relaxed/economic position about expensive components is somewhat digestible, but don't excuse him for not using a reliable approach.

ECC RAM is not only useful for keeping servers up; more importantly, it prevents bit rot. When you work with original data, you want it unaltered over time.

Workstations also tend to run unattended for long stretches, so like servers they benefit from avoiding memory errors.

Storage is also prone to bit rot and other kinds of corruption.

ECC may not be necessary for gaming, word processing and many of the tasks koyoot cited, but if you're running a long number-crunching algorithm looking for a small singularity and your RAM is corrupted in a way you can't detect, you could end up concluding that the sun hides under the sea rather than beyond the horizon.

Ok, that's the situation for mortals like us, but did anyone know about mission-critical servers like IBM's Z3s? Not only do they have ECC, they are also the only machines available with RAID memory banks: you can hot-swap a DIMM without corruption or stopping processing. Imagine being able to do that.
Which is a moot point for the nMP, since any data transferred to and from the GPU may suffer from errors, as the VRAM itself isn't ECC. This is why, on our workstations used for heavy GPGPU processing in simulation work, we make sure the entire data path is ECC.
 
Which is a moot point for the nMP, since any data transferred to and from the GPU may suffer from errors, as the VRAM itself isn't ECC. This is why, on our workstations used for heavy GPGPU processing in simulation work, we make sure the entire data path is ECC.
It depends: if you use your GPU for rendering it's not an issue, and it also isn't an issue for processes that already imply data degradation, such as transcoding.
 
ECC may not be necessary for gaming, word processing and many of the tasks koyoot cited, but if you're running a long number-crunching algorithm looking for a small singularity and your RAM is corrupted in a way you can't detect, you could end up concluding that the sun hides under the sea rather than beyond the horizon.

For me, the reason I bought a personal workstation with ECC was that I never need to worry whether a glitch or a crash was due to a memory error. If there's a memory error, the system will blue-screen with the message "uncorrectable memory error". If it blue-screens with some other message, there's almost no chance that the memory was the problem.

Ok, that's the situation for mortals like us, but did anyone know about mission-critical servers like IBM's Z3s? Not only do they have ECC, they are also the only machines available with RAID memory banks: you can hot-swap a DIMM without corruption or stopping processing. Imagine being able to do that.
Maybe I'm not following your phrasing exactly, but RAID RAM and hot-swap DIMMs are pretty much standard across all mainstream enterprise servers. HPE ProLiant x64 and x86 servers (except for entry level) have been able to do this for a long, long time.

It's not exclusive to the Z3.
 
It depends: if you use your GPU for rendering it's not an issue, and it also isn't an issue for processes that already imply data degradation, such as transcoding.

Which has nothing to do with your prior post or my reply to it...
 
I have both right here at my desk... The notch on the ECC stick doesn't line up with the non-ECC one.

Then you are probably looking at different types of RAM.

Here are all 3 kinds of DDR3 RAM: unbuffered desktop (UDIMM), registered/ECC (RDIMM) and unbuffered ECC (ECC UDIMM):
http://i.imgur.com/0P1yBj1.jpg

The same is true for DDR4 - the keying itself is part of how you determine which generation of RAM it is: DDR2, 3, or 4.
See here:
http://www.legacyelectronics.com/userfiles/content/Files/Common DDR4 Form Factors.pdf

Maybe one of your RAM sticks has the label on the wrong side, or it's mislabeled.
 
From my perspective, MacVidCards has a "real world" view of this topic.

I'm still running an old Mac Pro for my bread and butter.
This is the first time in many years that I haven't bought in to Apple's latest solution.

Now the latest box is 3 years old in tech terms.

Seriously? Apple? Is this the best you can do?
Mac Pro?

Seriously?

Apple?
 
Yeah, I bought a 2012 as soon as the Trashcan was announced. I hope Apple brings forth a better solution for us.
 
From my perspective, MacVidCards has a "real world" view of this topic.

I'm still running an old Mac Pro for my bread and butter.
This is the first time in many years that I haven't bought in to Apple's latest solution.

Now the latest box is 3 years old in tech terms.

Seriously? Apple? Is this the best you can do?
Mac Pro?

Seriously?

Apple?
...and another one. I've used Macs since forever (Quadra 800!).
I'm still using the "Early 2008" Mac Pro I bought when they first came out (Jan '08). It's been a very good machine for me and has certainly paid its way. I just hope it keeps on going. It's on 24/7 and very rarely gets even a restart. I see nothing in the nMacPro that makes me want to lay out a fistful of money, but I also don't want an iMac that will probably overheat.

When I look at the new PC adverts from M/S, I really think they've stolen Apple's ground. The home desktop market appears to have been abandoned. As a graphics professional, I feel I've been thrown off the bus. I really wonder what Apple sees as its core desktop market these days. Obviously, iPad/iPhone sales are where the money is at the moment, but I feel "serious" computing will be around for a long time yet. Perhaps it really is time for them to spin off the hardware side: keep developing iOS and OSX, build the iToys, but let someone else build the "heavy lift" hardware? (Please don't mention Hackintosh - I gave up on "computers" as a hobby in 1989. I need a working, low-maintenance equivalent of a Swiss Army knife, not a hand-carved, one-off hobby.)
 
Can't believe you guys are still having these kinds of threads; I admire your tenacity!

The fact is the nMP is based on two big bets. The first is that all but niche uses will transition to HBM-based video cards; this bet still seems right, but the transition is many, many years behind the expected schedule, and in the interim the nMP is making some rather large compromises.

The other bet is that "mid-sized" jobs aren't economically-significant enough to Apple for Apple to worry about when designing the nMP.

How do you know you have a "mid-sized" job?

It's easy!

Do you find yourself saying "man, if only I could upgrade my built-in video card to a Titan...", or "man, if only I could get a 2-cpu, one-gpu nMP..."?

If you do, congrats! If small changes like that have a big impact on time-to-complete, whatever you're doing is a "mid-sized" job.

By contrast, if you're doing a "big job", the actual processing work is far too big for any one box, and will necessarily get farmed out to servers.

And, of course, if you're doing a "small job" the nMP (or even a higher-end iMac/mbp) is already fast enough for you; all it has to do for "small jobs" is keep up with you during interactive use.

Moreover, once you stop caring about the "mid-sized" jobs, you get an opportunity to "think different": for "small jobs" interactive responsiveness is what matters, so you should emphasize that; for "big jobs", the cluster does the actual processing, with the workstation only used for interactive preview/configuration/setup...thus, once again, for use on "large jobs" interactive responsiveness is what matters.

Sure, if you wanted to improve performance on "mid-sized" jobs it'd help to add a 2nd cpu, but that 2nd CPU doesn't have a material benefit for either the "small job" or the "large job" customers; the "small jobs" won't need it, the "big jobs" won't actually use it. Adding an expensive component that doesn't benefit your intended customers isn't worthwhile...yes, you *could* say that about GPU 2, but the bet there is that it'd pay off to have dedicated "UI" and "compute" GPUs under interactive use.

I don't entirely agree with this reasoning, but like it or not such considerations are a significant part of why the nMP is the way it is; the perception is that the users with "mid-sized" jobs are the least-valuable segment (highly vocal, but also rather parsimonious...) and not valuable enough to cater to.

This is also why the updates have been so slow: if the nMP's important metrics are seen as interactive-responsiveness, single-threaded CPU benchmarks are the best single proxy here, and those tick up maybe 5%-8% a generation, at *best*, these days...thus new CPU generations on their own aren't enough to merit a spec-bump release, and the other components are similarly lagging.
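As a rough sanity check on that claim, compounding those per-generation gains over, say, three generations (a hypothetical back-of-the-envelope, assuming no other improvements) shows how little a spec bump would deliver:

```python
# Hypothetical: compound 5%-8% per-generation single-thread gains
# across three CPU generations between Mac Pro releases.
low, high = 1.05 ** 3, 1.08 ** 3
print(f"{low:.3f}x to {high:.3f}x the baseline")  # 1.158x to 1.260x
```

A cumulative 16-26% in single-threaded speed after three generations is exactly the kind of increment that is hard to market as a new machine.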

As a betting man I'd guess there will be a 2016 nMP, and the high-end GPU will be a cut-down Fury Nano derivative updated to use HBM2 (and thereby able to reach 8GB); the other GPU models will be updated, but of similar vintage. It'd be cool to jump straight to AMD Polaris, but it just seems unlikely. The CPU will be whatever is available.
 
The fact is the nMP is based on two big bets. The first is that all but niche uses will transition to HBM-based video cards; this bet still seems right, but the transition is many, many years behind the expected schedule, and in the interim the nMP is making some rather large compromises.

What about the Mac Pro requires graphics cards with HBM? HBM just reduces memory power consumption and the board space required for memory. The video cards in the current design all use GDDR5, and it works just fine.

As a betting man I'd guess there will be a 2016 nMP, and the high-end GPU will be a cut-down Fury Nano derivative updated to use HBM2 (and thereby able to reach 8GB); the other GPU models will be updated, but of similar vintage. It'd be cool to jump straight to AMD Polaris, but it just seems unlikely. The CPU will be whatever is available.

You can't simply swap out HBM1 for HBM2; it requires redesigning the memory controller on the GPU itself. Essentially this means the Nano will always have HBM1, and HBM2 will only happen on Polaris and beyond.
 
Stacc: it's the other way around. If you're looking at a long-term roadmap from your vendors, and there's a cut-off point after which all but niche GPU boards will be at-most half-length or smaller, why *would* you stick with the cMP case? What's all that internal case volume going to be *for*, once that transition has happened?

Sure, reality certainly hasn't been kind to that roadmap, but that's the connection between the technology and the form factor.

For the other point: on the one hand you're right; on the other hand, carefully consider the word "derivative". Either way, HBM1 is a horrible idea from a supply-chain standpoint, so expect either HBM2 (if lucky) or GDDR5 (the safer choice).
 