Cupertino's hometown daily newspaper story on WWDC:
http://www.siliconvalley.com/ci_29975325/apple-hopes-spark-its-developers-enthusiasm
Mac Pro not even mentioned.
Are they skipping Broadwell? I sure hope this is true, and that the nMP will be announced at WWDC. But I doubt the rMBP will surface now. If they do mention an updated model, it should be Kaby Lake later on, USB-C only; I'd skip the OLED panel, but OK.
Regarding an i7 in the nMP: to me it makes no sense. In terms of cost, there's nothing to gain. I don't see Apple going non-ECC on the nMP; that could make it look like less of a workstation. And Xeons aren't overclockable while i7s are, which could be a problem for Apple, although that's something they might be able to lock at the firmware level, maybe.
But that would possibly require firmware that supports both models, of different versions; remember, Xeons and i7s support different functionality.
Don't think so honestly.
I'm also not so enthusiastic about VR/AR. Apple will surely wait, as always. They won't offer a solution that doesn't provide the best experience, and right now it's all still very much meh.
http://www.tomshardware.com/news/intel-xeon-skylake-purley-cpu,31980.html
But they mentioned Apple TV, and the ATV 4 came out not too long ago... I don't know if what they say is believable.
It's basically a blog roundup; a Mac Pro leak is almost impossible, at least for non-anonymous people.
Decoding rumours and anti-rumours:
Facts and evidence indicate WWDC will focus on software. VR is also known to be a main focus at Apple: they are being secretive while spending billions on VR-related acquisitions, and that VR push requires hardware that isn't available yet.
Are they usually accurate?
The story isn't a tech-leak story; it's more about interest in the conference.
It illustrates that a new workstation isn't that important for the customers of a phone company, that's all.
Ah okay. I hear you.
But there's a media storm worthy of Donald Trump around any Apple event.
Wouldn't the same be true for any company selling workstations?
I hope the eGPU support in OSX 12 is true. I have been looking at what options there are for my MP 6,1, since CUDA seems to be a necessity for machine learning. The Bizon Box 2 is the solution today, but perhaps Sonnet or Apple will have a solution by Christmas.
Trying to do serious machine learning on any platform other than Ubuntu with Maxwell or later cards is akin to "swimming upstream". Even using another mainstream Linux like RHEL or CentOS forces you to waste time in setup and debugging - because the tools and libraries that you need were developed on Ubuntu.
Don't waste your money on an eGPU only to fight an Apple-hostile ecosystem. Put the same money towards an i7 or E3 box with a good GPU.
It's simply a matter of "the right tool for the job". Most of the time I'm hopping from my Windows 10 workstation to SSH or RDP sessions on Linux boxes.
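Since the posts above are about getting a box ready for CUDA-based frameworks, here's a minimal pre-flight sketch (my own illustration, not from any post in this thread; the `nvcc` check is an assumption about how the toolkit is typically installed) to run before trying to build Theano/Caffe/Torch on a non-Ubuntu machine:

```python
import shutil
import subprocess

def cuda_toolchain_available():
    """Best-effort check: is the CUDA compiler (nvcc) on PATH and runnable?

    This only detects the toolkit, not a working GPU driver; for the
    driver side you'd still run `nvidia-smi` by hand.
    """
    nvcc = shutil.which("nvcc")
    if nvcc is None:
        return False
    # `nvcc --version` exits 0 when the toolkit is installed correctly.
    return subprocess.run([nvcc, "--version"],
                          capture_output=True).returncode == 0

print(cuda_toolchain_available())
```

It returns False on a machine without the toolkit, which is roughly the "swimming upstream" situation described above.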
Except if you develop for OpenCL 2.x (when available on macOS) and use a "CUDA first, OpenCL then" approach (at least on OS X the CUDA debugger and profiler work very well, years ahead of the alternatives). Of course, this CUDA-then-OpenCL approach may not fit every scenario.
About Thunderbolt, eGPU and machine learning: I'll be surprised if eGPU support is extended to TB2; it will be a thing only for TB3 Macs. I doubt GPUs will dominate machine learning in the mid term; specific solutions such as Google's Tensor Processing Unit or FPGAs will. In short, GPUs rule for a while, but keep an eye on the TPU, FPGA cards and, of course, DARPA's SyNAPSE.
Ah, I forgot: AMD is investing in building the toolchain for machine learning on their GPUs.
Stack Overflow has an interesting thread about it: http://stackoverflow.com/questions/30622805/opencl-amd-deep-learning
You'll have no problems with Theano, Caffe and Torch.
You've outlined exactly the headaches that I was warning about.
"ATI is investing", "when available on Apple OSX", "keep eyes on",...
Those are available now on Ubuntu/CUDA. Do you want to start work tomorrow (where "tomorrow" is Sunday the 5th of June 2016, the day after today), or "tomorrow" (sometime in 2017 or later, when the stack is mature enough to port the tools and libraries to Apple OSX)?
Secondly, we see this: https://www.reddit.com/r/Amd/comments/4m692q/concerning_the_aots_image_quality_controversy/
Fottemberg said: Pascal consumer-grade cards support FP16, but the arithmetic throughput via native FP16 units is so low that even FP16 arithmetic throughput emulated via FP32 units is higher! Pascal is just a joke.
Good luck, NVIDIA users, because Shader Model 6.0 (the next major DirectX 12 update) will introduce native and full FP16 support. Yeah, but AMD is the bad guy...
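A toy NumPy illustration (mine, not from the linked thread) of what "FP16 emulated via FP32 units" actually means: the operands are widened to single precision, the arithmetic runs on FP32 units, and the result is rounded back to half precision. The results match the native path; the difference being argued about is throughput, not numerics.

```python
import numpy as np

a = np.float16(1.5)
b = np.float16(0.333)

# Direct half-precision multiply.
native = a * b

# "Emulated" path: widen to FP32, multiply, round back to FP16.
emulated = np.float16(np.float32(a) * np.float32(b))

print(native, emulated, native == emulated)
```

Both paths round to the same half-precision product; on hardware, what differs is how many such operations per clock each path can sustain.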
And yesterday we got this from Oxide: https://www.reddit.com/r/pcgaming/c...ng_the_aots_image_quality_controversy/d3t9ml4
Robert Hallock said: Ashes uses procedural generation based on a randomized seed at launch. The benchmark does look slightly different every time it is run. But that, many have noted, does not fully explain the quality difference people noticed.
At present the GTX 1080 is incorrectly executing the terrain shaders responsible for populating the environment with the appropriate amount of snow. The GTX 1080 is doing less work to render AOTS than it otherwise would if the shader were being run properly. Snow is somewhat flat and boring in color compared to shiny rocks, which gives the illusion that less is being rendered, but this is an incorrect interpretation of how the terrain shaders are functioning in this title.
Brad Wardell/Oxide said:We (Stardock/Oxide) are looking into whether someone has reduced the precision on the new FP16 pipe.
Both AMD and NV have access to the Ashes source base. Once we obtain the new cards, we can evaluate whether someone is trying to modify the game behavior (i.e. reduce visual quality) to get a higher score.
In the meantime, taking side by side screenshots and videos will help ensure GPU developers are dissuaded from trying to boost numbers at the cost of visuals.
The few I read about (not my area), Theano, Caffe and Torch, NOW (read: today, right here, right now) support OpenCL/AMD GPUs; a bit of Google searching corroborates it.
Three pieces of a complex toolchain. "Yay" for ATI.
We have to wait for the end of Oxide's investigation into this before concluding anything.
We don't have to wait on this at all. If FP16 is #1 in our minds, then we'll watch it to find out what mistakes Oxide made.
What is unknown is the pricing.
More than likely. Would the pricing be in line with what we have already?
Most machine learning libraries are CUDA-based. OpenCL barely started to enter the field, and now that Apple is dropping OpenCL, adoption has virtually ceased. OpenCL has been deprecated; good luck building on OpenCL today.
I'd tend to think so, give or take 5-6%. If they keep the same cylinder form factor and thermal-core design, there are some baked-in cost savings from not having to completely retool the line, etc. Small internal tweaks, even allowing for more RAM, wouldn't change the costs much, except for additional components, of course.
I'm starting to notice price drops on cMP on the secondary market.
That would work out well for me if the new MP line brings pricing even lower for existing nMP.
I'll be watching WWDC to notice who predicted right or wrong from the media and this forum.
Anyone else going to watch it?
Rene Ritchie, contradicting 9to5Mac, has discarded for now the idea of a TB3 display with built-in eGPU, but suggests a more conventional TB3 display has to come...