Python is widely used in HPC clusters, but compiled through PyPy or similar, where it performs at about 80-90% of the speed of the same routine in C. Furthermore, SciPy and NumPy always run in compiled code, as do PyCUDA and PyOpenCL.
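As a minimal sketch of that point (a toy element-wise add; exact timings depend on the machine): the pure-Python loop pays interpreter overhead on every element, while the NumPy call does the whole array in one pass of compiled code.

```python
import time
import numpy as np

n = 10_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Pure-Python loop: every single add is dispatched through the interpreter.
t0 = time.perf_counter()
c_loop = [a[i] + b[i] for i in range(n)]
t_loop = time.perf_counter() - t0

# NumPy: the same element-wise add runs in one call to compiled code.
t0 = time.perf_counter()
c_np = a + b
t_np = time.perf_counter() - t0

print(f"python loop: {t_loop:.2f} s, numpy: {t_np:.3f} s")
```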

It may seem counter to logic, but most researchers prefer to wait 5 hours instead of 4 if they can code everything in Python. Go to hpcwire.com and get informed; it seems you missed the Python bus just when it converted into an Airbus.

Actually they don't, at least not in the research center that I support, but YMMV.
And in any case the point of contention wasn't with Python, but with your baseless notion that Swift will replace it when no such thing will happen any time soon. This is especially true since even Apple is barely making use of it: http://appleinsider.com/articles/16...-use-of-swift-in-its-own-apps-engineer-claims
 
Actually they don't, at least not in the research center that I support, but YMMV.
And in any case the point of contention wasn't with Python, but with your baseless notion that Swift will replace it when no such thing will happen any time soon. This is especially true since even Apple is barely making use of it: http://appleinsider.com/articles/16...-use-of-swift-in-its-own-apps-engineer-claims
Swift, until v3, is still a WIP; in fact it is still restricted to small applications until it's declared stable.

Python is very important at most research centers with access to HPC clusters; a few that I know and was honored to collaborate with:

Argonne National Labs

Berkeley National Labs

Florida State University

The University of Texas at Austin's TACC; their Stampede Xeon Phi mega-cluster has its dedicated taccPy to make clustered Python easy: https://portal.tacc.utexas.edu/-/hpc-python

But this is not the point. A weakness every new programming language has is the lack of libraries available in other languages that help in finding a solution to a well-known problem. This will not be an issue with Swift 3: when everything is complete, it will have tons of legacy code available from Python and others.
 
Yes, I know that. The trouble is, the GTX 1080 Founders Edition has a 1733 MHz boost clock; 2.1 GHz was OC'ed by Nvidia for demo purposes. The base clock for the GTX 1080 is... 1607 MHz. Nvidia advertises the GPU as a 9 TFLOPs compute powerhouse. That is not achievable with a 1607 MHz core clock and 2560 CCs; it gives about 8.2 TFLOPs. Going with 1.7 GHz brings the GPU to the 8.9 TFLOPs margin, which is close to the advertised figure.

So that 6.5 TFLOPs is for the boost clock on the GTX 1070.

Sorry for the late reply, but Nvidia GPUs run higher than the indicated boost clock.

http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-980-ti/specifications

See the 980 Ti's boost clock of 1075 MHz, yet almost all 980 Tis operate at 1300+ MHz without any overclocking applied. And that was on 28nm.
That's the reason why I said 2.1 GHz would be achievable without a problem on the GTX 1080, with 16nm being a lot more power efficient.
 
Are you talking about reference or aftermarket designs of Maxwell GPUs?

Yes, they do run higher, otherwise they would not be able to OC. But the TFLOPs figure for the GTX 1080 comes from the 1733 MHz boost clock. Other than that: the demo and its scores were for a 2.1 GHz GPU, a 25% OC over reference. What type of GPU will you be able to buy initially? Reference.

And again: the 6.5 TFLOPs of compute power for the GTX 1070 is for reference boost clocks. That was the matter of our discussion from the start. OC clocks have nothing to do with reference design GPUs, and that is what is advertised by Nvidia.
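For anyone checking the math: peak FP32 throughput is simply shader count x clock x 2 FLOPs per clock (one FMA per core per cycle). A quick sketch, assuming the commonly quoted reference figures (2560 CUDA cores and 1733 MHz boost for the GTX 1080, 1920 cores and 1683 MHz boost for the GTX 1070):

```python
# Peak FP32 = CUDA cores x clock (GHz) x 2 FLOPs per clock (FMA),
# divided by 1000 to go from GFLOPs to TFLOPs.
def peak_tflops(cuda_cores, clock_ghz):
    return cuda_cores * clock_ghz * 2 / 1000

print(f"GTX 1080 @ 1607 MHz base:  {peak_tflops(2560, 1.607):.1f} TFLOPs")  # ~8.2
print(f"GTX 1080 @ 1733 MHz boost: {peak_tflops(2560, 1.733):.1f} TFLOPs")  # ~8.9
print(f"GTX 1080 @ 2100 MHz demo:  {peak_tflops(2560, 2.100):.1f} TFLOPs")  # ~10.8
print(f"GTX 1070 @ 1683 MHz boost: {peak_tflops(1920, 1.683):.1f} TFLOPs")  # ~6.5
```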
 
The box of the 1080 looks very good, very sober and discreet, unlike those with all sorts of elves and dragons and whatnot from Taiwan. I don't like the new cooler very much, though it seems to improve thermal dissipation.
1080 looks very good on Ashes, unlike Polaris.
If you're a gamer the choice seems obvious, but we'll have to wait a little longer to confirm.
May 17th seems to be the date when 1080 reviews will be out, with full specs disclosure.
Another week or so.
Everyone is now announcing their own 1080, although it's still not out yet.
I'm also curious to see if the new SLI-HB bridges do make a difference. I guess they won't go PCIe.
Those numbers on Polaris had better not be the real thing. Some 3 weeks more and we'll know.
 
Swift, until v3, is still a WIP; in fact it is still restricted to small applications until it's declared stable.

Python is very important at most research centers with access to HPC clusters; a few that I know and was honored to collaborate with:

Argonne National Labs

Berkeley National Labs

Florida State University

The University of Texas at Austin's TACC; their Stampede Xeon Phi mega-cluster has its dedicated taccPy to make clustered Python easy: https://portal.tacc.utexas.edu/-/hpc-python

But this is not the point. A weakness every new programming language has is the lack of libraries available in other languages that help in finding a solution to a well-known problem. This will not be an issue with Swift 3: when everything is complete, it will have tons of legacy code available from Python and others.

There are many, many research centers worldwide, and as I said, YMMV.

And no, you won't have tons of legacy Python code, as you can't just cross-compile and expect it to be optimal for the other platform. Cross-compiling isn't a panacea. Besides, if Python is good enough as you say, then why waste resources porting to another language, and then have to retrain everyone in that new language, when the old one was OK to begin with?
 
Synchro3, I was referring to the NVidia cards that are apparently not DP1.4 certified yet. I find it strange: if NVidia hasn't been able to certify them, how would those you mentioned be? This could also be marketing at work; they are not certified now, but maybe they will be when they are in fact released. The cards aren't even up for sale, so how can you be sure?
The 1080 in general will be DP1.4, maybe when it's fully released, but as far as I know that is not yet the case.
We'll have to wait and see when they come.

Hi Manuel. Zotac is the first graphics card manufacturer with the GTX 1080 in its product range. The tech specs are on the Zotac site: https://www.zotac.com/product/graphics_card/geforce-gtx-1080 The card will be released on 27 May. It's a 'Founders Edition'.

Normally I consider product specifications to be serious, not marketing. Aside from that, I've never seen a major manufacturer publish wrong tech specifications.

Card manufacturers have a certain level of flexibility; for example, there are GTX cards with Mini DisplayPort, etc. So I believe it is possible that Zotac did the certification for DP 1.4.
 
There are many, many research centers worldwide, and as I said, YMMV.

And no, you won't have tons of legacy Python code, as you can't just cross-compile and expect it to be optimal for the other platform. Cross-compiling isn't a panacea. Besides, if Python is good enough as you say, then why waste resources porting to another language, and then have to retrain everyone in that new language, when the old one was OK to begin with?
Simple: because the new one (Swift) is better and has a better IDE available (I use PyCharm for Python; it's really good, but not as good as Xcode).

You can combine C/C++/Objective-C/Java (through Google's translator to Objective-C) in Swift projects and add this translated Python code. But most important is that you have access to your existing algorithm code plus an efficient compiler: you can build a compute library in Python, then when you are ready for production, instead of hiring a coder to translate it to C++, you simply translate it to Swift 3, build a pretty GUI and then link, and in a few steps your app could be running efficiently even on a RasPi, an iPad or a mega cluster.

You noted the optimization question; this has never been as trivial as it is now (except when you need something specific, such as some AVX2 instruction unreachable through compiler-optimized code).

But it also allows you to start from scratch in Swift and then add the translated libraries as you like.
 
You're right, Zotac was the first, but if you look around, a lot more have now been announced. Still, if even NVidia has not been able to certify the card as DP1.4 yet, how can Zotac, or anyone else for that matter, claim so? If you look at NVidia's slides the cards are DP1.4, but there's always a side note saying something along the lines of it being in qualification.
Don't get me wrong, I'm not doubting they are DP1.4 capable, they must be; I'm only saying that it's still not final.
Marketing people always jump the gun.
I'm pretty sure it was not final yet. Let me see if I can find it.
Here, read note 2 in the product specs:

http://www.geforce.com/hardware/10series/geforce-gtx-1080

Told ya :)
Not even DP1.3 yet...
 
Unfortunately this seems to be common nowadays.
DP1.4 probably won't mean a lot for Apple; I don't see them using visually lossless compression anytime soon.
But DP1.3 is a must.
Still, I don't see Apple going the NVidia way.
It seems a great card so far though.
Kirk is leaving Intel along with another top executive. Things are really changing at Intel.
 
Talking about brand dominance, by the way: have you noticed that most of the front-page content displayed on MacRumors is all about the iPhone, whether it be rumors, ads, financial reports and whatnot? I think this publication is quite indicative of where Apple has been headed for the past couple of years.
 
One good thing with the 1080 is that NVidia finally ditched analog output (VGA).
I wanna see how HDR works out on Pascal.
Yep, but the iPhone hype might slow down a bit now. Don't get me wrong, it's a great gadget, but you can only innovate (my a$$) so much. The iPhone 7 might be 6s-alike, of course with a few tweaks. Apple might follow Intel with the three-step approach (tick-tock to the new three-step scheme) since the design apparently won't change much again.
 
Talking about brand dominance, by the way: have you noticed that most of the front-page content displayed on MacRumors is all about the iPhone, whether it be rumors, ads, financial reports and whatnot? I think this publication is quite indicative of where Apple has been headed for the past couple of years.
The WHOLE industry is going that way. Smartphones are dominating sales currently. And where do companies go? Where the money lies. It does not lie in desktop computing anymore, though desktops will still be a "must" for creating content for smartphones and tablets.
 
Unfortunately this seems to be common nowadays.
DP1.4 probably won't mean a lot for Apple; I don't see them using visually lossless compression anytime soon.
But DP1.3 is a must.
Still, I don't see Apple going the NVidia way.
It seems a great card so far though.
Kirk is leaving Intel along with another top executive. Things are really changing at Intel.
No Mac will see DP1.3 this year unless Intel patches Falcon Ridge to deliver DP1.3, but that would mean compatibility issues with currently available Thunderbolt 3 hardware, so it's very unlikely.

The best interface to expect is HDMI 2.0b.
 
Are you talking about reference or aftermarket designs of Maxwell GPUs?

Yes, they do run higher, otherwise they would not be able to OC. But the TFLOPs figure for the GTX 1080 comes from the 1733 MHz boost clock. Other than that: the demo and its scores were for a 2.1 GHz GPU, a 25% OC over reference. What type of GPU will you be able to buy initially? Reference.

And again: the 6.5 TFLOPs of compute power for the GTX 1070 is for reference boost clocks. That was the matter of our discussion from the start. OC clocks have nothing to do with reference design GPUs, and that is what is advertised by Nvidia.


No no no, I am talking about the reference, non-overclocked scenario. A reference 980 Ti will run at 1290 MHz or higher when it is not overclocked, and will reach higher than 1400 MHz when overclocked. Nvidia GPUs always boost higher than what is indicated on the spec sheet, and no, you don't have to overclock to achieve that.
 
I think DP1.3 is going to be a Pro feature for Apple, enabling HDR and FreeSync/adaptive sync. Hence we're getting an iMac Pro sooner or later (October). And a Mac Pro SE (= Mac mini Pro).

Yes, I know DP1.2a supports adaptive sync. But Intel doesn't. So... a Pro feature.
 
For those who want to know something about HDMI 2.0: the nomenclature is backwards. Just have a look at this:

[Image: HDMI-2.0.jpg]

Pascal - HDMI 2.0b, Polaris - HDMI 2.0a
 
Pascal - HDMI 2.0b, Polaris - HDMI 2.0a

Why is HDMI 2.0 support so rare? I know almost nothing about this area, but it seems that HDMI 2.0 makes a lot more sense to support than the DP1.3/DP1.4/TB3/USB-C mess. Having separate display and data connections, rather than one port to rule them all, might be preferable when it comes to actually getting work done with existing equipment.
 
You can combine C/C++/Objective-C/Java (through Google's translator to Objective-C) in Swift projects and add this translated Python code. But most important is that you have access to your existing algorithm code plus an efficient compiler: you can build a compute library in Python, then when you are ready for production, instead of hiring a coder to translate it to C++, you simply translate it to Swift 3, build a pretty GUI and then link, and in a few steps your app could be running efficiently even on a RasPi, an iPad or a mega cluster.

None of this is simple, and all of this requires using everything BUT Swift. Swift also doesn't have direct C++ integration, and Java translation? Please, don't go there.

You noted the optimization question; this has never been as trivial as it is now (except when you need something specific, such as some AVX2 instruction unreachable through compiler-optimized code).

AVX2 is actually the easiest case. Most other optimization technologies are still proprietary. CUDA/Metal/Accelerate, etc.... It's not simple to move between platforms or CPUs because, besides Intel, no one has a common standard. And even AVX2 is still proprietary to Intel, if you care about that sort of thing.
 
I own a 6 core nMP and it runs like a champ. I'm going to upgrade the CPU to a 10 or 12 core later this year, just so I can render video faster.
I do have to say I'm disappointed daily by the lack of utilization of the D700s on my Mac.
It really sucks how OpenCL seems to have been largely ignored by major vendors. I should have picked up a 12-core with D300s instead!
I hope on the next Mac Pro they re-orient the power button and port arrangement on the back, add a security cable slot, and hopefully get some companies on board with OpenCL.
 
I do have to say I'm disappointed daily by the lack of utilization of the D700s on my Mac.
It really sucks how OpenCL seems to have been largely ignored by major vendors. I should have picked up a 12-core with D300s instead!
I hope on the next Mac Pro they re-orient the power button and port arrangement on the back, add a security cable slot, and hopefully get some companies on board with OpenCL.
Apple can only blame itself for the lack of OpenCL support. First, when the nMP was released, OpenCL was still so buggy that software developers started to hate it. Second, Apple has been patching it, but then they released Metal, and that made OpenCL's future uncertain; Apple has not said one word about how they see OpenCL over the next five years, and the last time they openly promoted it was 2013. So should developers invest a lot of time, effort and money to learn something that Apple could sack in one day's announcement (let's say, at the next WWDC for instance)? Third, there are no good tools to make developers' lives easier the way CUDA does; OpenCL can be a pain to debug...

I think Apple will shed some light on this issue at the next WWDC, in the form of the macOS introduction and the Xcode 8.0 beta. The next revision of Metal could bring interesting things to OS X.
 
OpenCL is not going anywhere. It will be included in Vulkan, even for future VR games. Even Nvidia has started advertising performance by simply stating the TFLOPs of their GPUs. Why? Because it is the key factor of performance for upcoming computing platforms: VR, HPC, entertainment, content creation. The whole of HSA 2.0 is built around OpenCL.
 