Hey @koyoot: http://www.eetimes.com/document.asp?doc_id=1330568&_mc=sm_eet_editor_rickmerritt

This guy managed to develop a chip on 16 nm for under a million dollars.

TL;DR: this is the guy behind the revolutionary Epiphany family of HPC CPUs. Those numbers belong to its latest iteration, the Epiphany-V: a 1024-core, 64-bit RISC-V design delivering an incredible ~2 TFLOPS on a miserly 16-32 W.

To put that in perspective, ten Epiphany-V chips would deliver ~20 TFLOPS on half the power NVIDIA's Pascal P100 needs to deliver just 4.7 TFLOPS, with much better coding flexibility (similar to or better than Xeon Phi) and a 100% open-source, NDA-free platform.

I assume the performance figures are for FP32.

Edit: the performance figures are FP64, projected at 2 TFLOPS on 16-32 W.
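Since the whole argument hangs on performance per watt, here is a back-of-envelope comparison using only the figures quoted in this thread; the P100's 300 W TDP (SXM2 part) is my own assumption, not something stated above:

# Rough GFLOPS-per-watt comparison using the figures quoted in this thread.
# Epiphany-V: projected 2 TFLOPS FP64 at 16-32 W (worst case 32 W used here).
# P100: 4.7 TFLOPS FP64; the 300 W TDP is my assumption, not from the post.

chips = {
    "Epiphany-V (projected)": (2.0, 32.0),    # (TFLOPS FP64, watts)
    "NVIDIA Tesla P100":      (4.7, 300.0),
}

for name, (tflops, watts) in chips.items():
    print(name, round(tflops * 1000.0 / watts, 1), "GFLOPS/W")

# The "ten chips" comparison from the post above:
print("10x Epiphany-V:", 10 * 2.0, "TFLOPS at up to", 10 * 32, "W")

Even on the pessimistic 32 W figure that works out to roughly 60 GFLOPS/W versus about 16 GFLOPS/W for the P100, which is exactly the sort of gap the replies below question.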

OK Apple, I grant you permission to move the Mac to the RISC-V architecture.
 
Edit: the performance figures are FP64, projected at 2 TFLOPS on 16-32 W.

I wouldn't want to diminish the guy's accomplishment (which looks great), but there are some basic facts of life about power and performance in play. Every computation costs power, and in a given node, well-optimized, apples-to-apples, no-one is going to magically get more performance for less power (say within 20-40%, e.g. AMD vs. Intel). If he gets radically more performance per watt, then something probably isn't being counted: for example, the chip may not include all the common or necessary features that, say, Intel includes on-chip, so the power those functions consume is spent elsewhere and doesn't show up in the figure.

Or, just to provide a different argument, if my goal was to kick Intel's butt in FP64, I could just make a massive array of FP64 units at the expense of everything else, and I'd "magically" have a ridiculously-high FP64/watt figure, but that's all it could do.
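To make that hypothetical concrete, peak throughput from such an array is just units x 2 (counting an FMA as two FLOPs) x clock; the unit count and clock below are mine, purely for illustration:

# Hypothetical "sea of FP64 units" peak-throughput calculation (illustrative only).
# Peak FLOPS = number of FMA units * 2 FLOPs per FMA * clock frequency.

def peak_fp64_tflops(num_fma_units, clock_ghz):
    return num_fma_units * 2 * clock_ghz / 1000.0

# e.g. 1024 FP64 FMA units at 1 GHz looks like ~2 TFLOPS on paper,
# regardless of whether caches, memory or interconnect can keep them fed.
print(peak_fp64_tflops(1024, 1.0), "TFLOPS peak")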

Again, not to diminish the guy's accomplishment, but there aren't many magic bullets out there. His real accomplishment was to create such a chip very quickly and on a tight budget. That took guts (and the support of DARPA).
 
I wouldn't want to diminish the guy's accomplishment (which looks great), but there are some basic facts of life about power and performance in play.

His previous developments, the Epiphany III and IV (the III may still be available on Amazon), validated his architecture and its efficiency. Nothing magic: the Epiphany-V is essentially a shrink-and-multiply of the Epiphany IV, a previously validated design.
no-one is going to magically get more performance for less power (say within 20-40%, e.g. AMD vs. Intel).

Intel and AMD are technologically tied to their roots. As I commented before, even ARM will lead the IPC race sometime in the next 3-4 years (maybe even earlier). The Intel/AMD x64 architecture is doomed: you can't get more IPC out of those chips without fundamental changes and without breaking legacy support, and now that it looks very difficult to go beyond ~6 nm there is no more silicon carpet to hide the inherited deficiencies under (despite recent progress on roughly 1 nm gates, I doubt that scale will reach commercial deployment, at least not in the next 10 years or more).

Epiphany's architecture seems more practical for workloads with massive parallelism (2K cores doing what NVIDIA's Pascal does on less than 400), such as DSP, machine learning, simulations, and chemistry; in other fields it may not be so useful. Whatever the case, the biggest accomplishment here is showing that it's possible to go further on much less (power and R&D).
 
His previous developments, the Epiphany III and IV (the III may still be available on Amazon), validated his architecture and its efficiency. Nothing magic: the Epiphany-V is essentially a shrink-and-multiply of the Epiphany IV, a previously validated design.


Intel and AMD are technologically tied to their roots. As I commented before, even ARM will lead the IPC race sometime in the next 3-4 years (maybe even earlier). The Intel/AMD x64 architecture is doomed: you can't get more IPC out of those chips without fundamental changes and without breaking legacy support, and now that it looks very difficult to go beyond ~6 nm there is no more silicon carpet to hide the inherited deficiencies under (despite recent progress on roughly 1 nm gates, I doubt that scale will reach commercial deployment, at least not in the next 10 years or more).

Epiphany's architecture seems more practical for workloads with massive parallelism (2K cores doing what NVIDIA's Pascal does on less than 400), such as DSP, machine learning, simulations, and chemistry; in other fields it may not be so useful. Whatever the case, the biggest accomplishment here is showing that it's possible to go further on much less (power and R&D).

Uh, Intel+HP have (or had) the Itanium architecture, which nobody apparently wanted. It was an IPC monster, and in many people's opinions, would have crushed everything if they hadn't starved it in favor of x86-64. I wrote a lot of code for the Itanium, and after the compilers got better and you learned how to use them, it was possible to beat anything in IPC. Technically it had a lot going for it, but on the business and marketing side I think they (Intel+HP) really screwed it up.
 
Uh, Intel+HP have (or had) the Itanium architecture, which nobody apparently wanted. It was an IPC monster, and in many people's opinions, would have crushed everything if they hadn't starved it in favor of x86-64. I wrote a lot of code for the Itanium, and after the compilers got better and you learned how to use them, it was possible to beat anything in IPC. Technically it had a lot going for it, but on the business and marketing side I think they (Intel+HP) really screwed it up.
I commented before about Itanium and, as usual, the same pack of trolls came out defending x86. Theoretically Itanium should be capable of reaching 4x SMT (in practice it only reached a modest 2x, due to compilers and silicon development); it has the logic roots for that.

At some point in the future Intel should resuscitate Itanium or build another all-new architecture if they don't want to lose market share to ARM and RISC-V CPUs (the Epiphany is not alone).

Itanium still has a chance at rebirth (after IBM's POWER8, anything seems possible).

But there is also the problem that the mainstream high-performance PC market is shrinking and only the corporate/research segment is growing; beyond gaming there is less of a market for HPC. Still, if POWER8 made it, maybe Itanium could too (and if not Itanium, some other architecture).
 
Yep, we'll get it (a new Itanium) as soon as Commodore launches the new Amiga 5000 :) (joke)
Don't defy the hand of God...

(BTW, I was a child when the Amiga went on sale and it was my first dream computer. It still has a lot of fans; maybe a brand resurrection as a Linux-running, Amiga-like workstation could work.)
 
Don't defy the hand of God...

(BTW, I was a child when the Amiga went on sale and it was my first dream computer. It still has a lot of fans; maybe a brand resurrection as a Linux-running, Amiga-like workstation could work.)

I still use two Amiga 1000 in my home studio for games and mod tracking.
 
Man, was Amiga the thing back then?!
Revolutionary at the time.
It was brought back but meh...

Just a curiosity, amiga here means friend (female gender). Having an amiga might be good :) joking of course.
 
Man, was Amiga the thing back then?!
Revolutionary at the time.
It was brought back but meh...

Just a curiosity, amiga here means friend (female gender). Having an amiga might be good :) joking of course.
I mean, it was the first computer targeted at photo composition, and also one of the first mainstream graphics workstations with a color screen. I don't remember much; I was a sixth-grader back then (Wikipedia may help).

I also think the Commodore 64 could be reborn as something like a keyboard with a touchpad and a couple of USB-C ports for everything, running more or less the same hardware as the Retina MacBook. It's feasible, and if backed by a good brand it would surely sell well. I know Asus tried to sell something similar, but ASUS is crap; they only make good motherboards and so-so monitors. Maybe Lenovo, or even Apple (the k-Mac).
 
Man, was Amiga the thing back then?!
Revolutionary at the time.
It was brought back but meh...

Just a curiosity, amiga here means friend (female gender). Having an amiga might be good :) joking of course.
Yes, and since I have two Amiga you can think of it as me having a threesome :)
I mean, it was the first computer targeted at photo composition, and also one of the first mainstream graphics workstations with a color screen. I don't remember much; I was a sixth-grader back then (Wikipedia may help).

I also think the Commodore 64 could be reborn as something like a keyboard with a touchpad and a couple of USB-C ports for everything, running more or less the same hardware as the Retina MacBook. It's feasible, and if backed by a good brand it would surely sell well. I know Asus tried to sell something similar, but ASUS is crap; they only make good motherboards and so-so monitors. Maybe Lenovo, or even Apple (the k-Mac).

It was also the first consumer computer with a preemptive multitasking OS... in 1985...

There is a new C64 computer based on an FPGA, but it's pricey when you can still get a real C64 for about $50.
The cheaper solution, if you want to relive that era of computing, is emulation. Cloanto's Amiga Forever and C64 Forever are the best and come with many pieces of software already installed.
 
One thing: Khalid completely messed up the efficiency figures. The GPUs are 15-20% more efficient, not 50%. That is in line with what we know about revisions of the process; 50% would require a complete redesign of the process and the microarchitecture.

Expect a new revision of the GPUs, but not 50% more efficient. I think what we will actually see is a 1.4 GHz, 36 CU design with GDDR5X, rated at 150 W.
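For reference, here is what that speculated part would do on paper, assuming the usual 64 stream processors per GCN compute unit and 2 FLOPs (FMA) per clock per lane; the 36 CU / 1.4 GHz figures are the speculation above, not confirmed specs:

# Peak FP32 estimate for the speculated 36 CU, 1.4 GHz revision.
# Assumes 64 stream processors per GCN compute unit and 2 FLOPs (FMA) per clock.

cus = 36
sp_per_cu = 64        # typical for GCN; an assumption, not stated in the post
clock_ghz = 1.4

tflops_fp32 = cus * sp_per_cu * 2 * clock_ghz / 1000.0
print(round(tflops_fp32, 2), "TFLOPS FP32 peak")   # ~6.45 TFLOPS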
 
Yes, 50% was indeed an exaggeration.
Having GDDR5X would require the memory controller to support it, and there was no evidence that it did. Adding GDDR5X support now would require a major overhaul, unless the support was indeed dormant all this time, waiting for a revision to show up.
 
Yes, 50% was indeed an exaggeration.
Having GDDR5X would require the memory controller to support it, and there was no evidence that it did. Adding GDDR5X support now would require a major overhaul, unless the support was indeed dormant all this time, waiting for a revision to show up.
No, GDDR5X does not require a dedicated controller beyond a "normal" GDDR5 memory controller; it simply lets the memory controller pass twice as much data as before. The GTX 1080 and GTX 1070 use exactly the same die, with the same controller, and it is a GDDR5 memory controller. GDDR5X does not differ from GDDR5 by a huge margin.
 
I do believe you're mistaken here but I won't argue.
Pascal has a GDDR5X mem controller, which supports GDDR5 as well. The other way around I believe is not true.
Anandtech: The GDDR5X SGRAM (synchronous graphics random access memory) standard is based on the GDDR5 technology introduced in 2007 and first used in 2008. The GDDR5X standard brings three key improvements to the well-established GDDR5: it increases data-rates by up to a factor of two, it improves energy efficiency of high-end memory, and it defines new capacities of memory chips to enable denser memory configurations of add-in graphics boards or other devices. What is very important for developers of chips and makers of graphics cards is that the GDDR5X should not require drastic changes to designs of graphics cards, and the general feature-set of GDDR5 remains unchanged.
http://www.anandtech.com/show/9883/gddr5x-standard-jedec-new-gpu-memory-14-gbps
Wikipedia:
GDDR5X
In January 2016, JEDEC standardized GDDR5X SGRAM. GDDR5X targets a transfer rate of 10 to 14 Gbit/s per pin, twice that of GDDR5. Essentially, it provides the memory controller the option to use either a double data rate mode that has a prefetch of 8n, or a quad data rate mode that has a prefetch of 16n. GDDR5 only has a double data rate mode that has an 8n prefetch. GDDR5X also uses 190 pins per chip; (190 BGA). By comparison, standard GDDR5 has 170 pins per chip; (170 BGA). It therefore requires a modified PCB.
https://en.wikipedia.org/wiki/GDDR5_SDRAM

The only difference is the PCB, not the die. A GDDR5 memory controller is fully compatible with GDDR5X; GDDR6 is a different story. I was under the assumption that GDDR5X would require a new memory controller; however, reality turned out to be slightly different.
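To put the data-rate side of those quotes in numbers, peak bandwidth is just the per-pin rate times the bus width divided by eight; the per-pin rates and 256-bit bus below are illustrative values, not the specs of any particular card:

# Peak memory bandwidth = per-pin data rate (Gbit/s) * bus width (bits) / 8 bits per byte.

def peak_bandwidth_gb_s(gbit_per_pin, bus_width_bits):
    return gbit_per_pin * bus_width_bits / 8.0

# Illustrative 256-bit bus:
print("GDDR5  at  8 Gb/s/pin:", peak_bandwidth_gb_s(8, 256), "GB/s")    # 256 GB/s
print("GDDR5X at 10 Gb/s/pin:", peak_bandwidth_gb_s(10, 256), "GB/s")   # 320 GB/s
print("GDDR5X at 14 Gb/s/pin:", peak_bandwidth_gb_s(14, 256), "GB/s")   # 448 GB/s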

It is not that the memory chips consume much less power, either. We are talking about 1 W less per memory chip because of the lower voltage compared to GDDR5, so a 256-bit memory configuration will consume 8 W less at 14000 MHz.
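That 8 W figure is just the per-chip saving multiplied by the chip count on a 256-bit bus; the 32-bit-per-chip interface width is my assumption for the usual configuration:

# Memory power delta: 1 W saved per chip on a 256-bit bus built from 32-bit chips.

bus_width_bits = 256
bits_per_chip = 32              # typical GDDR5/GDDR5X interface width (assumption)
watts_saved_per_chip = 1.0      # figure from the post above

num_chips = bus_width_bits // bits_per_chip
print(num_chips, "chips ->", num_chips * watts_saved_per_chip, "W saved")   # 8 chips -> 8.0 W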
 