Could be, since the production resources are very tight.
That would be a capacity constraint.

The process could be custom in the sense that its development was motivated by NVIDIA's big chip, and NVIDIA may have paid for part of it.

It might not be restricted at all; maybe nobody else wants to use it. Or NVIDIA could have exclusivity for GPUs for a limited time, for example.
 
This is just a marketing name for the process. It is 16 nm, slightly optimized for Nvidia's GPU to get the best yields out of an 835 mm² die.

There are two GV chips coming this year, both on a "12 nm" process. For one of them the name is purely marketing; the other is a proper 12 nm process, but it is not suitable for anything powerful (GPUs, or CPUs with a power target over 30 W).

"This is just marketing name for the process."...


Where do you get the information that it is 16nm? And why do you bring this in your discussion? Do you have ability to prove it? if not, you are only making an imaginated opinion. It is you who own the truth, or TSMC?
 
"This is just marketing name for the process."...


Where do you get the information that it is 16nm? And why do you bring this in your discussion? Do you have ability to prove it? if not, you are only making an imaginated opinion. It is you who own the truth, or TSMC?
It seems it was going to be the 4th version of TSMC's "16nm" process but it was relabeled as "12nm", as some features are smaller.

The choice of name could be motivated by GF's upcoming 12FDX.

If some dimension is smaller than competitors' "14 nm" processes, I don't think one can complain much, especially if it is below the ITRS 14/16 nm rules.

I have no details about the specs. I guess there's still the possibility that "TSMC 14nm" would have been a more appropriate name.
 
"This is just marketing name for the process."...


Where do you get the information that it is 16nm? And why do you bring this in your discussion? Do you have ability to prove it? if not, you are only making an imaginated opinion. It is you who own the truth, or TSMC?
Koyoot often makes random claims without supportive links in order to put Nvidia in a bad light.

Nothing new here. Move along. These are not the droids that you're looking for.
https://www.semiwiki.com/forum/content/6662-tsmc-talks-about-22nm-12nm-7nm-euv.html
TSMC formally introduced 22nm ULP (an optimized version of 28nm HPC+) and 12nm FFC (an optimized version of 16nm FFC).
The 12 nm process uses libraries optimized for die area; the other characteristics are the same as 16 nm. And that is the FFC process. For the GV100 chip, Nvidia uses the FFN process (optimized to produce an 835 mm² die).

When will you learn, Aiden, that I do not post things just to paint Nvidia in a bad light? Just because you are offended, or do not like the information I post, does not mean you are right. I expect an apology from you.
 
As I said,

"Koyoot often makes random claims without supportive links in order to put Nvidia in a bad light."

Why didn't you include the link in your earlier post, rather than making claims without supportive links?
Because I am used to every other technology forum, where the most obvious links do not have to be posted because everybody already knows the information.

This forum, unfortunately, is different.
 
Because I am used to every other technology forum, where the most obvious links do not have to be posted because everybody already knows the information.
Then why do you even post stuff that obviously everyone already knows?

This forum, unfortunately, is different.
We think differently, and like to have new info with links so that we can discover more on our own.

And we don't trust shadowy figures who claim that "my sources (whom I cannot name) say that...". Get real.
 
Then why do you even post stuff that obviously everyone already knows?
If you blindly swallow Nvidia's marketing and say it is a brand-new process, and then, when I say it is a marketing name for the 16 nm process optimized for higher density with new libraries, you counter that I am only trying to paint Nvidia in a bad light - no. It appears that not everybody knows this technological stuff.
We think differently, and like to have new info with links so that we can discover more on our own.

And we don't trust shadowy figures who claim that "my sources (whom I cannot name) say that...". Get real.
I think that information is way too obvious to need digging through my sources ;). As I have said, you are responsible for your knowledge, or lack of it. You now have a link to a page with most of the information about these processes. Get used to using it. Then later you will not come back with "koyoot does not post links to sites to paint Nvidia in a bad light".
 
The sooner Apple moves away from AMD, the better. Here's to hoping the new Mac Pros use Volta GPUs and either the new Intel i9s or new Xeons. And throw in an Optane SSD in there :drool:
 
The sooner Apple moves away from AMD, the better. Here's to hoping the new Mac Pros use Volta GPUs and either the new Intel i9s or new Xeons. And throw in an Optane SSD in there :drool:
What’s wrong with the choice of either?
 
All I can say is that Chinese miners have bought thousands and thousands of Radeons because of the price/performance/efficiency balance. If one Vega can do 12-13 TFLOPS at the same price as two RX 580s, then the mining market will go mad for the card.
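For a rough sanity check on that comparison (assuming the commonly quoted ~6.2 TFLOPS FP32 rating for the RX 580, a figure not given in this thread):

$$2 \times 6.2\ \text{TFLOPS} \approx 12.4\ \text{TFLOPS}$$

which lands right in the 12-13 TFLOPS range claimed for a single Vega.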
 
All I can say is that Chinese miners have bought thousands and thousands of Radeons because of the price/performance/efficiency balance. If one Vega can do 12-13 TFLOPS at the same price as two RX 580s, then the mining market will go mad for the card.

I think you meant to post this in the Polaris/Vega thread, not sure what this has to do with NVIDIA or Volta.
 
What’s wrong with the choice of either?

AMD is not in the AI game.

If Apple put Nvidia GPUs back in the MacBook Pro, iMac and Mac Pro, the Mac would become the world's best AI and VR development platform. All major AI libraries are coded using CUDA, because yes, CUDA is a lot less of a pain in the as* for developers than OpenCL.
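To illustrate the ergonomics point with a minimal sketch (a hypothetical example, not anything from this thread): a complete CUDA vector add is one kernel plus a handful of runtime calls, while the OpenCL equivalent needs explicit platform, device, context, queue and program-build boilerplate before the first kernel can even launch.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal CUDA vector add: one kernel plus a few runtime calls.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified memory keeps the host side short; explicit cudaMalloc/cudaMemcpy works too.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);  // ~4096 blocks of 256 threads
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compile with nvcc (e.g. nvcc vecadd.cu); unified memory is used only to keep the host-side code short.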
 
AMD is not in the AI game.

If Apple put Nvidia GPUs back in the MacBook Pro, iMac and Mac Pro, the Mac would become the world's best AI and VR development platform. All major AI libraries are coded using CUDA, because yes, CUDA is a lot less of a pain in the as* for developers than OpenCL.
And it's interesting that the ATI PR machine is pitching Frontier as the top of the heap for AI ( http://www.christianpost.com/news/a...o-card-for-ai-and-enterprise-revealed-184001/ ) in spite of decidedly mid-range performance and an embarrassing lack of even a mention of FP64 support. (Rumours say that FP64 will be abysmal, nearly as bad as on Nvidia gamer cards.)

The AI/ML/DL libraries that matter are all CUDA-based. If ATI can't run CUDA code at the speed of Nvidia GPUs, they are dead in the water for AI/ML/DL.
 
Apple should just stop making Macs. All the programs that matter run on Windows or Linux.
 
Very much uber exciting it would be.

Please, Jensen, please release a consumer (AKA "gamer") card with a full complement of Tensor Cores. FP64 is useless for me, but Tensor Cores make me hot in all of the right places.
Isn't the name a bit hyped? Wouldn't "Matrix Cores" be a more precise name?
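For what it's worth, each Tensor Core does perform a small matrix multiply-accumulate (D = A×B + C on FP16 tiles), which is why "matrix cores" would arguably be the more literal name. Here is a minimal sketch of how that operation is exposed through CUDA's WMMA API (a hypothetical example, needing CUDA 9+ and a Volta-class GPU, compiled with -arch=sm_70):

```cuda
#include <cstdio>
#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

// One warp computes D = A*B + C for a single 16x16 FP16 tile on the Tensor Cores.
__global__ void tile_mma(const half *a, const half *b, float *d) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::fill_fragment(acc_frag, 0.0f);                 // C = 0 to keep the example simple
    wmma::load_matrix_sync(a_frag, a, 16);               // leading dimension 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);  // the matrix multiply-accumulate
    wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major);
}

int main() {
    half *a, *b;
    float *d;
    cudaMallocManaged(&a, 256 * sizeof(half));
    cudaMallocManaged(&b, 256 * sizeof(half));
    cudaMallocManaged(&d, 256 * sizeof(float));
    for (int i = 0; i < 256; ++i) { a[i] = __float2half(1.0f); b[i] = __float2half(2.0f); }

    tile_mma<<<1, 32>>>(a, b, d);   // a single warp drives one WMMA tile
    cudaDeviceSynchronize();
    printf("d[0] = %f\n", d[0]);    // expect 32.0 (sixteen 1*2 products summed)

    cudaFree(a); cudaFree(b); cudaFree(d);
    return 0;
}
```

The wmma::mma_sync call is the part that maps onto the Tensor Cores; everything else is just moving 16x16 tiles in and out of the warp's registers.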
 
Nvidia's Next Big Thing: The HGX-1 AI Platform

What's the difference between DGX-1 and HGX-1? Well, HGX-1 is the hyperscale version of the DGX-1 platform.

Over the past three months, Nvidia's (NASDAQ:NVDA) stock has been upgraded by several financial services firms including Goldman Sachs (NYSE:GS), Citigroup (NYSE:C), and Bernstein, while some others have downgraded the stock, such as Pacific Crest.
