
MagicWok

macrumors 6502a
Original poster
Mar 2, 2006
822
84
London
So Apple, when can I place my order for a dual 80-core setup? :D

Intel has built its 80-core processor as part of a research project, but don't expect it to boost your Doom score just yet.

Chief Technical Officer Justin Rattner demonstrated the processor in San Francisco last week for a group of reporters, and the company will present a paper on the project during the International Solid State Circuits Conference in the city this week.

The chip is capable of producing 1 trillion floating-point operations per second, known as a teraflop. That's a level of performance that required 2,500 square feet of large computers a decade ago.

Intel first disclosed it had built a prototype 80-core processor during last fall's Intel Developer Forum, when CEO Paul Otellini promised to deliver the chip within five years. The company's researchers have several hurdles to overcome before PCs and servers come with 80-core processors--such as how to connect the chip to memory and how to teach software developers to write programs for it--but the research chip is an important step, Rattner said.

A company called ClearSpeed has put 96 cores on a single chip. ClearSpeed's chips are used as co-processors with supercomputers that require a powerful chip for a very specific purpose.

Intel's research chip has 80 cores, or "tiles," Rattner said. Each tile has a computing element and a router, allowing it to crunch data individually and transport that data to neighboring tiles.
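As a rough mental model of that tile idea (my own sketch, not Intel's actual design), each tile can be pictured as a compute element plus a router that hands results straight to a neighbouring tile rather than over one shared bus:

```python
# Toy sketch of a tile: compute locally, then route the result to a neighbour.
class Tile:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.inbox = []                  # data delivered by neighbouring routers

    def compute(self, value):
        return value * value             # stand-in for the tile's floating-point work

    def route_to(self, neighbour, payload):
        neighbour.inbox.append(payload)  # hand the result to the next tile over

# 80 tiles laid out as an 8 x 10 grid (one plausible arrangement of 80 cores)
grid = [[Tile(x, y) for x in range(8)] for y in range(10)]
grid[0][0].route_to(grid[0][1], grid[0][0].compute(3.0))
print(grid[0][1].inbox)                  # [9.0]
```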

Intel used 100 million transistors on the chip, which measures 275 square millimeters. By comparison, its Core 2 Duo chip uses 291 million transistors and measures 143 square millimeters. The chip was built using Intel's 65-nanometer manufacturing technology, but any product based on the design would probably use a future process with smaller transistors; a chip the size of the current research chip is likely too large for cost-effective manufacturing.

The computing elements are very basic and do not use the x86 instruction set used by Intel and Advanced Micro Devices' chips, which means Windows Vista can't be run on the research chip. Instead, the chip uses a VLIW (very long instruction word) architecture, a simpler approach to computing than the x86 instruction set.
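To illustrate the VLIW idea in the loosest possible terms (a toy model, not the research chip's real instruction format): the compiler packs several independent operations into one long instruction word, and the hardware simply executes every slot of the bundle together instead of discovering the parallelism itself.

```python
# Toy VLIW bundle: each slot is (destination, operation, source a, source b).
def execute_bundle(bundle, regs):
    # Read all inputs before any writes, so the slots behave as if issued in parallel.
    results = [(dst, op(regs[a], regs[b])) for dst, op, a, b in bundle]
    for dst, value in results:
        regs[dst] = value

regs = {"r0": 2.0, "r1": 3.0, "r2": 4.0, "r3": 5.0, "r4": 0.0, "r5": 0.0}
bundle = [
    ("r4", lambda a, b: a * b, "r0", "r1"),   # slot 0: multiply
    ("r5", lambda a, b: a + b, "r2", "r3"),   # slot 1: add, independent of slot 0
]
execute_bundle(bundle, regs)
print(regs["r4"], regs["r5"])                 # 6.0 9.0
```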

There's also no way at present to connect this chip to memory. Intel is working on a stacked memory chip that it could place on top of the research chip, and it's talking to memory companies about next-generation designs for memory chips, Rattner said.

Intel's researchers will then have to figure out how to create general-purpose processing cores that can handle the wide variety of applications in the world. The company is still looking at a five-year timeframe for product delivery, Rattner said.

But the primary challenge for an 80-core chip will be figuring out how to write software that can take advantage of all that horsepower. The PC software community is just starting to get its hands around multicore programming, although its server counterparts are a little further ahead. Still, Microsoft, Apple and the Linux community have a long way to go before they'll be able to effectively utilize 80 individual processing units with their PC operating systems.

"The operating system has the most control over the CPU, and it's got to change," said Jim McGregor, an analyst at In-Stat. "It has to be more intelligent about breaking things up," he said, referring to how tasks are divided among multiple processing cores.

"I think we're sort of all moving forward here together," Rattner said. "As the core count grows and people get the skills to use them effectively, these applications will come." Intel hopes to make it easier by training its army of software developers on creating tools and libraries, he said.

Intel demonstrated the chip running an application created for solving differential equations. At 3.16GHz and with 0.95 volts applied to the processor, it can hit 1 teraflop of performance while consuming 62 watts of power. Intel constructed a special motherboard and cooling system for the demonstration in a San Francisco hotel.
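Taking those demo figures at face value, a quick back-of-the-envelope check shows what they imply per tile and per watt:

```python
peak_flops = 1.0e12     # 1 teraflop, as demonstrated
clock_hz   = 3.16e9     # 3.16 GHz
tiles      = 80
power_w    = 62

print(peak_flops / clock_hz / tiles)   # ~4 floating-point ops per tile per cycle
print(peak_flops / 1e9 / power_w)      # ~16 GFLOPS per watt
```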

http://news.bbc.co.uk/1/hi/technology/6354225.stm

http://news.com.com/2100-1006_3-6158181.html?part=rss&tag=2547-1_3-0-5&subj=news
 

Raid

macrumors 68020
Feb 18, 2003
2,155
4,588
Toronto
The chip is capable of producing 1 trillion floating-point operations per second, known as a teraflop. That's a level of performance that required 2,500 square feet of large computers a decade ago.

Intel first disclosed it had built a prototype 80-core processor during last fall's Intel Developer Forum, when CEO Paul Otellini promised to deliver the chip within five years.
So I guess this means I should buy the 8-core Mac Pro this year.... and in 5 years or so, when it's no longer the mighty machine it once was, I can upgrade and get 10x the number of processors! :eek:

Let's just hope all apps will be multi-threaded by then!
 

MagicWok

macrumors 6502a
Original poster
Mar 2, 2006
822
84
London
What is astounding is how, in the space of 11 years, technology has allowed what once took 2,000 square feet and 10,000 processors to fit onto a single chip with 80 cores... Phew! :eek:
 

chibianh

macrumors 6502a
Nov 6, 2001
783
1
Colorado
Intel readies the next generation of PC chips

Hmm, 80 cores? Imagine what a Mac Pro would be like with two of these babies in it?? I hope you have a dilithium chamber or a naquadah generator on hand for the power required, not to mention one heck of an A/C unit to keep it cool.

Eric

The chip is air-cooled and only consumes 62 watts at teraflop speeds. That's less than a current Mac Pro's dual Xeons. No need for the fancy solutions you're suggesting.
 

bearbo

macrumors 68000
Jul 20, 2006
1,858
0
How long, judging by historical trends, before we see said chip commercially?
 

dmw007

macrumors G4
May 26, 2005
10,635
0
Working for MI-6
What is astounding is how, in the space of 11 years, technology has allowed what once took 2,000 square feet and 10,000 processors to fit onto a single chip with 80 cores... Phew! :eek:


I know, it truly is amazing to think that you could shrink 2,000 square feet into a matter of inches. :)
 

Bobdude161

macrumors 65816
Mar 12, 2006
1,215
1
N'Albany, Indiana
Ah, so I see a trend here. Now that we've reached a certain limit in clock speeds, the focus is shifting to cores. Imagine basing a computer's speed on the number of cores it has: one Mac offers 100 cores while another has 220, with a $500 price difference. Very exciting times indeed.
 

dmw007

macrumors G4
May 26, 2005
10,635
0
Working for MI-6
Ah, so I see a trend here. Now that we've reached a certain limit in clock speeds, the focus is shifting to cores. Imagine basing a computer's speed on the number of cores it has: one Mac offers 100 cores while another has 220, with a $500 price difference. Very exciting times indeed.

I would not rule out more increases in clock speed, but the emphasis does seem to have shifted to the number of cores that a processor contains. :)
 

NintendoFan

macrumors 6502
Apr 14, 2006
268
23
Massachusetts
How long, judging by historical trends, before we see said chip commercially?

The chip itself will never see the light of day; the technologies within it, however, will.

As its name implies, the Teraflops Research Chip is a research vehicle and not a product. Intel has no intentions of ever selling the chip, but technology used within the CPU will definitely see the light of day in future Intel chip designs.

http://anandtech.com/cpuchipsets/showdoc.aspx?i=2925&p=1
 

Pressure

macrumors 603
May 30, 2006
5,182
1,546
Denmark
These are not x86 cores, so they would be pretty useless for general computing.

However, it will be great at specialized tasks.
 

Aniej

macrumors 68000
Oct 17, 2006
1,743
0
The people in here commenting on 80 cores being used in general consumer computers are full of it. Intel specifically said this is only for their own internal use and other highly demanding scientific operations. Moreover, they will not be doing anything over 32 cores, as any positives become significantly outweighed by the negatives. Try reading something over once in a while before just blathering words.
 

gauchogolfer

macrumors 603
Jan 28, 2005
5,551
5
American Riviera
The people in here commenting on 80 cores being used in general consumer computers are full of it. Intel specifically said this is only for their own internal use and other highly demanding scientific operations. Moreover, they will not be doing anything over 32 cores, as any positives become significantly outweighed by the negatives. Try reading something over once in a while before just blathering words.

Well, I'd say you might want to lighten up a little bit.

The chip itself will never see the light of day; the technologies within it, however, will.

These are not x86 cores, so they would be pretty useless for general computing.

However, it will be great at specialized tasks.

Seems like some people took the time to read, contemplate, and if necessary, correct the misconceptions of other posters in a polite, even dignified manner.
 

iW00t

macrumors 68040
Nov 7, 2006
3,286
0
Defenders of Apple Guild
The people in here commenting on 80 cores being used in general consumer computers are full of it. Intel specifically said this is only for their own internal use and other highly demanding scientific operations. Moreover, they will not be doing anything over 32 cores, as any positives become significantly outweighed by the negatives. Try reading something over once in a while before just blathering words.

"640kb is enough for everyone"
"... perhaps we will see a global market for 3 computers in a decade... "

Sound familiar?
 

Pressure

macrumors 603
May 30, 2006
5,182
1,546
Denmark
As far as I understand, Intel will take this design and implement 8 or 16 x86-capable cores in one package based on this Tera-Scale project.

That would be the way to go; otherwise they would need far more than 80 cores on a single package.
 

patrick0brien

macrumors 68040
Oct 24, 2002
3,246
9
The West Loop
How long, judging by historical trends, before we see said chip commercially?

-bearbo

Interesting how they've not mentioned how long the chip lasts, isn't it? My guess is no more than a few minutes.

We'll see something like that, but 80 is a weird number in the world of base-2 math. I'd guess we'll see the standard progression of 2, 4, 8, 16, 32, 64, etc.

... and not for many a year...
 

patrick0brien

macrumors 68040
Oct 24, 2002
3,246
9
The West Loop
-Ah HA!

I knew there to be a rub.

This explains how they were even able to get it to work:

"The cores in this particular CPU are not full CPU cores with full x86 instruction sets, but more focused on floating point calculations."

If they were, I'd be worried about how they got past the 4-core memory caching barrier.

Linkypoo
 

Anonymous Freak

macrumors 603
Dec 12, 2002
5,604
1,389
Cascadia
For those that don't understand...

This was created solely as an engineering exercise to test the feasibility of putting so many cores on one die. It is not a commercial product for any use whatsoever, and it will not become one.

This is really less an '80-core processor' than an '80-node network'. Instead of a conventional 'front side bus' link between the cores, about 1/4 of each 'core' is really a small networking switch. It links to its own core, to the (up to) four surrounding cores, and has one link going 'out' for connection to future stacked dice.
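A rough sketch of that '80-node network' picture (my own illustration, assuming an 8 x 10 arrangement of tiles): each router has up to four in-plane links plus the single 'out' port.

```python
WIDTH, HEIGHT = 8, 10     # 80 tiles, assuming an 8 x 10 layout

def router_links(x, y):
    # Up to four in-plane neighbours...
    mesh = [(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < WIDTH and 0 <= y + dy < HEIGHT]
    # ...plus the one 'out' port toward a future stacked die.
    return mesh + [("out", x, y)]

print(len(router_links(0, 0)))   # corner tile: 2 mesh links + 1 out = 3
print(len(router_links(3, 4)))   # interior tile: 4 mesh links + 1 out = 5
```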

The big advantage over conventional multi-core systems is that on a conventional chip, every circuit has to be designed to receive the clock signal simultaneously. The more transistors on the chip, the more you have to worry about clock propagation. On this design, you only need to worry about clock propagation within each core. That is the big advancement here: the ability to have many, many cores that don't need to worry about a global clock signal. (Their estimate is that in current processors up to 30% of the power goes solely to distributing the clock signal; in this new design it's more like 5%.) It costs some performance, but for such a massive energy saving it's worth it, since you can now fit almost double the number of cores into the same power budget.

This is also going to be more useful when we get to different kinds of cores on the same die. For example, you could separate the integer unit, the floating-point unit and the vector unit into much simpler individual cores, each faster at its own task than a general-purpose big core would be. That means we could see, say, 4 integer cores, 8 floating-point cores and 20 vector cores, for a monster 3D processor.

This was co-developed with their 'stacked dice' concept, where instead of having one huge die that contains everything, you just stack individual dice on top of each other. So you'd have your 32-core processor on one layer, and your cache on a second layer, maybe a similarly-designed GPU on a third layer, etc. Higher-end processors would have more stuff on more layers. Want double the cache? No need to completely redesign the processor, just add another cache layer! This makes manufacturing cheaper, because each layer could be made and tested separately. And if you over-fill each layer, you can even deal with individual cores being bad. (So on my 32-core example, you would have maybe 40 cores total, so that any 8 could be bad and you'd still have a fully functional 32-core processor.)
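A quick, hypothetical illustration of that spare-core argument (the 97% per-core yield below is an invented number): requiring any 32 good cores out of 40 makes a working layer far more likely than needing all of exactly 32 to be good.

```python
from math import comb

def usable(cores, needed, p_good):
    # Probability that at least `needed` of `cores` cores come out of the fab working.
    return sum(comb(cores, k) * p_good**k * (1 - p_good)**(cores - k)
               for k in range(needed, cores + 1))

p = 0.97                                              # invented per-core yield
print(f"all 32 of 32 good: {p**32:.1%}")              # ~37.7%
print(f"any 32 of 40 good: {usable(40, 32, p):.1%}")  # effectively 100%
```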

Basically, this is a complete shift in the design of processors, away from the 'processor connected to the northbridge' concept and toward a 'processors are a network of their own' concept. Just as all components originally resided on the processor's bus, and we then moved to a separate processor bus, now we're moving to INTERNAL busses.
 

pilotError

macrumors 68020
Apr 12, 2006
2,237
4
Long Island
These are pretty much number-crunching cores.

One thing I haven't seen mentioned is that Intel is looking to get into the graphics business. A 1.2-teraflop graphics card could pretty much do real-time ray tracing and some other pretty cool stuff. At 62 watts, it would eliminate the need to move to the external graphics card enclosures that ATI and nVidia are proposing. This was built on a 65nm process; I wonder what the new 45nm process and the new gate technology could do for something like this? Looks like gaming has a bright future.

I'm sure the scientific community just wet themselves thinking of all the possibilities. A low cost supercomputer that takes a couple of orders of magnitude less power to operate.
 

gkarris

macrumors G3
Dec 31, 2004
8,301
1,061
"No escape from Reality...”
"640kb is enough for everyone"
"... perhaps we will see a global market for 3 computers in a decade... "

Sound familiar?

As long as it runs Atari 2600 Asteroids or Intellivision Astrosmash - I'm happy.

Back in college - "You bought an IBM PC clone? A 10 Meg Hard Disk? What are you going to do with that much space???"
 