
pertusis1 (original poster):
Much has been written over the years about Moore's law, and incredibly it held up for almost 50 years. Every year or two, the number of transistors on a chip doubled, and hence the speed effectively doubled. Over the last couple of years, however, much has been written in the news about the 'death of Moore's law'. It is incredible that the CPU clock speeds have not changed all that much in the last 8 years or so. The Mac Pro 1,1 had clock speeds up to 3 GHz. Although the chips are better at some things than they used to be, the bottom line is that 9 years later, chips are hardly faster.

Here's my point. I think that now is the time to be designing hardware that can be incrementally upgraded rather than having to be replaced wholesale. I have a MP 5,1 with dual 3.46 GHz processors, an XP941 drive, and 48 GB memory, up from the 12 GB it started with. It is upgraded with a 7970 video card. With the exception of TB, I can still pretty much upgrade it to whatever connectivity I want.

Without a surprising technological leap, my computer is not likely to be obsolete, from a speed standpoint, for quite a while. I think the days of a 3-year-old computer being obsolete are over. It will likely be rendered obsolete by software long before the hardware has been significantly passed by.

I know this is a hackneyed idea on this forum, but in the context of Moore's law grinding to a halt, the idea of incrementally upgraded computers becomes even more attractive.
 
I've been reading a bit about this over the last few days, and I don't think that it's dying, but I think the focus is shifting. As unfortunate as it is, the money is not in desktop processors anymore; it's all in mobile: laptops, tablets, and now ultrabooks. Intel (at least over the last two revisions of their processors) has shifted its focus from power to efficiency, to the point where we now have the new Core M. I know it's gotten some flak for being underpowered for a laptop, but come on, even the rMB edition of the processor has a TDP of just SIX WATTS, and it spits out a Geekbench score of about 5000 multicore. To me, that should prompt some kind of revision of Moore's law: rather than just raw power, power efficiency should be considered as well. I mean, heck, a 2006 Mac Pro 2.66 GHz has roughly the same multicore performance as the new Core M (and isn't even close in single core), and those things were power-hungry monsters.

So to put in my two cents: I don't think it's dying, I just think that it's shifting, or at least needs to be re-evaluated in a performance-per-watt comparison.
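For illustration, here's a back-of-the-envelope performance-per-watt comparison using the rough Geekbench and TDP figures quoted above; the 2006 Mac Pro's ~130 W (two Xeon 5150s at roughly 65 W each, CPU only) is my own assumption, so treat the exact ratio loosely:

```python
# Rough Geekbench-points-per-watt, using the approximate figures above.
chips = {
    "Core M (rMB, 2015)":    (5000, 6),    # ~5000 multicore, 6 W TDP
    "2006 Mac Pro 2.66 GHz": (5000, 130),  # assumed: 2x Xeon 5150 at ~65 W each
}

for name, (score, tdp) in chips.items():
    print(f"{name}: ~{score / tdp:.0f} points per watt")
# The Core M delivers roughly 20x the multicore performance per watt.
```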

The architecture has changed as well. For example, the switch from the Front Side Bus to QPI did quite a bit for computing, but doesn't show up in many raw performance benchmarks.

P.S. - if anyone has charts or benchmarks or studies that have been done on this, I would love to see them. Really quite an interesting topic.
 
How do the speed tests pan out over the last few years? As mentioned, Moore's Law was about transistors, but in modern times the question is about speed: not GHz, but real-world speed.
 
Cross-post from another thread:

The WSJ [behind a paywall] also has an article about the limitations of Moore's Law.

http://www.wsj.com/articles/moores-law-runs-out-of-gas-1429282819?KEYWORDS=moore's+law

Key points include:

The Intel 4004 in 1971 had 2,300 transistors.
A 2015 Intel Core i5 has 1.3 billion transistors.
A specialized Intel chip due in late 2015 has 8 billion transistors.

Issues include cost and reliability.

A 65-nanometer chip in 2005 cost $16.4 million to design.
A 14-nanometer chip in 2014 cost $131.6 million to design.
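To make the growth rate concrete, here's a quick compound-annual-growth calculation from those two data points (a back-of-the-envelope sketch, nothing more):

```python
# Design-cost growth implied by the two data points above.
cost_2005 = 16.4e6   # 65 nm design cost, USD
cost_2014 = 131.6e6  # 14 nm design cost, USD
years = 2014 - 2005

ratio = cost_2014 / cost_2005
cagr = ratio ** (1 / years) - 1
print(f"{ratio:.1f}x increase, ~{cagr:.0%} per year")  # 8.0x, ~26% per year
```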

Micron's Chief Executive thinks that at some point improvements will only be cost effective [my interpretation] for smaller and smaller markets.

NAND chip makers are worried that reliability will suffer; they have stopped shrinking transistors and are moving to stacking technologies.
 
Much has been written over the years about Moore's law, and incredibly it held up for almost 50 years. Every year or two, the number of transistors on a chip doubled, and hence the speed effectively doubled.

There is no "and hence" in Moore's law. Doubling transistors doesn't necessarily increase clock speed.

The issue is becoming what to do with those additional transistors.

1. Could use the same constant number of transistors (just now smaller) and sell cheaper CPU packages.

For users that have hit a plateau in workload demands (a now-limited set of data and computational demands), this is one approach.


2. Can add more stuff to the CPU package. The holy grail is a whole System on a Chip (SoC).

This is already happening. Mainstream Intel packages have x86+GPU+'old Northbridge' on a single die. Some of the new Atom class have the x86+GPU+'old Northbridge'+'old Southbridge' components all on the same die.

The huge, obsolete reasoning flaw is assuming that all the transistors in a "CPU" are devoted to computational instruction processing. They aren't. What is commonly referred to as a CPU is more a collection of what were discrete components in the old legacy systems you're trying to use as a baseline.


3. "Copy and paste" more cores ( and immediate cache levels ). 6, 10, 12, 14 cores all on a single die. The "core count war" .

Two issues with this. First, you can get to higher core counts by using simpler cores (e.g., 'GPU' cores). If you have just one user who wants faster results from the same program and data set, then more 'simpler' cores are generally more space efficient.

Second, this tends to work better with higher numbers of complex cores (x86) when you have multiple users and/or applications running at the same time. But that also brings higher L3 cache demands (a broader spectrum of data being pulled in at the same time).

None of these is a push to higher x86 core clock speeds.
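To see why "just add cores" doesn't substitute for clock speed on a single user's workload, here's a minimal Amdahl's-law sketch; the 75% parallel fraction is an arbitrary illustration, not a measured figure:

```python
# Amdahl's law: the speedup from n cores when a fraction p of the
# work parallelizes. The serial remainder caps the gain no matter
# how many cores get pasted in.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for cores in (2, 6, 12, 64):
    print(f"{cores:2d} cores, 75% parallel: {amdahl_speedup(0.75, cores):.2f}x")
# Even 64 cores stall below 4x when 25% of the work stays serial.
```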


There was/is some coupling between smaller process designs, tighter tolerances, and voltages that allowed clock rates to generally trend upwards. A lot of the slop in designs has been squeezed out over time, and the instruction execution paths have been optimized for decades now. But that really isn't what Moore's law covered.

It was more about what could be done with a bigger transistor budget. An ever more complicated individual x86 core hits the point of diminishing returns after a while. The bigger "bang for the buck" returns now are in narrower computations (specialized work like crypto, or single-instruction-multiple-data AVX).


Over the last couple of years, however, much has been written in the news about the 'death of Moore's law'. It is incredible that the CPU clock speeds have not changed all that much in the last 8 years or so.

But the CPU speeds have changed, especially on the Intel side. Again, viewed through a legacy, outdated lens the clock speed has stalled, but only because that view talks solely about the 'base rate' speed. None of the modern Intel Xeon E5s run at just one constant speed on most workloads.

The change that has happened over the last 4 years is a trend toward dynamically adjusted clocks: cranked up when there is relatively little parallel work to do, and set at a base rate when there is lots of parallel work to do. Single-user workstations are not in just one of those modes most of the time when being actively used.
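If you want to watch those dynamic clocks for yourself, here's a minimal sketch; it assumes a Linux machine exposing the standard cpufreq sysfs interface (OS X doesn't expose this path, so take it as illustrative):

```python
# Sample core 0's current clock a few times; idle vs. loaded runs
# will show very different numbers on a turbo-capable CPU.
import time

PATH = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"  # value in kHz

for _ in range(5):
    with open(PATH) as f:
        khz = int(f.read())
    print(f"core 0 at {khz / 1e6:.2f} GHz")
    time.sleep(1)
```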


I think that now is the time to be designing hardware that can be incrementally upgraded rather than having to be replaced wholesale. I have a MP 5,1 with dual 3.46 GHz processors, an XP941 drive, and 48 GB memory, up from the 12 GB it started with. It is upgraded with a 7970 video card. With the exception of TB, I can still pretty much upgrade it to whatever connectivity I want.

That is just as much due to the plateauing of your workload as to the hardware.

As pointed out above, CPU packages are becoming more integrated over time. Sticking with an older CPU package means setting much more than just the x86 cores in stone. Your memory speeds and bandwidth are stuck (since integrated). Your PCIe speeds and bandwidth are stuck (since integrated). Your link to any Southbridge (I/O Hub) chipset is also stuck, which means you're also stuck with the chipset.

PCIe isn't a cure-all. PCIe v2 lanes can't do PCIe v3 levels of bandwidth, and they can't do v4.

The two x4 PCIe v2 slots in the older Mac Pros are stuck either at v1 levels or with bandwidth split from a single x4 source (i.e., behind a switch). Fire up an x4 PCIe SSD card and an x2 USB 3.0/SATA card at the same time and they will start competing for resources.
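For concreteness, the usable per-lane bandwidth by generation works out as below (standard PCIe figures after encoding overhead; the shared-source scenario mirrors the SSD-plus-USB example above):

```python
# Usable bandwidth per PCIe lane, GB/s, after encoding overhead.
per_lane = {
    "v1": 0.25,   # 2.5 GT/s, 8b/10b encoding    -> 250 MB/s
    "v2": 0.5,    # 5.0 GT/s, 8b/10b encoding    -> 500 MB/s
    "v3": 0.985,  # 8.0 GT/s, 128b/130b encoding -> ~985 MB/s
}

lanes = 4
for gen, gbps in per_lane.items():
    print(f"x{lanes} PCIe {gen}: ~{lanes * gbps:.2f} GB/s")
# Two cards behind one x4 v2 source split ~2 GB/s between them.
```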


I know this is a hackneyed idea on this forum, but in the context of Moore's law grinding to a halt, the idea of incrementally upgraded computers becomes even more attractive.

Moore's law isn't the root-cause "problem"; stagnating demand is. Computers are "fast enough". While "640K ought to be enough for everybody" was a cautionary tale, something like it is getting closer to being mainstream at current common specs. You are stopping at 48GB, which isn't even halfway to the 2011-2012 Mac Pros' limit (128GB with OS X 10.9+). That is a sign your workload is, at least in part, undershooting the limits of what the machine can do. There's no need for a new Mac Pro when you can't even fill the workload capacity of the old Mac Pro with hard, baseline requirements.

The market for "top fuel" drag racing CPUs ( max clock at all costs ) is relatively small and getting smaller. That is why the clock speed isn't the end-all-be-all of the CPU market anymore.
 
You cannot directly compare clock speeds between different designs; in general, newer designs do more work per clock cycle. The clock speed is similar between the 3.2GHz X5482 in a 2008 Mac Pro 3,1 and the 3.46GHz X5690 (the fastest CPU that can be put in a 2012 Mac Pro 5,1), but the single-threaded scores are 4767 and 9235 respectively, i.e. nearly double the performance at a similar clock speed. However, this also illustrates that single-stream performance has not doubled every year or two, as the fastest Xeon in 2012 is not even double the fastest Xeon in 2008. It also shows that the 2008 Mac Pro 3,1 is still very decent compared to the fastest cMP possible.
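A quick points-per-GHz division of those numbers makes the per-clock gain explicit:

```python
# Single-threaded benchmark points per GHz for the two Xeons above.
cpus = {
    "X5482 (2008 Mac Pro 3,1)": (4767, 3.20),
    "X5690 (2012 Mac Pro 5,1)": (9235, 3.46),
}

for name, (score, ghz) in cpus.items():
    print(f"{name}: {score / ghz:.0f} points per GHz")
# ~1490 vs ~2670: nearly double the work per clock at a similar clock.
```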
 
So far, Moore's law is still working, but I wouldn't take it as a serious indicator of performance. We are coming pretty close to the physical limits (though there is still some way to go). More importantly, we can't just increase clocks arbitrarily, which is why multicore designs became popular.

All in all, there are two tendencies with increasing the CPU performance nowadays:

1. Tighter integration: components are being brought closer together, which allows increased efficiency and performance. Stacked designs are the new answer in chip manufacturing. This will ultimately culminate in combining RAM/CPU/GPU on the same die.

2. Wider execution: instead of making CPUs faster at doing one thing, the focus is on allowing CPUs to do more things at the same time. Essentially, CPUs are becoming more like GPUs. For example, current Nvidia GPUs have 1024-bit registers, which allows a single ALU to process 32 floating-point numbers at the same time. In comparison, modern Intel CPUs have 256-bit wide registers; soon these will be 512-bit. Of course, to harness all that computational power, we need to change the programming model.
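As a rough illustration of that mindset shift, here's a NumPy sketch; NumPy's array operations are backed by vectorized code that can use the CPU's SIMD units, so the array expression below is the "wide" counterpart of the scalar loop (exactly how wide depends on the hardware and the build):

```python
# One logical multiply applied to many floats at once.
import numpy as np

a = np.arange(100_000, dtype=np.float32)
b = np.arange(100_000, dtype=np.float32)

# Scalar mindset: one multiply per loop iteration.
scalar = [x * y for x, y in zip(a, b)]

# Wide mindset: with 256-bit registers the hardware can do 8 float32
# multiplies per instruction; with 512-bit registers, 16.
wide = a * b

print(np.allclose(scalar, wide))  # same answer, very different speed
```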
 

It is slowing down, and has slowed down a lot when you look at the chart. It is not looking good at all: each new node is taking longer and longer to arrive.



800 nm 1989
600 nm 1994
350 nm 1995
250 nm 1997
180 nm 1999
130 nm 2001
90 nm 2004
65 nm 2006
45 nm 2008
32 nm 2010
22 nm 2012
14 nm 2015


Intel roadmap:

10 nm late 2017
7 nm 2018
5 nm 2020

https://en.wikipedia.org/wiki/5_nanometer
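Running the node list above through a quick script makes the stretching cadence visible (dates as listed above):

```python
# Years between successive process nodes from the list above.
nodes = [(800, 1989), (600, 1994), (350, 1995), (250, 1997), (180, 1999),
         (130, 2001), (90, 2004), (65, 2006), (45, 2008), (32, 2010),
         (22, 2012), (14, 2015)]

for (nm_a, yr_a), (nm_b, yr_b) in zip(nodes, nodes[1:]):
    print(f"{nm_a:>3} nm -> {nm_b:>3} nm: {yr_b - yr_a} year(s)")
# The 22 nm -> 14 nm step took 3 years, versus the 2-year ticks before it.
```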
 
There are a lot of factors in play: cost, lack of competition, ...
Efficiency isn't one of them, though; the fact that we can put the same number of transistors on a smaller chip saves power. And that's mostly what we're doing with the U and M series, because people's average workload hasn't become that much more intensive (probably due to the Internet, tablets, and already-rich media experiences).

Every year Intel unveils faster processors with more cores. The fact that they're getting more and more expensive is partly due to there being nearly no competition, and partly because wafers are getting more expensive, so cramming 32 cores into the space where last generation you crammed 22 is just really expensive.

For a further read, this Ars article does a great job: https://arstechnica.com/information...ive-by-making-bigger-improvements-less-often/

I could type more but I'm too lazy :)
 