I don't think Apple will use NVidia GPUs even if there were new ones available. NVidia's release schedule has nothing to do with it.

There are quite a lot of issues. Even for demanding users, much of the time a lot of what they do won't run much faster on a dozen cores than four. It's possible to be in a situation where you can use more power, yet the type of power offered doesn't grant much in the way of performance. I have to wonder if Intel has tried to determine how much longer they can milk X86 and where they see themselves after that.

x86's future looks pretty bright. ARM is in no position to replace it on desktops or laptops (Apple shot down any rumors of an ARM laptop last week for good reason), and x86 is moving into cell phones and tablets.

Whether x86 will work out in cell phones or tablets is an open question. But there is no threat to it right now in desktops or notebooks.
 
There are quite a lot of issues. Even for demanding users, much of the time a lot of what they do won't run much faster on a dozen cores than four. It's possible to be in a situation where you can use more power, yet the type of power offered doesn't grant much in the way of performance. I have to wonder if Intel has tried to determine how much longer they can milk X86 and where they see themselves after that.
What you're describing is primarily a software issue (lack of threaded applications when compared to the overall software market), not hardware though.

But there are specialized areas that do see heavier use of such software, particularly in the enterprise market, and focused around servers (it exists for workstations too, but most of the professional suites are actually based on rather old code, so typically only one or two of the applications within the suite are truly threaded, while the rest aren't). CS5.5 would be a good example.
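To make the software-vs-hardware point concrete, here's a minimal sketch (mine, not from any of the suites mentioned) of the same FP-heavy job run serially and then split across worker processes with Python's standard-library multiprocessing module; only the split version can put extra cores to work.

import math
import time
from multiprocessing import Pool

def crunch(chunk):
    # deliberately FPU-bound busywork
    return sum(math.sqrt(x) * math.sin(x) for x in chunk)

if __name__ == "__main__":
    data = list(range(2_000_000))

    t0 = time.time()
    serial = crunch(data)
    t_serial = time.time() - t0

    chunks = [data[i::8] for i in range(8)]   # 8 roughly equal slices
    t0 = time.time()
    with Pool(processes=8) as pool:
        parallel = sum(pool.map(crunch, chunks))
    t_parallel = time.time() - t0

    print(f"serial: {t_serial:.2f}s  parallel: {t_parallel:.2f}s")

The point isn't the exact timings, just that an application has to be written this way before a dozen cores buy it anything over four.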

But where Intel is trying to help is with other areas that have needed addressing for years, specifically I/O throughput in more recent CPU architectures.

For example, the memory controller was integrated on the CPU die with Nehalem, and they've now added the PCIe controller on the die as well with Sandy Bridge. BTW, I'm talking about Xeons specifically, as this is the MP section of MR. ;) Both have resulted in increased throughputs outside of the CPU itself, which are very welcome improvements in the enterprise market.

Another area where Intel's improvements can help, but which also depends highly on software development, would be GPGPU processing. Highly useful for number crunching (workstations and servers that run FPU calculations most of the time, if not 24/7).

Now there are users that have fallen through the cracks as it were, as they may not have much use for additional PCIe lanes, and the software developers haven't produced (or can't) threaded applications for their particular usage. In those cases, there may not be a choice (i.e. the application types cannot be threaded/won't benefit whatsoever).

It comes down to specific usage, but I suspect that the professional users who fall into that category (i.e. will never be able to take advantage of threading or PCIe lanes) will become a niche in time as software is slowly changed to support threading, and suspect those remaining are likely to be consumer or non-critical business users (i.e. spreadsheets, word processing, email, and browser types of applications are as strenuous as it gets).

In such cases, you can't put it all on Intel, as they're dealing with device physics and finding other ways of increasing performance rather than just ramping clock speeds as they were able to do for decades (limitations to semiconductors = why we're getting more cores rather than frequency increases with more recent architectures).

Keep in mind, this wasn't a massive surprise to software developers, as articles and white papers on the need for parallelization have been published for years (materials were available before the first Intel multi-core CPU's were ever released).

Food for thought anyway. ;) :D
 
What you're describing is primarily a software issue (lack of threaded applications when compared to the overall software market), not hardware though.
...
Keep in mind, this wasn't a massive surprise to software developers, as articles and white papers on the need for parallelization have been published for years (materials were available before the first Intel multi-core CPU's were ever released).

Food for thought anyway. ;) :D

Apple began, with Moto's help, heading down this road at least 12 years ago. Remember that in July 2000 Apple announced the Power Mac G4 (Gigabit Ethernet), bringing dual-processor power to the G4 line (available with dual 450 or 500 MHz PowerPC G4 CPUs). I remember reading their white papers on parallelization at that time, while I was experimenting with swapping clock chips for that extra MHz boost. So you're definitely correct nanofrog - it really shouldn't be a surprising development today, particularly to Mac developers. The Sandy Bridge E5 line will serve well for what they were designed to do - JIT (just in time) performance = low multi-core GHz at idle, but high turbo ratios on demand. That sounds like WolfPack1 and my other underclocked/turbo-biased predators (compare GHz speeds vs. Geekbench 2 performance in the URL in my sig to see what greater turbo boost/multicore potential can bring to highly multi-threaded apps).
 
Apple began, with Moto's help, heading down this road at least 12 years ago. Remember that in July 2000 Apple announced the Power Mac G4 (Gigabit Ethernet), bringing dual-processor power to the G4 line (available with dual 450 or 500 MHz PowerPC G4 CPUs). I remember reading their white papers on parallelization at that time, while I was experimenting with swapping clock chips for that extra MHz boost. So you're definitely correct nanofrog - it really shouldn't be a surprising development today, particularly to Mac developers. The Sandy Bridge E5 line will serve well for what they were designed to do - JIT (just in time) performance = low multi-core GHz at idle, but high turbo ratios on demand. That sounds like WolfPack1 and my other underclocked/turbo-biased predators (compare GHz speeds vs. Geekbench 2 performance in the URL in my sig to see what greater turbo boost/multicore potential can bring to highly multi-threaded apps).
Since the complaint was leveled against Intel, and that's what the MP's have been using since 2006, I limited the scope of my post to Intel only CPU's.

In regard to the need for threaded applications however, Intel certainly didn't have a monopoly on that realization. Research would have shown this back in the '70s at least, and perhaps a bit earlier than that (Cray's monsters, which had to use an incredible number of physical CPU's in order to get core counts high enough to generate the IOPS they were designed for, immediately come to mind, as do RISC-based chips from other manufacturers in the '80s such as SPARC, DEC, IBM, and MIPS).

As fabrication processes shrank, it became feasible to start adding cores to a single die and passing that technology on to more and more users over time as the costs continually decreased.

Where the PPC factor would have come in, and been highly important, was for running PPC code on an Intel CPU. Apple created a solution, but I'm not familiar enough with the PPC applications landscape to know which ones/how many were available in order to get a sense of what percentage of threaded PPC applications existed, or how well they performed on the Intel CPU's with another software layer added in (emulator).

But generally speaking, software development has always lagged behind hardware, and in the case of threaded application development, it seems to be slower than usual to me (haven't sat down and done any statistics, so it's just an impression based on memory recall as I type this).
 
But generally speaking, software development has always lagged behind hardware, and in the case of threaded application development, it seems to be slower than usual to me (haven't sat down and done any statistics, so it's just an impression based on memory recall as I type this).

Have you seen the Haswell developments? Looking like a step in the right direction. Would also be a benefit to start engineering closer to what Tutor has going on. Results are everything. If the SW isn't staying relevant then HW tech should bend to make it better.

http://www.xbitlabs.com/news/cpu/di...ficiency_of_Highly_Threaded_Applications.html

http://www.extremetech.com/computin...ign=Feed:+ziffdavis/extremetech+(Extremetech)
 
Have you seen the Haswell developments? Looking like a step in the right direction. Would also be a benefit to start engineering closer to what Tutor has going on. Results are everything. If the SW isn't staying relevant then HW tech should bend to make it better.

http://www.xbitlabs.com/news/cpu/di...ficiency_of_Highly_Threaded_Applications.html

http://www.extremetech.com/computin...ign=Feed:+ziffdavis/extremetech+(Extremetech)
I was more interested in the IGP gains.
 
No, I know for sure the desktop will not be eliminated.

Nikon just came out with a 36MP camera. A good majority of photographers use Macs.

Desktops have the power and storage you will never get from anything else Apple can produce.

Case closed.
 
Nikon just came out with a 36MP camera. A good majority of photographers use Macs.

Desktops have the power and storage you will never get from anything else Apple can produce.

Case closed.

Same with 4K and then 8K video. Oh and Sigma just announced a 46MP camera following on from Nikon's news.

People can talk about how consumer systems are good enough for most things, but unless Apple wants to abandon those who pioneer and lead in many areas and reduce the efficiency of so many - possibly millions of - users, they need to keep a high-end workstation.
 
Another area where Intel's improvements can help, but which also depends highly on software development, would be GPGPU processing. Highly useful for number crunching (workstations and servers that run FPU calculations most of the time, if not 24/7).

GPGPU processing has some cool potential, but I have no idea what its limitations are. It's more that I've seen it implemented in interesting ways already within the past year or so. I noted NVidia has focused on the server market there with their Tesla cards. The server market seems to benefit considerably more from higher core counts, especially given the rise in virtualization solutions over the past decade. On the desktop end, it seems like there are problems converting old code, and problems with really splitting up functions meant to be perceived by the user in real time. I don't know much about programming. I have noted Adobe and others often talk about what things don't run well in parallel threads. Someone pointed me to an online document regarding Python, and it said that people with beards like Linux. I'm reading it, but if you know of any good books on Python, I wish to learn it thoroughly. Maya has a Python API, and it's integrated into a lot of other 3D packages as well.


Same with 4K and then 8K video. Oh and Sigma just announced a 46MP camera following on from Nikon's news.

People can talk about how consumer systems are good enough for most things, but unless Apple wants to abandon those who pioneer and lead in many areas and reduce the efficiency of so many - possibly millions of - users, they need to keep a high-end workstation.



Sigma says a lot of things. The Foveon technology was interesting, especially given that it doesn't suffer from some of the issues inherent in the Bayer RGBG array. They haven't done anything great with it though :(. What I mean is that it kills issues like red blooming and blue noise, which come from objects of those colors effectively having fewer pixels with good data to build from, and it limits moire problems. I wish Sony or Canon had bought that company out instead.
 
Have you seen the Haswell developments? Looking like a step in the right direction. Would also be a benefit to start engineering closer to what Tutor has going on. Results are everything. If the SW isn't staying relevant then HW tech should bend to make it better.
Yes, I'm aware of Haswell's architecture.

TSX will make a nice improvement, but keep in mind that this is aimed at the enterprise market (servers in particular) and will take time to trickle down to consumer grade software, if it ever does.

The reasoning behind the architectural changes is driven by the enterprise users, as that's where most of Intel's profit margins are derived. So they cater to this particular group due to financial interest (pure self interest standard business practice ;)).

Comparatively speaking, consumer systems are just icing on the proverbial cake through systems engineering (= scaled down version of the enterprise design, perhaps with a tiny bit of original design), so though they do make improvements (primarily trickle down), it's not derived directly for consumer users in the case of CPU development.

It's prohibitively expensive for them to design 2x (or more) lines for each group from the ground up, so systems engineering has to take over (last I looked, a new fabrication facility is over $3B USD to construct, and there's still the R&D to account for :eek:).

Now since servers can actually benefit from threaded applications today (which are primarily designed in-house), and businesses have been demanding things like a lower TCO (particularly in areas of power consumption), parallelism, improved I/O outside of the CPU, and of course more performance, Intel has answered. Exactly what you're on about (hardware should bend a bit :eek: :D).

Where the differences lie, is with the group that the changes are made for, and ultimately, it still has to conform to the laws of physics. So far, they've actually managed to do a decent job of it. ;)

If you look at software development, you'll find they got too used to just clock frequency increases, and ignored the changes that had to be made due to the limits of semiconductors and fabrication techniques (i.e. management vaguely listened to developer staff talk about technical changes, then took the White Papers and articles into the bathroom right after the meeting... followed by a distinctive FLUSH sound :rolleyes: :p).

Combine that with the business side being cheap, and we have software with spaghetti code that's typically over 15 years old hampering current software from what I've seen, including the very expensive professional grade applications.

It sucks for consumer and some professional users (i.e. workstation users that don't use much in the way of threaded software, whatever the causality), but comparing software vs. hardware development improvements, Intel has answered the call of what's been asked of them (particularly recently), while most commercial software developers have remained mired in the past for various reasons (mostly due to greed - gotta love MBA's :rolleyes: :mad:). :eek: :p

I was more interested in the IGP gains.
For consumer users, this will be far more important.

TSX is just marketing in this particular segment, and will remain so for some time. You'll see its implementation in newly written software before it's added to existing software suites.

GPGPU processing has some cool potential, but I have no idea what its limitations are.
It's meant for number crunching (i.e. massive FPU calculations). So if you're dependent on strings or integers, it's useless.

But things like accounting, simulation software, ... types of code that generate decimal values, it could be leveraged to improve things significantly, as the FPU's in GPU's are much faster than those in CPU's (starting to look like the beginning of an engineering joke... :D).
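For a sense of what that offload looks like in practice, here's a minimal sketch using PyOpenCL and NumPy (my choice of tooling, not something mentioned in this thread) that pushes a simple float multiply-add kernel to whatever OpenCL device is present; the kernel and variable names are purely illustrative.

import numpy as np
import pyopencl as cl

# two million single-precision values of FP busywork
a = np.random.rand(2_000_000).astype(np.float32)
b = np.random.rand(2_000_000).astype(np.float32)

ctx = cl.create_some_context()          # picks a GPU if one is available
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void madd(__global const float *a,
                   __global const float *b,
                   __global float *out)
{
    int i = get_global_id(0);
    out[i] = a[i] * b[i] + a[i];        // pure FP work, one element per work-item
}
""").build()

prg.madd(queue, a.shape, None, a_buf, b_buf, out_buf)
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print(np.allclose(out, a * b + a))      # sanity check against the CPU result

Strings and integer-heavy logic don't map onto this model nearly as well, which is the point being made above.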

I noted NVidia has focused on the server market there with their Tesla cards. The server market seems to benefit considerably more from higher core counts, especially given the rise in virtualization solutions over the past decade.
The enterprise market (i.e. servers), is where the money is. They're willing to pay higher margins for performance, and will actually spend the money rather than claim they will up until the point they see the price tag as consumer users are wont to do.

Pure profit motive at work. ;)

On the desktop end, it seems like there are problems converting old code, and problems with really splitting up functions meant to be perceived by the user in real time.
There are challenges to be sure, but a lot of it centers around financial decisions, such as the amount of recycled code (where spaghetti comes from), and the lack of funds being put forth to better develop software (i.e. not enough funds to cover the full extent of man-hours needed to write from scratch or debug the spaghetti before it's released into the market).

You'd be amazed at how many screwed up products were the result of the business side as they don't understand the technical aspects involved, or just don't care.

I'm reading it, but if you know of any good books on Python, I wish to learn it thoroughly. Maya has a Python API, and it's integrated into a lot of other 3D packages as well.
Web resources would give you a better idea for Python books than I can (not read but a couple, and those were some time ago).
 
For consumer users, this will be far more important.

TSX is just marketing in this particular segment, and will remain so for some time. You'll see its implementation in newly written software before it's added to existing software suites.
We are 6 years into the multi-core adventure and it is not the first time I have heard claims of "cracking threaded applications". Sometimes it will be from the software guys and others from the hardware guys.

Beyond that you have architecture-specific instruction boosts, and the last one in memory is from the SSE4 era. Make sure to check all your boxes!
 
We are 6 years into the multi-core adventure and it is not the first time I have heard claims of "cracking threaded applications". Sometimes it will be from the software guys and others from the hardware guys.
Not quite sure what you mean with "cracking threaded applications". :confused:
 
If you look at software development, you'll find they got too used to just clock frequency increases, and ignored the changes that had to be made due to the limits of semiconductors and fabrication techniques (i.e. management vaguely listened to developer staff talk about technical changes, then took the White Papers and articles into the bathroom right after the meeting... followed by a distinctive FLUSH sound :rolleyes: :p).

That made me laugh:D although I like reading white papers.

It sucks for consumer and some professional users (i.e. workstation users that don't use much in the way of threaded software, whatever the causality), but comparing software vs. hardware development improvements, Intel has answered the call of what's been asked of them (particularly recently), while most commercial software developers have remained mired in the past for various reasons (mostly due to greed - gotta love MBA's :rolleyes: :mad:). :eek: :p

It's quite annoying on various levels. The Mac Pro is an area where CPU power doesn't scale as well with the price of the machine as I'd like compared to earlier generations. I still don't understand using such a cheap CPU in the entry model. This is one of those things where I don't mind paying, but I wish to pay for performance. Low clock speeds and high core counts put the 8-core in a weird spot as well, when you consider that even users who can benefit from high core counts don't typically benefit from them on everything. That's really quite common, and in larger settings users often have access to a server where heavier tasks can be run over a longer period rather than running everything directly on the workstation. The trade-off in machine build options offered can be quite irritating.
 
Writing programs to take advantage of multiple cores/threads.
Ah, OK.

Makes you wonder just how long it will be before suitable software is properly threaded (what can be and would actually benefit from it).

We just made it to 4 cores/threads on average in this past year outside of niche applications.
Niche is exactly right for the avg. computer user (little here and there, but overall, not much).

But in the enterprise market, the niche becomes closer to the norm (if you look at servers/clusters for a specialized task, this usually is the case), as they've had to either take matters into their own hands (write their own applications) or contract it out, rather than try to use commercially available software that's trying to do too much in order to work in a greater number of environments (which tends to result in software that's not as optimized for any particular user's usage pattern as it possibly could be).

Please understand, I'm not talking about server OS's, but specialized applications that are so specific there either isn't a commercially available product, or what's available isn't well suited to the specific usage (too much customization needed to even make it work, and then it's not that efficient). So this has led large organizations to "roll their own" tailored to their specific use.

This is one of the reasons why Linux is being adopted by enterprise users, as they can create these applications faster (stable OS, and plenty of open source code resources to reduce development time). Cheaper too, when it works out (keep in mind, this occurs when there are IT staff available full-time for internal support).

That made me laugh:D although I like reading white papers.
Funny, but not all that far from the truth in my experience.

It's quite annoying on various levels. The Mac Pro is an area where CPU power doesn't scale as well with the price of the machine as I'd like compared to earlier generations. I still don't understand using such a cheap CPU in the entry model. This is one of those things where I don't mind paying, but I wish to pay for performance. Low clock speeds and high core counts put the 8-core in a weird spot as well, when you consider that even users who can benefit from high core counts don't typically benefit from them on everything. That's really quite common, and in larger settings users often have access to a server where heavier tasks can be run over a longer period rather than running everything directly on the workstation. The trade-off in machine build options offered can be quite irritating.
In the case of the low-end CPU's (low frequency, lower core count), this is due to monetary factors (allows Apple to keep higher margins without significant price increases, as the cost of the CPU's is actually lower than in past Intel based MP's).

As per the performance issues, it's back to the current state of software regardless of the frequency or total core count really, as they all suffer from idle cores when the application suites aren't fully threaded (not all applications are threaded, but rather a sparse few only, if the user is really lucky).
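A rough Amdahl's law sketch of that idle-core problem (the 70% figure is just an assumption for illustration, not a measurement of any particular suite):

# Estimated speedup when only part of an application's work is threaded.
def speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (4, 6, 12):
    print(f"{cores} cores -> {speedup(0.70, cores):.2f}x")
# 4 cores -> 2.11x, 6 cores -> 2.40x, 12 cores -> 2.79x

So even a well-chosen frequency/core-count trade-off can't do much when the bulk of the suite stays serial.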

More pressure needs to be placed on software developers IMO, as it seems they don't have any motivation to change from their current course of how they do business (tiny increments at best, rather than sit down and do significant re-writes as are currently needed due to a lack of significant competition). I keep thinking of an old saying that involves a barrel and a person with their pants down as a suitable analogy. ;)

I just hope Intel doesn't fall into this as well, as they did in the past...
 
Ah, OK.

Makes you wonder just how long it will be before suitable software is properly threaded (what can be and would actually benefit from it).
Both OS X and Windows have jumped up in starting thread count from their base processes. Though this is not terribly impressive when you are at idle the majority of the time. Load is another story where hopping into Activity Monitor can get interesting but how many people do that?

Niche is exactly right for the avg. computer user (little here and there, but overall, not much).

But in the enterprise market, the niche becomes closer to the norm (if you look at servers/clusters for a specialized task, this usually is the case), as they've had to either take matters into their own hands (write their own applications) or contract it out, rather than try to use commercially available software that's trying to do too much in order to work in a greater number of environments (which tends to result in software that's not as optimized for any particular user's usage pattern as it possibly could be).

Please understand, I'm not talking about server OS's, but specialized applications that are so specific there either isn't a commercially available product, or what's available isn't well suited to the specific usage (too much customization needed to even make it work, and then it's not that efficient). So this has led large organizations to "roll their own" tailored to their specific use.

This is one of the reasons why Linux is being adopted by enterprise users, as they can create these applications faster (stable OS, and plenty of open source code resources to reduce development time). Cheaper too, when it works out (keep in mind, this occurs when there are IT staff available full-time for internal support).
I will stick with the general public/personal workstation for this conversation. I am sure there are many more exotic personally tailored applications and platforms beyond the scientific/mathematical computational ones that I have worked on deploying, which have scaled up for years now. I have not touched on video editors either.

In my sphere of experience it is going nowhere fast with software/hardware vendors claiming the "fix" for the masses.

Now we have fancy things like Turbo Boost, SSDs, PCI-Express 3.0 unlike those early days of the QX6700. I remember the day where one man could get 4 cores on a single socket for less than $1,000. Now it is around $100-200.
 
Both OS X and Windows have jumped up in starting thread count from their base processes. Though this is not terribly impressive when you are at idle the majority of the time. Load is another story where hopping into Activity Monitor can get interesting but how many people do that?
I'm ignoring OS operations, as an OS is rather useless without applications (granted, they do include a few things, such as the ability to access the internet and email, but those aren't capable of really pushing the system).

I will stick with the general public/personal workstation for this conversation.
It's easy to separate out the consumer based systems, but harder to do so with workstations when they're using Xeons as well.

Unfortunately, the server portion of the market is MUCH larger than the workstation portion. So when you consider that that particular group is not only the majority, but also the one holding the gold (Golden Rule), Intel will cater to that group.

The hardware itself isn't the problem for workstation users, or consumer users for that matter.

But the state of commercially developed software is. The server users have gotten around this problem by developing it themselves, but unlike the workstation and consumer users, they have both the financial resources and the financial incentives to pursue such actions.

The rest of us don't, and that's what we all notice (where you seem to be focused, and it's certainly not without merit).

It's just how things are separated that make things murky IMO.

In my sphere of experience it is going nowhere fast with software/hardware vendors claiming the "fix" for the masses.
Gotta love PR/marketing.... :rolleyes:

But it's not just perspective IMO, as the simple fact is that the hardware most of us have is capable of more than the software is designed to take advantage of. Thus from a technical POV, it's mostly on software generally speaking (there will of course be exceptions, but those are applications that either won't benefit from threading, or worse, would create a mess).

Now we have fancy things like Turbo Boost, SSDs, PCI-Express 3.0 unlike those early days of the QX6700. I remember the day where one man could get 4 cores on a single socket for less than $1,000. Now it is around $100-200.
These are the "bones" that the rest of us can better take advantage of... Particularly the increased value for money (cost performance ratio).
 
I'm ignoring OS operations, as an OS is rather useless without applications (granted, they do include a few things, such as the ability to access the internet and email, but those aren't capable of really pushing the system).
People still appear to have the race to the bottom mentality when it comes to startup processes. This is not the Windows XP days where getting under 30 processes is a great success.


It's easy to separate out the consumer based systems, but harder to do so with workstations when they're using Xeons as well.

Unfortunately, the server portion of the market is MUCH larger than the workstation portion. So when you consider that that particular group is not only the majority, but also the one holding the gold (Golden Rule), Intel will cater to that group.

The hardware itself isn't the problem for workstation users, or consumer users for that matter.

But the state of commercially developed software is. The server users have gotten around this problem by developing it themselves, but unlike the workstation and consumer users, they have both the financial resources and the financial incentives to pursue such actions.

The rest of us don't, and that's what we all notice (where you seem to be focused, and it's certainly not without merit).

It's just how things are separated that make things murky IMO.
Time is money and you have the money to save a lot of time on all of those backroom machines you're spending plenty of money on.

On campus we discouraged professors from purchasing their own massive workstations when they were going to spend their entire time in a terminal session running it on the server rack anyways. Though I can understand those that did not have the patience for a time share. More often than not, that was never truly going to be a problem.

I will agree that software still has to catch up but we are well into the multi-core era with still far too many promises.


These are the "bones" that the rest of us can better take advantage of... Particularly the increased value for money (cost performance ratio).
Certain technologies caught up to allow us to push the bottleneck in responsiveness elsewhere. Intel still plays the I/O starved card on anything that is not a workstation. I would not strongly consider a SSD until the Intel 6 Series era on a simple single socket desktop. A lot of new technology came late into the 5 Series life and choked the available bandwidth.

Now we get PCIe 3.0 and USB 3.0 built into Panther Point with no addition to the BOM. One wonders why Intel let other chipmakers run rampant with USB 3.0 controllers. Then again they are still going to only provide two of them via the PCH. Things still look good if you want nothing but USB 3.0 ports.
 
Given what is occurring on the Windows platform with the latest i7s, we can make more reasonable speculations about the relative levels of increase in performance we may see in the Sandy Bridge E5's. The Sandy Bridge systems with the Intel Core i7-3930K chip (its Xeon doppelgänger is the Intel Xeon E5-1650 - both run at 3.2 GHz and do [and will] retail in the $575 to $600 range each) appear to be the best price/performance systems for those who can make use of enough of a single or multiple multithreaded app(s) to justify going beyond 4 cores. The Intel Core i7-3960X's doppelgänger is the Intel Xeon E5-1660 - both run at 3.3 GHz and do [and will] retail in the $1050 to $1100 range each. I'll be using a Windows build to show why this is my belief. I know that Geekbench 2 isn't a performance test illustrative of most apps because they're not multi-threaded, and to the extent they are multi-threaded the mileage may vary. Further, I know that this system is overclocked to run that single CPU at 4.16 GHz.

Currently on the Geekbench top scores Page 49 is a Geekbench 2 score of 22,644 by robotics [there are other much higher scores from others using the 3930K's and 3960X's ( http://browse.geekbench.ca/geekbench2/chart/show/527648 )]. Since this system is running Windows 64-bit, if it were running OSX 64-bit, from my experience and comparisons with others, that Windows 64-bit Geekbench 2 score has to be multiplied by 1.17 to forecast what it would be running on OSX 64-bit. Thus, the OSX score would be around 26,493. In a standard Mac Pro 2012 (or 6,1), the forecasted performance for that same CPU would be, at a minimum (because the performance increase from overclocking isn't linear), about (3.2/4.16 = 0.76923076923077; 0.76923076923077 x 26,493 =) 20,379, which is near the range of the 22,574 average score for the 12-core Mid 2010 Mac Pro 2.66 GHz system, and 4,659 units or 30% higher than the 15,720 average score for the 6-core Mid 2010 Mac Pro 3.33 GHz system with the W3680 ( http://www.primatelabs.ca/geekbench/mac-benchmarks/#64bit ). Even reducing the guesstimate to a 25% increase in performance reveals a good performance boost for a chip within the same price range as the W3680.
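For anyone who wants to play with the numbers, here's the same back-of-the-envelope projection as a few lines of Python (the 1.17 OSX factor and the linear clock scaling are the assumptions stated above, not measurements):

win64_score = 22_644            # overclocked i7-3930K @ 4.16 GHz, Windows 64-bit
osx_factor  = 1.17              # assumed Windows-to-OSX Geekbench 2 ratio
stock_clock = 3.2               # GHz, E5-1650 / i7-3930K stock
oc_clock    = 4.16              # GHz, the overclocked result

osx_score = win64_score * osx_factor                  # ~26,493
stock_est = osx_score * (stock_clock / oc_clock)      # ~20,379 at stock clocks

w3680_avg = 15_720              # 6-core Mid 2010 Mac Pro 3.33 GHz average
print(round(stock_est), round(stock_est - w3680_avg),
      f"{(stock_est / w3680_avg - 1) * 100:.0f}% over the W3680")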
 
Time is money and you have the money to save a lot of time on all of those backroom machines you're spending plenty of money on.
I assume you're referring to racks full of servers. In such cases, this should be true so long as the planning was done properly (takes into account all factors, including networking capabilities and available software).

I will agree that software still has to catch up but we are well into the multi-core era with still far too many promises.
I truly understand your frustration, as I feel it as well, and expect many others do too.

Where I differ however, is that I know that the performance gains aren't purely hardware. Software has to play its part as well, and it's why Intel has Development Partners (not just hardware companies such as board makers and system vendors).

And when I look for the shortfalls in this relationship, I see software as the weakest link in the chain.

Intel still plays the I/O starved card on anything that is not a workstation. I would not strongly consider a SSD until the Intel 6 Series era on a simple single socket desktop. A lot of new technology came late into the 5 Series life and choked the available bandwidth.
Intel had no control over that, but more importantly, most of the consumer systems are used by people that don't leverage a lot of slots.

Most here in MR are very computer oriented, and in the case of the MP in particular, I/O performance is rather important.

But think about the laptop, home desktops (AIO's especially), and even basic-use business systems (i.e. Office suites, email, and browsers are as bad as it gets). Those users don't leverage a lot of PCIe slots, if any. Such users are also the driving force for the IGP based systems as a means of reducing costs and adding value for money. Ultimately, system costs should reduce in the entry-level consumer systems due to no longer requiring a discrete GPU solution.

Now we get PCIe 3.0 and USB 3.0 built into Panther Point with no addition to the BOM. One wonders why Intel let other chipmakers run rampant with USB 3.0 controllers. Then again they are still going to only provide two of them via the PCH. Things still look good if you want nothing but USB 3.0 ports.
Thunderbolt has played a significant role here, as they intentionally delayed it as a means of getting TB out and allowing it time to gain traction.

It also happens to allow them to stretch out incremental performance upgrades outside of the CPU itself. MBA thinking strikes again.

Chipsets are getting less complicated due to moving sections they previously contained on the CPU die as well. Unfortunately, they're also seeing bugs which does concern me (hoping this isn't the beginning of a downward trend).
 
I assume you're referring to racks full of servers. In such cases, this should be true so long as the planning was done properly (takes into account all factors, including networking capabilities and available software).
One would hope if you are going to invest in the back end you have the needs and applications to take advantage of it. Then again, that is hopefully true of any business purchase. :rolleyes:

I truly understand your frustration, as I feel it as well, and expect many others do too.
I am not really frustrated. I am just not impressed with all of the PR.

Intel had no control over that, but more importantly, most of the consumer systems are used by people that don't leverage a lot of slots.
Correct, SATA 6 Gbps and USB 3.0 did get sprung upon the bandwidth limited DMI at the time.

Most here in MR are very computer oriented, and in the case of the MP in particular, I/O performance is rather important.

But think about the laptop, home desktops (AIO's especially), and even basic-use business systems (i.e. Office suites, email, and browsers are as bad as it gets). Those users don't leverage a lot of PCIe slots, if any. Such users are also the driving force for the IGP based systems as a means of reducing costs and adding value for money. Ultimately, system costs should reduce in the entry-level consumer systems due to no longer requiring a discrete GPU solution.
Outside of the small enterprise market (50-300 users), I am deeply embedded in the enthusiast side. You get plenty of overlap between users' "workstation" desires and what hardware vendors offer.

Nehalem was the beginning of a clear market separation from Intel with the different P55 and X58 platforms. You can get nearly all the CPU power from the mainstream platform. Intel was smart in capping those LGA 1155 prices now at $300 and introducing the Core i7 3820 if you need the platform advantage of X79.

The BOM does not appear to have changed much from LGA 775 to the newer sockets for the mainstream. Though board prices are expected to creep up from material and labor costs.

If you really want a hex core you are still going to look for enthusiast/workstation due to memory bandwidth constraints. Dual channel is saturated at 4 cores.
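Rough numbers behind that bandwidth point (the DDR3-1600 speed and channel counts are my assumed figures for these platforms, not something stated here):

# Peak theoretical memory bandwidth: channels * transfers/s * 8 bytes per transfer.
def peak_bw_gbs(channels, mt_per_s, bytes_per_transfer=8):
    return channels * mt_per_s * bytes_per_transfer / 1e3   # GB/s

dual_lga1155 = peak_bw_gbs(2, 1600)   # ~25.6 GB/s shared by up to 4 cores
quad_lga2011 = peak_bw_gbs(4, 1600)   # ~51.2 GB/s available to the hex cores

print(f"per-core share, 4 cores on dual channel: {dual_lga1155 / 4:.1f} GB/s")
print(f"per-core share, 6 cores on quad channel: {quad_lga2011 / 6:.1f} GB/s")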

Quad core turns into entry level stuff in 2013/2014.

On a personal note, forget SSD caching. We are on RAM caching for X79.

Thunderbolt has played a significant role here, as they intentionally delayed it as a means of getting TB out and allowing it time to gain traction.

It also happens to allow them to stretch out incremental performance upgrades outside of the CPU itself. MBA thinking strikes again.

Chipsets are getting less complicated due to moving sections they previously contained on the CPU die as well. Unfortunately, they're also seeing bugs which does concern me (hoping this isn't the beginning of a downward trend).
Thunderbolt is going to get better traction on the mobile front, but the barrier to entry is ridiculous for everyone but the enthusiast/professional. Every Mac is going to get a Thunderbolt port and so are the Ultrabooks/Z77 premium boards, but $50 a cable and the near-vaporware status of drivers are not helping one bit.

Some users already clamor for more bandwidth but Intel is not going to push an update to the platform until 2014 in order to saturate the market with the current generation.

USB 3.0 gets near disposable cables but you do not get the fun of a literal external PCIe.
 
One would hope if you are going to invest in the back end you have the needs and applications to take advantage of it. Then again, that is hopefully true of any business purchase. :rolleyes:
What I was getting at is that even large enterprises tend to upgrade in stages, not everything simultaneously.

For example, they may replace servers, but connect them using the same networking system that was installed five years prior. So it's not uncommon for bandwidth to be strained, due to more users having been added within that period of time, and the volume of data per user is also likely to increase as a result of the faster systems.

I am not really frustrated. I am just not impressed with all of the PR.
Not wise to confuse PR with reality IMO as PR/Marketing personnel are likely to live somewhere other than reality. So I try not to take everything stated as fact, particularly as reality tends to be different. Saves a lot of disappointment that way. ;)

The BOM does not appear to have changed much from LGA 775 to the newer sockets for the mainstream. Though board prices are expected to creep up from material and labor costs.
The changes have been significant, but it's within the semiconductors themselves, not the discrete components (from filters to PWM controllers to voltage regulation).

Thunderbolt is going to get better traction on the mobile front, but the barrier to entry is ridiculous for everyone but the enthusiast/professional. Every Mac is going to get a Thunderbolt port and so are the Ultrabooks/Z77 premium boards, but $50 a cable and the near-vaporware status of drivers are not helping one bit.
It's still rather new and will take time for prices to fall as per usual with a new technology.
 
What I was getting at is that even large enterprises tend to upgrade in stages, not everything simultaneously.

For example, they may replace servers, but connect them using the same networking system that was installed five years prior. So it's not uncommon for bandwidth to be strained, due to more users having been added within that period of time, and the volume of data per user is also likely to increase as a result of the faster systems.
Unless you are building an entire infrastructure from the ground up, you are going to be passing the bottleneck onto some other portion of what you have through tiered advancements. You are going to improve the weakest link before moving onto the next one. The same could be said with components inside a single desktop that are upgraded at different points.

The changes have been significant, but it's within the semiconductors themselves, not the discrete components (from filters to PWM controllers to voltage regulation).
True, even Intel is going to take x86 to SoC status for notebooks soon enough.

It's still rather new and will take time for prices to fall as per usual with a new technology.
We are still hitting the chicken or the egg scenario. I have high praise for the technical merits of Thunderbolt but everything else just falls flat.

I might have some interest in getting a thin/light with Thunderbolt and just bringing my own external GPU. ATI has the HD 7750 coming soon and it is said to be bus powered at 55W.
 
Unless you are building an entire infrastructure from the ground up, you are going to be passing the bottleneck onto some other portion of what you have through tiered advancements. You are going to improve the weakest link before moving onto the next one. The same could be said with components inside a single desktop that are upgraded at different points.
We're not likely to get new architecture that's developed from the ground up, as no one will put that much investment in during a single shot, unless it's something that's never been done before.

So incremental changes are and will continue to be the standard approach to architectural changes (monetary reasoning that governs it).

We are still hitting the chicken or the egg scenario. I have high praise for the technical merits of Thunderbolt but everything else just falls flat.

I might have some interest in getting a thin/light with Thunderbolt and just bringing my own external GPU. ATI has the HD 7750 coming soon and it is said to be bus powered at 55W.
TB is still new, and needs to get more products available to users before it will really have any chance at real traction. Without it, it will fail.

As per TB's market, it's aimed at laptop users, not desktops, and of those, it's a niche group (i.e. pros that use portable systems for I/O intensive use). Such users would want to share peripherals with more powerful desktops they have sitting in the office, but this is also a niche (niche of a niche really).

At some point, it has a chance of becoming more mainstream, but there would have to be more TB products on the market (peripherals), and the costs would have to be lower.

If they ever get the optical portion of the spec on the market, and in particular, add networking capabilities, that would create a larger market as well (enterprise segment that desperately needs an inexpensive optical networking technology).

As you indicated, most consumer users will balk at the idea of spending $50 USD on just the cable, which will hinder its adoption rate.

As per an external GPU over TB, it would be possible, and could offer an improvement over an IGP or very weak discrete GPU chip in a laptop. But it will have performance limitations, and for desktops with PCIe slots, a non-starter. So for some users, it offers an advantage. But not for all, and even of those it could benefit, they may not "bite" if the prices are too high.
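To put a rough number on those performance limitations (the link speeds are my assumptions about first-generation Thunderbolt and PCIe 2.0, not figures from this thread):

tb_channel_gbps = 10                    # one Thunderbolt 1 channel, Gbit/s
tb_gbytes_per_s = tb_channel_gbps / 8   # ~1.25 GB/s usable for PCIe traffic

pcie2_lane_gbs = 0.5                    # PCIe 2.0, ~500 MB/s per lane
x16_slot_gbs   = 16 * pcie2_lane_gbs    # 8 GB/s in a desktop x16 slot

print(f"TB link ~{tb_gbytes_per_s:.2f} GB/s vs x16 slot ~{x16_slot_gbs:.1f} GB/s "
      f"({x16_slot_gbs / tb_gbytes_per_s:.0f}x difference)")

So an external GPU would be fed through something closer to a x4 electrical link, which is why it only makes sense where there's no slot at all.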

It could be disastrous should they (Intel and their partners) get it wrong.
 