I don't think they're implying that they will only offer dual-processor configurations, but Apple could offer a single 2687W option.

A single E5 can't have 8 memory channels.

The question is whether the 8 memory channels are a hard constraint, or whether that is an "up to" reference to an upper limit.

Apple is also highly unlikely to use an E5 2687W. It is 150W; a pair would be 300W. So it has major "keep whisper quiet under high load" problems in addition to eating up a large portion of the power supply budget. GPUs too are eating up a larger share. Having both go up is an even bigger "whisper quiet" problem.

The other issue is that it isn't particularly price/performance competitive. That 150W is largely there to crank up to 3.1GHz, at the "bargain" price of $1800+.

Well gee. A lowly E5 1620 can crank out 3.6GHz at the price of $294. Even the 1660 at $1080, with turbo up into the same range, looks downright affordable next to that $1800. (Yeah, there are more cores on the 2687W, but if it is serial code execution chasing high GHz, those are less of a factor.)

The high mark-up on the upper end of the E5 2600 series is worth it only if you are actually going to use the two QPI links. Otherwise you are paying a hefty surcharge for links that are rendered useless by only using one package.

If the work scales with cores so that an 8-core is more effective than a 6-core, then two E5 2640's or 2650's give you more cores for the same amount of money.
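The cores-vs-clock trade-off above can be put in rough numbers. This is a back-of-the-envelope sketch using the prices and clocks quoted in the thread (plus the ~$1885 list figure for the 2687W); treat them as approximate list prices, not benchmarks.

```python
# Rough price/performance check for the Sandy Bridge E/EP options discussed
# above. Figures are the thread's quoted launch prices and base clocks.
chips = {
    "E5-1620 (4C @ 3.6GHz)": (294, 4),
    "E5-1660 (6C @ 3.3GHz)": (1080, 6),
    "E5-2687W (8C @ 3.1GHz)": (1885, 8),
    "2x E5-2650 (16C @ 2.0GHz)": (2 * 1100, 16),
}

for name, (price, cores) in chips.items():
    # Dollars per core: a crude proxy for value on core-scalable workloads
    print(f"{name}: ${price / cores:.0f} per core")
```

On these numbers the 1620 is the cheapest per GHz of serial speed, and a pair of 2650's undercuts a single 2687W per core, which is the argument the post is making.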
 
None of the newly announced HP, Dell, or other E5-based workstation vendors have it. None of the not-so-newly announced i7 39xx-based workstation vendors have it. None of the standard motherboards from typical vendors (SuperMicro, MSI, Asus, etc.) for these SB-E/SB-EP processors have it.

Nobody. So it is quite imaginable.




The mDP output from the iGPU? Sure. A couple were announced months ago at CeBIT.





And the universally true fact among all of those I have seen is that those motherboards all host a Core i CPU package with an iGPU in it.

You folks completely ignore the most significant factor. If DisplayPort signals are naturally on the motherboard (either there is an embedded mobile GPU and/or an integrated GPU unit in the CPU package), those are naturally aligned with the Thunderbolt objectives. It is a 'no brainer' to add it.

TB on board greatly enhances the GPU being on board. It only moves Intel's (and AMD's) merged CPU/GPU agenda forward that much faster.

Decoupling TB from the onboard GPU doesn't. That, in addition to the "expectation mismatches" set up by TB sometimes carrying video and sometimes not, makes it completely against Intel's interest to push for "data only" PCs at this time.

When TB is more mature and widely adopted perhaps there will be a push for an alternative "data only" socket. We'll see, but that is not likely to come for at least a couple of years, if ever.

I think it is more likely that Xeon E5's will pick up an iGPU unit around the Haswell update. At that point the E5's will be just like the other Core i models that Apple uses. They'll have GPUs and can use practically the same methodologies to hook up the TB controller on the Mac Pro as on the other Macs.

It is a matter of getting the server folks to fess up to the fact that the GPU can actually get more significant computational workload done than the generic x86 cores. In two years or so that will be more clearly evident.

It's difficult to argue with you when you make so much sense. But I still think that the Mac Pro will have a Thunderbolt port, even if it does not do DisplayPort. One of the great advantages that I see is being able to work away from your main site using fast TB storage and then hooking it up to the Mac Pro when you're back at your office. Otherwise ThunderBolt is ThunderPants to me.
 
None of the newly announced HP, Dell, or other E5-based workstation vendors have it. None of the not-so-newly announced i7 39xx-based workstation vendors have it. None of the standard motherboards from typical vendors (SuperMicro, MSI, Asus, etc.) for these SB-E/SB-EP processors have it.

Nobody. So it is quite imaginable.


uhhh . . . http://www.newegg.com/Product/Product.aspx?Item=N82E16813131850 (In stock. )

Yes, I know it's i7, but it does look to be coming slooowwly into focus. I guess.

Wow, the specs on that board are amazing. Oh well, I'll be plenty happy with what we get next week.
 
One of the great advantages that I see is being able to work away from your main site, using fast TB storage and then hooking it up to the Mac Pro when you're back at your office. Otherwise ThunderBolt is ThunderPants to me.

Errrr, you can do that now. Put an eSATA or SAS card in your Mac Pro. Plug in your portable eSATA disk. Turn off the disk and the Mac Pro and pack the eSATA disk in your bag. At the remote site, plug the eSATA card into the MBP 17" ExpressCard slot. Plug in the eSATA disk. Rinse and repeat.

The Thunderbolt change is that you dump the ExpressCard in the trash can. You buy an eSATA Thunderbolt adapter and keep using the same setup.
Maybe you buy some newer, faster eSATA disks, but that isn't really part of the Thunderbolt upgrade process.

Even if you love having your throughput choked off by ExpressCard, get an ExpressCard/TB adapter and continue to poke along at speeds from 5 years ago.
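To see why ExpressCard "chokes" an eSATA link, compare the nominal line rates. This is a rough sketch: the /10 removes the 8b/10b encoding overhead (10 line bits per data byte) used by PCIe 1.x and SATA, real-world throughput is lower still, and Thunderbolt's framing differs, so treat all of these as ceilings.

```python
# Upper-bound throughput for the interfaces in the discussion above.
# ExpressCard on a 2011-era MBP 17" is a single PCIe 1.x lane.
def ceiling_mb_s(line_rate_gbps):
    """Nominal line rate minus 8b/10b overhead: an optimistic MB/s ceiling."""
    return line_rate_gbps * 1000 / 10

links = {
    "ExpressCard (PCIe 1.x x1)": ceiling_mb_s(2.5),
    "SATA 3Gb/s": ceiling_mb_s(3.0),
    "SATA 6Gb/s": ceiling_mb_s(6.0),
    "Thunderbolt channel (10Gb/s)": ceiling_mb_s(10.0),
}

for name, mb in links.items():
    print(f"{name}: ~{mb:.0f} MB/s")
```

The point being: an eSATA drive behind ExpressCard tops out well below what even a single SATA 6Gb/s SSD can deliver, which is the "speeds from 5 years ago" complaint.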

The only time you enter the "gotta have" circular loop is when the "SATA adapter/controller" is pushed inside the external drive's case, leaving only a TB socket behind. Yeah, at that point you need TB because it is the only way in to the data. But that wasn't the only route.

Similarly, if more vendors besides the Seagate GoFlex adopted USM ( http://www.sata-io.org/technology/usm.asp ), then you'd be able to attach/detach the TB controller hiding the SATA connectors as you move from machine to machine, without the raw drive's casing being exposed.
Again, a "problem" whose solutions pre-date TB.


Granted, Thunderbolt is probably a better solution for carrying a multi-drive RAID box from machine to machine. If the RAID card isn't being duplicated, that's a significant upside. But for individual drives from box to box (even single 6Gb/s SSDs)... there are very competitive solutions already out there.
 
Errrr, you can do that now. Put an eSATA or SAS card in your Mac Pro. Plug in your portable eSATA disk. Turn off the disk and the Mac Pro and pack the eSATA disk in your bag. At the remote site, plug the eSATA card into the MBP 17" ExpressCard slot. Plug in the eSATA disk. Rinse and repeat.

The Thunderbolt change is that you dump the ExpressCard in the trash can. You buy an eSATA Thunderbolt adapter and keep using the same setup.
Maybe you buy some newer, faster eSATA disks, but that isn't really part of the Thunderbolt upgrade process.

Even if you love having your throughput choked off by ExpressCard, get an ExpressCard/TB adapter and continue to poke along at speeds from 5 years ago.

The only time you enter the "gotta have" circular loop is when the "SATA adapter/controller" is pushed inside the external drive's case, leaving only a TB socket behind. Yeah, at that point you need TB because it is the only way in to the data. But that wasn't the only route.

Similarly, if more vendors besides the Seagate GoFlex adopted USM ( http://www.sata-io.org/technology/usm.asp ), then you'd be able to attach/detach the TB controller hiding the SATA connectors as you move from machine to machine, without the raw drive's casing being exposed.
Again, a "problem" whose solutions pre-date TB.


Granted, Thunderbolt is probably a better solution for carrying a multi-drive RAID box from machine to machine. If the RAID card isn't being duplicated, that's a significant upside. But for individual drives from box to box (even single 6Gb/s SSDs)... there are very competitive solutions already out there.
I realise I could do what I mentioned in various ways right now, but the Thunderbolt solution just appeals for some reason.
 

Uhhh, bubba. "... [Socket] 1155 Intel Z77" ..... that means iGPU!

Get real, folks. Just "i7" says nothing about whether something is mainstream or SB-E/IB-E. Some of the i7's are mainstream-based dies. Some of the i7's are Xeon EP-based dies (with some features turned off/on). The mainstream ones are all socket 1155 based. (Fewer PCI-e lanes is part of the missing pins: x16 for mainstream and x20 for the less crippled E3's.)

The same is true on the Xeon side. Some Xeons are mainstream dies with some features turned on/off (Xeon E3, with iGPU and ECC on) and some Xeons are EP-based: the E5 1600, 2600, 2400, and 4600 series. They use sockets 2011 and 1356.


To make it simpler for folks who have no idea what they are looking for, it is:

a Socket 2011 motherboard with either a C600 chipset (for E5's) or an X79 chipset (for an i7 39xx),

one of those with TB built in (or one that takes an approved TB PCI-e card... there would be a "TB" socket next to a PCI-e x4 slot that supplies the board's DisplayPort signal.... there probably won't be any of these, but they will show up on 1155 boards). If you find one, report back. .... We'll let the crickets chirp in the meantime.
 
Most of the tests I've seen show the i7 3930K basically offering the same performance as a single Xeon for about half the price. It seems the Xeon E5's value really kicks in with multiple processors. Also, a lot of their benefits seemed to be in the efficiency department for machines working as 24/7 servers.

The problem is that these tests don't capture the whole story. The Xeons have much higher maximum data bandwidth to memory / GPU compared to the i7s. If you need to move large volumes of data between the CPU and GPU then the Xeons may be much faster.

Also, with the dual-processor models, particularly the dual 8-core models, the i7's cost advantage disappears for multi-core optimised applications.
 
Uhhh, bubba. "... [Socket] 1155 Intel Z77" ..... that means iGPU!

Get real, folks. Just "i7" says nothing about whether something is mainstream or SB-E/IB-E. Some of the i7's are mainstream-based dies. Some of the i7's are Xeon EP-based dies (with some features turned off/on). The mainstream ones are all socket 1155 based. (Fewer PCI-e lanes is part of the missing pins: x16 for mainstream and x20 for the less crippled E3's.)

The same is true on the Xeon side. Some Xeons are mainstream dies with some features turned on/off (Xeon E3, with iGPU and ECC on) and some Xeons are EP-based: the E5 1600, 2600, 2400, and 4600 series. They use sockets 2011 and 1356.


To make it simpler for folks who have no idea what they are looking for, it is:

a Socket 2011 motherboard with either a C600 chipset (for E5's) or an X79 chipset (for an i7 39xx),

one of those with TB built in (or one that takes an approved TB PCI-e card... there would be a "TB" socket next to a PCI-e x4 slot that supplies the board's DisplayPort signal.... there probably won't be any of these, but they will show up on 1155 boards). If you find one, report back. .... We'll let the crickets chirp in the meantime.


my point was that they are ever so slowly making it out in the wild..... I don't need it personally, yet anyway, but maybe in the future I could see some external drives when the price of cables and enclosures gets reasonable.
 
This is not only possible, but easier to do, as there's no need to make provisions for display data.

But the question is whether or not Apple would take such an approach.

I don't think Intel will let you use the Thunderbolt trademark on a PC if you don't follow their rules. As explained in another response, there is extremely little motivation for them to approve PCs that have stripped off the video signal just because the vendor doesn't feel like supplying it.
 
I think Apple will do it (perhaps Mountain Lion only), because it is kind of ridiculous to plead "too poor to do the work" on a box that is 3-4 times the average PC unit cost.

Sadly, I wouldn't put it past them. Most Mac Pro users are likely using something faster than USB 3 for anything where high performance is a requirement anyway, so they could just regard it as a nice feature but not 100% necessary. It can be added via discrete cards, but I haven't seen any with perfect drivers.

Price/performance wise it wouldn't be. Those offerings tend to be deficient along that metric. They are largely a gimmick for those who either wistfully plan to some day upgrade (but never will) or have some short-term purchasing problem (can't buy the two that they actually really need right away).

Apple did this on the 3,1. There was a CTO single-package option. This was most likely for the purpose of using only one board design. By the way, do you deal with server or workstation deployments all day? I've never seen anyone else explain tech decisions in this kind of detail.
 
A single Xeon machine would certainly feature 4-channel RAM... 8 channels would require 2 CPUs installed and active, but (depending on how much Apple is willing to forego common parts) a single Xeon machine with the E5-1650 could certainly have 8 RAM slots - most desktops and low-end workstations run 2 slots per channel. I really hope we see a single-CPU Mac Pro, because it could have 6 cores (ideal for most photographic workloads), a relatively high clock speed and a reasonable price tag (a sub-$600 CPU could fit in a $2500 computer). It would be nice to have 2 RAM slots per channel, letting it get to 64 GB with commodity memory.
This would be an ideal machine for professional photographers - about 1.5 times as fast as the best iMac, twice the RAM capacity, can handle Nikon D800 raw files without breaking a sweat, easily capable of tackling HD video files as well (assuming you aren't editing a feature film or using a $50,000 RED Epic), yet still priced for the lone photographer, not the Hollywood studio. It would also swallow lots of cheap hard drives for a ridiculous storage capacity, hopefully with a bay for a SSD or two instead of the useless second optical.
Keep the huge dual processor monster in the line as well - the high end of the video market needs those! FOUR top iMacs in one box (the potential 16-core) with a RAM capacity of 128 GB has its applications, mostly among video folks working on 4k or very long projects, but also among scientists who need massive compute potential, but don't know enough about computers to manage anything terribly complex. Plenty of biochemists and the like would love all that power in a single box that behaves like any other Mac, freeing them to think about the biology of protein folding instead of compute cluster management (university IT staff are, by definition, overworked).
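The capacity figures above fall straight out of the channel arithmetic. A quick sketch, assuming Sandy Bridge-EP's 4 DDR3 channels per socket and the "commodity parts" figures from the post (2 slots per channel, 8 GB DIMMs):

```python
# Memory-capacity arithmetic behind the single- vs dual-socket argument.
channels_per_socket = 4   # DDR3 channels on a Sandy Bridge-EP Xeon
slots_per_channel = 2     # typical workstation board layout
dimm_gb = 8               # an affordable DIMM size in 2012

def max_ram_gb(sockets):
    """Max RAM with every slot populated by a commodity DIMM."""
    return sockets * channels_per_socket * slots_per_channel * dimm_gb

print(max_ram_gb(1))  # single-CPU box
print(max_ram_gb(2))  # dual-CPU box
```

That gives 64 GB for the single-socket photographer's machine and 128 GB for the dual-socket monster, matching the numbers in the post.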
 
I would love a single-processor 6-core machine with a higher clock speed, 8 memory slots, SATA 3, PCIe 3, and more drive space. If the price isn't highway robbery I would surely consider this option. I don't know if a slower-clocked dual CPU is as good for my work, and it would jack up the price by at least a grand. Anyhow, only days until we will hopefully know for sure. I really don't want to go PC, but for me it's still an option.
 
Sadly, I wouldn't put it past them. Most Mac Pro users are likely using something faster than USB 3 for anything where high performance is a requirement anyway,

I wouldn't bet on that. USB 3.0 thumb drives, external single HDDs, storage card readers, etc. all make much more sense with USB 3.0 than with any of the alternatives. It is faster, and more portable if you occasionally have to deal with slower machines. It is rather unfortunate if one of the "slower machines" is the Mac Pro for likely another two iterations (2012, 2013).
To show up in 2014 with USB 3.0 is kind of ridiculous. Hence, I can see them saying they were going to be just a little late.

so they could just regard it as a nice feature but not 100% necessary. It can be added via discrete cards,

It isn't the discrete cards that are the issue. It is coming out with a clean, efficient, and effective xHCI stack.

Microsoft has been working on a new, clean xHCI stack for several years.

http://blogs.msdn.com/b/b8/archive/2011/08/22/building-robust-usb-3-0-support.aspx

I picture it more as Apple starting later and putting fewer resources into the problem. So it will arrive later.

The 3rd party drivers are the issue. I don't really expect them to actually get all of the issues implemented correctly, since the money to pay for the drivers is coming from razor-thin card margins and not the broad-based OS fees collected from every user. For example, Linux was first out of the gate with USB 3.0 (xHCI) support, but its isochronous implementation has lagged. (I'm not sure they ever closed the gap, but to cut corners some vendors have used this code as a base. How GPL drivers mix with Windows and Mac OS X is a bit of a mystery, but that's OK.)


Apple did this on the 3,1. There was a CTO single-package option.

The 3,1 used the old Core microarchitecture. Back then the processor packages shared a "front side bus" to the memory. There was no cost difference between those that did and did not have active QPI (Intel's inter-processor package connector) links. There were no QPI links. :)

Of course they didn't differentiate.... there was no difference.

The shared front side memory bus is in the garbage can next to the Pentium 4 architecture at this point. Not particularly relevant. It is never coming back on Xeon-class processors. It doesn't really scale past 4 cores very well. The 3,1 Harpertown was pretty suboptimal from a parallel processing perspective, but it was the only thing Intel had at the time. They were still digging out of the Pentium 4 hole they had dug for themselves.


By the way, do you deal with server or workstation deployments all day?

Among other things, I deal with optimizations. So, wiring up and installing boxes? No. Why is the associated system stack performance or design bad? Yes.
 
my point was that they are ever so slowly making it out in the wild.....

With Intel the major driving force behind the technology, was there any reasonable doubt this was going to happen?

PCMCIA and ExpressCard over time became very pervasive in the general PC laptop space. Eventually ExpressCard had substantive distribution. However, it never got deep penetration into the "box with slots" portion of the market.

Thunderbolt is going to go deeper and wider than ExpressCard. However, I don't think it is going to spread to as many general PC market boxes as USB has.

Whether people like it or not, these mainstream motherboards compete in the Mac universe largely with Mac minis and iMacs. Those already have Thunderbolt. So it is really just the more general PC market catching up.

The Mac Pro isn't "behind" because these are getting TB sockets. If it took until Haswell-EP to deploy a much cleaner solution with an iGPU (or a x16-bigger PCI-e lane budget), the Mac Pro wouldn't lose much. It will take at least that amount of time for TB to ramp to much lower prices, and by that time TB v2.0 will be out.... which will probably push prices up a bit.


I don't need it personally, yet anyway, but maybe in the future I could see some external drives when the price of cables and enclosures gets reasonable.

Most external Thunderbolt drives are likely going to remain RAID boxes (multiple-drive containers). Those prices are not likely to crater over time.

The TB drive market is probably going to be no larger, proportionally, than the Mac market is to the overall general PC market now.
 
Most external Thunderbolt drives are likely going to remain RAID boxes (multiple-drive containers). Those prices are not likely to crater over time.

The TB drive market is probably going to be no larger, proportionally, than the Mac market is to the overall general PC market now.

I find that the USB 3 external drives I use now for my 2010 Mac Pro are fine for occasional backup or transfers, but I open the case when I need real speed. Maybe the native USB 3 solution on the new MPs will be faster. The PCI USB 3 cards aren't anywhere near the USB 3 spec, or even what one on a PC can do, for some reason, but still light years faster than USB 2 and inexpensive. The external RAID boxes would be nice to have if they weren't so expensive.
 
I wouldn't bet on that. USB 3.0 thumb drives, external single HDDs, storage card readers, etc. all make much more sense with USB 3.0 than with any of the alternatives. It is faster, and more portable if you occasionally have to deal with slower machines. It is rather unfortunate if one of the "slower machines" is the Mac Pro for likely another two iterations (2012, 2013).
To show up in 2014 with USB 3.0 is kind of ridiculous. Hence, I can see them saying they were going to be just a little late.

I'm aware that the chipset wouldn't be updated beyond firmware tweaks until 2014. I just kind of wonder if Apple is putting a lot of engineering resources into this line. Their last update made some significant board changes, but they seemed to go a little weak on hardware relative to prior configurations unless you spent significantly more.




The 3rd party drivers are the issue. I don't really expect them to actually get all of the issues implemented correctly, since the money to pay for the drivers is coming from razor-thin card margins and not the broad-based OS fees collected from every user. For example, Linux was first out of the gate with USB 3.0 (xHCI) support, but its isochronous implementation has lagged. (I'm not sure they ever closed the gap, but to cut corners some vendors have used this code as a base. How GPL drivers mix with Windows and Mac OS X is a bit of a mystery, but that's OK.)

If Apple implements native USB 3 support with Mountain Lion, wouldn't that mitigate development costs in producing a stable card?


The 3,1 used the old Core microarchitecture. Back then the processor packages shared a "front side bus" to the memory. There was no cost difference between those that did and did not have active QPI (Intel's inter-processor package connector) links. There were no QPI links. :)

Of course they didn't differentiate.... there was no difference.

The shared front side memory bus is in the garbage can next to the Pentium 4 architecture at this point. Not particularly relevant. It is never coming back on Xeon-class processors. It doesn't really scale past 4 cores very well. The 3,1 Harpertown was pretty suboptimal from a parallel processing perspective, but it was the only thing Intel had at the time. They were still digging out of the Pentium 4 hole they had dug for themselves.

I remember that not everything seemed to really benefit from the core count there, but I thought that was more of a programming limitation, given that the initial testing on Nehalem showed it to be somewhat of a flat upgrade going from octo-core Harpertown to octo-core Nehalem. Obviously the lower-end parts in the latter would have played a factor too. It made for a slightly flat upgrade at some price points, but I doubt Apple thought parts of the line would remain in stasis this long.



Among other things, I deal with optimizations. So, wiring up and installing boxes? No. Why is the associated system stack performance or design bad? Yes.

That's really cool.
 
I'm aware that the chipset wouldn't be updated beyond firmware tweaks until 2014. I just kind of wonder if Apple is putting a lot of engineering resources into this line.

Let's see what they come up with. There are some cool things they could have salvaged from the XServe and put into the Mac Pro. If it is the same rack-mount-hostile case and a slightly tweaked motherboard, and the major changes are restricted to the CPU/RAM daughtercard.... then yeah... it is the "minimal life support" R&D budget.

If there is a new daughtercard with 56 PCI-e lanes out to the motherboard (up from 36 ) then that's a different story.

Apple could split the single from the dual package model so that they had different cases (I don't think that works out economically long term); that would be a change.


If Apple implements native USB 3 support with Mountain Lion, wouldn't that mitigate development costs in producing a stable card?

Yes. It is also a better "find bug once / fix many" value proposition for the users as well. It is really a much better setup when the OS vendor takes on responsibility for the basic drivers that everyone just leverages.





I remember that not everything seemed to really benefit from the core count there, but I thought that was more of a programming limitation, given that the initial testing on Nehalem showed it to be somewhat of a flat upgrade going from octo-core Harpertown to octo-core Nehalem.

Not really. This is an old article that compares the new 2010 model, but it includes the 2009 and 2008 models as well.

http://www.barefeats.com/wst10.html

On Geekbench the 2.8GHz 4C Nehalem comes out about even with the 2.8GHz 8C Harpertown. On Cinebench it isn't quite even, but awfully close given the 4-core advantage handed to the older, memory-throttled architecture. That is indicative of how jacked up the front side bus was. I'm also perplexed by the steady stream of folks claiming the pre-Nehalem boxes like the 2,1 and 1,1 were some hallmark of computational prowess.

I suppose there may have been some benchmarks that were Rosetta-bound where the Harpertown might have come out on top, but being hooked to the front side bus was a large contributor to why AMD was spanking Intel around that 2006-2008 time period.

Prices went up a bit, but the performance users were getting went up much more.
 
So I'm assuming all the Mac Pros will use Xeons, including the single socket?

Most of the tests I've seen show the i7 3930K basically offering the same performance as a single Xeon for about half the price.

i7 3930K => $583
http://ark.intel.com/products/63697/Intel-Core-i7-3930K-Processor-(12M-Cache-up-to-3_80-GHz)

E5 2650 => $583
http://ark.intel.com/products/64601...5-1650-(10M-Cache-3_20-GHz-0_0-GTs-Intel-QPI)
[ that should be 12M Cache. ]


Not sure what tests these were. They are the exact same price.
I suppose if the i7 3930K was overclocked to match the E5 2660 ( at the $1080 price point) that would count as twice. Likewise if the 3930 was compared to the W3680 from the previous generation?

The inherent flaw there is that Apple isn't going to ship an overclocked i7 in a system.



It seems the Xeon E5's value really kicks in with multiple processors.

This is the often repeated, but not empirically backed up, statement that just has a life of its own on these forums.



I couldn't justify an extra $1000-$1500 for the Xeon configurations based on the performance increases I was seeing in their real-world tests as well as benchmarks.

Buying them because the system cost is cheaper ... probably. But that isn't driven by the CPU package cost differences. Nor CPU performance.
 
Let's see what they come up with. There are some cool things they could have salvaged from the XServe and put into the Mac Pro. If it is the same rack-mount-hostile case and a slightly tweaked motherboard, and the major changes are restricted to the CPU/RAM daughtercard.... then yeah... it is the "minimal life support" R&D budget.

That was my point. There isn't much R&D. They'd like to keep it as stable as possible so that the platform remains feasible without producing any unusual issues.


Yes. It is also a better "find bug once / fix many" value proposition for the users as well. It is really a much better setup when the OS vendor takes on responsibility for the basic drivers that everyone just leverages.

Microsoft is supposed to have native support in Windows 8. In either case it makes it much easier to release a stable PCI card.



Not really. This is an old article that compares the new 2010 model, but it includes the 2009 and 2008 models as well.

http://www.barefeats.com/wst10.html

On Geekbench the 2.8GHz 4C Nehalem comes out about even with the 2.8GHz 8C Harpertown. On Cinebench it isn't quite even, but awfully close given the 4-core advantage handed to the older, memory-throttled architecture. That is indicative of how jacked up the front side bus was. I'm also perplexed by the steady stream of folks claiming the pre-Nehalem boxes like the 2,1 and 1,1 were some hallmark of computational prowess.

I suppose there may have been some benchmarks that were Rosetta-bound where the Harpertown might have come out on top, but being hooked to the front side bus was a large contributor to why AMD was spanking Intel around that 2006-2008 time period.

Prices went up a bit, but the performance users were getting went up much more.

I must keep getting these mixed up somewhere. I know different benchmarks have suggested other things. Early performance benchmarks showed a relatively flat increase between the 2.8 and the 2.26. It may have just improved with later OS optimizations weighted toward a newer architecture. I'm waiting to see what the newest ones look like in terms of price and performance. While I wish I could just update my laptop to the newest and go with that, I question the wisdom of letting a macbook pro render things for hours at a time. Unless they're quite fast, a 10k render would still have to go overnight anyway. I usually go a bit large so that some of the noise can be handled via downsampling before cleaning up the rest by hand. It seems like it will be annoying figuring out what to buy this time.
 
i7 3930K => $583
http://ark.intel.com/products/63697/Intel-Core-i7-3930K-Processor-(12M-Cache-up-to-3_80-GHz)

E5 2650 => $583
http://ark.intel.com/products/64601...5-1650-(10M-Cache-3_20-GHz-0_0-GTs-Intel-QPI)

Buying them because the system cost is cheaper ... probably. But that isn't driven by the CPU package cost differences. Nor CPU performance.

I see the E5 2650 listed around $1099.99, and that's a 2GHz processor as opposed to the 3.2GHz 3930K.

http://www.newegg.com/Product/Product.aspx?Item=N82E16819117266

That link in your post was to an E5 1650. Did you mean to say 1650? Typo?


EDIT: Did a little more reading and assume you meant to say the 1650. So what do you think is the likelihood of this 1650 chip ending up in the single-CPU Mac Pro? Would they opt for this over a 26xx? Is it correct that the 16xx are more aimed at workstations vs the 26xx at servers? Trying to get a better handle on it. Also, is it true that ECC memory can introduce some latency in exchange for stability and error detection (aimed again at server applications)? If the mobo supports ECC memory, is it then required?
 
I see the E5 2650 listed around $1099.99, and that's a 2GHz processor as opposed to the 3.2GHz 3930K.

http://www.newegg.com/Product/Product.aspx?Item=N82E16819117266

That link in your post was to an E5 1650. Did you mean to say 1650? Typo?


EDIT: Did a little more reading and assume you meant to say the 1650. So what do you think is the likelihood of this 1650 chip ending up in the single-CPU Mac Pro? Would they opt for this over a 26xx? Is it correct that the 16xx are more aimed at workstations vs the 26xx at servers? Trying to get a better handle on it. Also, is it true that ECC memory can introduce some latency in exchange for stability and error detection (aimed again at server applications)? If the mobo supports ECC memory, is it then required?

He meant the 1650. It's a typo. The likelihood of the 16xx series being in a single-CPU Mac Pro is very high.
 
That was my point. There isn't much R&D. They'd like to keep it as stable as possible so that the platform remains feasible without producing any unusual issues.

Stability is likely part of it. But also, there isn't much R&D from the last 2 years that you can see. That doesn't mean they haven't been heavily investing.

For example, some folks lamented that Final Cut Pro wasn't getting high R&D (6-7 was slow and "not much" to them). Well, that was in part because Apple had decided to do a ground-up rewrite.

The design input to this Mac Pro had to include decisions that Apple was making then that it had not previously made: canceling the XServe (so racking is a larger issue), Thunderbolt, GPU cards on a seemingly irreversible trend to higher TDP plateaus (higher-end ones above 200W), wired (10GbE) and wireless (802.11ac) networks going to new higher plateaus, USB 3.0 adoption rates, etc. If the situation was properly assessed, several of the core assumptions that had gone into the basic Mac Pro design were at that point quite stale, if not outright wrong.

I suspect Apple saw these coming and has mutated the Mac Pro a bit more than the last couple of iterations.

Sometimes R&D investments are like an iceberg.... from far away there is more to the object than you can immediately see.





Microsoft is supposed to have native support in Windows 8. In either case it makes it much easier to release a stable PCI card.

This is going to have a deep impact on USB 3.0 adoption rates. Now that the dominant general PC market OS has full USB 3.0 support, the adoption rate is only going to dramatically increase. This will kick adoption into the "next gear". For the Mac Pro to be lacking it would be folly. I can see Apple doubting USB 3.0 had high adoption rates before Windows 8. But after? They would have to be drinking gallons of Cupertino kool-aid to believe it was going to have growth problems after that enablement fell into place.




While I wish I could just update my laptop to the newest and go with that, I question the wisdom of letting a macbook pro render things for hours at a time.

If they kill the optical and put in bigger/more fans, that would help.

----------

That link in your post was to an E5 1650. Did you mean to say 1650? Typo?

Yes, typo. And one reason not to link to a Newegg option is that they haven't been listed on Newegg for several months after the "announcement", for whatever reason.

ECC memory isn't the huge problem it is made out to be in these forums. The extra error check does cost a small amount of latency, but as long as overclocking isn't the primary objective it isn't a big issue.
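For what it's worth, the "error detection" part of the earlier question is just extra parity bits checked by the memory controller on every read. Here's a toy Python sketch of the single-error-correcting Hamming scheme that ECC builds on. (Real ECC DIMMs use a 72-bit/64-bit SECDED code implemented in the controller hardware; this 12-bit/8-bit version is purely illustrative.)

```python
# Toy Hamming(12,8) code: 8 data bits protected by 4 parity bits.
# Parity bits sit at the power-of-two positions 1, 2, 4, 8 (1-indexed);
# data bits fill the remaining positions.

DATA_POSITIONS = [3, 5, 6, 7, 9, 10, 11, 12]
PARITY_POSITIONS = [1, 2, 4, 8]

def hamming_encode(data):
    """Encode an 8-bit value into a 12-bit codeword (list of bits)."""
    bits = [0] * 13  # index 0 unused so positions are 1-indexed
    for i, pos in enumerate(DATA_POSITIONS):
        bits[pos] = (data >> i) & 1
    # Each parity bit covers every position whose index has that bit set,
    # so the XOR over each coverage group (parity bit included) is zero.
    for p in PARITY_POSITIONS:
        for pos in range(1, 13):
            if pos != p and (pos & p):
                bits[p] ^= bits[pos]
    return bits[1:]

def hamming_correct(codeword):
    """Return (data, syndrome); a nonzero syndrome is the flipped position."""
    b = [0] + list(codeword)
    syndrome = 0
    for p in PARITY_POSITIONS:
        parity = 0
        for pos in range(1, 13):
            if pos & p:
                parity ^= b[pos]
        if parity:
            syndrome += p
    if syndrome:
        b[syndrome] ^= 1  # fix the single flipped bit in place
    data = 0
    for i, pos in enumerate(DATA_POSITIONS):
        data |= b[pos] << i
    return data, syndrome

if __name__ == "__main__":
    cw = hamming_encode(0xA5)
    cw[6] ^= 1                      # simulate one flipped bit (position 7)
    fixed, pos = hamming_correct(cw)
    print(hex(fixed), pos)          # recovers the original word, flags position 7
```

The syndrome computation is where the (small) latency goes: the controller recomputes the parity groups on every read, and a nonzero syndrome pinpoints and repairs a single flipped bit transparently.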
 
The design input to this Mac Pro had to include decisions that Apple was making then that it had not previously made. Canceling the XServe ( so racking is a larger issue ), Thunderbolt, GPU cards on a seemingly irreversible trend to higher TDP plateaus ( higher end ones above 200W ), wired (10GbE) and wireless (802.11ac) networks going to new higher plateaus, USB 3.0 adoption rates, etc. If the situation was properly assessed, several of the core assumptions that had gone into the basic Mac Pro design were at that point quite stale; if not outright wrong.

I suspect Apple saw these coming and has mutated the Mac Pro a bit more than the last couple of iterations.

Sometimes R&D investments are like an iceberg.... from far away there is more to the object than you can immediately see.

I have no idea what they'll do with GPUs. Typically they've ignored the top end of it, but OpenCL functions keep popping up in more applications. I just looked up the FirePro versions, and the 7800 and above (roughly corresponding to a 5850 in terms of hardware) still use massive amounts of power. Workstation cards never have quite the same features under OS X anyway. I would've liked one to drive DisplayPort at 10 bpc rather than 8 to help better address the shadow values, which is really more of a problem with gamma 2.2 settings (few shadow values allocated within a LUT). The Quadro 4000 doesn't support it though, and it doesn't even seem to have full OpenCL compatibility. This is just an easy example. It worked in one test but made no difference in the other.



This is going to have a deep impact on USB 3.0 adoption rates. Now that the dominant general PC market OS has full USB 3.0 support, the adoption rate is only going to dramatically increase. This will kick adoption into the "next gear". For the Mac Pro to be lacking it would be folly. I can see Apple doubting USB 3.0 had high adoption rates before Windows 8. But after? They would have to be drinking gallons of Cupertino kool-aid to believe it was going to have growth problems after that enablement fell into place.

Sometimes Apple adopts things early when they feel like it. I figured, given that it still hasn't completely blown up, they might push it out once again, even though it would be annoying. If the driver stack showed up during such a revision, it would be much easier to find third party cards, assuming free PCI slots mid-cycle.


If they kill the optical and put in a bigger/more fan that would help.

Doesn't Intel have separate thermal specs for high duty cycles as opposed to sub-8-hours/day use? I also haven't seen much in the way of information on what kind of abuse the mobile chips are designed to withstand. Obviously they can handle shorter cycles with the CPUs maxed, but I don't know if they're designed to do that for hours. It would be nice if Apple installed bigger fans so that they don't run both hot and noisy under higher loads. The rumor was that they'd go thinner if the optical goes away, but I think the bloggers fabricate much of this for page hits.
 