And the D700 has slower single-GPU performance than the 7970, so no one here should be comparing these GPUs to the 980 at all, in any sense.

That's also very true.

Because the D700 is slower than a 7970 (yet has the same feature set), it's pretty safe to say the D700 is not 10x faster. Not even remotely close.
 
Most if not all 3D application viewports aren't giving you a view of the real final product; they use a basic rendering instead.

What? Of course you're not working on a final-quality render image in real time. However, whatever you have in your viewport is seriously taxing your system as the scene gets more and more complex. Any designer/modeler/animator is going to want to work with the best viewport performance possible.

Which means that a good video game card will result in good performance in the viewport.

I don't think anyone's saying they can't offer good performance, but workstation cards regularly perform better in this area.

The rendering part is another thing, and for many renderers it's done 100% on the CPU and not the GPU.

That is slowly shifting as GPU rendering gets more popular. But you're right, and it's all the more reason to have a GPU that offers the best viewport experience possible.
 
Hey look, OpenCL benchmarks with the 7970/D700 and the GeForce 980.

The 980 usually wins, but the 7970 does have an edge in a few benchmarks. Definitely not 2x, though, and far, far from 10x.

http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/20

Ok, I will try to explain this for non-3D people.
In every 3D editing application the polygons/textures displayed by the GPU in the viewport (via OpenGL) are not simply loaded from disk; they have to be processed by the CPU first, and unfortunately this task is mostly single-threaded. Therefore a machine with a higher single-core turbo will have an edge over slower-clocked machines (even if those excel at other multithreaded tasks).
For this reason you will see the very same GPU getting very different results just because it is fed by different CPUs. And since the old Mac Pro is very slow at single-threaded operations, you can see why a 980 will get 50 FPS on an old MP and 190 FPS on an overclocked modern i7.
Honestly I do not know exactly how a D700 compares against a 980 in an oMP, but recently I was surprised to discover that in a real-world scene the D700 performs as fast as a 980 in an overclocked PC, and almost 3x faster when activating antialiasing/shadows in the viewport (this is likely due to FirePro driver optimization). Now if the D700 can be up to 3x faster than a 980 in a fast PC, you can just imagine how much faster it will be against an old (five-year-old technology) Mac Pro.
I'm a 3D guy and mostly concerned with GPU viewport speed for my work; that's what really matters to me. The 980 should be faster at other tasks like OpenCL and obviously CUDA (don't know about gaming, I'm not into that stuff).
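To put rough numbers on that CPU bottleneck, here is a toy model. It is a back-of-the-envelope sketch only: the object count, per-object cost, GPU frame cost and the 3.8x single-core ratio are made-up illustrative values, not measurements of any real card or machine. It just shows how the same GPU can deliver roughly 50 FPS behind a slow CPU and roughly 190 FPS behind a fast one once single-threaded scene preparation dominates the frame time:

[CODE]
// Toy model of a CPU-bound viewport: one GPU, fed by two different CPUs.
// All numbers are made-up illustrative values, not real measurements.
#include <algorithm>
#include <cstdio>

int main() {
    const double objects        = 20000.0;  // objects/draw calls in the scene (assumed)
    const double cost_per_obj   = 1.0e-6;   // seconds of single-threaded CPU work per object (assumed)
    const double gpu_frame_time = 0.004;    // seconds the GPU itself needs per frame (assumed)

    // Relative single-core speed: 1.0 = old, slow workstation CPU, 3.8 = overclocked modern i7 (assumed).
    const double cpu_speeds[] = {1.0, 3.8};

    for (double speed : cpu_speeds) {
        double cpu_time   = objects * cost_per_obj / speed;      // single-threaded scene preparation
        double frame_time = std::max(cpu_time, gpu_frame_time);  // the GPU idles while it waits for the CPU
        std::printf("CPU speed %.1fx -> %3.0f FPS (CPU %.1f ms, GPU %.1f ms)\n",
                    speed, 1.0 / frame_time, cpu_time * 1000.0, gpu_frame_time * 1000.0);
    }
    return 0;
}
[/CODE]

With those assumed numbers it prints roughly 50 FPS and 190 FPS for the same GPU, which is the shape of the gap described above; the exact figures depend entirely on the scene and the application.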
 
Ok, I will try to explain this for non-3D people.
In every 3D editing application the polygons/textures displayed by the GPU in the viewport (via OpenGL) are not simply loaded from disk; they have to be processed by the CPU first, and unfortunately this task is mostly single-threaded. Therefore a machine with a higher single-core turbo will have an edge over slower-clocked machines (even if those excel at other multithreaded tasks).
For this reason you will see the very same GPU getting very different results just because it is fed by different CPUs. And since the old Mac Pro is very slow at single-threaded operations, you can see why a 980 will get 50 FPS on an old MP and 190 FPS on an overclocked modern i7.
Honestly I do not know exactly how a D700 compares against a 980 in an oMP, but recently I was surprised to discover that in a real-world scene the D700 performs as fast as a 980 in an overclocked PC, and almost 3x faster when activating antialiasing/shadows in the viewport (this is likely due to FirePro driver optimization). Now if the D700 can be up to 3x faster than a 980 in a fast PC, you can just imagine how much faster it will be against an old (five-year-old technology) Mac Pro.
I'm a 3D guy and mostly concerned with GPU viewport speed for my work; that's what really matters to me. The 980 should be faster at other tasks like OpenCL and obviously CUDA (don't know about gaming, I'm not into that stuff).

The specific reason I picked the OpenCL benchmarks is that none of what you're saying is relevant to them.

Most pure synthetic OpenGL benchmarks cache everything they can to VRAM, but OpenCL is more cleanly separated from the CPU. The data doesn't need to be "primed" as much or at all.

I don't know of many people priming OpenGL data with a single thread any more. You certainly don't have to, and if you're doing that, you're doing it wrong. But regardless, that's why these tests are all being done on the same box.
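For contrast, here is a minimal sketch of what a synthetic OpenCL benchmark typically looks like under the hood: the buffers are uploaded to the card once, then the timed loop only enqueues GPU work, so single-core CPU speed barely matters. This is an illustrative skeleton with a trivial placeholder kernel and no error handling; it is not the code of any benchmark in the linked review. On OS X the include is <OpenCL/opencl.h> rather than <CL/cl.h>.

[CODE]
// Sketch of a typical OpenCL benchmark loop: one-time upload, then pure GPU work.
#include <CL/cl.h>
#include <vector>
#include <cstdio>

static const char* kSrc =
    "__kernel void saxpy(__global float* y, __global const float* x, float a) {"
    "    size_t i = get_global_id(0);"
    "    y[i] = a * x[i] + y[i];"
    "}";

int main() {
    const size_t n = 1 << 24;                       // ~16M floats per buffer
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    cl_platform_id platform;   clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;       clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx     = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, nullptr);

    // One-time upload to device memory: after this, the CPU's part is basically done.
    cl_mem bx = clCreateBuffer(ctx, CL_MEM_READ_ONLY  | CL_MEM_COPY_HOST_PTR, n * sizeof(float), x.data(), nullptr);
    cl_mem by = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR, n * sizeof(float), y.data(), nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "saxpy", nullptr);

    float a = 0.5f;
    clSetKernelArg(k, 0, sizeof(by), &by);
    clSetKernelArg(k, 1, sizeof(bx), &bx);
    clSetKernelArg(k, 2, sizeof(a),  &a);

    // The timed loop: each iteration is GPU work only, the host merely enqueues.
    for (int i = 0; i < 100; ++i)
        clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clFinish(q);                                    // wait for the GPU to drain the queue

    clEnqueueReadBuffer(q, by, CL_TRUE, 0, n * sizeof(float), y.data(), 0, nullptr, nullptr);
    std::printf("y[0] = %f\n", y[0]);               // sanity check: 2 + 100 * 0.5 = 52
    return 0;
}
[/CODE]

Whatever CPU sits in the box, the kernel launches above land on the GPU almost unchanged, which is exactly why those OpenCL numbers isolate the cards themselves.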
 
What? Of course you're not working on a final-quality render image in real time. However, whatever you have in your viewport is seriously taxing your system as the scene gets more and more complex. Any designer/modeler/animator is going to want to work with the best viewport performance possible.

... Which you will get with any top-of-the-line GPU, gaming or not.


I don't think anyone's saying they can't offer good performance, but workstation cards regularly perform better in this area.

They did in the good old days, but since they now use the same processors as the "gaming" GPUs, and since they now do their ECC via software, they aren't really the super cards of yesteryear.

That is slowly shifting as GPU rendering gets more popular. But you're right, and it's all the more reason to have a GPU that offers the best viewport experience possible.

Which you will get from ANY top-of-the-line GPU, gaming or not.
 
Disagree. I freelance in studios all over the world and it's always been a split. As a guess: Linux 50%, OS X 30%, Windows 20% - and the Windows machines are normally just there for 3ds Max and some specialist applications.

Editors may have shifted from FCP to Premiere. But I know a lot of companies that have gone back to FCPX with the nMP once they see what it can do for the money, with very little setup or running hassle.

I don't know where you freelance, but I've worked with all of the big 10 in the EU, US and Israel, and I don't see anything like what you mention. I have my own company, by the way, so I have a LOT more clients than just visiting one studio a year (I've had Allianz, Erste Bank, Tuborg, Shell etc. in my client pool, just so you know what we are talking about here, right?). Linux is only used for render farms, and OS X is less than 3-5% for 3D work; in ad agencies it has decreased from 30-40% to less than 15%. The vast majority is Windows for work and Linux for render and network managers... Again: print, video and advertising. I don't know ANY major production house (or advertising agency) that switched to Windows and has returned to Apple. Care to provide one example? If I've worked with them I will ask and come back to you with facts.
 
LMAO he's still going. I give up. Go play with your magical D700 that is 10x faster than a GTX 980.

I never once said it was 10x faster. Get your facts straight there, bud. I said there are a lot of reasons that a pro card is different to a games card.

----------

- I don't really care what you do or don't do.
- The bus will be larger when cards with more RAM come out. Right now we are only seeing the 3 and 4 GB variants.
- Memory bandwidth will change depending on manufacturer spec.
- Larger amounts of RAM will be available in the future.
- The bit depth is due to an HDMI constraint. It will come with later cards.

So every single thing you have said is about THINGS IN THE FUTURE. What's that got to do with the discussion at hand and the hardware available now?

Given this is a discussion about pro kit, not a gaming PC, what I do is very relevant - not that it had anything to do with you anyway.
 
I never once said it was 10x faster.

I quoted you. You can't even edit your history after I did that. Just admit it was a silly thing to say. No biggie. A dual D700 setup performs great, but there is no need to exaggerate.

----------

Honestly I do not know exactly how a D700 compares against a 980 in an oMP, but recently I was surprised to discover that in a real-world scene the D700 performs as fast as a 980 in an overclocked PC, and almost 3x faster when activating antialiasing/shadows in the viewport (this is likely due to FirePro driver optimization).

I'm seeing hyperbole but no benchmarks or videos here. When you make a comparison it has to be single GPU vs single GPU and SLI vs SLI. Be precise and specific about the setups you witnessed, otherwise it's not worth a discussion.
 
I quoted you. You can't even edit your history after I did that. Just admit it was a silly thing to say. No biggie. A dual D700 setup performs great, but there is no need to exaggerate.

Nope. Can't admit anything, as you were quoting Siro76 out of context from his post 271.

I have no idea what he is talking about there.

I made no such claim. My entire argument has been that certification in apps is important in a work environment, and when I have used (and still do use) gaming cards, there are often glitches. Sometimes this is not important, but in building work it's vital. Pro cards are often better at handling large scenes, and games cards are normally faster at textures.
 
I never once said it was 10x faster. Get your facts straight there, bud. I said there are a lot of reasons that a pro card is different to a games card.

----------



So every single thing you have said is about THINGS IN THE FUTURE. What's that got to do with the discussion at hand and the hardware available now?

Given this is a discussion about pro kit, not a gaming PC, what I do is very relevant - not that it had anything to do with you anyway.

Lol. My point is that nothing you've mentioned is unique to a pro graphics card; it's all readily available in consumer cards. All you are paying for is driver certification and, in some cases, extended OpenGL libraries. You need to stop with the smug "omg I'm a pro" attitude. You need to let it go.
 
Lol. My point is that nothing you've mentioned is unique to a pro graphics card; it's all readily available in consumer cards. All you are paying for is driver certification and, in some cases, extended OpenGL libraries. You need to stop with the smug "omg I'm a pro" attitude. You need to let it go.

And this is what you are still not getting. Unless you have a certified card... you don't get support.

Actually, this is the most pointless pissing contest.

1. We have a D700 in the nMP, and the other slightly pointless ones.
2. Yes, there are faster cards out there now - gaming and pro.
3. 90% of people can use a gaming card and be perfectly happy, even in a pro environment.
4. Games cards are cheaper than pro cards by a factor.
5. If you have to have CUDA - get a PC and NVIDIA.
6. If you really dislike Windows - get a Mac.

Done.
 
Nope. Can't admit anything, as you were quoting Siro76 out of context from his post 271.

I have no idea what he is talking about there.

I made no such claim.

I must apologise for confusing you with Siro; your posts are overlapping and it is difficult to make out user names on my iPhone screen.

I know what the benefits of a certified workstation card are; as I stated, my first such card was in the nineties. But to be frank, the features they are said to be better at are very minor and don't reflect much difference in the real world. Certainly not the 3-10 times difference cited by Siro. The only time a 7970 beats a GTX 980 is in apps that NVIDIA doesn't want to optimise for. In most cases the 980 is very far ahead of the 7970. And the D700 has slower single-GPU performance than even the 7970, because it's a low-power custom version. It shouldn't be compared to any new card.
 
As Intel are starting to wind down production of Ivy Bridge Xeons, I'd say not long now. My sweepstake pick of an announcement in the last week of January is still on, though I am still waiting for more info to confirm a few more things before I'm absolutely certain.

Without a Xeon and its chipset you have no Mac Pro regardless of what GPU cards are in the 7,1.
 
Without a Xeon and its chipset you have no Mac Pro regardless of what GPU cards are in the 7,1.

Going with the CPU/chipset releases is definitely the safest guide, as it is the main factor - that is, if anyone can guess Apple's plans (e.g. I wouldn't be shocked if Apple chose to skip a Xeon iteration, or release a new revision of the nMP with a couple of months' delay).

A correction to the thread's title, though, could be that if there's a new release of the nMP, it should be the 6,2. An actual 7,1 release should be far, far away yet (if ever).

So, to try and answer the thread's title: I can't tell what the CPU of the 7,1 will be or when it is going to be released, but I predict huge quantities of glue inside :p
 
Guys, you look like 10-year-olds... :-(

Gav Mack, let's hope you are right. Indeed, Intel will want to let go of IB-E production in favor of Haswell-E.
That might convince Apple to go for it.
Thing is, what GPU will they use? The current Tonga/Hawaii? Wait for Caribbean Islands? More likely the first, I guess, if the update is to come sooner rather than later.
 
So, to try and answer the thread's title: I can't tell what the CPU of the 7,1 will be or when it is going to be released, but I predict huge quantities of glue inside :p

Well, I know this is partially a joke, but I wouldn't expect any significant case design changes. :p
 
As Intel are starting to wind down production of Ivy Bridge Xeons, I'd say not long now. My sweepstake pick of an announcement in the last week of January is still on, though I am still waiting for more info to confirm a few more things before I'm absolutely certain.

Without a Xeon and its chipset you have no Mac Pro regardless of what GPU cards are in the 7,1.

Well, I finally bought the base 4c model this week on sale ($2,599), so that can only mean one thing: the new-new Mac Pro will drop within a month. ;)

I'd like to say that's a joke, but it usually happens to me with every major purchase. Good news for everyone else, though!
 
Going with the CPU/chipset releases is definitely the safest guide, as it is the main factor - that is, if anyone can guess Apple's plans (e.g. I wouldn't be shocked if Apple chose to skip a Xeon iteration, or release a new revision of the nMP with a couple of months' delay).

A correction to the thread's title, though, could be that if there's a new release of the nMP, it should be the 6,2. An actual 7,1 release should be far, far away yet (if ever).

So, to try and answer the thread's title: I can't tell what the CPU of the 7,1 will be or when it is going to be released, but I predict huge quantities of glue inside :p


Going with the chipset is usually the safest bet - the cMP was pretty much the only exception to this rule, but that was likely because the two-piece logic board would have required an entire redesign, with no northbridge on the Sandy Bridge-E chipset and the memory controller on the CPU.

The 1,1 through 5,1 have all pretty much shadowed Intel chipsets in the same enclosure, and I don't expect the Haswell Xeon version to be any different.

----------

Well, I finally bought the base 4c model this week on sale ($2,599), so that can only mean one thing: the new-new Mac Pro will drop within a month. ;)



I'd like to say that's a joke, but it usually happens to me with every major purchase. Good news for everyone else, though!


We call it Sod's law over here in the UK! And you did get the base model, after all. If it fits your uses, don't worry - you could always drop in an 8-core 3.3 when they drop in price used!
 
The next Mac Pro will be a 7,1, not a 6,2. In recent history, the major number changes with a chipset change (and sometimes not even that), and the new Mac Pro will require a new chipset.
 
We call it Sod's law over here in the UK! And you did get the base model, after all. If it fits your uses, don't worry - you could always drop in an 8-core 3.3 when they drop in price used!

Here it's Murphy's law, I believe :D

Unfortunately the budget didn't allow for what I wanted originally (6c / 512 GB / D500) because of some last-minute expenses. My wife needed a new computer, so my cMP goes to her and the nMP to me. Merry Christmas, everyone.
 
AD, good deal there.
You guys are so lucky over there, prices are way lower.
Over here the base 4c goes for 3099,00€ - that's EUR :-(
Playing the waiting game can be pointless if the 7,1 is a no-show.
If your wife didn't mind you getting the gorgeous new MP while she keeps the old one, lucky you!!
Merry Xmas to you too.
 