I have a front row seat and a large box of popcorn. The only thing missing from this back & forth is ole' Tesselator! Now those were the good old days! :D
 

Attachment: Popcorn.png
that's cool.. and certainly the direction i hope to see coming to my own software (the realtime stuff).

it's gpu processing but it's not cuda (i don't think).. or opencl.
you don't need either of those to code gpgpu.. they just help the developer.. but if you're wiz enough to do it from scratch, more power to you.

----
didn't realize you meant watch a video.. (or there wasn't a video at the link).. this one?

YouTube: video

that looks like openGL stuff.. my modeling software (rhino) is currently able to look like that realtime (minus the animation)..
the actual movie looks different so there's another rendering process that happens i think.. getting the final ray traced look in real time.. that's the goal. (or- i hope that's the goal.. it's just a wish for me)

You need Cuda. You can't address GPUs directly from assembler or any other language; that hasn't been possible since the old VESA days. This is why alternative OS development has pretty much faded away now. All the APIs are closed source now. You have to use what NVidia or AMD gives you; CUDA or OpenCL are your choices if you want to program GPUs directly.

The only GPU left that I know of that is directly addressable at the hardware level via assembler, albeit in a roundabout way, is the Intel one, because it is part of the CPU itself.
 
You need Cuda. You can't address GPUs directly from assembler or any other language; that hasn't been possible since the old VESA days. This is why alternative OS development has pretty much faded away now. All the APIs are closed source now. You have to use what NVidia or AMD gives you; CUDA or OpenCL are your choices if you want to program GPUs directly.

The only GPU left that I know of that is directly addressable at the hardware level via assembler, albeit in a roundabout way, is the Intel one, because it is part of the CPU itself.

I didn't mean that Pixar's software is built entirely from scratch.
just that it looks (as in- the way it's rendered on the screen) like it's using OpenGL as the base and not cuda. or that the gpu is being accessed via OpenGL and not cuda or openCL.

but I'm just basing that off the software I use, in which the realtime rendering isn't raytracing.. it uses OpenGL 'tricks' instead.. that monsters inc workflow looks real similar to me. I don't know anything about how that software (presto) was actually written.
 
Here's a recent test illustrating a task we do every day, and it shows why we moved to PC:

System 1: 2x Xeon E5-2690v2, 256 GB RAM, Quadro K6000, Windows 8.1
System 2: nMP, 1x E5-2690v2, 64GB, D700

Task: Transcode ~10 minute 6K Dragon clip to ProRes 4444 XQ

System 1, using GPU: 8:15
System 2, using GPU: 21:11

Tell me again why having more cores doesn't matter?

Or should i wait a few days to send dailies? Maybe we can call them weeklies?
 
read what? this:



if you mean something specific, say something specific.. don't just drop a link that talks about a few different things and run.. it's poor internetting.. which is then amplified when you come back after link&run and start insulting people "what are you, stupid?" etc..

you said 'watch this'.. i assume you meant the video i posted.. is that right? is that what you wanted me to watch? if so, that's not cuda based software.


... .

it's software.. nothing inherently special about nvidia hardware which allows cuda to run.. other than nvidia locking it down to their gpus.
same thing as osx running on macs and not dells even though the hardware inside is more or less the same.

----------



bye

Hello Flat Five,

I'm still in the early learning stage of CUDA software development. It's just one of the items on my bucket list. OptiX has its basis in the CUDA software development kit. Here's a URL that makes the link between OptiX and CUDA a little clearer: http://www.nvidia.com/object/optix.html . The opener to that article is, in part:

"INTERACTIVE RAY TRACING ON NVIDIA QUADRO PROFESSIONAL GRAPHICS SOLUTIONS The NVIDIA® OptiX™ ray tracing engine elevates applications to a new level of interactive realism by greatly increasing ray tracing speeds on NVIDIA® GPUs using the NVIDIA® CUDA® GPU computing architecture. - See more at: http://www.nvidia.com/object/optix.html#sthash.zdBQrPP8.dpuf "

And the closer to that article is:

"To experience the OptiX engine in action, see a live demonstration by Pixar of their Lighting Tool at Siggraph 2013. - See more at: http://www.nvidia.com/object/optix.html#sthash.zdBQrPP8.dpuf ."

Here's more support that Optix is derived from the CUDA toolkit:

"OptiX SDK System Requirements:
... .
GPU: CUDA-capable, G80 or later. GT200 class or later GPU required for multi-GPU scaling and technical support
... .
NVIDIA Driver R300 or later, CUDA toolkit 2.3 or later
Development Environment: C/C++ Compiler, CUDA Toolkit 2.3 or newer, CMake (only for rebuilding SDK samples)"
[ https://developer.nvidia.com/optix ]. [Emphasis added.] Presto is just one of the many derivatives sprouting from CUDA.

When you have some free time, you might check out this URL also (and download the PDF) : [ http://www.nvidia.com/content/gpu-applications/PDF/GPU-apps-catalog-mar2015.pdf ]. It lists (and provides brief descriptions of) most of the growing number of applications now supported by CUDA.
 
Hello Flat Five,

I'm still in the early learning stage of CUDA software development. It's just one of the items on my bucket list. OptiX has its basis in the CUDA software development kit. Here's a URL that makes the link between OptiX and CUDA a little clearer: http://www.nvidia.com/object/optix.html . The opener to that article is, in part:

"INTERACTIVE RAY TRACING ON NVIDIA QUADRO PROFESSIONAL GRAPHICS SOLUTIONS The NVIDIA® OptiX™ ray tracing engine elevates applications to a new level of interactive realism by greatly increasing ray tracing speeds on NVIDIA® GPUs using the NVIDIA® CUDA® GPU computing architecture. - See more at: http://www.nvidia.com/object/optix.html#sthash.zdBQrPP8.dpuf "

hey Tutor..
thanks for the info. fwiw, i'm aware of the optix engine.. if you follow the conversation or the bit of mine you quoted, you'll see i was talking to aiden about his claim (something like) "openCL is for amateurs.. real pros use cuda.. watch this for proof!!!" after which he proceeds to post an example of openGL :rolleyes:
that's the argument i was making..
the conversation which should follow would be aiden saying he was mistaken and maybe not as aware of what cuda actually is or how it's being used.. then the conversation could progress.. instead, he resorts to name calling and accusing me of being illiterate etc..

i'm into talking about this stuff with you guys.. but i don't need to be skooled on what reeall pros use.. especially when the schooling involves inaccurate information.

And the closer to that article is:

"To experience the OptiX engine in action, see a live demonstration by Pixar of their Lighting Tool at Siggraph 2013. - See more at: http://www.nvidia.com/object/optix.html#sthash.zdBQrPP8.dpuf ."

neat.
watch the video in that link:
http://images.nvidia.com/content/quadro/videos/pixar-siggraph-r1-original.mp4

watch what's actually happening (at around 2 minutes and on) and you see the rendered image resolving in seconds as a view change is made.. (not quite real time, but incredibly close compared to a few years ago, when rendering that frame would have taken hours to resolve)

now watch this video:
https://vimeo.com/113329185

notice it's the same thing.. very near realtime raytracing.. it's being done with openCL.


really, my point is this.. i think nvidia is pulling a fast one on you guys or, in this forum's lingo, serving you all koolaid.. they have a vested interest in promoting cuda.. all of you guys' 'proof' of how incredible cuda is comes via nvidia's or its affiliates' own advertisements.. don't get me wrong, cuda and its successors are amazing.. but please try to realize it's not the only way.. it's simply software.. other people not affiliated with nvidia can write software too.. and they are doing it.. openCL just doesn't get the blast of advertisements since it's an open standard and runs on everything..

the preview of indigo4 is only a preview.. the software isn't ready for the public yet, but you should still be able to see that and kick the cuda blasting down a notch.. or at least go "oh, ok.. there are other options too"

Here's more support that Optix is derived from the CUDA toolkit:

"OptiX SDK System Requirements:
... .
GPU: CUDA-capable, G80 or later. GT200 class or later GPU required for multi-GPU scaling and technical support
... .
NVIDIA Driver R300 or later, CUDA toolkit 2.3 or later
Development Environment: C/C++ Compiler, CUDA Toolkit 2.3 or newer, CMake (only for rebuilding SDK samples)"
[ https://developer.nvidia.com/optix ]. [Emphasis added.] Presto is just one of the many derivatives sprouting from CUDA.

decent info until the end.. read the link aiden posted..
""Along the way, it adopted SGI’s systems, and, then moved to PCs equipped with NVIDIA’s graphics cards.""

SGI is openGL
(see history at wiki)
http://en.wikipedia.org/wiki/OpenGL

again, nvidia is being a bit deceptive there.. that whole article makes it seem as if nvidia cards are necessary for what pixar is doing when in reality, pixar is using open standard software and they just happen to choose nvidia as their hardware source.. (i'm sure there's more of a partnership as opposed to 'they just chose', but still...)



When you have some free time, you might check out this URL also (and download the PDF) : [ http://www.nvidia.com/content/gpu-applications/PDF/GPU-apps-catalog-mar2015.pdf ]. It lists (and provides brief descriptions of) most of the growing number of applications now supported by CUDA.

again, they're somehow confusing people about reality.. that's a list (and very far from conclusive) of gpu accelerated programs.. most of that acceleration is coming from openGL..
it's not a list of CUDA apps nor is it a list which requires a user to buy nvidia cards in order to utilize the accelerated features of those apps.
 
Thank you for the additional info on GPUs, especially the Indigo 4 preview.

of note in the context of this forum (it being a mac forum).. the indigo developers have said:

Indigo's pure GPU rendering is based on OpenCL, a vendor-neutral standard for GPU computing that notably works on AMD GPUs (which don't support CUDA), and also both Intel and AMD CPUs besides NVIDIA GPUs. This means your CPU can contribute to GPU renders, and some newer CPUs even have built-in GPUs, which are also fully exploited; for example Intel's i7 "Haswell" processors have an excellent built-in GPU that is competitive with many mid-range stand-alone GPUs.

[...]

We've also tested multi-GPU rendering on Apple's new Mac Pro running Mac OS X Yosemite, which features dual AMD D700 GPUs:
<image>
Apple have been a great help in getting Indigo's GPU rendering working on Mac, and we're looking forward to posting more GPU rendering results on the Mac Pro soon.

Keep an eye out for development updates as Indigo's GPU rendering mode matures with more features and faster rendering!

i suppose that could be spun into various negatives by the forum but the way i read it is that apple themselves are excited about their design and are definitely assisting developers to tune their products towards the nmp, a la final cut.. i imagine this indigo snippet is playing out with many more developers as well.. it's just behind the scenes stuff for now.
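(fwiw, the vendor-neutral part isn't magic.. it's just the standard openCL platform/device API listing whatever compute devices the system exposes.. amd gpus, nvidia gpus, intel/amd cpus, integrated gpus, whatever.. rough sketch in C below, error checks left out:)

[code]
/* tiny OpenCL device census: prints every compute device the runtime
 * exposes (discrete GPUs, CPUs, integrated GPUs). illustrative sketch only,
 * error checks omitted.  build on OS X: clang devices.c -framework OpenCL */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devs[16];
        cl_uint ndev = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devs, &ndev);

        for (cl_uint d = 0; d < ndev; d++) {
            char name[256];
            cl_device_type type;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
            printf("%s [%s]\n", name,
                   (type & CL_DEVICE_TYPE_GPU) ? "gpu" : "cpu/other");
        }
    }
    return 0;
}
[/code]

on a nmp that list should show both D700s plus the xeon, and an openCL renderer can throw work at all of them.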

-----

[edit].. also, just a quick note that i'm not just searching around the interwebz trying to find something positive to say about openCL.. indigo used to be free software but in 2009, it went commercial.. the first 100 people to buy it got it at a discounted cost as well as free upgrades for life.. i'm one of those 100 :)

ilife.png


so i'm using indigo as an example simply because it's what i know.. i imagine the other renderers are at least messing around with openCL, but if not, i guess i'll just consider myself lucky that the software i've been using since well before all this next gen acceleration stuff happens to be made by a group of people choosing to be pioneers in the field.

another pure gpu renderer i'm familiar with is thea.. they were strictly using cuda until recently:
thea chief dev said:
OpenCL has a strange beauty that made us keep visiting this topic again and again. And finally, after making a big dive, we managed to port Presto on OpenCL.

At this point though, we cannot give a date as to when this port will be publicly available. But we want to inform our users that we are aware of the need and we are always pushing the hardware to the limits.
*note- not to be confused with pixar's presto software.
https://www.thearender.com/cms/index.php/features/engines/presto-gpu.html
 
Funny, but Glare claims multi-platform support, which you seem to be spinning someplace else: (http://www.indigorenderer.com/features/)

lol. what's your agenda anyway?

that's their gpu accelerated implementation which has been in the software for a couple of years.. currently, the gpu basically assists the cpu and takes some of its load off.. but once the cpu peaks, that's it.. you're at the limit.. (like- you can't really put a much better gpu (or multiple) in there and get better performance.. even low-to-mid grade cards won't max out).. it's faster than cpu only but not leaps&bounds faster..

the new implementation (in which they're dropping cuda altogether) is pure gpu rendering.. (though with openCL, the cpus will actually assist the gpus.. maybe not to a super noticeable degree, but the cpu will no longer be the major bottleneck)
 
also of note with the current accelerated implementation is that it's quite limiting.. (this is the same or similar with all the other renderers too.. it's a hurdle that generally hasn't been overcome yet)

normally, you have a variety of tracing options which are beneficial depending on the scene/lighting/materials:

indigocurrent1.png


choose what you want to accelerate with:

indigocurrent2.png



turn on the acceleration and all the tracing options go away and you're stuck with normal path tracing:

indigocurrent3.png


so currently, you're getting 2-3x speed ups only if your model/materials/lighting are conducive to standard path tracing..

in this scene: (see.. i really do design stuff and really do render stuff :) (and really do go on to build the stuff irl))

WAIL_1250.jpg


using path tracing with cuda would resolve at around the same time as using bidirectional mlt without cuda.. it's a wash.





[EDIT]
[EDIT2]..removed taunt from edit1 so weaselboy doesn't have to do it for me.
 
of note in the context of this forum (it being a mac forum).. the indigo developers have said:



i suppose that could be spun into various negatives by the forum but the way i read it is that apple themselves are excited about their design and are definitely assisting developers to tune their products towards the nmp, a la final cut.. i imagine this indigo snippet is playing out with many more developers as well.. it's just behind the scenes stuff for now.

-----

[edit].. also, just a quick note that i'm not just searching around the interwebz trying to find something positive to say about openCL.. indigo used to be free software but in 2009, it went commercial.. the first 100 people to buy it got it at a discounted cost as well as free upgrades for life.. i'm one of those 100 :)

Image

so i'm using indigo as an example simply because it's what i know.. i imagine the other renderers are at least messing around with openCL, but if not, i guess i'll just consider myself lucky that the software i've been using since well before all this next gen acceleration stuff happens to be made by a group of people choosing to be pioneers in the field.

another pure gpu renderer i'm familiar with is thea.. they were strictly using cuda until recently:

*note- not to be confused with pixar's presto software.
https://www.thearender.com/cms/index.php/features/engines/presto-gpu.html

I have 116 GPUs and lots of CPUs (see my sig.) that I use for rendering. I use GPU as well as CPU renderers. I own twice as many seats of TheaRender as I do of any other GPU/CPU renderer, so I'm very familiar with their Presto. Octane is the next most numerous GPU renderer in my studio. The GPU renderers that I currently have the fewest seats of are Redshift3d and FurryBall. BTW, the next version of OctaneRender (v.3) is said to support CUDA and OpenCL on AMDs and on CPUs. It's supposed to drop in the 3rd quarter of this year. I like Octane because it currently supports more apps via plugins, most recently adding a Nuke plugin, and will soon additionally support AfterEffects, Photoshop, Houdini, MotionBuilder, and a little later Zbrush and Unreal Engine 4. That's in addition to its current plugin support for 3ds Max, AutoCAD, Inventor, Maya, Blender, Carrara, CINEMA 4D, DAZ Studio, ArchiCAD, LightWave, MODO, Revit, Rhinoceros, SketchUp, Poser, and Softimage (now frozen in time). Check out the upcoming interactive Brigade real time path tracer's demo with the red automobile here: http://home.otoy.com/render/brigade/showcase/.
 
I have 116 GPUs and lots of CPUs (see my sig.) that I use for rendering. I use GPU as well as CPU renderers. I own twice as many seats of TheaRender as I do of any other GPU/CPU renderer, so I'm very familiar with their Presto. Octane is the next most numerous GPU renderer in my studio.
nice!.. we probably know some of the same interneters if you frequent thea or octane forums.. i don't use either but a few people using my modeling software do.. fwiw, some of the sweetest stuff i see comes out of Thea.
(though i shouldn't imply thea is responsible for it.. these guys are just super talented and could make awesome renderings with any software.. just that they choose to use thea)

Check out the upcoming interactive Brigade real time path tracer's demo with the red automobile here: http://home.otoy.com/render/brigade/showcase/.
what the?! is that real? otoy are the octane people, right? [edit] right.. brigade looks incredible though.. a realtime path tracing game engine? i wouldn't have guessed we were that close to that type of tech[/edit]
 
... . what the?! is that real? otoy are the octane people, right? [edit] right.. brigade looks incredible though.. a realtime path tracing game engine? i wouldn't have guessed we were that close to that type of tech[/edit]

Heck. It's real. Octane is from Otoy. We're just months away from the release of Real real-time rendering via Brigade - late summer or fall 2015.
 
Heck. It's real. Octane is from Otoy. We're just months away from the release of Real real-time rendering via Brigade - late summer or fall 2015.

thanks for the heads-up.
i'll keep an eye out for it.. looks real interesting.

(especially because otoy seems quick to make plugins for the modeling apps.. rhino for mac doesn't have an SDK yet but the official release should be soon.. within months it seems.. pretty much the first renderer to make the plugin directly to rhino for mac once the SDK is released is potentially going to be able to scoop up a lot of new users)
 
I didn't mean that Pixar's software is built entirely from scratch.
just that it looks (as in- the way it's rendered on the screen) like it's using OpenGL as the base and not cuda. or that the gpu is being accessed via OpenGL and not cuda or openCL.

but I'm just basing that off the software I use, in which the realtime rendering isn't raytracing.. it uses OpenGL 'tricks' instead.. that monsters inc workflow looks real similar to me. I don't know anything about how that software (presto) was actually written.

OpenGL and Cuda aren't the same thing, and they do totally different jobs. You should read a bit about it.
 
OpenGL and Cuda aren't the same thing, and they do totally different jobs. You should read a bit about it.

what's the topic?
you sorta jumped in the middle of a conversation and now i'm being told i need to read something yet again.

as far as i gather, the topic was pixar's animation software- presto.
are you saying that software needs either cuda or openCL? and that it doesn't incorporate openGL?

or are you talking about their rendering/lighting program?

or.. what are you talking about exactly? why do i need to read? and what do i need to read?

---
you're wrong.. you should read :rolleyes:
what is that?

----------

oh wait.. nvrmnd.. i remember you now..
one of those people that are always putting people on their ignore list but somehow keep responding to them :rolleyes:
i know your type.. keep_on_keepin_on.. bro
 
what's the topic?

oh wait.. nvrmnd.. i remember you now..
one of those people that are always putting people on their ignore list but somehow keep responding to them :rolleyes:
i know your type.. keep_on_keepin_on.. bro

LOL. Good to know I'm still making an impact on this thread. Can't remember why I was ignored. My business approach must have been too much for some to handle. And of course I was right...still no nMP. :D

Either end of summer or not until early next year. ;)
 
what's the topic?
you sorta jumped in the middle of a conversation and now i'm being told i need to read something yet again.

as far as i gather, the topic was pixar's animation software- presto.
are you saying that software needs either cuda or openCL? and that it doesn't incorporate openGL?

or are you talking about their rendering/lighting program?

or.. what are you talking about exactly? why do i need to read? and what do i need to read?

---
you're wrong.. you should read :rolleyes:
what is that?

----------

oh wait.. nvrmnd.. i remember you now..
one of those people that are always putting people on their ignore list but somehow keep responding to them :rolleyes:
i know your type.. keep_on_keepin_on.. bro

You're just trolling now.

OpenGL is for graphics rendering. OpenCL and Cuda are for GPGPU programming. You can write whole programs that are meant to run on your graphics card but don't produce anything on your display. Bitcoin mining is a prime example of this. Scientific simulations are another example. They use the GPU for number crunching, not for display.
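If you want a concrete picture, here's roughly what a pure-compute OpenCL program looks like. A bare-bones sketch (all error handling stripped out) that squares an array on the GPU and never touches the display:

[CODE]
/* minimal OpenCL compute sketch: squares an array on the GPU.
 * nothing is ever drawn on screen -- error checks omitted for brevity.
 * build on OS X:  clang square.c -framework OpenCL
 * build on Linux: gcc square.c -lOpenCL                                 */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

static const char *src =
    "__kernel void square(__global float *buf) {"
    "    size_t i = get_global_id(0);"
    "    buf[i] = buf[i] * buf[i];"
    "}";

int main(void)
{
    float data[1024];
    for (int i = 0; i < 1024; i++) data[i] = (float)i;

    cl_platform_id platform;
    cl_device_id   device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "square", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);

    size_t global = 1024;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("3 squared on the GPU = %g\n", data[3]); /* number crunching only */

    clReleaseMemObject(buf);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}
[/CODE]

Swap CL_DEVICE_TYPE_GPU for CL_DEVICE_TYPE_CPU and, if the platform exposes a CPU device, the exact same kernel runs on the processor instead. A CUDA version looks much the same, just tied to NVIDIA hardware.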

OpenGL on the other hand is for programming shaders and effects.

Again, go read.
 
So I have been following this thread for a while. I wanted to get everyone's opinion on whether or not we should expect an update at WWDC. I was bored last night, so I did a deep dive into available and soon-to-be-available tech that is suitable for the Mac Pro. I think there is enough information out there that we can make a few reasonable guesses.

First, let's make the assumption that the form factor does not change. Apple put a lot of investment and engineering into the current form factor; love it or hate it, it's going to be around for a while.

We will start with CPUs. Since the nMP came out, Haswell-E processors and a new chipset (X99) have been released. These are upgrades from the Ivy Bridge-E and X79 used currently. Neither of these is earth shattering, with maybe a 5% increase in performance. One benefit is the potential for added cores, I think up to 18 (I may be wrong on this). The modest per-core performance benefit is likely why Apple didn't upgrade to it last year. Apple could wait for Broadwell-E, but this won't be launched until Q1 2016. If indications from the consumer Broadwell chips that have been released can be extrapolated, the increased performance won't be anything too exciting. Additionally, Broadwell-E doesn't include any chipset enhancements, something that may be important for thunderbolt (more on this later).

Second, and more interestingly, are the GPUs. Let's make the assumption that Apple sticks with AMD, since AMD seems to be much more willing to make custom chips for a given form factor (see consoles, iMac, current Mac Pro) than nvidia, not to mention these chips perform a bit better for compute workloads. Currently, the GPUs in the Mac Pro are a generation old, and will soon be 2 generations old; with the release of AMD's 300 series GPUs in the next 2 months, it's reasonable we could see this in a Mac Pro. While Apple doesn't always go with the latest and greatest tech in their products, they were the first ones to use AMD's Tonga architecture with the m295X in the iMac last year. Another thing to keep in mind is that despite the current GPUs being "workstation" parts, they are more similar to underclocked desktop chips with a custom circuit board. Thus, there is no need for Apple to wait on the workstation-class parts that are released a few months after the consumer parts come out. I could see Apple being one of the first ones to use AMD's new GPUs, especially if AMD has improved the performance-to-power ratio significantly. Another factor that may work in favor of this is added SDKs to make these chips work together. There is a lot of work on this with AMD's Mantle and DirectX 12 in Windows, and while these technologies obviously can't directly translate to OS X, something like them could be announced for OS X (something perfect to announce at a developers conference).

Last, what pulls everything together is a potential retina display. Let's assume Apple really wants this for their "pro" machines, but doesn't want the dual-cable hack that current 5K monitors use and isn't moving away from thunderbolt. DisplayPort 1.3 was finalized last fall and can carry a 5K signal, and the earliest we may see it would be this summer, potentially in new GPUs from AMD. However, Thunderbolt 2 does not have enough bandwidth to carry DisplayPort 1.3, so to see a retina thunderbolt display we would need to see Thunderbolt 3. There is a rumored thunderbolt chip (Alpine Ridge) from Intel that is supposed to release with the new Skylake CPUs (as Skylake has more bandwidth coming from the CPU). However, Skylake is only being released for consumers (laptops/desktops) this year, and we won't see a release for workstation/server chips (like those used in the Mac Pro) until 2017. If Apple uses the new Thunderbolt 3 controller with Haswell-E, there would be a bandwidth bottleneck shared between 2 GPUs and 3 Thunderbolt 3 controllers. Since I am only an armchair enthusiast, can anyone correct me if it is in fact possible to use Thunderbolt 3 controllers with existing CPUs?
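For anyone who wants to check my math on the 5K bandwidth claim, here are the rough numbers I used (back-of-the-envelope; real links add blanking and protocol overhead on top of this):

[CODE]
/* rough 5K display bandwidth math -- a sanity-check sketch, not spec text */
#include <stdio.h>

int main(void)
{
    double pixels = 5120.0 * 2880.0;           /* 5K panel                  */
    double hz     = 60.0;                      /* refresh rate              */
    double bpp    = 24.0;                      /* bits per pixel (8 bpc)    */
    double need   = pixels * hz * bpp / 1e9;   /* ~21.2 Gbit/s of raw video */

    double dp12 = 17.28;  /* DisplayPort 1.2 HBR2 effective payload, Gbit/s */
    double tb2  = 20.0;   /* Thunderbolt 2 total link bandwidth, Gbit/s     */
    double dp13 = 25.92;  /* DisplayPort 1.3 HBR3 effective payload, Gbit/s */

    printf("5K@60Hz needs roughly %.1f Gbit/s before overhead\n", need);
    printf("DP 1.2: %.2f   TB 2: %.2f   DP 1.3: %.2f\n", dp12, tb2, dp13);
    /* one DP 1.2 / TB 2 link can't carry it (hence today's dual-cable 5K
       monitors), DP 1.3 can, and TB 2 can't wrap a full DP 1.3 stream     */
    return 0;
}
[/CODE]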

So to summarize: CPUs have been chugging along with incremental improvements as Intel's releases have slipped. GPUs have made some relatively big jumps since the current Mac Pro was released. It looks like there is some soon-to-be-released technology that would enable a retina display for the Mac Pro, but the soonest this tech could be released would be the second half of this year, making WWDC the earliest it could be announced. Anyone want to take bets on whether we see a new Mac Pro at WWDC this year?
 
Thoughtful post.

You forgot to mention that basically whether Apple wants to or not, they will be shipping faster PCIE SSDs since Samsung has moved to the SM951.

So they can offer 50% faster drives just by shifting to a new part. Possibly more speed if they replumb the SSD to PCIE 3.0. They could then DOUBLE drive throughput. (No doubt that 50-100% faster would be more than "incremental")
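Rough lane math behind the "DOUBLE" claim, back-of-the-envelope and assuming the blade SSD stays on a 4-lane link:

[CODE]
/* approximate per-lane PCIe payload rates; a sketch, not exact figures */
#include <stdio.h>

int main(void)
{
    double pcie2_lane = 0.5;    /* GB/s per lane: 5 GT/s with 8b/10b coding    */
    double pcie3_lane = 0.985;  /* GB/s per lane: 8 GT/s with 128b/130b coding */
    int    lanes      = 4;      /* assumption: SSD keeps an x4 link            */

    printf("PCIe 2.0 x4: ~%.1f GB/s ceiling\n", pcie2_lane * lanes);  /* ~2.0 */
    printf("PCIe 3.0 x4: ~%.1f GB/s ceiling\n", pcie3_lane * lanes);  /* ~3.9 */
    return 0;
}
[/CODE]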

You might also want to include RAM, something I know little about so perhaps someone else could chime in.

I think they should include NVIDIA GPUs and stop pretending that a reworked desktop card becomes a "workstation" card just because they say so.

I think it's quite possible that they will have an nMP announcement, but I doubt it will be as big a deal as 2013.

They should either come out with a new display, lower the price, or cancel the 27"; it's just an embarrassment now. The last OS update added a reference to a specific display for use with the retina iMac. I think I was wrong about it being the Asus 321; it is likely the 5K Dell. If they are putting Dell displays in the OS in April, I can't see them bringing out a new display in June.

All guessing obviously.
 
First, let's make the assumption that the form factor does not change. Apple put a lot of investment and engineering into the current form factor; love it or hate it, it's going to be around for a while.

Agreed.

We will start with CPUs. Since the nMP came out, Haswell-E processors and a new chipset (X99) have been released. These are upgrades from the Ivy Bridge-E and X79 used currently. Neither of these is earth shattering, with maybe a 5% increase in performance. One benefit is the potential for added cores, I think up to 18 (I may be wrong on this). The modest per-core performance benefit is likely why Apple didn't upgrade to it last year. Apple could wait for Broadwell-E, but this won't be launched until Q1 2016. If indications from the consumer Broadwell chips that have been released can be extrapolated, the increased performance won't be anything too exciting. Additionally, Broadwell-E doesn't include any chipset enhancements, something that may be important for thunderbolt (more on this later).

Technically the Xeons in the Mac Pro come from the EP series... not the E.

Benchmarks showed some performance increases over Ivy, but also some decreases. Net performance was up 3% on average.

You're right that the top end EP chip offers 18-cores, but it comes at an insane $4500 price point which means only the super rich will be able to afford one after Apple's normal margins are applied.

The only real benefit to mainstream buyers is a new Haswell-EP 8-core part at around $1000 vs Ivy's 8-Core part at $1700. The low end 1600 series 4-core and 6-core parts haven't changed in price so there's unlikely to be any change in Mac Pro pricing on the entry level.

Second, and more interestingly, are the GPUs. Let's make the assumption that Apple sticks with AMD, since AMD seems to be much more willing to make custom chips for a given form factor (see consoles, iMac, current Mac Pro) than nvidia, not to mention these chips perform a bit better for compute workloads. Currently, the GPUs in the Mac Pro are a generation old, and will soon be 2 generations old; with the release of AMD's 300 series GPUs in the next 2 months, it's reasonable we could see this in a Mac Pro. While Apple doesn't always go with the latest and greatest tech in their products, they were the first ones to use AMD's Tonga architecture with the m295X in the iMac last year. Another thing to keep in mind is that despite the current GPUs being "workstation" parts, they are more similar to underclocked desktop chips with a custom circuit board. Thus, there is no need for Apple to wait on the workstation-class parts that are released a few months after the consumer parts come out. I could see Apple being one of the first ones to use AMD's new GPUs, especially if AMD has improved the performance-to-power ratio significantly. Another factor that may work in favor of this is added SDKs to make these chips work together. There is a lot of work on this with AMD's Mantle and DirectX 12 in Windows, and while these technologies obviously can't directly translate to OS X, something like them could be announced for OS X (something perfect to announce at a developers conference).

The current parts use AMD Pitcairn and Tahiti GPU cores. There was a minor bump in performance (20%?) with Hawaii and Tonga but that also came at a thermal penalty... they ran hotter. The new part coming out is the Fiji core and that will be for the top-end only, so could form the basis for a D700 successor. It will likely support HBM memory for better bandwidth, but there's only so much AMD can do to improve performance without compromising thermals until they move to 20nm.

And, while the Fiji core offers an upgrade for the D700s... what about the D500 and D300 GPUs? Maybe they inherit the D700 and D500 cores respectively?

Also, you need to keep in mind that Apple is running two high-end Tahiti cores and a Xeon CPU off a 450W power supply with a single cooling fan. This means they are certainly binning the GPUs for optimal thermal performance. That will likely mean some hysteresis between any new GPU launch and it being added to the nMP.

I believe that until we see GPUs move to a 20nm process, we're not going to see any significant improvements. And with Apple pretty much consuming all available 20nm fab production with the A8 (and its successor), I'm starting to wonder if we'll ever see 20nm GPUs.

And unless Nvidia changes their tune about doing custom cards, we'll never see Nvidia in a nMP.

Last, what pulls everything together is a potential retina display. Let's assume Apple really wants this for their "pro" machines, but doesn't want the dual-cable hack that current 5K monitors use and isn't moving away from thunderbolt. DisplayPort 1.3 was finalized last fall and can carry a 5K signal, and the earliest we may see it would be this summer, potentially in new GPUs from AMD. However, Thunderbolt 2 does not have enough bandwidth to carry DisplayPort 1.3, so to see a retina thunderbolt display we would need to see Thunderbolt 3. There is a rumored thunderbolt chip (Alpine Ridge) from Intel that is supposed to release with the new Skylake CPUs (as Skylake has more bandwidth coming from the CPU). However, Skylake is only being released for consumers (laptops/desktops) this year, and we won't see a release for workstation/server chips (like those used in the Mac Pro) until 2017. If Apple uses the new Thunderbolt 3 controller with Haswell-E, there would be a bandwidth bottleneck shared between 2 GPUs and 3 Thunderbolt 3 controllers. Since I am only an armchair enthusiast, can anyone correct me if it is in fact possible to use Thunderbolt 3 controllers with existing CPUs?

I agree with you that a new 5K TB display is coming at some point... but I doubt it is imminent. All of Apple's displays since 2008 have been designed as docking stations for MacBooks... not for use with Mac Pros. You only have to look at the short pig-tail cable with a MagSafe charger that comes attached to the display to understand this. Mac Pro owners need to forget about a new display from Apple until the MacBook line gets refreshed to DP1.3, which will probably happen over time via USB-C... not Thunderbolt.



Thoughtful post.

You forgot to mention that basically whether Apple wants to or not, they will be shipping faster PCIE SSDs since Samsung has moved to the SM951.

So they can offer 50% faster drives just by shifting to a new part. Possibly more speed if they replumb the SSD to PCIE 3.0. They could then DOUBLE drive throughput. (No doubt that 50-100% faster would be more than "incremental")

You might also want to include RAM, something I know little about so perhaps someone else could chime in.

I think they should include NVIDIA GPUs and stop pretending that a reworked desktop card becomes a "workstation" card just because they say so.

I think it's quite possible that they will have an nMP announcement, but I doubt it will be as big a deal as 2013.

They should either come out with a new display, lower the price, or cancel the 27"; it's just an embarrassment now. The last OS update added a reference to a specific display for use with the retina iMac. I think I was wrong about it being the Asus 321; it is likely the 5K Dell. If they are putting Dell displays in the OS in April, I can't see them bringing out a new display in June.

All guessing obviously.

Agree on the SSDs... that's probably the most compelling upgrade available at the moment.

DDR4 isn't going to offer anything in the way of performance gains except in rare edge cases... it's great for reduced power consumption in mobile computers but in desktops offers little advantage. Large CPU cache sizes take a lot of pressure off the memory subsystem.

Agree, they should definitely switch to Nvidia, but unfortunately, as we know, that's probably up to Nvidia.

Agree also that if they do announce something, it will be a quiet update to the website. Nothing will warrant keynote time.

And yeah, the 27" Thunderbolt with old mag-safe and USB 2 for $999 is a ridiculously poor value and a total embarrassment to Apple. It should have been updated years ago.

----------

I have a front row seat and a large box of popcorn. The only thing missing from this back & forth is ole' Tesselator! Now those were the good old days! :D

lol!
 
First, let's make the assumption that the form factor does not change. Apple put a lot of investment and engineering into the current form factor; love it or hate it, it's going to be around for a while.

Apple probably put a lot more investment and engineering into the battery for the watch.

I doubt that the form factor will change much for the nnMP, but the reason will be ego. Changing the form factor will be an admission that the nMP missed the mark for many people.

Apple could update and keep the Tube for the people who like it, and introduce a new dual socket system at the same time.
 