While this is true, 12 cores vs. 16 cores isn't going to make any difference for a word processor. Applications that can't be parallelized never go beyond one core anyway, so what difference does 12 or 16 cores make?

Just for your general knowledge, multithreaded word processors were available 20 years ago. Just sayin'.

In any event, we aren't talking about word processors. We are talking about programs that are designed to use multiple cores, in my case, it's things like Poser 2014, Vue 11, and Z-Brush.
 
newton -> ipad

The Newton got a lengthy restart as the iPad (which rolled out first as the iPhone). But ARM would have little to no foundation in its present form if there had been no Newton.

cube -> new mac

Largely more of a positioning shuffle. Instead of iMac, Cube, Power Mac (Mac Pro), it became Mac mini, iMac, Power Mac (Mac Pro). The Mac market didn't need 2-3 systems inside the $1K-2K price point. Just one. Mac desktop sales stopped sliding when Apple clearly let iMacs take the lead.

----------

Just for your general knowledge, multithreaded word processors were available 20 years ago. Just sayin'.

20 years ago multithreading was implemented just fine on one core (what would have been called a processor back then).

Scaling threads/workers to cores doesn't necessarily equate with multithreading.
 
While this is true, 12 cores vs. 16 cores isn't going to make any difference for a word processor. Applications that can't be parallelized never go beyond one core anyway, so what difference does 12 or 16 cores make?

This isn't true. Applications that can't be parallelized can still benefit from multiple cores. They can, for example, start many non-dependent processes. In fact, this is the strength of a CPU vs. a GPU.
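
To sketch what that looks like in practice - a minimal POSIX example, where the "render_worker" binary and job names are hypothetical stand-ins:

// Minimal POSIX sketch of "starting many non-dependent processes":
// a mostly-serial app farms independent jobs out as child processes
// and lets the kernel spread them across cores.
// ("render_worker" and the job names are hypothetical placeholders.)
#include <sys/wait.h>
#include <unistd.h>
#include <cstdlib>
#include <vector>

int main() {
    const char* jobs[] = {"scene_a", "scene_b", "scene_c"};
    std::vector<pid_t> children;

    for (const char* job : jobs) {
        pid_t pid = fork();                       // one child per job
        if (pid == 0) {                           // child: replace image
            execlp("render_worker", "render_worker", job, (char*)nullptr);
            _exit(EXIT_FAILURE);                  // reached only if exec fails
        }
        children.push_back(pid);
    }
    for (pid_t pid : children)
        waitpid(pid, nullptr, 0);                 // parent just waits
    return 0;
}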
 
Just for your general knowledge, multithreaded word processors were available 20 years ago. Just sayin'.

Yes. I'm very mad I can't get a 16 core Mac Pro because now how am I going to run my Microsoft Word?

(I've been a software developer for a long time. Please don't scold me on my general knowledge. Yes, I know what a thread is. No, I really doubt Microsoft Word is doing anything worthwhile with more than 1 core.)

In any event, we aren't talking about word processors. We are talking about programs that are designed to use multiple cores, in my case, it's things like Poser 2014, Vue 11, and Z-Brush.

You were the one who was talking about Microsoft Word, but regardless...

I'm pretty sure all the programs you listed make heavy use of the GPU as well.

This isn't true. Applications that can't be parallelized can still benefit from multiple cores. They can, for example, start many non-dependent processes. In fact, this is the strength of a CPU vs. a GPU.

Any app that can "start many non-dependent processes" is by definition parallel. That's like saying you have something with 4 wheels, a motor, and seats that drives on roads but is not a vehicle.
 
Yes. I'm very mad I can't get a 16 core Mac Pro because now how am I going to run my Microsoft Word?

(I've been a software developer for a long time. Please don't scold me on my general knowledge. Yes, I know what a thread is. No, I really doubt Microsoft Word is doing anything worthwhile with more than 1 core.)



You were the one who was talking about Microsoft Word, but regardless...

I'm pretty sure all the programs you listed make heavy use of the GPU as well.

Maybe you should read up on the subject instead of playing the good little fanboy here.

Plenty of applications are cpu bound and will be for a very long time. Better get used to it.

----------

Yes. I'm very mad I can't get a 16 core Mac Pro because now how am I going to run my Microsoft Word?

(I've been a software developer for a long time. Please don't scold me on my general knowledge. Yes, I know what a thread is. No, I really doubt Microsoft Word is doing anything worthwhile with more than 1 core.)



You were the one who was talking about Microsoft Word, but regardless...

I'm pretty sure all the programs you listed make heavy use of the GPU as well.



Any app that can "start many non-dependent processes" is by definition parallel. That's like saying you have something with 4 wheels, a motor, and seats that drives on roads but is not a vehicle.

Err no.

It's just your limited knowledge that makes it sound like that.
 
Maybe you should read up on the subjet instead of playing the good little fanboy here.

Plenty of applications are cpu bound and will be for a very long time. Better get used to it.

CPU bound is not the same thing as parallelizable.

Photoshop is currently heavily CPU bound, but it's not very parallel. It won't matter if you throw 12 or 16 cores at it.
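
To put a number on it, a quick Amdahl's law sanity check: speedup on n cores is S(n) = 1 / ((1 - p) + p/n), where p is the fraction of the work that parallelizes. Assume, purely for illustration, p = 0.5 (a made-up figure):

S(12) = 1 / (0.5 + 0.5/12) ≈ 1.85
S(16) = 1 / (0.5 + 0.5/16) ≈ 1.88

Four extra cores buy you about 2% in that scenario.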
 
CPU bound is not the same thing as parallelizable.

Photoshop is currently heavily CPU bound, but it's not very parallel. It won't matter if you throw 12 or 16 cores at it.

You're still wrong, but that's ok. We understand...

Actually having more cores will help your workflow, since you can assign cores to applications. So if an application uses 4 cores max on a 16-core machine, that leaves you with 12 other cores for other applications. You do know this, now don't you? I hope you also know that people may use more than one power-hungry application at a time on a 16-core, multi-GB RAM system now, do you?
 
You're still wrong, but that's ok. We understand...

Sure. Whatever.

Actually having more cores will help your workflow, since you can assign cores to applications.

No you can't, at least not on the Mac.
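
(On Linux you can pin a process to cores - sched_setaffinity, or the taskset command. OS X only exposes affinity hints, not hard binding. A minimal Linux-only sketch:)

// Linux-only sketch: pin the current process to cores 0-3.
// OS X has no equivalent hard-binding call; it only takes affinity hints.
#include <sched.h>
#include <cstdio>

int main() {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    for (int cpu = 0; cpu < 4; ++cpu)
        CPU_SET(cpu, &mask);
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {  // 0 = this process
        std::perror("sched_setaffinity");
        return 1;
    }
    std::puts("pinned to cores 0-3");
    return 0;
}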

So if an application uses 4 cores max on a 16-core machine, that leaves you with 12 other cores for other applications. You do know this, now don't you?

Sure.

I hope you also know that people may use more than one power-hungry application at a time on a 16-core, multi-GB RAM system now, do you?

Not as likely...

Multiple applications at once will slam the disk and slam memory. So yes, you could. It probably wouldn't be a great idea unless you had a very, very well decked-out machine.

Google has this problem on their servers and their solution is to not run with disks at all but to throw everything into RAM.

Or you know, if you're in two apps you could just get two machines. Or a rendering farm. I do this when I'm working on something on the MBP that's a big task. I just boot it over to the Mac Pro and keep working.

It's just your limited knowledge that makes it sound like that.

Besides repeatedly telling me I'm wrong, you haven't actually said very much about what I pointed out.
 
Yes. I'm very mad I can't get a 16 core Mac Pro because now how am I going to run my Microsoft Word?

(I've been a software developer for a long time. Please don't scold me on my general knowledge. Yes, I know what a thread is. No, I really doubt Microsoft Word is doing anything worthwhile with more than 1 core.)



You were the one who was talking about Microsoft Word, but regardless...

I'm pretty sure all the programs you listed make heavy use of the GPU as well.

You would be pretty wrong. To quote the manufacturer of Z-Brush:

Note About Graphics Cards:
ZBrush is software rendered, meaning that ZBrush itself is doing the rendering rather than the GPU. Your choice of GPU will not matter so long as it supports the recommended monitor resolution.


E-on Software, the makers of Vue:

Multi-processor rendering: Respectively 4/8 CPUs on 32/64 bit OS.

Smith Micro, the maker of Poser:

Poser Pro 2014 lets you take advantage of high performance 64-Bit Macintosh and Windows Operating Systems and hardware with the new 64-Bit FireFly Render Engine. The 64-Bit render engine efficiently uses all available system memory to render even the most complex scene files in the shortest possible time. Distribute 64-Bit rendering jobs via the Queue Manager for even greater time savings at render time.

I am sure that the program managers are looking at the possibility of moving rendering to the GPU, but not in the near term (5 years). OpenCL simply hasn't been up to snuff at this point.

As far as word processors, I wasn't talking about that PoS called Microsoft Word, I was talking about Describe - I was an OS/2 user back at that time, preemptively multitasking multiple applications on a 386DX40 with 16 MB of RAM.

Ahh, those were the days.
 
This isn't true. Applications that can't be parallelized can still benefit from multiple cores. They can, for example, start many non-dependent processes. In fact, this is the strength of a CPU vs. a GPU.

even then.. most applications that can be parallelized would be fine with just two cores..it's not as if every app that multithreads pushes all the cpus to 100%.. not even close.

very few apps will max out all the cores..

and the ones that do, the benefits aren't as strong as you guys are making it out to be.. basically, you're arguing that this:

[image: min12.png]



is so much better than this:

[image: min16.png]



but to me, both of those suck.. and in both situations, i'm not going to sit around watching that progress bar.. i'm outta there..

the only way to really make a difference is to put hundreds of cores in the mix.. (or smarter/cleaner/more evolved algorithms & software development and/or utilizing hardware that is much faster at certain processes other than cpu)
 
I am sure that the program managers are looking at the possibility of moving rendering to the GPU, but not in the near term (5 years). OpenCL simply hasn't been up to snuff at this point.

5 years? i doubt that (but even then, if it does take 5 years for certain apps to come around, that still fits comfortably within the new mac's 10 year expectancy)..

but you might be surprised if you talked to the devs.. i mean, i don't use a single app whose developers aren't either A_ experimenting with it now or B_ have already incorporated it into the official application.
 
I don't think anyone's arguing that the nMP will be cheaper than the standard Quad2.66 1,1 model.

Nobody has a clue what the price of the NuPro will be.

If the NuPro came in at the same entry price as the OldPro then it would be pretty amazing value compared to a comparable OldPro system, which doesn't have dual "workstation-class" GPUs or a 1.25 GB/s PCIe-connected SSD. Those are eye-wateringly expensive upgrades - enough to add a few nice Thunderbolt gizmos to your NuPro to make up for the lack of internal expansion.

There is a very, very strong incentive for Apple to make the price attractive to potential customers who don't, strictly speaking, need dual FirePros and ECC RAM.

Edit: Also, there's this:

[quoted image]

...did you read down to the bit where it explains why that card is only really suitable for enterprise-scale servers that queue huge numbers of read commands and, quote, "you wouldn't want to drop Micron's drive into a workstation"?

The nMP is already offering 1.25 GB/s on its internal PCIe SSD.
 
I am sure that the program managers are looking at the possibility of moving rendering to the GPU, but not in the near term (5 years).

In tech product management, "near term" is classified as 5 years? Chuckle. No.

Anyone who says they are not moving for 5 years far more likely has a relatively high percentage of "stuck in the mud" customers than real technology improvements that could be leveraged.

The 5 year mark is where tech tends to start getting fuzzy and hand-wavy. Not the end of the "short term". It is far more so the end of the long term plans.


OpenCL simply hasn't been up to snuff at this point.

Not so much OpenCL. All of these mention 64-bit, which means two things on x86: first, better math (more registers, wider load/store, etc.), and second, > 4GB of RAM. For the second, there aren't a lot of > 4GB RAM video cards in the mainstream yet. That just means the problems don't have enough RAM to sit in, not necessarily that the GPU is the "wrong" processor. For the first, it really isn't a large gap.

There are some gaps on OpenCL but they are closing. OpenCL 2.0 brings shared virtual memory, shared address space, and dynamic scheduling.
http://www.anandtech.com/show/7161/...pengl-44-opencl-20-opencl-12-spir-announced/3

Some Apple-specific extensions tackle some of the shared address space issues in 10.9 (Mavericks). For user bases that have largely moved to PCIe v3, the shared memory solutions aren't going to take 4-5 years to deploy to a large user base.


As far as word processors, I wasn't talking about that PoS called Microsoft Word, I was talking about Describe - I was an OS/2 user back at that time, preemptively multitasking multiple applications on a 386DX40 with 16 MB of RAM.

Decoupling the computation thread from relatively slow movers (GUI handling at human interaction speeds, printing, etc.) is substantively different from chopping a computation task into smaller pieces and tackling them concurrently. There were plenty of time-slice quanta for those tasks on a single core/CPU because it was not fully consumed. That's quite different from loading the core/CPU down with almost "too much" work for its functional units from a single thread (or time-slicing threads while they wait on memory/disk with simultaneous multithreading, which Intel relabeled Hyper-Threading).
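
In code terms, the difference looks roughly like this (a hedged sketch, not any particular word processor's design):

// (1) 1993-style multithreading: decouple slow chores from the UI thread.
//     Perfectly happy on a single CPU via time slicing.
// (2) Parallel decomposition: chop ONE computation into per-core chunks,
//     which is the only style that actually cares about 12 vs. 16 cores.
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

void background_chores() {                       // style (1)
    std::thread worker([] { /* repaginate, spell-check, spool printing... */ });
    worker.join();                               // UI thread stays responsive
}

double parallel_sum(const std::vector<double>& v) {   // style (2)
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(n, 0.0);
    std::vector<std::thread> pool;
    const std::size_t chunk = v.size() / n;
    for (unsigned i = 0; i < n; ++i) {
        const std::size_t lo = i * chunk;
        const std::size_t hi = (i + 1 == n) ? v.size() : lo + chunk;
        pool.emplace_back([&v, &partial, i, lo, hi] {
            partial[i] = std::accumulate(v.begin() + lo, v.begin() + hi, 0.0);
        });
    }
    for (auto& t : pool) t.join();               // every core loaded until done
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}

int main() {
    std::vector<double> data(1000000, 1.0);
    background_chores();
    std::printf("sum = %.0f\n", parallel_sum(data));
    return 0;
}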
 
While this is true, 12 cores vs. 16 cores isn't going to make any difference for a word processor. Applications that can't be parallelized never go beyond one core anyway, so what difference does 12 or 16 cores make?



Any modern processor architecture (even back to the lil' ol' iPhone 3GS) has DSP acceleration built in. That was even a big selling point of the G4 vs. the G3. As those architectures get better, a DSP card gets less necessary because it's already included in the CPU.
You probably aren't familiar with the ecosystem of products that has grown up around DSP cards for audio. The continuing joke among their users is that the DSP isn't so much the function of the cards as being an elaborate copy-protection dongle system that happens to lessen the load on your CPU.

E.g. Universal Audio DSP products do not run "native". You MUST have a UAD DSP card connected by PCIe or FireWire.
I've tested the external DSP cards via Thunderbolt, and they work with a FireWire adapter. But I have not built any big projects on that kind of setup. I will admit I am a bit gun-shy.
If the quality of the UAD plugins and my PCIe-based TC Electronic plug-ins wasn't so high, I wouldn't bother to go outside of the default plugins in Logic.

----------

...I hope you also know that people may use more than one power-hungry application at a time on a 16-core, multi-GB RAM system now, do you?
This is pretty much the problem, from where I stand.
I approach the Mac Pro with the mindset of someone used to using multiple headless computers, X Windows, batch processes, etc.
I'm used to my computers crunching on things for a while and having to put fans in doorways sometimes.
If the GPGPU thing gives me more horsepower I am grinning ear to ear.
However, I feel this will be another "too soon" move by Apple, and the support for this kind of topology will materialize when we are on the 2nd or 3rd iteration of this product.
But of course, this is kind of a chicken-egg thing in that way.
 
You would be pretty wrong. To quote the manufacturer of Z-Brush:

Note About Graphics Cards:
ZBrush is software rendered, meaning that ZBrush itself is doing the rendering rather than the GPU. Your choice of GPU will not matter so long as it supports the recommended monitor resolution.


E-on Software, the makers of Vue:

Multi-processor rendering: Respectively 4/8 CPUs on 32/64 bit OS.

Smith Micro, the maker of Poser:

Poser Pro 2014 lets you take advantage of high performance 64-Bit Macintosh and Windows Operating Systems and hardware with the new 64-Bit FireFly Render Engine. The 64-Bit render engine efficiently uses all available system memory to render even the most complex scene files in the shortest possible time. Distribute 64-Bit rendering jobs via the Queue Manager for even greater time savings at render time.

I am sure that the program managers are looking at the possibility of moving rendering to the GPU, but not in the near term (5 years). OpenCL simply hasn't been up to snuff at this point.

So this isn't necessarily true. As far as I can tell, all those apps (ZBrush has a bunch of posts saying yes and no) use OpenGL viewports.

OpenGL is NOT used for final rendering (as is common, as that rendering process typically takes advantage of things like ray tracing that OpenGL doesn't do.) Fortunately OpenCL is being adopted for things like ray tracing in future applications.

As far as word processors, I wasn't talking about that PoS called Microsoft Word, I was talking about Describe - I was an OS/2 user back at that time, preemptively multitasking multiple applications on a 386DX40 with 16 MB of RAM.

Ahh, those were the days.

Sure, but things are a little different today. I think the last time I had a Word Processor really push around my machine was on a Mac SE. :)
 
You probably aren't familiar with the ecosystem of products that has grown up around DSP cards for audio. The continuing joke among their users is that the DSP isn't so much the function of the cards as being an elaborate copy-protection dongle system that happens to lessen the load on your CPU.

That is in part because there was/is no "open" layer on top of the DSPs. Almost everyone went down to the raw metal with vendor-specific ABIs and APIs. Vendor-specific APIs lead to these kinds of limitations over time.


.... And the support for this kind of topology will materialize when we are on the 2nd or 3rd iteration of this product.
But of course, this is kind of a chicken-egg thing in that way.

This specific product or OpenCL? OpenCL isn't "new" and is about to go on its third iteration (1.0 -> 1.1 -> 1.2 -> 2.0). Missing pieces like SPIR ( http://www.anandtech.com/show/7161/...pengl-44-opencl-20-opencl-12-spir-announced/2 ) will fall into place over the next year (before another Mac Pro comes out). Being LLVM based, it would be more than weird if it took a long time for Apple to roll this out for the Mac.

Likewise, two GPUs standard is a pretty "old" topology for Macs. New to the Mac Pro, but not new in Macs. Being more symmetric than previous versions is a minor difference though. (Although I don't think all of the Mac Pro 2013 configs are going to be symmetric GPUs.)

It is first generation for AMD's Graphics Core Next (GCN) architecture though. However, that is more so a topology component than the topology itself.
 
So this isn't necessarily true. As far as I can tell, all those apps (ZBrush has a bunch of posts saying yes and no) use OpenGL viewports.

Barely. ZBrush is often trotted out because they have built a graphics rendering kernel, for better or worse, on x86. Their model doesn't really track what GPUs have historically done, so they effectively render on their own virtual GPU that follows their model (pixols/voxels).

You can see in some of the previous products the OpenGL requirements were pretty tame. Like OpenGL 2.0.

Given they are still 32-bit ( https://support.pixologic.com/index.php?/Knowledgebase/Article/View/66/0/is-zbrush-4r2-64-bit ), it has all the telltale signs there

"... The ZBrush render engine benefits from several core enhancements in version 4R2. Its kernel is now 32 BIT, producing more accurate renders and even more crisp images. ... "
http://pixologic.com/zbrush/features/zbrush4r2/rendering/

that they are coupled to some significant bundle of x86 assembler and/or quirky optimizer code that basically puts a high amount of inertia on the core product.

Apple's glacial approach to OpenGL updates probably didn't help much either. That only encourages these kinds of "roll my own stack" solutions.
 
Barely. ZBrush is often trotted out because they have built a graphics rendering kernel, for better or worse, on x86. Their model doesn't really track what GPUs have historically done, so they effectively render on their own virtual GPU that follows their model (pixols/voxels).

You can see in some of the previous products the OpenGL requirements were pretty tame. Like OpenGL 2.0.

Given they are still 32-bit ( https://support.pixologic.com/index.php?/Knowledgebase/Article/View/66/0/is-zbrush-4r2-64-bit ), it has all the telltale signs there

"... The ZBrush render engine benefits from several core enhancements in version 4R2. Its kernel is now 32 BIT, producing more accurate renders and even more crisp images. ... "
http://pixologic.com/zbrush/features/zbrush4r2/rendering/

that they are coupled to some significant bundle of x86 assembler and/or quirky optimizer code that basically puts a high amount of inertia on the core product.

Apple's glacial approach to OpenGL updates probably didn't help much either. That only encourages these kinds of "roll my own stack" solutions.

Apple's OpenGL support hasn't necessarily been awful. There isn't a huge reason I can think of for these apps to go with OpenGL 3 or 4 over OpenGL 2. But regardless, Mavericks brings things up to 4. Most of the complaining I've heard is from game developers trying to port over DirectX code and finding no feature parity, but things like viewports don't typically use those features.

There are some annoyances in the shaders, but nothing that can't be dealt with.

I think the OpenGL version on the Mac is something that is dramatically over-discussed. The OpenGL version isn't the source of the speed problems; rather, the OpenGL stack itself on OS X is. The version just controls what features you get; it won't accelerate an existing application. But there has been good progress in Mountain Lion and especially Mavericks in getting the graphics stack much faster, possibly in preparation for the new Mac Pro and rumored MacBook Pros with integrated graphics. If Apple is going to adopt integrated graphics in pro-oriented machines, they'd better make sure those things really fly.

The cost of an on-CPU real-time render is pretty extreme, so I can't believe that approach is sustainable in the long run. Just the shuffling between RAM and VRAM alone... ugh...

That is in part because there was/is no "Open" layer on top of the DSPs. Almost everyone did down to the raw metal with vendor specific ABIs and APIs. Vendor specific APIs lead to these kinds of limitations over time.

The current DSP landscape is unfortunately still not fully standardized. It's a bit better now with Intel Macs. The PowerPC AltiVec/Intel SSE split probably didn't help things. Now there is a bit of a standard around SSE, but ARM has the potential to mess with that again with NEON.

Frameworks like Apple's Accelerate help bridge the gap by being processor agnostic, but unfortunately I'm not aware of anything like a cross platform DSP library.

I haven't worked much with Intel's AVX, but apparently it's supposed to be very shiny, and it looks like Accelerate and the latest LLVM/Clang support it:
http://mil-embedded.com/articles/avx-leap-forward-dsp-performance/
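
For what it's worth, this is roughly what the processor-agnostic route looks like with Accelerate on the Mac (a minimal sketch; compile with -framework Accelerate):

// Minimal Accelerate/vDSP sketch: RMS of an audio buffer.
// vDSP picks the SIMD path (SSE/AVX on Intel, NEON on ARM) for you.
#include <Accelerate/Accelerate.h>
#include <cstdio>
#include <vector>

int main() {
    std::vector<float> buf(4096, 0.5f);          // stand-in for real samples
    float rms = 0.0f;
    vDSP_rmsqv(buf.data(), 1, &rms, buf.size()); // stride 1, N samples
    std::printf("RMS = %f\n", rms);              // 0.5 for this constant buffer
    return 0;
}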
 
I'm confused about all of the heat here about the new MP.

People just like to whine and complain. The fact that Pixar, The Foundry, and Black Magic are all behind this thing and are raving about it nullifies anything bad the people on this forum can say about it, especially since these companies had early access to the machines and have been using them.

Some people for some reason need to be known as a "pro", and they think that if they have to make any amount of adjustment to their workflow it'll ruin them, so they start whining and boohooing and calling anything that doesn't do exactly what they want, the way they want, a "non-pro" option.

What these same people don't realize is they're being left in the dust by newer, better technologies by failing to adapt.

Take a look at the new Mac Pro: sure, it doesn't have all the internal expansion, but you can daisy-chain the thing for cluster computing, it's insanely powerful out of the box (the Pixar demo proved this beyond a doubt), you can take it on location with you easily due to its small size (a HUGE plus! Ever have to travel with large Mac Pros? Not fun), and any number of other things.

I know people say, "Oh no, I can't fit four hard drives inside." Well, some of us need a LOT more than four, so it's a non-issue for many people who are the target audience for this machine. I know I hope to be getting one when they come out. I would love to use Mari on this thing.
 
I'm waiting to see the actual, purchasable product.
I'm waiting for the specs on the ones we can buy.
I'm waiting for the prices.
I'm waiting to see how it can be adapted to various situations (storage, PCIe, GPU, CPU, etc.).
I'm waiting for actual users to report about their experiences.

Might be a great new thing.... might not be....

Time will tell.

+1 on all this. Can't start a meaningful debate until all the info is on the table.
 
People just like to whine and complain. The fact that Pixar, The Foundry, and Black Magic are all behind this thing and are raving about it nullifies anything bad the people on this forum can say about it, especially since these companies had early access to the machines and have been using them.

Yeah, that's the thing... People here are claiming this machine will be totally unsuitable for pros in the real world. Meantime, pros in the real world seem to be quite happy with it.

I think if you're a bean counter at Apple, the new Mac Pro looks attractive because it's basically a turnkey system. Large pro houses could (and look like they will) just buy the things in bulk as desktop machines.

Does Pixar care that it doesn't have 16 cores? No. They have thousands of cores a network switch away. And 16 cores would still be just as insufficient for their workflows as 12.

There will be pros who fight this new machine tooth and nail, but I think Apple is set to sell many more of these than they sold the old Mac Pro.
 
As someone who was an active Mac user when the Cube was out...

The Cube did not die because it was under powered, lacked expandability, or lacked a market...

The Cube died because it was too expensive. For the same price, you could buy a more powerful Power Mac G4.
Not replying to everything - that's for later - but here's a response to your comment:

My boss (a very high-level scientist at an Ivy League university) loved the Cube and bought three or four of them. I upgraded them for him (GPUs, I think) when such upgrades became available. I can't wait to ask him what he thinks about the new MP (he just got back into town)....
 