Except that the bulk of their computers probably sell in the $1,200-$1,500 range. Apple sees a steep drop-off in sales above $2,500. Proof? That's why they dropped the 17" MBP and want to drop the old Mac Pro.
But I will not be at all surprised if it comes in at $1,999.
That's just the buzz Apple wants, and it would turn the Mac Pro into a healthy product line.
.... I added up my total solid-state storage needs... which currently come to around 500GB (even though I have 1TB).
If I'm going to buy a 2013 Mac Pro, I will get it configured with between 500GB and 1TB depending on the price.
Would the entry Mac Pro sell better closer to the $2,000 border with the iMac? Yes. But Apple can work on that over time. If in 2-3 generations the equivalent of the W5000 card can drive six screens, SSDs are substantially cheaper, and DDR4 RAM has reached volume production so the price doesn't carry quite as large a premium, then they can move the price down a couple hundred if they start around $2,499.
That's unlikely; when does Apple ever lower prices over time? AFAIK they pick a price tier and stick with it.
Whatever price they put on this machine at the beginning is the one it's going to have.
Regardless, sure $1,999 is entirely possible, but a bit of a stretch. However starting above $2,499 isn't going to happen.
A desktop computer with no PCIe slots, only four RAM slots, and no internal drive expansion that starts at $3k?
Two graphics chips and a Xeon aren't that expensive.
you want some insight into how the price of this computer is going to be determined? go to the 8 minute mark in this video and watch for a little bit..
[YouTube video]
3 months into incorporating NeXT, they already have the price figured out, and it's determined via polls/interviews on how much people are willing to pay. NeXT didn't even have a prototype at this stage of development.
that's all there is to it.. this new mac? they had the price figured out long before..
and seemingly, any time this dude is involved with the computer, that target price is always around $3000.. for the past 25 years
the NeXT computer ended up being something like $6000 when it finally hit the shelves.. and we all know the fate of NeXT computers.. i think they have it figured out by now how to set a $3k target price and stick to it, because they also know what happens when the price goes over that amount.
regardless, i guarantee they're not designing/prototyping the new mac, then adding up the cost of its parts, then determining the sale price.. the price was already figured out at the very beginning of the process.
For me, the problem is that the iMac comes with a large, high quality monitor built into it that Apple sells on its own for 1 grand...
Speaking of monitors... Add on top of that another $4,000 for the Apple-branded 4K display that I'll want. Plus a Thunderbolt external enclosure. ....
For me, the problem is that the iMac comes with a large, high-quality monitor built into it that Apple sells on its own for 1 grand (Thunderbolt Display). Sure, there are cheaper options, but I think people would struggle to find 27" 1440p monitors for much under $500.
So even selling at 2 grand, the new Mac Pro would have to be a fair bit more powerful than the top level iMac. Otherwise, why would people buy it?
For 4 grand, I think you would expect 12 cores and two top-end GPUs, with a decent amount of RAM and SSD storage.
Is Apple going to sell $4K 4K monitors?
So, what about OpenGL? Yeah, if you do games. Not too many apps can use two GPUs for 3D. Not everybody needs that these days either, since many of the cards are good enough for displaying geometry on screen for a great many purposes. And at the high end, you have always had ways to get around that.
i'm not a code whiz or anything like that and i'm speaking strictly from an end user pov, but the dual gpu setup isn't aimed at OpenGL-type improvements..
from what i see, a modest single gpu can handle all the OpenGL tasks just fine, because what happens is the cpu maxes out long before OpenGL does.. if you're working on a large 3d model, it will become sluggish for cpu reasons first, at which point the OpenGL tasks have no problem keeping up with the sluggish model.
where OpenCL comes into play isn't really anything to do with displaying graphics and/or effects.. it can run the same types of calculations on gpus as the cpus are doing, only much faster and more efficiently.. in essence, the 2nd gpu can be viewed as a second cpu, only a helluva lot faster than you'll get by adding another set of cores..
..if your developers are tapping into it that is.
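to make that concrete, here's a minimal sketch of the idea (Python with the pyopencl package; the kernel and the scale-by-a-constant job are made-up illustrations, not anything apple ships):

```python
# minimal OpenCL sketch: hand an element-wise calculation to the GPU.
# assumes pyopencl + numpy are installed; the kernel below is a toy example.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()      # picks an available OpenCL device (e.g. a GPU)
queue = cl.CommandQueue(ctx)

kernel_src = """
__kernel void scale(__global const float *a, __global float *out, const float k)
{
    int gid = get_global_id(0);     // one work-item per array element
    out[gid] = a[gid] * k;          // the same math a CPU loop would do
}
"""
prg = cl.Program(ctx, kernel_src).build()

a = np.random.rand(1_000_000).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# launch one million work-items across the GPU's compute units
prg.scale(queue, a.shape, None, a_buf, out_buf, np.float32(2.0))

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)   # copy the GPU result back to host RAM
print(np.allclose(result, a * 2.0))       # True: same math, different silicon
```

note there's nothing graphics-related in there.. the gpu is just acting as a very wide math unit, and a second gpu doubles that width if the app queues work to both.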
In Blender, for example, the render will crash if you run out of VRAM.
further-
it seems (to me) that cpu-bound tasks are more or less topped out and the only way to get better performance is to add more cores (and even then, we're talking about a relatively small number of applications that will benefit from more cores).. developers have been refining cpu code for 30 years or whatever and they can't really streamline the code or refine the algorithms much further..
the real speed enhancements in applications we'll see over the next decade are going to come from the gpu side of things.. with the computer configured like the new mac, apple engineers believe (i assume they believe this) they're giving the devs much more potential energy to play with..
i also think, looking down the road, apple is expecting the ability to plug additional gpus in via thunderbolt on an as-needed basis.. it's arguable that tbolt is too slow for this-- even if it again doubles in throughput (rough numbers below).. but even then, i'd expect someone to get more computing power for a lot cheaper by plugging in a gpu module as opposed to a cpu module.. because as far as i can gather, you can't simply plug in additional cpus via pcie and expect anything to happen, whereas you can with gpus.. right now, if you want more cpu power, you have to plug in another complete computer or cram more cpus into a single computer, and in a way that isn't customizable.. you can only spec that at time of purchase; it's not expandable down the road.
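for scale, a rough back-of-envelope on that bandwidth question (these are nominal link rates, not measured throughput):

```python
# rough comparison: external GPU over Thunderbolt vs. an internal PCIe slot.
# figures are the published nominal link rates; real throughput is lower.
GBIT = 1e9  # bits per second

links = {
    "Thunderbolt 1": 10 * GBIT,   # per channel
    "Thunderbolt 2": 20 * GBIT,   # two 10 Gb/s channels bonded
    "PCIe 2.0 x16":  64 * GBIT,   # ~8 GB/s, typical GPU slot in 2013
    "PCIe 3.0 x16": 128 * GBIT,   # ~16 GB/s
}

for name, bits in links.items():
    print(f"{name:>14}: {bits / GBIT:5.0f} Gb/s (~{bits / 8 / GBIT:4.1f} GB/s)")
```

so even a doubled thunderbolt feeds an external gpu at roughly a third of a plain pcie 2.0 x16 slot.. fine for compute jobs that stay on the card, painful for anything streaming data back and forth constantly.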
----------
really? wow.. i mean i've definitely run out of ram before when rendering, which has led to a crash, but i didn't realize vram could top out in the same way..
but then again, i really try not to dive too much into coding/technomumbojumbo.. that's a trap i'd pretty much rather avoid!
...If you go too deep into the details of your scene, with multi-millions/billions of quads, then yes, you can blow past the VRAM limits when rendering. It's all in knowing how to turn high-poly models into low-poly ones with the least quality loss before rendering the scene. With enough RAM we wouldn't have to bother, just like we don't really bother with file size anymore now that we're playing in the terabytes-of-disk-space realm.
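A quick back-of-envelope shows why that decimation matters. The byte counts below are assumptions for illustration (float3 position + float3 normal + float2 UV per vertex, 32-bit indices, worst-case no vertex sharing):

```python
# ballpark VRAM footprint of raw quad geometry, before any textures.
BYTES_PER_VERTEX = 8 * 4   # 8 floats (position, normal, UV) * 4 bytes each
BYTES_PER_INDEX  = 4       # 32-bit index

def mesh_bytes(quads):
    verts   = quads * 4    # worst case: no shared vertices between quads
    indices = quads * 4
    return verts * BYTES_PER_VERTEX + indices * BYTES_PER_INDEX

for quads in (1_000_000, 100_000_000, 1_500_000_000):
    print(f"{quads:>13,} quads -> ~{mesh_bytes(quads) / 1e9:,.1f} GB")

# ~0.1 GB   : fine on anything
# ~14.4 GB  : already past a 6GB card
# ~216.0 GB : hopeless without decimation or instancing
```

Vertex sharing and instancing pull the real numbers down a lot, but the shape of the problem stays the same: geometry cost scales with scene complexity, not with the screen.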
You must be forgetting the ACD 30" display that sold for $3300 for the first two or three years!
With multiple-GB VRAM sizes, that is in the billions of bytes and hundreds of millions of 32-bit words. The W9000 FirePro card comes with 6GB of VRAM, and the standard configurations of the new Mac Pro fall in the 3GB-to-6GB range (6GB being over a billion 32-bit words). The VRAM on GPGPU/GPU cards is pretty much already as large as the main RAM most workstations had several years ago.
Even super-duper-sized screens are only in the tens of millions of pixels, while the local VRAM capacity is in the hundreds of millions of words. If you're careful about caching textures, there is room for even highly complex scenes.
I'm sure you can blow out regular mainstream cards' VRAM (e.g., in most iMacs), but in the 4-6GB zone (the range covered by the W7000, W8000, and W9000) it is practical for a pretty broad range of rendering tasks.
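To put those orders of magnitude side by side (straight arithmetic, nothing vendor-specific; 32-bit RGBA framebuffers assumed):

```python
# order-of-magnitude check: display framebuffers vs. a multi-GB card.
def framebuffer_bytes(w, h, bytes_per_pixel=4):   # 32-bit RGBA
    return w * h * bytes_per_pixel

displays = {'27" 1440p': (2560, 1440), "4K UHD": (3840, 2160)}
vram = 6 * 1024**3   # a 6GB card, W9000-class

for name, (w, h) in displays.items():
    fb = framebuffer_bytes(w, h)
    print(f"{name}: {w * h / 1e6:.1f} Mpixels, "
          f"framebuffer ~{fb / 1e6:.0f} MB ({100 * fb / vram:.2f}% of 6GB)")
```

Even a 4K framebuffer is a rounding error next to 6GB; it's the scene data, not the screen, that eats VRAM.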
You are talking about screen resolution and I'm talking about complex 3D objects.
There isn't a relation between the two. I can use an 800x600 screen and still design a very complex scene with millions of quads in it.
All of those quads are in relation to each other when it comes to rendering, and they ALL have to be taken into account when you run your render pass.
This is also why the NVIDIA Titan is a popular video card for 3D modellers, since it also comes with 6GB.
But some are already hitting really close to that limit and dream about getting their hands on the new 12GB Quadro.
You tell me how you get 1.5 billion polygons on an 800x600 screen.
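A one-liner makes the point behind that jab:

```python
# scene complexity vs. what the screen can even show
polys, pixels = 1.5e9, 800 * 600
print(f"{polys / pixels:,.0f} polygons per pixel")   # ~3,125
```

At thousands of polygons per pixel, resolution tells you nothing about how much geometry the renderer has to hold in VRAM.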