And they're also a bit larger than the Apple Tube.

However, some people value having 24 cores, 16 DIMMs with 512 GiB support, quad double-width Nvidia Quadros or Teslas over a quieter, smaller system with 64 GiB, 12 cores, and two desktop ATI GPUs.

Would you rather that your task takes 1 hour on a system that's barely loud enough to notice, or 4 hours on a system that you can't hear?

Only just a bit? Efficiency, performance per watt, dBA. I suspected all along that this design would be a game changer from a thermodynamics point of view, and I'm even more convinced after having one here.

Many ultra-top-end power users like yourself and Tutor have outgrown any Mac Pro Apple could possibly have released, but I reckon within 5 years you two will be considering motherboard, CPU and GPU options with a market knock-off of this thermal core at their heart -- probably expanded to 6, 9, a dozen or more slots around the core to host your cards. If you scale up the kind of savings this design permits to one of your top-end rigs, or right past you to the large render farms and even supercomputers, the market will no doubt head in this direction.

And this little tube will be the daddy of all of them!
 
Would you rather that your task takes 1 hour on a system that's barely loud enough to notice, or 4 hours on a system that you can't hear?

I prefer to complete the job in half an hour on a small render farm that cost half the money of a 24-core workstation, and still keep my workplace quiet.
 
Only just a bit? Efficiency, performance per watt, dBA. I suspected all along that this design would be a game changer from a thermodynamics point of view, and I'm even more convinced after having one here.

Many ultra-top-end power users like yourself and Tutor have outgrown any Mac Pro Apple could possibly have released, but I reckon within 5 years you two will be considering motherboard, CPU and GPU options with a market knock-off of this thermal core at their heart -- probably expanded to 6, 9, a dozen or more slots around the core to host your cards. If you scale up the kind of savings this design permits to one of your top-end rigs, or right past you to the large render farms and even supercomputers, the market will no doubt head in this direction.

And this little tube will be the daddy of all of them!

I suspect that the space efficiency of rectangular servers will continue to reign in the data center. The design of cold-aisle containment systems, like the one in the attached thermal-flow image for an 800 kW server farm, isn't suddenly going to convert to tubes.


I also doubt that tubes will be a factor in the move towards containerized data centers.


container-sgi.gif
____

I prefer to complete the job in half an hour on a render farm that cost half the money of a 24-core workstation, and still keep my workplace quiet.

If you have an embarrassingly parallel task like video rendering that can be farmed out, that's a good solution.
 

Attachments

  • thermals.png (323.1 KB)
Only just a bit? Efficiency, performance per watt, dBA. I suspected all along that this design would be a game changer from a thermodynamics point of view, and I'm even more convinced after having one here.

Many ultra-top-end power users like yourself and Tutor have outgrown any Mac Pro Apple could possibly have released, but I reckon within 5 years you two will be considering motherboard, CPU and GPU options with a market knock-off of this thermal core at their heart -- probably expanded to 6, 9, a dozen or more slots around the core to host your cards. If you scale up the kind of savings this design permits to one of your top-end rigs, or right past you to the large render farms and even supercomputers, the market will no doubt head in this direction.

And this little tube will be the daddy of all of them!

The thermal core is not some kind of "game changer," especially in terms of thermodynamics. There are important reasons why you wouldn't want it in a multi-GPU rig like the one Aiden describes:
- Much more difficult to replace a part in the event of failure (this problem isn't going anywhere; you need direct contact between the GPU/CPU and the heat sink)
- Extremely limited upgrade options (due to the proprietary design of the cards)
- Extremely limited replacement options (due to the proprietary design)
- No SLI or CrossFire (depending on your proprietary cabling)

Moreover, this is not any more efficient in terms of actual cooling, just in terms of space. All the "thermal core" (can we just call it a "shared heat sink"?) does is let the components share the heat sink's excess thermal capacity. When all three of the main components in the nMP are at maximum load, we've already seen what happens: the CPU temperature ramps up to 90°C and throttles, which is totally unacceptable -- it can't even handle 450 watts!

The design's fine for this under-clocked, heavily binned/skimmed hardware. However, throw in something like what Aiden is talking about -- four Quadros and two CPUs -- and you're talking about a fan/heat-sink combination five times as big and twice as loud. It'll look like a shop-vac.

IRoQgLM.jpg

Introducing: The HP-TUBE - 4 NVidia Quadros, 24 Cores, totally pointless

Sure, NVidia will make a more efficient chip some day. Then they'll clock it up some more and take up the slack, not downclock it like Apple's done so it can fit in a smaller box.

The only thing that impresses me about the nMP is the binning that AMD has done with the chips on the D700. Instead of up-clocking a more efficiently binned chip, they down-clocked it. Even though they're pretty darn under-clocked, the low wattage is very impressive, but this is no miracle -- we'd probably be seeing this all the time if people cared that much about wattage vs. price point. Wait a minute, we do see this all the time -- these chips are used in these machines called laptops.
 
There are important reasons why you wouldn't want it in a multi-GPU rig like the one Aiden describes.

Right now I'm getting quotes on a bunch of ProLiant SL270S servers:
c03230081.png

It's a 4U (17.8 cm high -- that's 7" for the colonials) half-width rackmount server (two of them fit in one 4U rack slot) with up to:
  • 24 cores with dual E5-26xx/E5-26xx-v2 processors
  • 256 GiB ECC RAM (with 16 GiB DIMMs, 32 GiB support coming)
  • two GbE ports
  • two 10 GbE ports (optional)
  • low profile PCIe slot for FC or 40 Gbps Infiniband
  • eight 6 GiB Nvidia Tesla K20X CUDA GPGPUs or Intel Xeon Phi co-processors
  • eight hot swap SAS/SATA/SSD disk slots
  • dual 1500 watt power supplies

So, in one 17.8 cm box I get 48 cores of Xeon CPUs, 512 GiB of RAM (for the moment), 44 Gbps of Ethernet, 43K CUDA cores with 96 GiB of ECC VRAM, and anything from SSDs to 19.2 TB of SAS storage. (A single 42U rack would be 480 cores, 5120 GiB RAM, 440 Gbps Ethernet, 430K CUDA cores with 960 GiB ECC VRAM... and 30 kW.)
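For anyone who wants to check the arithmetic, here's a rough back-of-envelope sketch of how those per-box and per-rack totals fall out (the per-node figures are just the spec-list numbers above, and I'm treating the dual 1500 watt supplies as a 1500 watt budget per node):

```python
# Back-of-envelope check of the per-box and per-rack totals quoted above.
cores_per_node    = 24                  # dual 12-core E5-26xx v2
ram_per_node_gib  = 256
eth_per_node_gbps = 2 * 1 + 2 * 10      # 2x GbE + 2x 10 GbE
gpus_per_node     = 8                   # Tesla K20X
cuda_per_gpu      = 2688
vram_per_gpu_gib  = 6
watts_per_node    = 1500                # assuming the dual supplies are redundant

nodes_per_4u   = 2                      # half-width: two nodes per 4U slot
slots_per_rack = 42 // 4                # ten 4U slots in a 42U rack

per_4u = {
    "cores":      cores_per_node * nodes_per_4u,                    # 48
    "ram_gib":    ram_per_node_gib * nodes_per_4u,                  # 512
    "eth_gbps":   eth_per_node_gbps * nodes_per_4u,                 # 44
    "cuda_cores": gpus_per_node * cuda_per_gpu * nodes_per_4u,      # 43008
    "vram_gib":   gpus_per_node * vram_per_gpu_gib * nodes_per_4u,  # 96
    "watts":      watts_per_node * nodes_per_4u,                    # 3000
}
per_rack = {k: v * slots_per_rack for k, v in per_4u.items()}
print(per_4u)
print(per_rack)  # 480 cores, 5120 GiB, 440 Gbps, 430080 CUDA cores, 960 GiB VRAM, 30 kW
```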

How many tubes would I need to match that? (Can't be answered - the tube has 0 CUDA cores and non-ECC VRAM.)

But it's not as quiet as the new Mini Pro. ;)
 
Man! I've been on here less than a month and already have people on my ignore list? :( Too much negativity.
 
Man! I've been on here less than a month and already have people on my ignore list? :( Too much negativity.

That's too bad. You can learn more from people with differing viewpoints than from people with whom you always agree. The ostrich with its head in the sand doesn't learn anything.

I've been here 13 years, and my ignore list has always been empty.
 
I didn't know we had an ignore list. I'm gonna ignore Mr. Downer right away. ;)

Beware...your IP will be mysteriously blocked too. ;)

----------

That's too bad. You can learn more from people with differing viewpoints than from people with whom you always agree. The ostrich with its head in the sand doesn't learn anything.

I've been here 15 years, and my ignore list has always been empty.

Actually...it really isn't "too bad." I'm not looking for the opinion that you're giving...just like the missionaries and Jehovah's Witnesses who come knocking on my door. I didn't ask them to stop by and I don't go looking for them. I'm in this thread because it's about the capabilities of the nMP, and VirtualRain has provided VERY good information on that. If I wanted to learn how I could build another "more bigger" and "most fastest" computer than the "tube," then I would follow you around. You clearly have an opinion, and you're obviously a very intelligent guy, but maybe everyone isn't interested in the negative things you have to say about their new purchases? Maybe?

And, like you...I've been on other forums for many years and have never had anyone on my ignore list. You were the first.;)
 
Beware...your IP will be mysteriously blocked too....

That's fine - I'm OK.

One of the other posters seems to have touched a new Mini Pro for a couple of hours, and had a religious experience related to its noise output.

A couple of hours.

I've had a couple of Dell Xeon workstations in my office for a year - and I can't hear them over the gentle "whoosh" of the air conditioning.

It's all relative....
 
I suspect that the space efficiency of rectangular servers will continue to reign in the data center. The design of cold-aisle containment systems, like the one in the attached thermal-flow image for an 800 kW server farm, isn't suddenly going to convert to tubes.


I also doubt that tubes will be a factor in the move towards containerized data centers.

You'd better hope VIKI isn't reading this.

iRobot_Viki.jpg

(I, Robot)

----------

Man! I've been on here less than a month and already have people on my ignore list? :( Too much negativity.

I didn't know we had an ignore list. I'm gonna ignore Mr. Downer right away. ;)

aww.. that's no fun..
:)

while there are times when i might feel "jeez.. i wish everybody would just agree with me on this"..
but then what's the point in discussing anything?
(unless, obviously, you're just looking for info and not necessarily discussing.. but in those cases, it's just a matter of weeding through the interwebs)
 
This has been the best thread on the nMP that I have seen. I hope it gets back to tech and not to flamin'.

My primary use for the nMP that I have on order (6c/16GB/1TB) is LR5 and I have a couple of questions.

1) Is 16GB ram enough, if I don't want to run other apps at the same time as LR?

2) I see that Promise2 drive arrays can deliver 1,300MB/sec over TB2, which is faster than the PCIe SSD in the nMP. Would it be better to go with the 256GB SSD and a large Promise2 box, or is it still a good idea to go 1TB internal SSD?

3) For LR, is a 32GB iMac a better proposition?

I would really love to see some LR speed reviews. Can anyone share any links?

Thanks guys.
 
This has been the best thread on the nMP that I have seen. I hope it gets back to tech and not to flamin'.

Yeah... Some of these guys here have strayed somewhat off topic... Although at least the continued discussion (even if it's off on a tangent) keeps the thread alive and on the first page.

My primary use for the nMP that I have on order (6c/16GB/1TB) is LR5 and I have a couple of questions.

1) Is 16GB ram enough, if I don't want to run other apps at the same time as LR?

2) I see that Promise2 drive arrays can deliver 1,300MB/sec over TB2, which is faster than the PCIe SSD in the nMP. Would it be better to go with the 256GB SSD and a large Promise2 box, or is it still a good idea to go 1TB internal SSD?

3) For LR, is a 32GB iMac a better proposition?

I would really love to see some LR speed reviews. Can anyone share any links?

Thanks guys.

1. If LR works anything like Aperture it will eat up all available RAM as a disk cache for your photo library. What you actually "need" for productive editing is probably in the neighbourhood of 8GB or less, so 16GB should be plenty. I got 32GB more for the future than immediate needs.

2. I would max out the internal SSD... If you're anything like me, 1TB is enough for OS/Apps and several months or more of photos. And the 1TB reads at 1260MB/s... It's super fast. The Promise is great for supplemental storage (e.g. archives, backups or media libraries). You can load it up with SSDs as well, but I'd start first with the internal. You may not need any more than 1TB. And Apple's prices are not unreasonable for the SSD options (especially given the performance).

3. There's certainly some discussion going on around that. I think the performance difference for photo editing between a loaded iMac and an entry-level nMP would be unnoticeable. The value prop of the iMac is really the value of the included display. The opposite, of course, is true for the nMP, which offers a flexible choice of displays, now including 4K displays -- something I think would benefit photography work more than anything else.

As more people get the nMP I'm sure we'll see LR performance tested on these things. I just don't know it well enough to properly evaluate its performance.
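If it helps put the storage question in perspective, here's a rough sense of what those throughput numbers mean for moving a big library around (the 500 GB library size and the HDD rate are just illustrative assumptions):

```python
# Rough feel for what the quoted throughput numbers mean in practice.
# Library size and the HDD rate are illustrative assumptions, not measurements.
library_gb = 500
rates_mb_per_s = {
    "internal PCIe SSD (~1260 MB/s)": 1260,
    "Promise over TB2 (~1300 MB/s)":  1300,
    "single 7200 rpm HDD (~150 MB/s)": 150,
}
for name, rate in rates_mb_per_s.items():
    minutes = library_gb * 1000 / rate / 60
    print(f"{name}: full read of {library_gb} GB in about {minutes:.0f} min")
# ~6-7 min for either fast option vs ~56 min for the lone HDD, so for bulk
# transfers the two fast options are effectively a wash.
```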
 
When all three of the main components in the nMP are at maximum load, we've already seen what happens: the CPU temperature ramps up to 90°C and throttles, which is totally unacceptable

Read carefully:
(Source: Anandtech; no offense, but IMO a far better source than your post)

"My next task was to see what actually happens in this worst case scenario. If you’re running all of the parts at full tilt, are any of them going to throttle? I have to work pretty hard to get the fan to spin up under OS X, but in Windows it’s a lot easier since I can just toss a single multi-GPU workload at the problem.

I started out by running LuxMark, an OpenCL workload, on both GPUs as well as a multithreaded 7-Zip benchmark on all of the CPU cores. I monitored both CPU and GPU frequencies. The result was no throttling across the board:



Getting an accurate reading on GPU frequencies from Tahiti based GPUs ends up being harder than I expected, but I saw what Ryan reminded me is typical behavior where the GPUs alternate between their 650MHz base clock and 850MHz max turbo. We don’t have good tools to actually measure their behavior in between unfortunately.

The same was true for the CPU. Even with all 12 cores taxed heavily, I never saw any drops below the CPU’s 2.7GHz base clock.

Next I tried a heavier workload on the CPU: a H.264 video encode. Here I just ran the x264 5.01 benchmark in parallel with the LuxMark workload. Once again, I saw no drop in CPU or GPU clocks although I believe I was approaching the limits of where that would hold true. The system was pulling an average of 410W at that point, with peak power draw at 429W.



If you’re wondering, there was little to no impact on the x264 benchmark from having LuxMark run in the background. The first rendering pass took about a 3% hit, likely due to the CPU not being able to turbo as high/at all, but the second heavily threaded pass was on par with my standalone run without LuxMark in the background. LuxMark on the other hand saw around a 14% reduction in performance, from 2040K samples per second down to 1750K when run in parallel with the x264 test. We’re still talking about two extremely compute intensive tasks, the fact that I can run both with little performance reduction is an example of the sort of performance scaling that’s possible if you leverage all of the compute in the Mac Pro.

So far I wasn’t surprised by the platform’s behavior. The Mac Pro’s thermal core and fan was enough to handle a real world workload without throttling".

The only way to make the CPU throttle is to run a synthetic benchmark and force the computer to slow down, but this is not a real-life scenario.
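For what it's worth, the LuxMark figure in the quote does work out to the ~14% Anand mentions:

```python
# Re-checking the concurrency penalty quoted from the Anandtech piece.
luxmark_alone     = 2040e3   # samples/s, LuxMark by itself
luxmark_with_x264 = 1750e3   # samples/s, LuxMark while x264 encodes on the CPU
drop = (luxmark_alone - luxmark_with_x264) / luxmark_alone
print(f"LuxMark penalty when sharing the machine with x264: {drop:.1%}")  # ~14.2%
```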
 
That's fine - I'm OK.

One of the other posters seems to have touched a new Mini Pro for a couple of hours, and had a religious experience related to its noise output.

A couple of hours.

I've had a couple of Dell Xeon workstations in my office for a year - and I can't hear them over the gentle "whoosh" of the air conditioning.

It's all relative....

Religious? That's a swear word to those with scientific reasoning, lol. I believe solely in proof and experiment, and I personally know a director of a company outside the computer industry who is buying the can to take apart and study, adapting the design to fit current rectangular thinking with meshed shelving. He has no interest in computers whatsoever, never mind shiny black cans; he only uses them to display schematics and read emails. His interests are solely thermal efficiency and having just a single moving part -- the component most likely to break down in the design -- rather than two, three or more of them.

The 'religious' ones, I would say, are those who, despite seeing or hearing proof contrary to their beliefs or thinking, prefer instead to deride or insult others, regardless of whether experiment shows there's a benefit or not. We have too many of these flat-earthers in society today as it is :D
 
So, in one 17.8 cm box I get 48 cores of Xeon CPUs, 512 GiB of RAM (for the moment), 44 Gbps of Ethernet, 43K CUDA cores with 96 GiB of ECC VRAM, and anything from SSDs to 19.2 TB of SAS storage. (A single 42U rack would be 480 cores, 5120 GiB RAM, 440 Gbps Ethernet, 430K CUDA cores with 960 GiB ECC VRAM... and 30 kW.)

How many tubes would I need to match that? (Can't be answered - the tube has 0 CUDA cores and non-ECC VRAM.)

But it's not as quiet as the new Mini Pro. ;)

That's a nice machine, but you are comparing a totally different class of product with a totally different price range, and likely a very different use-case scenario (I'm sure nobody will run that next to his desk). Also, why don't you tell us the price? Let's see how many will sell their Mac Pro and buy your server :)
 
Read carefully:
(Source: Anandtech; no offense, but IMO a far better source than your post)

Okay, you can ignore what my post was actually all about: that the new Mac Pro's "thermal core" can't handle more than 450 watts and that it doesn't cool anything more efficiently, it just saves space. I was also pointing out that this is not a great option for workstations which are built for the latest performance, such as AidenShaw's config (which I mentioned 3 times in the post) -- a 24-core, 4-Quadro beast.

And yes, it does throttle, as your carefully-chosen quotations even point out (at the verrry bottom):

Anandtech said:
Average power while running this workload was 437W, peaking at 463W before CPU throttling kicked in. If you plot out a graph of power vs. time you can see the CPU throttling kick in during the workload.

Yes, it was not a real-life scenario in terms of the actual programs run, but 450+ watts is a real-life scenario for other workstations.

I was demonstrating that the "thermal core" isn't a magical unicorn-poop-encrusted component, just that it's cooling a lower-wattage batch of components, and that it's impractical for configurations much more in line with current workstations, such as >2 GPUs and >1 CPU, which have not been under-clocked as the nMP has. The lower wattage wasn't Apple's design; it's just the result of extreme binning and under-clocking a 2-year-old video card. They also dropped PCIe slots (75 watts each) and internal drive bays.

People running workstations often value performance (shocker, I know); that's why they don't down-clock their equipment and don't settle for last generation's core count. If they wanted a quieter device, they could do that simply by turning down the clock and reducing the processor count -- not surprisingly, this is exactly what Apple has done.

Two years ago, these two video cards would have been using >500 watts; now they use <300. That has nothing to do with Apple or the thermal core.
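To put rough numbers on why a four-Quadro, dual-CPU box is a different thermal problem -- every wattage below is my own illustrative assumption, not a measured figure:

```python
# Illustrative power budget for the kind of workstation Aiden describes,
# versus the ~450 W where the nMP's shared heat sink was reported to throttle.
# All component wattages are rough assumptions for the sake of the comparison.
nmp_observed_ceiling_w = 450

cpu_tdp_w = 130    # per high-end Xeon E5 (typical TDP)
quadro_w  = 225    # per double-width Quadro/Tesla-class card (typical board power)
rest_w    = 100    # RAM, drives, fans, board, PSU losses (rough guess)

big_rig_w = 2 * cpu_tdp_w + 4 * quadro_w + rest_w
print(f"Hypothetical 2-CPU / 4-GPU rig: ~{big_rig_w} W")                  # ~1260 W
print(f"That's ~{big_rig_w / nmp_observed_ceiling_w:.1f}x the load the "
      f"thermal core was seen handling before throttling")                # ~2.8x
```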

----------

That's a nice machine, but you are comparing a totally different class of product with a totally different price range, and likely a very different use-case scenario (I'm sure nobody will run that next to his desk). Also, why don't you tell us the price? Let's see how many will sell their Mac Pro and buy your server :)

I think he's comparing space efficiency.
 
maybe everyone isn't interested in the negative things you have to say about their new purchases? Maybe?

That's fine - I'm OK.

We are gathered here today to sing praises for Apple's new Mac Pro. Nobody wants to hear anything different. This is not a place for discussion, just an echo-chamber for Apple computer.

People want vindication for their decision to buy this thing, not to be confronted by the realities of competing products (though they make blind jabs at them, like straw men in the dark). I think a lot of people bought this having assured themselves that it was simply the fastest thing ever with the most workstationiness.

It is the mark of an educated mind to be able to entertain a thought without accepting it. - Aristotle
 
I think that is unfair to say...

I for one am really interested in hearing what performance I can expect from the nMP, and VirtualRain's review was very informative. The latter part of the discussion between you … is just, well, irrelevant! How well the nMP does or does not stack up against something entirely different is just like comparing apples and oranges...

That people post here looking for test data is due to the fact that very few have actually taken delivery of their nMPs and had the time to run a battery of tests.

With such a fast internal PCIe SSD, the TB2 ports are not seen by Apple as being used for primary storage. Most users in this segment have NASes, or at least external housings with terabytes of HDDs.

Let's get the discussion back on track, and keep the stupid comments elsewhere.
 
That's a nice machine, but you are comparing a totally different class of product with a totally different price range, and likely a very different kind of user case scenario(I'm sure nobody will run that nearby his desk). Also, why don't you tell us the price, let's see how many will sell their macpro and buy your server:)

I was replying to a comment that suggested that tubes would take over the supercomputing market, so I offered a reality check on where today's GPGPU supercomputing market is. Price isn't relevant because, as you say, it's a different market - but an eight-core Apple is cheaper than a single 2688-core Tesla K20X GPGPU.

</tangent> - let's get back to VirtualRain's excellent review.
 
And if you have even *more* money, you can buy a supercomputer with hundreds of times the power, but that's not the point of these forums.

Of course. I was just answering the question about whether computers with more than twelve cores are available.

The nMP is 45dBA with a reasonable FCP X load according to Anandtech; that's about what a lot of really powerful rigs do at load.

As he says in the article, that's a crudely measured number taken in a room that was louder than the MP's base level before he even turned it on. So I wouldn't use that number for any real comparison; it's likely way on the high side. If other workstations are at 45 dBA, that is louder than the MP.

The thing I'm wondering now is, does Logic Pro X make use of the dual GPUs?

Not right now, and many have speculated that if it ever does, the benefit could be minimal. Audio math just isn't well suited to GPU hardware.

you can see the CPU throttling kick in during the workload

Did you miss the part where he said he was only able to get it to throttle with a power virus, which he had to use because he couldn't get it to throttle with any real-world workload (or even benchmark)? Sounds like throttling isn't a real-world concern, although maybe someone will find a real-world workload that gets it to throttle.
 
Not right now, and many have speculated that if it ever does, the benefit could be minimal. Audio math just isn't well suited to GPU hardware.

That's interesting; it's the first time I've heard such a thing, as I'd expect scientific and engineering simulations to be more complicated than audio math, and that software benefits from OpenGL and/or OpenCL cards.
 
1) Is 16GB ram enough, if I don't want to run other apps at the same time as LR?
Personally I think it's plenty. I've never managed to get it to use up my 8GB.

2) I see that Promise2 drive arrays can deliver 1,300MB/sec over TB2, which is faster than the PCIe SSD in the nMP. Would it be better to go with the 256GB SSD and a large Promise2 box, or is it still a good idea to go 1TB internal SSD?
For Lightroom or in general? Unfortunately Lightroom doesn't seem to benefit much from fast disk or even SSD.

3) For LR, is a 32GB iMac a better proposition?
An iMac will probably perform just as well for Lightroom, at least a 4-core one. Lightroom apparently doesn't use the GPU at all. It also doesn't seem to make good use of extra processors and hyper-threading, so I'm not sure how much extra benefit a six-core machine would offer. I'm hopeful, but at the same time not terribly optimistic.

I would really love to see some LR speed reviews. Can anyone share any links?
Me too!!
 
It's not an issue of how complicated it is; it's that audio is a real-time application. New audio is tracked along with playback of existing tracks, and instruments are triggered by a MIDI keyboard with either sample playback or synthesis that has to be fast enough to feel like real time, without audible delay. Another factor is that a GPU is massively parallel, and audio is hard to break up like that, much less combine all the tracks back together exactly in sync. A grossly oversimplified explanation, but hopefully it gives a general idea of the issues.
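To put a number on "without audible delay" (the sample rate and buffer size here are just typical assumptions):

```python
# The real-time budget described above: everything for one audio buffer
# (record, process, mix, play) has to finish before the buffer is due.
sample_rate = 48_000   # Hz -- a common session rate (assumption)
buffer_size = 128      # samples -- a typical low-latency tracking buffer (assumption)

budget_ms = buffer_size / sample_rate * 1000
print(f"Per-buffer deadline: {budget_ms:.2f} ms")   # ~2.67 ms

# A GPU round trip (copy to VRAM, launch kernel, copy back) can easily burn a
# millisecond or more of that budget before any DSP happens, which is part of
# why audio work tends to stay on the CPU.
```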
 