Well, running 45-50 VMs on a workstation is the wrong approach anyway. There should be a linux server installed running ESXi. OK, preferably in a separate room :p (it's going to need a temperature-controlled room anyway).

I use Macs because of their availability, my familiarity with them, and, as it turns out, the silent processing.

Heat dissipation has not been an issue so far.

And, so far, I can scale up easily.
 
Hmm... :confused:

My point was the NOISE. Or the lack of it. An ESXi server rack makes noise like a jet engine.

In the workstation market there are a few options: HP, Dell, Lenovo... but when there's a Xeon E5 + dual GPUs inside and the CPU is under load... how many of them stay silent?

And to extend this: if you could chain three nMPs together, you'd get three times the 450 W unit's power, but they'd still be quiet. If you bought an HP workstation with dual Xeons and dual GPUs, that's not a quiet machine under load any more.
 
You mean there should be a multi-socket x64 server running ESXi.

ESXi runs on the bare metal - there is no operating system underneath it.

In fact, many server boards have an SD card (or μSD) slot on the mobo. ESXi can run off an SD card - completely diskless.

Yes, my bad, a typo (I should have known, as I've installed it myself plenty of times). Although, under the hood, it's still Linux-like :)

There's no question that the nMP is silent. It's just that virtualization is, by default, a server's job (just as you don't need a machine with dual workstation-class GPUs and a super-fast small internal SSD for it). Using an nMP for that, you just end up paying a premium price for Apple's silence. Besides, running 45-50 VMs, I think it is safe to assume they're running on plenty of spinning disks (unless we're talking about an insanely expensive setup), and those disks will surely make some noise of their own.
 
Then how do you understand it? If by loading up the processing units with work, then I think that's my definition also. It's all about context and the bigger picture, my friend :)
Like:
"I'm on my way to work right now... Today, I need to generate an invoice, edit some photos, and engineer a display unit for so-and-so... This is the work I need to use my computer for."

You know... work.
 

And if your work makes the machine consume 450 watts, you'll get 450 watts of heat in return; that's physics.
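As a back-of-the-envelope sketch of that point (the 450 W draw comes from the thread; the 8-hour workday is just an illustrative assumption):

```python
# A machine drawing 450 W under sustained load dissipates essentially
# all of that electrical energy as heat into the room.
power_w = 450                          # sustained draw under load, in watts
hours = 8                              # hypothetical length of a workday
energy_kwh = power_w * hours / 1000    # electrical energy consumed, in kWh
heat_btu = energy_kwh * 3412.14        # the same energy expressed in BTU

print(f"{energy_kwh:.1f} kWh of heat ({heat_btu:.0f} BTU)")  # 3.6 kWh of heat (12284 BTU)
```

That is roughly what a small space heater would put out over the same day, which is why cooling (and fan noise) scales with wattage.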
 
And I'm not arguing against that.

What was said earlier, what I'm saying is a mistake, is the implication that there's a direct relation between wattage and how much work I can get done.

As if I could use a 1000 W computer and accomplish twice as much work in a day as with a 500 W computer.
That's the fallacy: there's almost no relation between wattage and the amount of work I can do.
Maybe in extreme examples, but not in real life.
 
If you spend a significant portion of the day waiting for the computer to produce results, there will be a strong correlation between watts and work. More watts, less waiting, more work.

And cloud computing just moves the watts to off-premise datacenters.
 
I think it's a mistake to correlate watts and heat, then make the jump from watts (or heat) to the amount of work accomplished.

There are far too many factors at play to be able to say "my computer uses x amount of watts, therefore I can accomplish y amount of work".

Ah... I think MVC is talking about "work done", a physics term related to energy, not "work".

Even though, to my understanding, not 100% of the work done transfers to heat, that should be a good enough assumption for calculating heat loss on a GPU with current technology.
 

What if those processing units are loaded with video games or hours of browsing and commenting on Mac forums? Is that "work"? ;)
 
There are people who make a living playing games ;).

So to some degree that can also be considered work. What I was thinking of was loading it up with... video editing, because that is what I do. I'm sure everyone who comes here has their own vision of digital "work". ;)
 

This is valuable to me.

I need and want stealthy, silent processing, so that I can do human work.

With SSDs, there is none of the disk thrashing of yesteryear.

When I solve the minutiae of technology annoyances, scaling up will be more pleasant.

As a semi-pro computer user, being good enough for production translates into satisfied clients.

I am learning the technology angles on my business process from this discussion.

Your shared experience and knowledge lets me work and make technology choices given the current constraints.
This "linking Mac Pros for CPU" rumor has been fun, but let's test it with math.

What? There's math?

When you link together a bunch of CPUs remotely, all their RAM has to be synced. Either the remote CPU needs to have its own bank of RAM it keeps synchronized, or it needs to pull data from the main bank of RAM on the master machine. The latency on Thunderbolt is already way, way too high for this, but let's embrace the crazy and pretend it's not.

So knowing this, wouldn't software (or magic) regulate or adjust for this with waiting packets of info?

The throughput on DDR4 memory is 19200 megabytes a second.
The throughput on Thunderbolt 3 is optimistically 5000 megabytes a second.

Thunderbolt 3 is not fast enough to feed a CPU with data. Any rumor from the "dark net" that Thunderbolt 3 is going to be used to link CPUs from two different machines together is a bunch of nonsense. You can't even add a standard CPU over PCI Express, so it's definitely crazy to think you could add one over Thunderbolt.
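The bandwidth gap described above can be put in numbers with the thread's own figures (both in MB/s; real-world overhead would make the gap wider still):

```python
# Rough bandwidth comparison using the figures quoted above.
ddr4_mb_s = 19200   # DDR4 memory throughput, megabytes per second
tb3_mb_s = 5000     # Thunderbolt 3's ~40 Gbit/s, taken optimistically

ratio = ddr4_mb_s / tb3_mb_s
print(f"DDR4 delivers ~{ratio:.1f}x the throughput of TB3")  # ~3.8x
```

And this is before latency, which is the bigger problem for keeping a remote CPU's view of RAM coherent.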

TB3 is a wired connection, making it more reliable than WiFi, and perhaps a future TBx could increase the speed?

The only thing you can do is exactly what render farms do: have a bunch of machines with mirrored drives that all pull different portions of the files and do different portions of the work.
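A hypothetical sketch of that render-farm division of labor (the function name and frame ranges are illustrative, not any real farm's API): each machine already has the project files mirrored locally, so the only thing the coordinator sends is which slice of frames to render - no shared RAM required.

```python
def split_frames(first, last, machines):
    """Divide an inclusive frame range as evenly as possible across machines."""
    total = last - first + 1
    base, extra = divmod(total, machines)   # each machine gets base frames; 'extra' get one more
    jobs, start = [], first
    for i in range(machines):
        count = base + (1 if i < extra else 0)
        jobs.append((start, start + count - 1))  # inclusive (first, last) for this machine
        start += count
    return jobs

# Three machines splitting frames 1-100:
print(split_frames(1, 100, 3))  # [(1, 34), (35, 67), (68, 100)]
```

This works precisely because each frame is independent; chaining CPUs over Thunderbolt would be needed only for workloads that can't be partitioned this way.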

Ultimately, this may be the solution, or an alternative, or it may co-exist with an in-house computer farm.

Money costs aside.
 
What goMac was describing is exactly the same feature as Unified Memory in the HSA Foundation spec, and what Nvidia did with Unified Memory in GP100/CUDA 8.

However, what he said about syncing the RAM is also true, and there are still serious problems to overcome.

I am not saying that I know everything. Most of this was told to me in the form of general ideas, and I have extrapolated a great deal from those concepts.
 

GPUs are much easier to chain over Thunderbolt because GPU workloads are designed around not streaming a lot of data. The cache on a CPU is in the megabytes, while the on-board memory on a GPU is in the gigabytes. With a GPU you fill that memory with an initial load, and if you're streaming data, you try to keep the amount of data you stream low.

It's why game consoles are moving to integrated GPUs: integrated GPUs are much better at streaming data, even though the processing power is lower. And it's why GPGPU is still mostly sticking with discrete GPUs: most of the algorithms have an initial data set they can load into the GPU before they start processing. So less streaming, more GFLOPS, is what they need.
 
lol - pardon my lame attempt at humor. But now you're getting all philosophical on me, saying my computer has a point of view and is possibly sentient!

All your computer sees is electrical signals and charge... It can't differentiate between a game and a productivity application. To it, it's all work, all the time.
 
One can scale down the HP workstation to match the capability of the nMP. One cannot scale up the nMP to match the capabilities of the HP workstation. If you want a quiet system, the HP can be configured to be just as capable as the nMP and remain quiet. Or one can buy a lower-end HP workstation model.
 

Yeah, I got it the first time you said it. It's just that you were trying to anthropomorphize my computer, so I was trying to have some fun and make bad dad jokes. Apparently the humor was lost. Going back to the corner....

I have Siri, Cortana, Alexa, Google.

Collectively, maybe they can be sentient.

but can they dream???
 

That would be VR and AR.

I'm told that my nMP is a laggard in this area.

In a business context, fast forward to the future, I want to remotely have business interactions individually and in group settings.

Completely replacing the current video skype or FaceTime experience of today.

Staying in the Mac ecosystem, I'd prefer a newer MP to have the power to support this coming shift in communications.

I read that Apple is ignoring the VR/AR space. But is it?

Is it possible that Apple could step in, with all their apples, with a backbone MP that plays in this space, and allow Microsoft, FB, and Google to provide the front end or headgear?
 
Apple is working on its own VR system... but the only clues we have are the purchases, hires, and patent applications Apple has made. And they have a lot of patents pending in this regard. There are VR glasses made of a frame with an iPhone in it. There are separate VR glasses with their own CPU. There are even patents for a VR/AR user interface.

The simple glasses solution is for pure media consumption of pre-recorded material. Serious VR needs a beefy system, and that could be a VR/HomeKit hub: a Zen/Polaris system, with a better model maybe having Vega/GTX 1080-class graphics (if released within a year). The picture is transmitted to the glasses over the latest Bluetooth/WLAN protocol and combined with AR elements within the glasses. This is how I've read those patents.

But when... there we go again. Apple has been filing VR patent applications for years.

And before Apple releases any serious VR gadgets, they need a tool to develop content for it. There's a need for a Mac Pro in the future.
 
Power consumed is always dissipated as heat. It is exactly the same with petrol engines. The first and biggest challenge when VW engineers were designing the Bugatti Veyron was the enormous amount of heat generated by the engine. They had to find a way to dissipate over 800 kW of thermal power. Kilowatts!
No wonder VW has so many problems these days, they're measuring heat in kilowatts.

They should just go back to lying about their diesel engines.
 

Apple has an excellent way of mainstreaming convoluted technologies, or at least is in a great position to do so.

So, in the future, what machine is needed to support whoever wins out in the VR/AR space, to provide conversations with one or many in a way that closely simulates our current idea of a physical meeting?

Without looking like we're wearing scuba gear to do it?
 