For example, if you combine four Mac Pros into one big cluster, an iPad Pro can be the input device for it.

I'm convinced that this is my computing direction, influenced by the discussions here. Thank you.

This means that I can grow and learn as I go as a semi-pro.

I want to keep my technology investments & add more as my business grows.

The hardware reliability I experience is at an all-time high and my software is stable; now I want to scale upwards.

I probably represent many semi-pros out there with many devices.
 
I'm convinced that this is my computing direction, influenced by the discussions here. Thank you.

This means that I can grow and learn as I go as a semi-pro.

I want to keep my technology investments & add more as my business grows.

The hardware reliability I experience is at an all-time high and my software is stable; now I want to scale upwards.

I probably represent many semi-pros out there with many devices.
You would be better served by reading some official sources instead of speculation and musings from some anonymous guy on a forum...
 
I feel like I'm on the "Magical World of Tomorrow" ride at Disney.

Who has been doling out the unicorn tea and why didn't I get any?

I don't like being in agreement with MacVidCards, but here we are.
Blizzard Entertainment is using Swift and Metal for the new version of World of Warcraft.

I know they're using Metal but I haven't seen anything about Swift. I would be surprised if they were using Swift.
I'm not stating that some things are impossible some time in the future, I'm talking about what you can do today or tomorrow. (And not some distant "tomorrow", I'm typing this late Tuesday afternoon, and I'm talking about what you can do Wednesday morning.)

You can do it today using a Mac Pro as a big, expensive router. Technically you could do it on any computer with more than one Thunderbolt port by setting up IP bridging (which is the same thing an Ethernet router does). But the number of Thunderbolt ports limits how many machines can talk on a single "router." It works the same way an Ethernet router does, just without the embedded, mass-produced hardware.

You probably shouldn't do it, because long Thunderbolt cables are crazy expensive, but no one seems to be thinking about that...
 
Later, as you go deeper into Python, explore PyPy (a Python JIT compiler), and take your time learning NumPy / SciPy, which are very important for HPC, as are PyCUDA and PyOpenCL.

Python is very powerful for math analysis and for complex data manipulation and/or analysis (it rivals R). Despite being a 'scripting' language, there are ways to compile it into code that runs at nearly 95% the speed of the same algorithm in C++.
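For what it's worth, the "near-C speed" claims usually come down to moving the inner loop out of the interpreter and into a C-backed library. A minimal sketch of that idea with NumPy (no timing numbers claimed here, since they vary wildly by machine):

```python
import numpy as np

def dot_pure(a, b):
    """Pure-Python dot product: the interpreter executes every iteration."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_numpy(a, b):
    """Same computation, but the loop runs in compiled C/BLAS code."""
    return float(np.dot(a, b))

n = 100_000
a = np.arange(n, dtype=np.float64)
b = np.ones(n, dtype=np.float64)

# Same answer either way; the NumPy version is typically orders of
# magnitude faster because no Python bytecode runs per element.
assert dot_pure(a.tolist(), b.tolist()) == dot_numpy(a, b)
```

The general pattern: keep the orchestration in Python, but make sure anything that runs per-element lives in a compiled library.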
Python would struggle to run even 5% as fast as the same algorithm in C++ so I am genuinely curious about the methods you are describing :eek:
 
Python would struggle to run even 5% as fast as the same algorithm in C++ so I am genuinely curious about the methods you are describing :eek:

I was thinking the same thing, wondering whether on average Python is four or just three orders of magnitude slower than C++. Python is a scripting language and a user interface for libraries written in C, C++ and that F language I shall not mention.
 
You would be better served by reading some official sources instead of speculation and musings from some anonymous guy on a forum...

I take the thoughts expressed here as official opinions and the official sources as Apple, Microsoft, Google and many others.

I just live in their technology world, both present and future.

The far future is a different story.
 
Python would struggle to run even 5% as fast as the same algorithm in C++ so I am genuinely curious about the methods you are describing :eek:

I was thinking the same thing, wondering whether on average Python is four or just three orders of magnitude slower than C++. Python is a scripting language and a user interface for libraries written in C, C++ and that F language I shall not mention.

PyPy lets you run Python through a JIT compiler that produces efficient machine code; NumPy and SciPy, on the other hand, are math-centered extensions to Python optimized for heavy workloads.

Pypy http://pypy.org/features.html

about its performance http://speed.pypy.org/

Read this http://www.software.ac.uk/blog/2015...computing-perspective-supercomputing-2015?mpw

My personal approach, on an application I just developed to accelerate some data processing in my operation: I didn't use PyPy or Cython, just Python and PyOpenCL (which lets you launch a GPU kernel and interface its data with NumPy, while the kernels themselves are C functions you can code from Python).

There are many compilers for Python (PyPy, Numba, Cython, and Pyston, this last one very promising).
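To illustrate the JIT angle: the point of PyPy is that the same file runs unmodified under CPython and PyPy, and tight arithmetic loops like this toy Mandelbrot counter are where its JIT typically pays off (no specific speedup claimed; compare on your own machine):

```python
def mandel_iters(c_re, c_im, max_iter=256):
    """Count escape-time iterations for one Mandelbrot point.

    Loop-heavy, arithmetic-bound code like this is roughly where a JIT
    such as PyPy shines; run the same file with `python` and with
    `pypy` and time the difference yourself.
    """
    z_re = z_im = 0.0
    for i in range(max_iter):
        # z = z**2 + c, expanded to avoid complex-number overhead
        z_re, z_im = z_re * z_re - z_im * z_im + c_re, 2.0 * z_re * z_im + c_im
        if z_re * z_re + z_im * z_im > 4.0:
            return i
    return max_iter

if __name__ == "__main__":
    # crude checksum over a small grid; identical under CPython and PyPy
    total = sum(mandel_iters(x / 50.0 - 2.0, y / 50.0 - 1.0)
                for x in range(100) for y in range(100))
    print(total)
```

Numba and Cython attack the same loop differently (decorating or annotating the hot function), but the division of labor is the same: Python for structure, compiled code for the inner loop.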
 
I'm convinced that this is my computing direction, influenced by the discussions here. Thank you.

This means that I can grow and learn as I go as a semi-pro.

learn software.. don't worry about the computer.

you simply cannot buy yourself into (what i'd consider at least), the professional computing world.. you have to learn your way in..

the money/buying computers is the easy part.. incredibly easy part.

(in my recommendation)

Yes and no. When you connect two Macs through TB2 you have the option of Target Display Mode (non-Retina iMac), which lets you use the built-in display as a display for your Mac, or Target Disk Mode, which turns your Mac into an über HDD enclosure; the other option is to use the Thunderbolt cable as a 10/20 Gbps LAN connection without a switch, see http://www.macworld.com/article/2142073/connecting-two-macs-using-thunderbolt.html
Then it's like any network: you can share storage, printers, etc.

In the near future this feature should receive some improvements, such as letting you (if your target is a MacBook) use your MacBook Pro as a display/keyboard/touchpad/USB hub for your Mac Pro.

hmm.. maybe i'm missing something and if so, tell me the 'duh' part : )
..
but it's super easy to hook up two macs.
via wifi, with two authorized computers in range and turned on, they show up in each other's finder sidebar under 'shared'..

click on that computer and it mounts on the one you're using.. you browse it just as you do any other folder/image/etc..
open a file that's on the shared computer's drive and it opens on the current computer in the appropriate app without the file transferring.. play a movie file that's stored on one computer on the other's hardware without transferring the file.. drag&drop (or however) files from one to the other for data transfers.

hook up via thunderbolt instead of wifi for faster transfers if desired.

do it with 5 macs.. no difference.. have 5 desktops for example, opened on a single display, and do whatever you want with any of the files.. either using them or transferring them between the various drives.

what's the desired ability with regards to this that isn't currently happening that i'm not realizing?

(edit- oh, controlling the other computer through this one's keyboards and what not.. things like that?)
 
learn software.. don't worry about the computer.

It's been years since I coded, and I was sloppy then. Still, I got results.

I could have been a contender.

Although, I will be reading some software books this summer for ideas and insight.


what's the desired ability with regards to this that isn't currently happening that i'm not realizing?

Yes, I'm able to control Macs with software and access drives for data and databases using RDP and login software.

The real device management is full utilization of all the resources, adding processors and components as time, money and technology allow.

What would happen if we could incrementally add local capabilities and supplement with cloud exchanges?

The sum of all the parts used as redundancy and power computing.

In a matter of weeks, I shifted from considering a 12-core MP to adding several 4-core MPs. Whether or not Apple releases anything on the MP line no longer matters.

The nMP is good enough now.

I think it will be good enough years from now.

Someone will figure this shared resources concept out, if not Apple.

And if no one does, I will be disappointed.
 
It's been years since I coded, and I was sloppy then. Still, I got results.

I could have been a contender.

Although, I will be reading some software books this summer for ideas and insight.
oh.. i meant learn software as in learn how to use particular software to the best of your ability.
photoshop or excel or premier or autocad (or whatever)

knowing some code is good for particular usages but i (personally) wouldn't consider it a must have skill for a pro user.
(not counting devs etc.. obviously)


Yes, I'm able to control Macs with software and access drives for data and databases using RDP and login software.

The real device management is full utilization of all the resources, adding processors and components as time, money and technology allow.

What would happen if we could incrementally add local capabilities and supplement with cloud exchanges?

The sum of all the parts used as redundancy and power computing.
oh, back to that.
this really is already possible but it's generally controlled by applications.
if you have an application that's capable of using 100 cpu cores then, more likely than not, it's going to have networking options available.

if you're not using the type of software that will eat up this much cpu, then right, there isn't a generic OS based way to enable it anyway.. there wouldn't be much use for it.
(well, maybe.. you're not giving details on what you're wanting to accomplish so it's hard to tell if clustering computers would be beneficial to your work)
The nMP is good enough now.
heh, don't get tooo caught up in the things you read around here.

nmp falls somewhere in the top 95% of personal computers being used today.. especially if you're on mac.

people around here just arguing about the top 5% of modern computers.. and how mac pro sux because the boxx they lust over is in the top 96%.. nmp is only 95.. real pros can't use that. duh

(but now be prepared for counterarguments to this saying 'nmp doesn't even make 90th percentile :mad:' )
 
This "linking Mac Pros for CPU" rumor has been fun, but let's test it with math.

When you link together a bunch of CPUs remotely, all their RAM has to be synced. Either the remote CPU needs to have its own bank of RAM it's keeping synchronized, or it needs to pull data from the main bank of RAM on the master machine. The latency on Thunderbolt is already way, way too high for this, but let's embrace the crazy and pretend it's not.

The throughput of DDR4 memory (a single channel of DDR4-2400) is 19,200 megabytes a second.
The throughput of Thunderbolt 3 is optimistically 5,000 megabytes a second.

Thunderbolt 3 is not fast enough to feed a CPU with data. Any rumor from the "dark net" that Thunderbolt 3 is going to be used to link CPUs from two different machines together is a bunch of nonsense. You can't even add a standard CPU over PCI Express, so it's definitely crazy to think you could add one over Thunderbolt.
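Writing that back-of-the-envelope out (and note 19,200 MB/s is only one channel of DDR4-2400; a real workstation runs several channels, so the actual gap is wider):

```python
# Bandwidth sanity check for the "CPUs over Thunderbolt" idea.
ddr4_mb_s = 19_200   # one channel of DDR4-2400, in MB/s
tb3_mb_s = 5_000     # optimistic usable Thunderbolt 3 throughput, MB/s

ratio = ddr4_mb_s / tb3_mb_s
print(f"One DDR4 channel outruns Thunderbolt 3 by {ratio:.2f}x")
# ...and that ignores latency, which is the bigger killer, plus the
# extra memory channels a real machine has on top of this one.
```

So even in the most generous case, a remote CPU fed over TB3 sees roughly a quarter of a single channel's bandwidth.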

The only thing you can do is exactly what render farms do: have a bunch of machines with mirrored drives that are all pulling different portions of the files and doing different portions of the work.
 
(well, maybe.. you're not giving details on what you're wanting to accomplish so it's hard to tell if clustering computers would be beneficial to your work)

Thanks for the interest.

To be specific, I'm using VMware Fusion 8 to run 45 to 50 virtual machines on the nMP.

At 30 machines, the processor throughput shows that it's at max capacity.

I thought that a 12-core would resolve this, although beyond 32 machines, no other machines can connect online.

So I have multiple disconnects: processor speeds and going offline. I don't think that there is any correlation.

I don't know enough yet.

Therefore, the solution is to add more MP cylinders. Short term problem solved.

Longer term, keep adding as needed.
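The CPU side of that ceiling can be sanity-checked with simple oversubscription arithmetic. A sketch, where the 6-core machine and one-vCPU-per-VM figures are assumptions for illustration, not the poster's actual config:

```python
def vcpu_oversubscription(physical_cores, vms, vcpus_per_vm=1, smt=2):
    """Ratio of allocated vCPUs to hardware threads; >1 means oversubscribed."""
    threads = physical_cores * smt
    return (vms * vcpus_per_vm) / threads

# A hypothetical 6-core nMP (12 threads with Hyper-Threading) hosting
# 30 one-vCPU VMs:
print(vcpu_oversubscription(6, 30))   # 2.5 vCPUs per hardware thread
# A 12-core would halve the ratio, but the ~32-VM connectivity ceiling
# sounds like a separate (networking/NAT) limit, not a CPU one.
```

Which is consistent with the symptom: the CPU saturating around 30 VMs and the connection limit at 32 are probably two unrelated walls.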
 
Thanks for the interest.

To be specific, I'm using VMware Fusion 8 to run 45 to 50 virtual machines on the nMP.

At 30 machines, the processor throughput shows that it's at max capacity.

I thought that a 12-core would resolve this, although beyond 32 machines, no other machines can connect online.

So I have multiple disconnects: processor speeds and going offline. I don't think that there is any correlation.

I don't know enough yet.

Therefore, the solution is to add more MP cylinders. Short term problem solved.

Longer term, keep adding as needed.

Why do that on a nMP? VMware runs great on Linux. An entire Linux farm of cheaper computers running your VMs would cost about as much as a single nMP.
 
Why do that on a nMP? VMware runs great on Linux. An entire Linux farm of cheaper computers running your VMs would cost about as much as a single nMP.
If there's no second room with A/C available to hide the noisy competition, then it's either earplugs or a silent workstation. Maybe the competition has caught up, but in 2014 the nMP didn't have much competition in the silent workstation market.
 
If there's no second room with A/C available to hide the noisy competition, then it's either earplugs or a silent workstation. Maybe the competition has caught up, but in 2014 the nMP didn't have much competition in the silent workstation market.

Well, running 45-50 VMs on a workstation is the wrong approach anyway. There should be a Linux server installed running ESXi. OK, preferably in a separate room :p (it's going to need a cooled room anyway).
 
Someone forgot how things work.

450 watts of work done = 450 watts of heat generated

Only way to generate more heat is to get more work done. (Or use AMD products)

6,1 is silent due to getting little done. No magic unicorn hair involved. AC needs are directly related to work done. (Or use of AMD products)
 
Well, running 45-50 VMs on a workstation is the wrong approach anyway. There should be a Linux server installed running ESXi. OK, preferably in a separate room :p (it's going to need a cooled room anyway).
You mean there should be a multi-socket x64 server running ESXi.

ESXi runs on the bare metal - there is no operating system underneath it.

In fact, many server boards have a SD card (or μSD) on the mobo. ESXi can run off an SD card - completely diskless.
 
Someone forgot how things work.

450 watts of work done = 450 watts of heat generated

Only way to generate more heat is to get more work done. (Or use AMD products)
i think it's a mistake to correlate watts and heat, then make the jump from watts (or heat) to the amount of work accomplished.

there are far too many factors at play to be able to say 'my computer uses x amt of watts therefore i can accomplish y amt of work'.
 
i think it's a mistake to correlate watts and heat, then make the jump from watts (or heat) to the amount of work accomplished.

there are far too many factors at play to be able to say 'my computer uses x amt of watts therefore i can accomplish y amt of work'.
Power consumed is always dissipated as heat. It is exactly the same with petrol engines: the first and biggest challenge when VW engineers were designing the Bugatti Veyron was the enormous amount of heat generated by the engine. They had to find a way to dissipate over 800 kW of thermal power. Kilowatts!

The second factor is how much compute power per watt you get. If you have two GPUs at 129 W each, totaling 258 W, and you get 7 TFLOPs from them, you get 27 GFLOPs/watt of efficiency. That is not bad for a 28 nm process. Only the Fury X has higher efficiency, plus the Fury Nano and, I think, some non-OC'ed models of the Fury, and currently the GTX 1080 and GTX 1070.

You have to remember that efficiency is the amount of compute power you get from each watt consumed by the graphics chip. For example, the Titan X has a similar power envelope and brings 6 TFLOPs of compute power; that accounts for 24.5 GFLOPs/watt. Fury X: 275 W TDP, 8.6 TFLOPs of compute power, 31 GFLOPs/watt. Fury Nano: 180 W TDP and 7 TFLOPs of compute power, 38.9 GFLOPs/watt. GTX 1080: 8.2 TFLOPs of compute power with 180 W power consumption, 45.5 GFLOPs/watt. GTX 1070: 6.1 TFLOPs, 150 W, 40.7 GFLOPs/watt. Of course you can look at gaming benchmarks to delude yourself about how efficient GPUs are, and drive your mindshare based on that factor, but that has nothing to do with real efficiency. Even the Top500 list uses the amount of compute power you get from each watt consumed as its efficiency mark.
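Those per-watt numbers are one-line divisions; a quick script to reproduce them (the TFLOPs and TDP figures are the post's own, not independently verified, and "two 129 W GPUs" presumably meaning the nMP's dual FirePros is a guess):

```python
# GFLOPs per watt = TFLOPs * 1000 / TDP watts, using the figures above.
gpus = {
    "two 129 W GPUs": (7.0, 258),
    "Fury X":         (8.6, 275),
    "Fury Nano":      (7.0, 180),
    "GTX 1080":       (8.2, 180),
    "GTX 1070":       (6.1, 150),
}
for name, (tflops, watts) in gpus.items():
    print(f"{name}: {tflops * 1000 / watts:.1f} GFLOPs/W")
```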
 
Power consumed is always dissipated as heat. It is exactly the same with petrol engines: the first and biggest challenge when VW engineers were designing the Bugatti Veyron was the enormous amount of heat generated by the engine. They had to find a way to dissipate over 800 kW of thermal power. Kilowatts!

The second factor is how much compute power per watt you get. If you have two GPUs at 129 W each, totaling 258 W, and you get 7 TFLOPs from them, you get 27 GFLOPs/watt of efficiency. That is not bad for a 28 nm process. Only the Fury X has higher efficiency, plus the Fury Nano and, I think, some non-OC'ed models of the Fury, and currently the GTX 1080 and GTX 1070.

You have to remember that efficiency is the amount of compute power you get from each watt consumed by the graphics chip. For example, the Titan X has a similar power envelope and brings 6 TFLOPs of compute power; that accounts for 24.5 GFLOPs/watt. Fury X: 275 W TDP, 8.6 TFLOPs of compute power, 31 GFLOPs/watt. Fury Nano: 180 W TDP and 7 TFLOPs of compute power, 38.9 GFLOPs/watt. GTX 1080: 8.2 TFLOPs of compute power with 180 W power consumption, 45.5 GFLOPs/watt. GTX 1070: 6.1 TFLOPs, 150 W, 40.7 GFLOPs/watt. Of course you can look at gaming benchmarks to delude yourself about how efficient GPUs are, and drive your mindshare based on that factor, but that has nothing to do with real efficiency. Even the Top500 list uses the amount of compute power you get from each watt consumed as its efficiency mark.
hmm. maybe I'm just understanding the word 'work' in a different way than you all are using it
 
hmm. maybe I'm just understanding the word 'work' in a different way than you all are using it
Then how do you understand it? If it's by loading up the processing units with work, then I think that is my definition, also. It's all about context and the bigger picture, my friend :)
 