hey Stacc..
this is already possible with os x.. not so much as system-wide handling, but on a per-application basis it's entirely possible.

OS X does have some very user-friendly networking capabilities (e.g. all i do to join up my computers is have them within wi-fi range of each other and have them both turned on)..

then the render networking is handled by each particular application (generally, when you license a rendering application, you'll get a couple of free nodes with it.. a node is basically a smaller app you install on the other computers in your network so the master computer can use their resources)
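that master/node pattern can be sketched roughly like this (a generic Python illustration, not Indigo's actual protocol; the address, tile format and "shading" are all made up):

```python
# Generic sketch of the master/node render-farm pattern: the master
# splits a frame into tiles and ships each tile to a node process over
# a socket connection. Real renderers use their own protocols; every
# name here is illustrative only.
from multiprocessing.connection import Listener, Client
from threading import Thread

ADDRESS = ("localhost", 6005)   # a real node would be another machine
AUTHKEY = b"render-cluster"

def render_tile(tile):
    """Stand-in for real rendering: 'shade' every pixel in the tile."""
    x0, y0, x1, y1 = tile
    return [(x, y, (x ^ y) & 0xFF) for y in range(y0, y1) for x in range(x0, x1)]

def node(listener):
    """A render node: accept tiles from the master until told to stop."""
    with listener:
        conn = listener.accept()
        with conn:
            while True:
                tile = conn.recv()
                if tile is None:          # sentinel: master is finished
                    break
                conn.send(render_tile(tile))

def master(tiles):
    """The master: send tiles out and collect the shaded pixels."""
    pixels = []
    with Client(ADDRESS, authkey=AUTHKEY) as conn:
        for tile in tiles:
            conn.send(tile)
            pixels.extend(conn.recv())
        conn.send(None)
    return pixels

listener = Listener(ADDRESS, authkey=AUTHKEY)  # bind before the node starts
worker = Thread(target=node, args=(listener,))
worker.start()
result = master([(0, 0, 4, 4), (4, 0, 8, 4)])  # two 4x4 tiles
worker.join()
print(len(result))   # 32 shaded pixels came back from the "node"
```

in a real cluster the node runs on the other machine and the connection rides the network link (wi-fi or IP over thunderbolt); the pattern is the same.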

anyway, with os x 10.11 in conjunction with my rendering application (indigo), all i have to do for the mini cluster you speak of is open my laptop then select 'network rendering' in indigo (i.e. push a button)

I just imagine plugging your MacBook into a Mac Pro and seeing the number of processors jump from 2 to 14 in Activity Monitor. Is this feasible? Probably not, but I am not knowledgeable enough to know the technicalities here. Thunderbolt 3 offers about 20% of the bandwidth of the slowest QPI (i.e. the link Intel uses to connect processors in dual-processor systems).
 
I just imagine plugging your MacBook into a Mac Pro and seeing the number of processors jump from 2 to 14 in Activity Monitor. Is this feasible? Probably not, but I am not knowledgeable enough to know the technicalities here. Thunderbolt 3 offers about 20% of the bandwidth of the slowest QPI (i.e. the link Intel uses to connect processors in dual-processor systems).


flat Five is right. Did you know that if you plug a Mac Pro into a MacBook through Thunderbolt, they create a network connection (at up to 20 Gbps)? If you run a render app that supports remote render servers, you instantly have all the Mac Pro's resources available for your laptop's duty, as long as you run the render (or compute) server on your Mac Pro.

As for seeing the Mac Pro's CPUs show up on your Mac as if they were its own, I don't believe that could occur soon, but apps capable of using resources on another computer will become more common.

**************************

The info I read in a post somewhere on the darknet is about some new "compute accelerator" to be used alongside a Mac Pro in a similar way, though not a device running OS X but something like CNK/INK/ComputeNodeLinux. I remember somebody on the Intel forums was working to enable offloading to a Xeon Phi Knights Corner card in a Thunderbolt cage on a Mac running OS X; that was totally experimental. Given that Xeon Phi Knights Landing implements an all-new interface (each CPU is seen as a compute node on a basic network), even for CPUs on PCIe cards, this work should be easier now, as long as Intel releases a toolchain for OS X and the Xeon Phi OS X "network" drivers. At least from Python it should be very easy to offload compute to a Xeon Phi.
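The offload pattern being described can be sketched in a few lines of Python (everything here is illustrative: the workers are ordinary local processes standing in for a card exposed as a compute node, and the "kernel" is a toy):

```python
# Rough sketch of the offload model: the host splits a job into chunks
# and farms them out to worker processes, then gathers the results.
# With a Xeon Phi exposed as a compute node, the workers would run on
# the card; here they are local processes for illustration only.
from concurrent.futures import ProcessPoolExecutor

def kernel(chunk):
    """Compute-heavy stand-in: sum of squares over a chunk of integers."""
    return sum(x * x for x in chunk)

def offload(data, workers=4, n_chunks=8):
    """Split `data` into chunks and fan them out to the worker pool."""
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(kernel, chunks))

if __name__ == "__main__":
    total = offload(list(range(10_000)))
    print(total == kernel(range(10_000)))   # same answer, split across workers
```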

**************************

The other leak was about a target mode for the MacBook (similar to the one on older iMacs that allowed you to use the iMac as a monitor for another Mac), but even better, since the new "target mode" also shares the MacBook's storage and USB peripherals (the keyboard, touchpad and webcam are actually USB peripherals, if not all of them then at least most).
 
Simply because changing the language will most likely break your code, while optimizing the speed of a mature and well defined language should not break your code.
Yep, but they did change the language, breaking any existing code :(
They removed an existing loop construct and replaced it with one that is significantly slower, harder to optimise, all because it "feels more Swift like." I don't mind too much but I just wish performance was a higher priority.

It's not all doom and gloom though. A commit into the standard library that replaced the old loop was reverted because it "caused major regressions in our string benchmarks." They can use a faster alternative (such as a while loop, which thankfully wasn't axed) to maintain performance.
https://github.com/apple/swift/commit/a652945d9a93bf08b4f28990184a97e22f074d37
 
I just imagine plugging your MacBook into a Mac Pro and seeing the number of processors jump from 2 to 14 in Activity Monitor. Is this feasible? Probably not, but I am not knowledgeable enough to know the technicalities here.

that would have to happen at the OS level.. i think it's possible but i don't really think it's very practical.. or, very very few scenarios would make use of it for the amount of work that would have to be put into it.. especially considering that many/most of the few scenarios which could make use of it already can, by way of individual application support.

fwiw, IP over thunderbolt was first supported in mavericks.. just a couple months prior to the mp6,1 release.
(and this is the connection used for render clustering)

that said, i do see what you're saying and i would also find it neat to just plug macs together and have all their resources recognized as one complete machine automagically, without 3rd-party participation.
But, are there many commercial applications written in Python? It's certainly popular for the end-user programmers (e.g. science, AI, ML, ...)

that's how i use it.. as an end-user that's scripting a CAD program.
i can make a script that can complete a task in one second that may take me hours to draw by hand.. so my python code to me seems awesome.
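a toy example of the kind of scripting meant here (hypothetical; a real script would feed the points to the CAD program's own API, which isn't shown): generating a bolt-circle pattern that would be tedious to lay out by hand.

```python
# Toy example of CAD-style scripting: compute a bolt-circle pattern,
# i.e. N hole centers evenly spaced on a circle. A real script would
# pass these coordinates to the CAD program; here we just return them.
import math

def bolt_circle(cx, cy, radius, holes, start_deg=0.0):
    """Return (x, y) centers for `holes` points evenly spaced on a circle."""
    points = []
    for i in range(holes):
        angle = math.radians(start_deg) + 2 * math.pi * i / holes
        points.append((cx + radius * math.cos(angle),
                       cy + radius * math.sin(angle)))
    return points

centers = bolt_circle(0, 0, 50, 6)   # six holes on a 50-unit circle
print(len(centers))                  # 6 hole centers, computed instantly
```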

but an actual skilled coder would probably look at my script and find it to be very inefficient or, show an example of what i'm doing on a larger scale that would make it more obvious to me that i'm writing non-optimum code.

but i don't care because it's working quickly for my use case and doesn't require me to study it for years.

anyways, the 'blame python' comment does hold some meaning i imagine.. a fair amount of the people writing python aren't really coders.. but we can still manage to accomplish some pretty good stuff with it.
 
Python is great because it allows people to develop scripts in a user friendlyish way where speed doesn't matter too much.

Swift should be renamed to Sluggish if they want to follow this same paradigm. My dream for Swift is the speed of C/C++ but safe so that programmers can't do stupid things [and more user friendliness like Python without compromising performance]. If macOS is going to be rewritten in Swift some day (instead of C) then I don't want my computer to run 2x slower because someone decided that they didn't like how the cheetah loop looked, so they replaced it with a snail.
 
The info I read in a post somewhere on the darknet is about some new "compute accelerator" to be used alongside a Mac Pro in a similar way, though not a device running OS X but something like CNK/INK/ComputeNodeLinux. I remember somebody on the Intel forums was working to enable offloading to a Xeon Phi Knights Corner card in a Thunderbolt cage on a Mac running OS X; that was totally experimental. Given that Xeon Phi Knights Landing implements an all-new interface (each CPU is seen as a compute node on a basic network), even for CPUs on PCIe cards, this work should be easier now, as long as Intel releases a toolchain for OS X and the Xeon Phi OS X "network" drivers. At least from Python it should be very easy to offload compute to a Xeon Phi.
Imagine doing this with whole computers ;)

HSA implemented in software allows your CPU to run only the OS and the application, while execution of the application's tasks can be done on an outside compute unit connected to the computer by wires or wirelessly.

I don't think it is exactly what Apple plans, however HSA is flexible enough to accommodate every possible implementation of it, whether that's connecting multiple computers into one bigger cluster, regardless of the type of hardware (x86 or ARM), or creating a compute cluster from GPUs and attaching SoCs to it that run the system and applications.

However, if what you are saying is true, my source was correct. The other thing is writing a whole OS with this in mind, and as far as I know it is not possible with current versions of OS X. I think it would have to be written from the ground up for this idea.

Maybe this is what macOS is, in contrast to OS X 10.11/12?
 
Python is great because it allows people to develop scripts in a user friendlyish way where speed doesn't matter too much.

Swift should be renamed to Sluggish if they want to follow this same paradigm. My dream for Swift is the speed of C/C++ but safe so that programmers can't do stupid things [and more user friendliness like Python without compromising performance]. If macOS is going to be rewritten in Swift some day (instead of C) then I don't want my computer to run 2x slower because someone decided that they didn't like how the cheetah loop looked, so they replaced it with a snail.

Such a language already exists... It's called Rust.
 
Imagine doing this with whole computers ;)

HSA implemented in software allows your CPU to run only the OS and the application, while execution of the application's tasks can be done on an outside compute unit connected to the computer by wires or wirelessly.

I don't think it is exactly what Apple plans, however HSA is flexible enough to accommodate every possible implementation of it, whether that's connecting multiple computers into one bigger cluster, regardless of the type of hardware (x86 or ARM), or creating a compute cluster from GPUs and attaching SoCs to it that run the system and applications.


This is the same thinking that I'm pursuing.

As I accumulate Mac processors, I would like to discover a way to share processor, memory, and storage from each to manage peak utilization periods. Someone must know how to do this?

My approach is to add more resources as my requirements increase and more capable systems become available.

Even if this is not baked into Mac OS, would there be supplemental software to simply connect Macs and share resources?

Someone in the Mac world must be considering a solution like a farm of clusters?
 
As I accumulate Mac processors, I would like to discover a way to share processor, memory, and storage from each to manage peak utilization periods. Someone must know how to do this?
[...]
Someone in the Mac world must be considering a solution like a farm of clusters?

what, specifically, are you wanting to do with the cluster?
the solutions are already out there for many situations where it makes sense to cluster..

if you want to generically hook macs together and have them recognized by the system monitor as a unified whole for not much of a reason other than doing it then no, it's not happening right now.. but if you have a specific use case for wanting/needing to do this then you probably already can via particular application support.

---
storage is already possible to share through the OS alone (fwiw)
 
This is the same thinking that I'm pursuing.

As I accumulate Mac processors, I would like to discover a way to share processor, memory, and storage from each to manage peak utilization periods. Someone must know how to do this?

My approach is to add more resources as my requirements increase and more capable systems become available.

Even if this is not baked into Mac OS, would there be supplemental software to simply connect Macs and share resources?

Someone in the Mac world must be considering a solution like a farm of clusters?
Well, it has to be at some point. All of "this" is an implementation of the Internet of Things idea. Nobody knows exactly what Apple plans, however it looks more and more apparent that they are developing a whole new platform and, in unifying their ecosystem, want to revolutionize it a bit. There is no revolution in it, however...

I sure hope this will come to fruition. 4 Mac Pros can be connected through TB3, or any external solution, into one big cluster.

And anyone with an iPad, or any other Mac, can connect to it wirelessly and do the job required for any particular project. At least that's how I imagine it would work.
 
what, specifically, are you wanting to do with the cluster?
the solutions are already out there for many situations where it makes sense to cluster..
---
storage is already possible to share through the OS alone (fwiw)

Thank you for this clarifying question.

My simple solution is to buy another bigger, faster Mac Pro as they are released.

Cost aside, this works for me, for now.

What I have and would continue to have is uncompleted tasks when processor resources are exhausted in peak utilization periods. It seems that I can control the memory and storage resources because I know the requirements ahead of time. I will only run the number of tasks within those constraints.

For example, I executed 20 user tasks on the cMP and a few of those tasks went into a suspend mode because of a lack of processor availability. Fortunately, in the morning, I can manually restart those suspended tasks wherever they left off.

This occurs while I have 3 other processors (Mac minis) idly standing by that could be tapped, if there were a way.

My task utilization continues to expand and I'm searching for a solution to this challenge.

In a world of multiple devices, aside from file sharing, true resource utilization and allocation must emerge across separate systems.

It's expanding the idea of using multi-core processors in one system to using the available processors, memory and storage in other systems. It's somewhat like full peer-to-peer utilization.

I'm looking for a solution that continues to evolve with each additional system purchased within the Mac OS world.

A supercomputer comprising the sum of all the parts.

Someone else out there is thinking this?
 
Thank you for this clarifying question.

My simple solution is to buy another bigger, faster Mac Pro as they are released.

Cost aside, this works for me, for now.

What I have and would continue to have is uncompleted tasks when processor resources are exhausted in peak utilization periods. It seems that I can control the memory and storage resources because I know the requirements ahead of time. I will only run the number of tasks within those constraints.

For example, I executed 20 user tasks on the cMP and a few of those tasks went into a suspend mode because of a lack of processor availability. Fortunately, in the morning, I can manually restart those suspended tasks wherever they left off.

This occurs while I have 3 other processors (Mac minis) idly standing by that could be tapped, if there were a way.

My task utilization continues to expand and I'm searching for a solution to this challenge.

In a world of multiple devices, aside from file sharing, true resource utilization and allocation must emerge across separate systems.

It's expanding the idea of using multi-core processors in one system to using the available processors, memory and storage in other systems. It's somewhat like full peer-to-peer utilization.

I'm looking for a solution that continues to evolve with each additional system purchased within the Mac OS world.

A supercomputer comprising the sum of all the parts.

Someone else out there is thinking this?
AMD has already presented virtualization on the GPU, where different applications can be run on unused parts of the GPU for different "clients". In other words, if you use their GPUs in a server and someone connects to it, the application is run in parallel on an unused part of the GPU without interfering with other users.

As I have said, a Mac Pro with 3 TB3 buses can be connected to 3 other Mac Pros and create a more powerful cluster than any one of them could be alone. It does not end with the Mac Pro, however...

But let's get back to the MP topic for a second. A single MP in the quite-near future will look like this: 20 cores, 256 GB of RAM, 1 TB of NVMe SSD, 2 GPUs (each 125W and 12.5 TFLOPs of FP32, at a 1/4 FP64 ratio), and a 450W PSU.
Connect 4 of those and you get: 80 x86 cores, 1 TB of RAM, 4 TB of SSD, 100 TFLOPs of FP32 compute power, and 25 TFLOPs of FP64. Total power consumption: 1800W. Scalability: that is the key word here.
And anyone with an iPad or another Mac can connect to this cluster and work on a project without even leaving the house.
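The aggregate numbers in the post above (which are the poster's own speculative specs, not announced hardware) check out with a few lines of arithmetic:

```python
# Check the quoted 4-node aggregate against the speculative per-node
# specs from the post (two 12.5-TFLOP GPUs per node, 1:4 FP64 ratio).
nodes = 4
per_node = {
    "cores": 20,
    "ram_gb": 256,
    "ssd_tb": 1,
    "fp32_tflops": 2 * 12.5,   # two GPUs at 12.5 TFLOPs each
    "psu_w": 450,
}
cluster = {k: v * nodes for k, v in per_node.items()}
cluster["fp64_tflops"] = cluster["fp32_tflops"] / 4   # 1/4 FP64 ratio
print(cluster)
# {'cores': 80, 'ram_gb': 1024, 'ssd_tb': 4, 'fp32_tflops': 100.0, 'psu_w': 1800, 'fp64_tflops': 25.0}
```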

Apple has stalled their development of hardware and software for the last few years. Maybe this is the reason why?
 
Nobody knows exactly what Apple plans, however it looks more and more apparent that they are developing a whole new platform and, in unifying their ecosystem, want to revolutionize it a bit.

Apple is focused on other areas, leaving the Mac Pro and Mac OS incrementally updated. They have shown themselves content to focus on semi-pro users.

It would seem to me that a breakthrough strategy would be to cluster multiple Mac Pros and Minis into a supercomputer concept. This could be done through software in the OS or 3rd-party software. Apple could continue to sell semi-pro systems while providing a supercomputer through a cluster of Macs.

I buy into this idea.

My cash outlay for another system would be minimized and I would get to use my existing systems.

I suspect that there are many of us out there in this situation, especially those of us in a growth stage of our businesses.
 
Apple is focused on other areas, leaving the Mac Pro and Mac OS incrementally updated. They have shown themselves content to focus on semi-pro users.

It would seem to me that a breakthrough strategy would be to cluster multiple Mac Pros and Minis into a supercomputer concept. This could be done through software in the OS or 3rd-party software. Apple could continue to sell semi-pro systems while providing a supercomputer through a cluster of Macs.

I buy into this idea.

My cash outlay for another system would be minimized and I would get to use my existing systems.

I suspect that there are many of us out there in this situation, especially those of us in a growth stage of our businesses.
All of this is a combination of the HSA initiative and an implementation of the Internet of Things idea. Nothing revolutionary. However, it appears that they may have a proper implementation of it, if it comes to fruition.

I am not buying that Apple was focused on other areas. IMO they stalled the development of their hardware and software because, if they kept releasing a Mac Mini, Mac Pro and other Macs not connected directly to this idea, they could piss off the customers who would want those features.

All of what we are discussing is possible only through a very deep interplay of hardware and software.
And by the looks of things in other areas of the industry, it seems the industry will go in this direction also.
 
Connect 4 of those and you get: 80 x86 cores, 1 TB of RAM, 4 TB of SSD, 100 TFLOPs of FP32 compute power, and 25 TFLOPs of FP64. Total power consumption: 1800W. Scalability: that is the key word here.

And anyone with an iPad or another Mac can connect to this cluster and work on a project without even leaving the house.
This is exactly what I've attempted to express!

So now there are two of us thinking in this direction.

Who is doing this?

How are they doing it?

When are they doing it?
 
This is exactly what I've attempted to express!

So now there are two of us thinking in this direction.

Who is doing this?

How are they doing it?

When are they doing it?
Well, if Apple is thinking about IoT and HSA, this is what they should come up with. All you need for this idea, regardless of specs, is two things:
1) An operating system written with this in mind from the ground up.
2) Thunderbolt 3 or any other external connection to link multiple computers into one.

Nobody will give you any timeline for this. However, this year is the 40th anniversary of Apple ;).
 
A supercomputer comprised of all the sum of the parts.

Someone else out there is thinking this?
hmm.. yeah, lots of people are thinking this but in grander ways than cobbling together old macs.

see cloud rendering for example. those are real supercomputers (i.e. 3000 16-core xeons w/64GB ram each).. and you can use them. easily and cheaply.

cheaper than buying some crazy high-core-count dual (or more) cpu based thing.. or even stringing together multiple individual computers.. and way way way higher performance than those
 
hmm.. yeah, lots of people are thinking this but in grander ways than cobbling together old macs.

see cloud rendering for example. those are real supercomputers (i.e. 3000 16-core xeons w/64GB ram each).. and you can use them. easily and cheaply.

cheaper than buying some crazy high-core-count dual (or more) cpu based thing.. or even stringing together multiple individual computers.. and way way way higher performance than those

Audio users have had something like this option for quite a while now using Vienna Ensemble Pro in a master/slave(s) setup, but not everyone's workflow benefits from it. You don't magically get a more powerful computer out of multiple computers. It's actually more of a clunky workaround that uses brilliant software.
 
This thread seems like it's getting a little detached from reality...

Polaris 10 is going to be a tough spot for Apple. It doesn't look bad for the mid or mid-high range, but if Apple wants to brag about GPU performance they'll need to do better. If they had better tools for dual GPUs, that might be another way out: two Polaris 10s look very competitive where just one doesn't.
 
see cloud rendering for example. those are real supercomputers (i.e. 3000 16-core xeons w/64GB ram each).. and you can use them. easily and cheaply.

cheaper than buying some crazy high-core-count dual (or more) cpu based thing.. or even stringing together multiple individual computers.. and way way way higher performance than those

Cloud rendering?

I've reached the limits of my technical knowledge.

I'm bouncing off the technology bumper guards with this.

How do I learn more about cloud rendering?

I did check out AWS (Amazon Web Services), and it seemed too complex and inappropriate for what I'm doing.

If there is a more cost-effective cloud solution than clustering multiple Mac Pros, I want to know about it.
 