
FrontierForever

macrumors newbie
Original poster
Nov 10, 2018
Is there a simple, off-the-shelf way to do this?

I am guessing I need enough RAM to run a virtual macOS.

What virtualization software would best accommodate this?

I don't need a supercomputer... but making use of idle clock time on older machines seems like potentially a good value... by extending the useful life of old machines.

I am particularly interested in finding out if two entry-level i3s (8 cores at 3.6GHz) could outperform a mid-spec i7 (6 cores).
 
What are you trying to do with this extra computer time? That, in large part, determines the answer to this question.
 
Explain what you mean by "clustering"? What do you think it means?

No, it's not making a bunch of computers act as one big virtual computer. AFAIK there is no such thing.

There is no need for any virtualization software or more RAM. All you need to create a "cluster" is some number of computers, a network to connect them, and some software that is able to use the distributed resources.

There are many applications that can work cooperatively with the same application (or a companion "server") running on another computer: many popular database servers, many video rendering applications, DNA sequencing applications, and even Xcode (which can farm out parts of a big compile to multiple computers).

There is nothing "special" you need, other than a "clustering aware" application.
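To make that concrete, here's a rough sketch of the smallest possible "clustering aware" application, using only Python's standard library. One machine runs it as the server (it fills a job queue and serves it over the LAN); each spare machine runs it as a worker. The port, authkey, hostname, and the squaring "work" are all made-up placeholders, not anything specific to the Mac mini:

```python
import sys
import queue
from multiprocessing.managers import BaseManager

class QueueManager(BaseManager):
    pass

def run_server():
    # Fill a job queue with toy work units and serve it over the LAN.
    jobs, results = queue.Queue(), queue.Queue()
    QueueManager.register("get_jobs", callable=lambda: jobs)
    QueueManager.register("get_results", callable=lambda: results)
    for n in range(100):
        jobs.put(n)
    mgr = QueueManager(address=("", 50000), authkey=b"demo")  # placeholder port/key
    mgr.get_server().serve_forever()

def run_worker(server_host):
    # Pull jobs from the server, do the work, push results back.
    QueueManager.register("get_jobs")
    QueueManager.register("get_results")
    mgr = QueueManager(address=(server_host, 50000), authkey=b"demo")
    mgr.connect()
    jobs, results = mgr.get_jobs(), mgr.get_results()
    while not jobs.empty():
        n = jobs.get()
        results.put((n, n * n))  # stand-in for real per-unit work

if __name__ == "__main__":
    if sys.argv[1] == "server":
        run_server()
    else:
        run_worker(sys.argv[2])  # e.g. python cluster_demo.py worker mini1.local
```

Run it with "server" on one machine and "worker <server-host>" on every spare machine; each worker that joins drains the same queue. That's a cluster, with no virtualization and no extra RAM.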

Maybe you are confusing clustering with multi-core or multi-chip architectures, where several cores/chips in the same computer access common memory. Not the same thing.

The various "@home" projects (and similar) are a form of clustering (aka distributed computing).

https://en.wikipedia.org/wiki/List_of_distributed_computing_projects

While some "clusters" have some specialized hardware (e.g. very fast networking, or even shared access to memory across multiple physical computers) that's not a given. And there's none of that specialized hardware that is any option for your Minis.

Here's what Wikipedia says clustering means:

https://en.wikipedia.org/wiki/Computer_cluster

It's basically all about software that knows how to use the resources of multiple computers. You could build a Beowulf cluster, if you had some task for it to do. But it won't create a "super Mac" that makes everything on your desktop run faster.

https://en.wikipedia.org/wiki/Beowulf_cluster
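For a sense of what "a task for it to do" looks like, here's a rough sketch of the kind of program a Beowulf-style cluster actually runs. This uses mpi4py, which assumes Python plus an MPI runtime (such as Open MPI) is installed on every node; the host file and process count are placeholders:

```python
# sum_squares.py: each process computes part of the answer, then the
# parts are combined. Launch across the machines in a host file, e.g.:
#   mpirun -np 8 --hostfile hosts python sum_squares.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the cluster
size = comm.Get_size()   # total number of processes across all machines

# Each rank sums the squares of its own interleaved slice of the range.
N = 10_000_000
local = sum(i * i for i in range(rank, N, size))

# Combine the partial sums on rank 0 and print the answer once.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("sum of squares:", total)
```

Note the program itself had to be written to split the work; the cluster contributes nothing to software that doesn't know it exists.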
 
Explain what you mean by "clustering"? What do you think it means?

I admit my ignorance by asking the question. That is why I enjoy answers from helpful people like you.

[...]
Maybe you are confusing clustering with multi-core or multi-chip architectures, where several cores/chips in the same computer access common memory. Not the same thing.

Thunderbolt 3 and 100Gb Ethernet are not yet approaching the bus speeds of the internal architecture?

While some "clusters" have some specialized hardware (e.g. very fast networking, or even shared access to memory across multiple physical computers) that's not a given. And there's none of that specialized hardware that is any option for your Minis.

Profit margins and business models aside, why is this not possible?

Why is this not a native feature of macOS?

The server variant (Mac OS X Server) seemed to offer some limited support for distributed computing via Xgrid, since discontinued.

https://en.wikipedia.org/wiki/Xgrid
 
Thunderbolt 3 and 100Gb Ethernet are not yet approaching the bus speeds of the internal architecture?
No, not even close. The 2018 mini's Ethernet tops out at 10 gigabits per second, and Thunderbolt 3 at 40 gigabits per second, which is about 5 gigabytes per second. For comparison, the internal SSD on a 2018 mini reads and writes at well over 1,000 megabytes (over 1 gigabyte) per second, and memory (RAM) access is faster still, on the order of tens of gigabytes per second.
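Converting those figures for comparison (nominal peak rates, not real-world throughput; a quick sketch):

```python
# Nominal peak rates quoted above, converted from bits to bytes
# (real-world throughput is lower).
for name, gigabits in [("10GbE", 10), ("Thunderbolt 3", 40), ("100GbE", 100)]:
    print(f"{name}: {gigabits / 8:.2f} GB/s nominal")
# 10GbE: 1.25 GB/s; Thunderbolt 3: 5.00 GB/s; 100GbE: 12.50 GB/s.
# Even 100GbE is well short of local RAM bandwidth, which is on the
# order of tens of GB/s for the 2018 mini's dual-channel DDR4.
```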
Distributed (cluster) computing isn't built in because it just isn't something that works for what most people do on computers, and it's complex to manage.

For distributed computing to work, the task at hand needs to be something that can be broken into discrete units, like rendering animation frames. The time each unit takes to process also needs to be greater than the network overhead of moving the files around: if it takes longer to copy the input and output to and from a storage destination (like a file server) than it does to process the file on a single computer, it isn't worth sending it to multiple computers.
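A back-of-the-envelope sketch of that trade-off; every number below is a made-up assumption, purely for illustration:

```python
# Made-up numbers, purely illustrative: when does farming a job out
# to two machines beat processing it locally?
file_size_gb = 2.0        # data shipped per work unit
link_gb_per_s = 1.25      # ~10GbE, in gigabytes/second
compute_s = 30.0          # time to process one unit on one machine
workers = 2

transfer_s = file_size_gb / link_gb_per_s            # one-way copy time
local_total = compute_s                              # just do it in place
remote_total = 2 * transfer_s + compute_s / workers  # ship out, compute, ship back

print(f"local: {local_total:.1f}s  distributed: {remote_total:.1f}s")
# -> local: 30.0s  distributed: 18.2s  (distributing wins here, but
#    drop compute_s to 3s and the copies alone make it slower)
```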
 