I never ever stop learning every day. Homo sapiens can only evolve if we tax our brains instead of the dumbfest we have nowadays.

You can count yourself as one of my peers and so can someone else on this thread count themselves as another, though Antonis might be disappointed. I choose to learn from who I think are the very best - just like my mentor Richard Feynman did his whole life. The pleasure of finding things out and solving puzzles :D

:thumbs up:
 
:thumbs up:

Carl Sagan wasn't too bad a teacher either, but his forte was educating the layman. Richard was a twin genius: his raw talent, plus his knack for making extremely difficult things such as QM or QED much easier to comprehend. Despite a large amount of information going right over my head, a lot does sink in. His lectures are still a regular read of mine, even after decades.

A bit like the two peers on this thread on their own specialised subjects - which also mostly go right over my head :D

I shall stop making them blush now :)
 
I never ever stop learning every day. Homo sapiens can only evolve if we tax our brains instead of the dumbfest we have nowadays.

You can count yourself as one of my peers and so can someone else on this thread count themselves as another, though Antonis might be disappointed. I choose to learn from who I think are the very best - just like my mentor Richard Feynman did his whole life. The pleasure of finding things out and solving puzzles :D

I firmly believe it's not the mini or the nMP that will gain the most from these new products in the end - more like the notebooks, whose mobile GPUs are dwarfed by the processing power of a proper desktop slice of graphical silicon. And like a terminator, it will not explode, melt down, or stop until your GPU processing is dead :D. Unlike the throttling tiddlers inside that shiny case!

I fully agree with you that the notebook user has, relatively, a lot more to gain.

Among the questions of a fine man (Richard Feynman), these I like most: "How can we draw inspiration to support these two pillars of western civilization so that they may stand together in full vigor, mutually unafraid? Is this not the central problem of our time?" (regarding science and religion); especially in light of this principle: "You can't say A is made of B or vice versa. All mass is interaction."
 
I fully agree with you that the notebook user has, relatively, a lot more to gain.

In terms of the dominant market share and the TDP limits that compromise its power, along with the reliability cost of dissipating all that thermal energy, compounded by that awful stock thermal paste bond, it's a much easier equation to solve than much of your efforts.
 
In servicing a niche within a niche, keeping costs low is essential.

No disrespect intended, MVC, truly. I don't doubt your knowledge or experience, but I do wonder why, with such experience and this groundbreaking discovery, you can't procure a nMP of your own?

It could be that he doesn't want to spend his own money to buy one, while he does want to offer solutions to those who do want to own them. Like all mass, MVC is pure interaction. He's not A or B (and neither are any of us, unless we have died); he's MVC (and is constantly being shaped and molded, just as we all are). Some may have called him conflicted. I call him extra normal, most knowledgeable, and sharing. He doesn't play hide-the-ball and will willingly tell you how to do what he does. However, the fact that it may be too complex for the listener/reader to do on his/her own is what justifies MVC's placing a value on his time and effort doing it for you. If you think deeply about what MVC does: he spends a lot of his time studying and experimenting to shine light on that which Apple would rather keep hidden, and he's doing it for a niche market within a niche market. He helps us get the full value from our systems, even the aged ones. Having people frequently question his motives must be very frustrating. It's really this simple: he is a kind, giving, inquisitive, hard-working human being. But none of us are perfect.

P.S. Does anyone seriously believe that MVC is making a living off of sales of modified video cards? Are we so foolish and miserly?
 
I fully agree with you that the notebook user has, relatively, a lot more to gain.

Among the questions of a fine man (Richard Feynman), these I like most: "How can we draw inspiration to support these two pillars of western civilization so that they may stand together in full vigor, mutually unafraid? Is this not the central problem of our time?" (regarding science and religion); especially in light of this principle: "You can't say A is made of B or vice versa. All mass is interaction."

And in response to that edit, I can only think of far more - he has a whole book of quotes, and quoting them would send this thread way off topic; this is not the place for our mutual fan club, of which I am chuffed to discover you are a fellow member. When I am feeling stuck and need a spot of inspiration, the extended BBC documentary on YouTube, filmed all those years ago, always seems to get played. It helps me clear the garbage out of my head and gain focus on solving a particular problem of any kind - nearly always nothing to do with physics, but simply to channel his mindset and light up that bulb in my head.

http://youtu.be/Fzg1CU8t9nw
 
MVC burns the nMP for months until he "finds" a way to make some cash.

More like finds a way to make the machine relevant. This is the exact thing the nMP fanboys were saying it could do with TB2, months before it came out - I read something about having 36 GTX Titans in some kind of daisy-chained nightmare that was supposed to be better than having PCIe slots (where is tesselator anyway?).

PCIe slots and their versatility were the #1 argument we were all using against the nMP - that's why you can upgrade a 5-year-old Mac Pro to run FCP (the supposed "killer app" of the device) faster than the nMP, for 30% less. It's not that the 2009 was that powerful; it's that it was actually upgradeable. We said it would happen, it did, and the nMP had no answer to it - until perhaps now.

Now he's found a way to breathe new life into nMPs, which had a 2-year-old video card when they shipped and will probably be feeling their age in a few years. He's attempting to kill the "planned obsolescence" Apple built into these $4,000 machines. Yes, he's making money; so was Apple when it created these disposable "workstations" in the first place.
 
Thanks Antonis. Dumb me. I guess I need to learn more about CUDA from the true master - Antonis.

Not sure what you're arguing about here.

I definitely never claimed to be such a thing, and it is not about CUDA at all. I just used the term "CUDA monster" as an opportunity to point out something else: that such an expensive upgrade is not realistic for such an underpowered machine, outside of a lab for experiment's sake. And that's just common sense; I thought it was obvious. The word "dumb" was not aimed at you.

Back to the topic, if you (for some reason) are bothered with the word "CUDA", you may replace it with any other term, it's just the same. This Mac Mini will never be a "monster" in anything, due to the rest of its h/w limitations, so it'd be a total waste to invest in a pricey upgrade for it.

As I already wrote, nMP is a totally different story, of course.


I never ever stop learning every day. Homo sapiens can only evolve if we tax our brains instead of the dumbfest we have nowadays.

You can count yourself as one of my peers and so can someone else on this thread count themselves as another, though Antonis might be disappointed. I choose to learn from who I think are the very best - just like my mentor Richard Feynman did his whole life. The pleasure of finding things out and solving puzzles :D

I firmly believe it's not the mini or the nMP that will gain the most from these new products in the end - more like the notebooks, whose mobile GPUs are dwarfed by the processing power of a proper desktop slice of graphical silicon. And like a terminator, it will not explode, melt down, or stop until your GPU processing is dead :D. Unlike the throttling tiddlers inside that shiny case!

What's to be disappointed about? I've already written that this can be great news, if it ends up being a stable and reliable solution. I don't see anyone in this thread really opposed to MVC's research, or anyone who fails to see the great potential his work has.

After the research is complete, though, comes the reality (now you'll find many people opposed to this, though). I don't believe the Mac Mini will benefit in a real-life scenario (granted that I am allowed to have an opinion; I might be wrong), the nMP will benefit a great deal, and I agree that laptops will potentially jump on this wagon too.
 
I just used the term "CUDA monster" as an opportunity to point out something else; that such an expensive upgrade is not realistic for such an underpowered machine, outside of a lab for experiment's sake.

Let tech enthusiasts be tech enthusiasts. If they have money to burn then let them. Yes a lot of high tech experiments can be silly and a waste of money but what it really points to is humanity's craving for knowledge and tinkering with stuff. It's our nature. If 1000 people tinker with technology, one of them will invent something life changing.
 
"Toto, I've a feeling we're not in Kansas anymore. We must be over the rainbow"

[ http://en.wikiquote.org/wiki/The_Wizard_of_Oz#Dorothy ] Whether the Mac Mini can be a CUDA monster cannot be divined from mere common sense, and it's not an all-or-nothing proposition.

Not sure what you're arguing about here.

I definitely never claimed to be such a thing, and it is not about CUDA at all. I just used the term "CUDA monster" as an opportunity to point out something else: that such an expensive upgrade is not realistic for such an underpowered machine, outside of a lab for experiment's sake. And that's just common sense; I thought it was obvious. The word "dumb" was not aimed at you.

Back to the topic, if you (for some reason) are bothered with the word "CUDA", you may replace it with any other term, it's just the same. This Mac Mini will never be a "monster" in anything, due to the rest of its h/w limitations, so it'd be a total waste to invest in a pricey upgrade for it.

As I already wrote, nMP is a totally different story, of course.

What's to be disappointed about? I've already written that this can be great news, if it ends up being a stable and reliable solution. I don't see anyone in this thread really opposed to MVC's research, or anyone who fails to see the great potential his work has.

After the research is complete, though, comes the reality (now you'll find many people opposed to this, though). I don't believe the Mac Mini will benefit in a real-life scenario (granted that I am allowed to have an opinion; I might be wrong), the nMP will benefit a great deal, and I agree that laptops will potentially jump on this wagon too.

Antonis,

Here're some of the major points to consider:

1. We Are All Related, Have Something Of Value, Should Be Open To Opposing Opinions, And Are Constantly Changing

I agree with you 100% that you are allowed to have and to fully express your opinion(s), regardless of whether they turn out to be right or wrong. Not only that, I do value your opinion, because it was I who used the word "dumb" (clearly you did not), and I applied it to me, not to you. I learn, more and more every day, to accept others' opinions as being worthy of full consideration. That's what I gave your opinion initially and am still doing even as I compose this response. So I mean this from the very bottom of my heart: "Keep them coming, cousin." Exposing your opinions to me only acts as a catalyst for me to think deeper, broader and with more focus. So I took and take no offense whatsoever, and sincerely say to you, "Thank you, Antonis, for sharing."

Here's where we additionally differ: "... it is not about CUDA at all. I just used the term "CUDA monster" as an opportunity to point out something else; that such an expensive upgrade is not realistic for such an underpowered machine, outside of a lab for experiment's sake. And that's just common sense, I thought it was obvious. ...

Back to the topic, if you (for some reason) are bothered with the word "CUDA", you may replace it with any other term, it's just the same. This Mac Mini will never be a "monster" in anything, due to the rest of its h/w limitations, so it'd be a total waste to invest in a pricey upgrade for it."

2. Relative To What(?) Where Specificity Matters

First off, I caution you to avoid using the word "never," for I'm sure that someone once said, "There'll never be {well, why don't you fill it in}", only to be proven wrong (sooner or later). For me, the most important thing that came to mind after reading about MVC's project is CUDA. I cannot replace "CUDA" with any other term and expect that "it's just the same", because it more likely won't be the same. That's why I said "CUDA monster," and not "monster" in general. I have 24 systems that outperform the Mini. In fact, most of them easily outperform the 2013 Mac Pro. One's use of a system (i.e., one's important applications) dictates whether it's a "monster" at a specific task. I too consider the Mini to be currently underpowered for that which I do, but that's relative to what I have and can build, or could purchase if I didn't build my own systems. Moreover, consider this: could an "expensive upgrade" to the least expensive Apple system be less expensive, in the end, than buying a more expensive system such as the 2013 MP to implement the same mod, if the performance outcome won't be significantly different? In sum, the most important thing that MVC's new lead has to offer that's pertinent to that which I do is CUDA 3d rendering. That means a lot to me, because that's my $$$ maker.
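The cost question above can be sketched with back-of-envelope arithmetic. All prices and the render score below are hypothetical placeholders of my own (not quotes from any price list); the point is only that if the eGPU supplies the render throughput, the cheaper host wins on dollars per unit of performance.

```python
def cost_per_unit_render(host_price, egpu_price, gpu_render_score):
    """Dollars spent per unit of GPU render throughput."""
    return (host_price + egpu_price) / gpu_render_score

# Same eGPU (enclosure + CUDA card) attached to two different hosts.
EGPU_PRICE = 200 + 1000      # enclosure + card (placeholder figures)
GPU_SCORE = 100.0            # throughput comes from the card, not the host

mini = cost_per_unit_render(700, EGPU_PRICE, GPU_SCORE)    # budget host
nmp = cost_per_unit_render(4000, EGPU_PRICE, GPU_SCORE)    # pricey host

print(f"Mini + eGPU: ${mini:.2f} per unit of render throughput")
print(f"nMP  + eGPU: ${nmp:.2f} per unit of render throughput")
```

With these assumed figures, the Mini-based rig costs well under half as much per unit of render throughput as the same card hung off a far pricier host.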

3. In The Technologically Driven World That We Have Created, Are Creating And Will Create, Common Sense Has Its Limitations; Overly Expansive Exclusions Can Prevent Or Hamper New Discoveries

Let's begin here: "Common sense is a basic ability to perceive, understand, and judge things, which is shared by ("common to") nearly all people, and can be reasonably expected of nearly all people without any need for debate." [ http://en.wikipedia.org/wiki/Common_sense ] That which I now know (and admittedly there's a lot more that I can and will learn) about CUDA I could not derive by the application of just common sense. It requires lots of study and participation in Nvidia CUDA developer programs and learning to apply that gained knowledge to real world needs through application development. Consider these four possibilities*/ posed by the application of CUDA technology to 3d rendering from four different CUDA 3d application providers:

a) "[O]ctanerender™ is [a] fully GPU accelerated [3d renderer] and renders images at extreme speeds, up to 50x faster than CPU based unbiased renderers." [ http://render.otoy.com ];

b) "Why use GPU?
Up to 15x -100x faster than conventional CPU ray tracing." [ http://furryball.aaa-studio.eu/aboutFurryBall/index.html ];

c) "Redshift is the world’s first fully GPU-accelerated, biased renderer. Redshift renders scenes many times faster than existing CPU-based renderers.
Save time and money, and unleash your creativity!" [ https://www.redshift3d.com/products/redshift ]; and

d) TheaRender is a hybrid renderer that has an engine for CPU(s) only, an engine for GPU(s) only and can do both CPU and GPU rendering simultaneously. Thus, with such options, the jury might appear to be still out regarding whether the Mini might also be a CUDA monster here. But my experiences with TheaRender, Blender, Octane Render, FurryBall Render, Redshift 3d, as well as other GPU rendering options, tell me that the Mini would be a CUDA monster here also.
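Those vendor figures are kernel speedups; the end-to-end gain depends on how much of a render job actually runs on the GPU. A quick Amdahl's-law sketch makes the dilution visible (the GPU-work fractions below are assumptions for illustration, not measurements of any renderer):

```python
def overall_speedup(gpu_fraction, gpu_speedup):
    """Amdahl's law: only the GPU-accelerated fraction of the job speeds up."""
    return 1.0 / ((1.0 - gpu_fraction) + gpu_fraction / gpu_speedup)

# Even a "50x" render kernel is diluted by CPU-side work
# (scene export, disk I/O); the fractions here are assumed.
for frac in (0.80, 0.93, 0.99):
    print(f"{frac:.0%} of work on GPU, 50x kernel -> "
          f"{overall_speedup(frac, 50):.1f}x overall")
```

So a 50x kernel with 7% of the job left on the CPU nets roughly an 11x overall speedup; the closer the workload is to pure GPU compute, the closer you get to the headline number.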

4) Once The GPU Gets The CUDA Data To Compute, The Speed At Which The GPU Performs Its Functions Is Independent Of The Computer's Memory, CPU(s), Bus Speed And Long-Term Storage

Once the data is on the card, the CPU's role in the GPU's execution of CUDA functions is non-existent, nada, completely absent, 0. To be sure, the computer's CPU does play a part in creating the information on which the GPU will perform compute functions, and the system's bus speed does play a role in how long it takes for that information to get to and from the CUDA card. Moreover, the speed of the long-term storage does play a role in how long it takes to read and write inputs and outputs to and from the CUDA card. But it has been my experience that putting the same CUDA card in an 8-core 2007 MP (simply my oldest one), which has no Hyper-Threading, and in one of my 32-core systems with 64 threads total does not result in any significant difference in the speed at which the CUDA card performs. There is, admittedly, a few seconds' lag between when I initiate a render project on the 2007 MP and when that render begins, whereas that lag is almost nonexistent on my 32-core systems. But between my Nehalem, Westmere, Sandy and Ivy Bridge systems there is no perceptible time difference (that range covers 4-core single-processor systems up to quad-CPU 8-core systems). So I would surmise that on a recent dual-core Mac Mini, that lack of perceptible lag would hold.
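The observation that the host barely matters can be sanity-checked with rough numbers: even over a Thunderbolt 2 link, shipping a large scene to the eGPU takes a fraction of a second, which is noise next to a render measured in minutes. The scene size, link efficiency, and render time below are assumptions for illustration:

```python
def transfer_seconds(megabytes, link_gbps, efficiency=0.8):
    """Seconds to push a payload across a link at a given usable efficiency."""
    bits = megabytes * 8e6
    return bits / (link_gbps * 1e9 * efficiency)

scene_mb = 500        # textures + geometry for one frame (assumed)
render_s = 120.0      # a two-minute GPU render (assumed)
upload = transfer_seconds(scene_mb, link_gbps=16)   # Thunderbolt 2 rate

print(f"TB2 upload: {upload:.3f}s vs render: {render_s:.0f}s "
      f"({upload / render_s:.2%} overhead)")
```

Under these assumptions the upload is around a third of a second against two minutes of rendering, which is why the age or core count of the host is nearly invisible in the final render times.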

In sum, CUDA handles those compute chores that can be parallelized and that were previously solely or mainly the domain of CPU(s) or other specialized co-processors.

You can profit from reading this: "5 Technologies Revolutioni[z]ing Animation In 2014." [ http://www.creativebloq.com/audiovisual/technologies-revolutionising-animation-2014-61411961 ].

*/ There are many more areas/applications, in addition to 3d animation, where CUDA excels (which I respectfully submit to you are not mainly within the purview of just "common sense"):

a) Computational Finance;
b) Defense and Intelligence;
c) Computer Aided Design;
d) Computational Fluid Dynamics;
e) Computational Structural Mechanics;
f) Electronic Design Automation;
g) Color Correction and Grain Management;
h) Video Editing;
i) Encoding and Digital Distribution;
j) On-air Graphics;
k) On-set, Review and Stereo Tools;
l) Simulations;
m) Weather Graphics;
n) Oil and Gas;
o) Computational Chemistry and Biology;
p) Numerical Analytics; and
q) Physics.


[ http://www.nvidia.com/content/PDF/gpu-accelerated-applications.pdf ].

For all of the above reasons, including especially relatively lower costs, I chose to say, and still say, that the full realization of what MVC is doing can make the Mac Mini a "CUDA monster."

P.S. Giving credit to the one whose tutoring of me first sent me into the CUDA cave, I shall always be grateful to my comrade in hacking [ http://www.insanelymac.com/forum/topic/282584-top-20-hackintosh-geekbench-scores/ ]: the one, the only, the builder of the most beautiful systems, the spiritual and the generous - PunkNugget.
 
Thanks MacVidCards!!!
Once again your passion for the Mac Pro platform comes through. I greatly appreciate your countless hours spent into the night and your continued support of the Apple Mac Pro & Thunderbolt platforms. Looking forward to your progress on the platform and your sharp wit.

....To the NaySayers
For those who are violating FORUM policy by blatantly bashing MVC: Booooooo! Get a life, create a product, and bring it to market. Stop sitting on the sidelines being a naysayer. If it's so darn easy, get off the toilet and show us what YOU can deliver.

And to Tutor
Thanks for your continued education to the masses. It's always a pleasure to read your eloquent manner of telling someone to stuff it!
 
Thanks MacVidCards!!!
Once again your passion for the Mac Pro platform comes through. I greatly appreciate your countless hours spent into the night and your continued support of the Apple Mac Pro & Thunderbolt platforms. Looking forward to your progress on the platform and your sharp wit.

....To the NaySayers
For those who are violating FORUM policy by blatantly bashing MVC: Booooooo! Get a life, create a product, and bring it to market. Stop sitting on the sidelines being a naysayer. If it's so darn easy, get off the toilet and show us what YOU can deliver.

And to Tutor
Thanks for your continued education to the masses. It's always a pleasure to read your eloquent manner of telling someone to stuff it!

I appreciate the thanks. But, I wasn't telling him to stuff it. Indeed, I want to hear more from my cousin Antonis, for we're all one big family, whether we reside in Asia, Africa, North America, South America, Antarctica, Europe, or Australia. I am suggesting that he, as well as all of my other family members, learn more daily, and always keep in mind human frailty and imperfections and the need to help one another.
 
I appreciate the thanks. But, I wasn't telling him to stuff it. Indeed, I want to hear more from my cousin Antonis, for we're all one big family, whether we reside in Asia, Africa, North America, South America, Antarctica, Europe, or Australia. I am suggesting that he, as well as all of my other family members, learn more daily, and always keep in mind human frailty and imperfections and the need to help one another.

This makes you a bigger man than I, I was inclined to tell the poster that he couldn't find his 4th point of contact with both hands if he tried but your post was much more eloquent and mature.
 
I appreciate the thanks. But, I wasn't telling him to stuff it. Indeed, I want to hear more from my cousin Antonis, for we're all one big family, whether we reside in Asia, Africa, North America, South America, Antarctica, Europe, or Australia. I am suggesting that he, as well as all of my other family members, learn more daily, and always keep in mind human frailty and imperfections and the need to help one another.

Two quotes from you know who out of dozens I could choose from:

The worthwhile problems are the ones you can really solve or help solve, the ones you can really contribute something to. … No problem is too small or too trivial if we can really do something about it.

We absolutely must leave room for doubt or there is no progress and no learning. There is no learning without having to pose a question. And a question requires doubt. People search for certainty. But there is no certainty. People are terrified — how can you live and not know? It is not odd at all. You only think you know, as a matter of fact. And most of your actions are based on incomplete knowledge and you really don't know what it is all about, or what the purpose of the world is, or know a great deal of other things. It is possible to live and not know.

Happy holidays to everyone over the pond.
 
Simply an awesome thread. Thanks MVC and Tutor and everyone for defending what is becoming less and less doable with a Mac: hacking, pure and simple.
 
Not sure what you're arguing about here.

I definitely never claimed to be such a thing, and it is not about CUDA at all. I just used the term "CUDA monster" as an opportunity to point out something else: that such an expensive upgrade is not realistic for such an underpowered machine, outside of a lab for experiment's sake. And that's just common sense; I thought it was obvious. The word "dumb" was not aimed at you.

Back to the topic, if you (for some reason) are bothered with the word "CUDA", you may replace it with any other term, it's just the same. This Mac Mini will never be a "monster" in anything, due to the rest of its h/w limitations, so it'd be a total waste to invest in a pricey upgrade for it.

As I already wrote, nMP is a totally different story, of course.




What's to be disappointed about? I've already written that this can be great news, if it ends up being a stable and reliable solution. I don't see anyone in this thread really opposed to MVC's research, or anyone who fails to see the great potential his work has.

After the research is complete, though, comes the reality (now you'll find many people opposed to this, though). I don't believe the Mac Mini will benefit in a real-life scenario (granted that I am allowed to have an opinion; I might be wrong), the nMP will benefit a great deal, and I agree that laptops will potentially jump on this wagon too.

It was simply meant in jest, and it was very late at night here too; apologies. This is just the very beginning of these designs, and as far as CUDA and eGPU/Mac EFI are concerned, there are two pretty safe pairs of hands to help realise this potential. There will be many more hours of sweat, tears, progress and retreats for this duo, but I get the feeling it will be great to see them contribute and solve a truly worthwhile problem.
 
And BTW, eGPUs have been working for years before that post you quoted.

DIY eGPU and Nando4 have been doing all the work while we slept.

We are all "late to the party"

The difference is, I'm bringing beer. Beer in the form of boot screens and much easier time working in Windows.

EFI boot screens mean more than shininess. They mean you can plug a couple of boxes in under your desk and the Mini, for all intents and purposes, has a GTX Titan in it (or whatever). You don't have to do any of the silly pre-boot prep that past efforts required; it "just works".

Thank you. There are three existing Mac Mini DIY eGPU implementations at

DIY eGPU Implementations Hub: TB, EC, mPCIe (Thunderbolt) (Tech|Inferno)

All of them use a US$200 AKiTiO Thunder2 PCIe Box (16Gbps-TB2) enclosure. Details of each are at:



Note:

1. none have needed a modified video card bios.
2. there is no guide as yet for attaching an eGPU to a 2014 Mac Mini.

@MacVidCards , you appear to have attached a Titan to a 2014 Mac Mini, plus have an account, MVC, on Tech|Inferno.

Did you want to create a guide there on how you did it so others with the same machine can confidently do the same?
 
It was simply meant in jest, and it was very late at night here too; apologies. This is just the very beginning of these designs, and as far as CUDA and eGPU/Mac EFI are concerned, there are two pretty safe pairs of hands to help realise this potential. There will be many more hours of sweat, tears, progress and retreats for this duo, but I get the feeling it will be great to see them contribute and solve a truly worthwhile problem.

My apologies, too, since I think I overreacted a bit (sometimes things read worse than I initially meant them, especially since this is not my native language ;) ). I always value your posts here, especially concerning cMP and nMP issues, and there are always things to learn in this thread from all of you guys. I was very glad to read about MVC's latest work in progress here. It will definitely need time, but thinking of the potential is exciting, at least.
 
[ http://en.wikiquote.org/wiki/The_Wizard_of_Oz#Dorothy ] Whether the Mac Mini can be a CUDA monster cannot be divined from mere common sense and it's not an all or nothing proposition.



Antonis,

Here're some of the major points to consider:

1. We Are All Related, Have Something Of Value, Should Be Open To Opposing Opinions, And Are Constantly Changing

I agree with you one 100% that you are allowed to have and to fully express your opinion(s), regardless of whether it/they turn(s) out to be right or wrong. Not only that, I do value your opinion because I did use, even though clearly you did not use, the word "dumb" and I applied it to me, not to you. I learn, more and more every day, to accept other's opinions as being worthy of full consideration. That's what I gave to your opinion initially and am still doing so even as I compose this response. So I mean this from the very bottom of my heart, "Keep it/them coming, cousin." Exposing your opinions to me only acts as catalyst for me to think deeper, broader and with more focus. So I took and take no offense whatsoever and sincerely say to you, "Thank you, Antonis, for sharing."

Here's where we additionally differ: "... . it is not about CUDA at all. I just used the term "CUDA monster" as an opportunity to point out something else; that such an expensive upgrade is not realistic for such an underpowered mahcine, outside of a lab for experiment's shake. And that's just common sense, I thought it was obvious. ... .

Back to the topic, if you (for some reason) are bothered with the word "CUDA", you may replace it with any other term, it's just the same. This Mac Mini will never be a "monster" in anything, due to the rest of its h/w limitations, so it'd be a total waste to invest in a pricey upgrade for it."

2. Relative To What(?) Where Specificity Matters

First off, I caution you to avoid using the word "never, for I'm sure that someone once said, "There'll never be {well why don't you fill it in}", only to be proven wrong (sooner or later). For me, the most important thing that came to mind after reading about MVC's project is CUDA. I cannot replace "CUDA" with any other term and expect that "it's just the same" because it more likely won't be the same. That's why I said "CUDA monster," and not "Monster" in general. I have 24 systems that outperform the Mini. In fact, most of them easily outperform the 2013 MacPro. One's use of a system (i.e., their important applications) dictates whether it's a "Monster" at a specific task. I too consider the Mini to be currently underpowered for that which I do, but that's relative to what I have and can build or could purchase if I didn't build my own systems. Moreover, consider this: Could an "expensive upgrade" to the least expensive Apple system be less expensive, in the end, than buying a more expensive system such as the 2013 MP to implement the same mod, if the performance outcome won't be significantly different. In sum, the most important thing that MVC's new lead has to offer that's pertinent to that which I do, is CUDA 3d rendering. That means a lot to me because that my $$$ maker.

3. In The Technologically Driven World That We Have Created, Are Creating and Will Create, Common Sense Has It's Limitations; Overly Expansive Exclusions Can Prevent Or Hamper New Discoveries

Let's begin here: "Common sense is a basic ability to perceive, understand, and judge things, which is shared by ("common to") nearly all people, and can be reasonably expected of nearly all people without any need for debate." [ http://en.wikipedia.org/wiki/Common_sense ] That which I now know (and admittedly there's a lot more that I can and will learn) about CUDA I could not derive by the application of just common sense. It requires lots of study and participation in Nvidia CUDA developer programs and learning to apply that gained knowledge to real world needs through application development. Consider these four possibilities*/ posed by the application of CUDA technology to 3d rendering from four different CUDA 3d application providers:

a) "[O]ctanerender™ is [a] fully GPU accelerated [3d renderer] and renders images at extreme speeds, up to 50x faster than CPU based unbiased renderers. [ http://render.otoy.com ];

b) "Why use GPU?
Up to 15x -100x faster than conventional CPU ray tracing." [ http://furryball.aaa-studio.eu/aboutFurryBall/index.html ];

c) "Redshift is the world’s first fully GPU-accelerated, biased renderer. Redshift renders scenes many times faster than existing CPU-based renderers.
Save time and money, and unleash your creativity!" [ [https://www.redshift3d.com/products/redshift ]; and

d) TheaRender is a hybrid renderer that has an engine for CPU(s) only, an engine for GPU(s) only and can do both CPU and GPU rendering simultaneously. Thus, with such options, the jury might appear to be still out regarding whether the Mini might also be a CUDA monster here. But my experiences with TheaRender, Blender, Octane Render, FurryBall Render, Redshift 3d, as well as other GPU rendering options, tell me that the Mini would be a CUDA monster here also.

4. Once The GPU Gets The CUDA Data To Compute, The Speed At Which The GPU Performs Its Functions Is Independent Of The Computer's Memory, CPU(s), Bus Speed And Long-Term Storage

The CPU's role in the GPU's execution of CUDA functions is non-existent, nada, completely absent, zero. To be sure, the computer's CPU does play a part in creating the information on which the GPU will perform compute functions, and the system's bus speed does play a role in how long it takes for that information to get to and from the CUDA card. Moreover, the speed of the long-term storage does play a role in how long it takes to read and write inputs and outputs to and from the CUDA card. But it has been my experience that putting the same CUDA card in an 8-core 2007 MP (simply my oldest one), which has no hyperthreading, and placing that same card in one of my 32-core systems with 64 threads total does not result in any significant difference in the speed at which the CUDA card performs. Granted, there are a few seconds' difference between when I initiate a render project on the 2007 MP and when that render begins, whereas that lag is almost nonexistent on my 32-core systems. But among my Nehalem, Westmere, Sandy Bridge and Ivy Bridge systems there is no perceptible time difference (that range covers 4-core single-processor systems up to quad-CPU systems with 8 cores per CPU). So I would surmise that on a recent dual-core Mac Mini, that lack of perceptible lag would hold.
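The point above can be sketched as a toy timing model: the host CPU and bus affect only the setup and transfer stages, while the kernel-execution stage depends on the GPU alone. All of the timings below are made-up illustrative values, not measurements:

```python
# Toy model of where time goes in a GPU render job. Once the scene data is
# on the card, compute time depends only on the GPU; the host CPU and bus
# speed influence only the setup/transfer stages. Numbers are hypothetical.

def total_job_seconds(scene_gb, bus_gb_per_s, host_setup_s, gpu_compute_s):
    transfer_s = scene_gb / bus_gb_per_s   # depends on bus speed
    return host_setup_s + transfer_s + gpu_compute_s

# Same GPU, same 2 GB scene, same 600 s of pure GPU compute:
old_slow_host = total_job_seconds(2, 2, 8, 600)   # slow host CPU, slow bus
big_workstation = total_job_seconds(2, 8, 1, 600)  # fast host CPU, fast bus
print(old_slow_host, big_workstation)
```

The few seconds of host overhead get lost in the noise next to the GPU compute time, which is the same on both hosts; that is the whole argument for hanging a big CUDA card off a modest machine.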

In sum, CUDA handles those compute chores that can be parallelized and that were previously solely or mainly within the domain of CPU(s) or other specialized co-processors.

You can profit from reading this: "5 Technologies Revolutioni[z]ing Animation In 2014." [ http://www.creativebloq.com/audiovisual/technologies-revolutionising-animation-2014-61411961 ].

*/ There are many more areas/applications, in addition to 3d animation, where CUDA excels (which I respectfully submit are not mainly within the purview of just "common sense"):

a) Computational Finance;
b) Defense and Intelligence;
c) Computer Aided Design;
d) Computational Fluid Dynamics;
e) Computational Structural Mechanics;
f) Electronic Design Automation;
g) Color Correction and Grain Management;
h) Video Editing;
i) Encoding and Digital Distribution;
j) On-air Graphics;
k) On-set, Review and Stereo Tools;
l) Simulations;
m) Weather Graphics;
n) Oil and Gas;
o) Computational Chemistry and Biology;
p) Numerical Analytics; and
q) Physics.


[ http://www.nvidia.com/content/PDF/gpu-accelerated-applications.pdf ].

For all of the above reasons, especially the relatively lower cost, I chose to say, and still say, that the full realization of what MVC is doing can make the Mac Mini a "CUDA monster."

P.S. Giving credit to the one whose tutoring of me first sent me into the CUDA cave, I shall always be grateful to my comrade in hacking [ http://www.insanelymac.com/forum/topic/282584-top-20-hackintosh-geekbench-scores/ ]: the one, the only, the builder of the most beautiful systems, the spiritual and the generous - PunkNugget.

First of all, thank you sincerely for taking all this time to write this. It's one of the most well-written and constructive posts I can remember reading here. I definitely see your point now (alas, it'd be really bad news for me if I didn't). Posts like this keep me coming back.

Regardless, I just don't want to see Apple get away with their dreadful choices concerning the Mac Mini. I really hope that people will reject this machine, so they get the message that computers are different from smartphones. I'm sure that no professional would mind having the option of a quad-core CPU and upgradeable RAM, even for workflows that only require raw GPU power.

Concerning MVC's new experiments, as I've written already, I believe this is very exciting news. Seeing an eGPU working on the nMP; now that would be the day. So, cheers to a promising future, where machines will keep lasting longer than Apple intends them to.
 
We'll always be cousins. Cousins should treat one another with respect and kindness.


Antonis,

Thanks for the reply. I always try to be true to my nickname.

Now we are in complete agreement. Believe it or not, I agree completely with the sentiments expressed in your last two paragraphs, because those issues, along with a few others, are why I began feeling the need to acquire the skills, and take the time, to build and mod my own systems. I'm glad that I can now do so, and I'm happiest when I can help others do the same, whether involving Macs or otherwise.

I look forward to hearing from you in the future. So please continue to participate or you'll be missed.

P. S. - Regarding this - "... (sometimes, things look worst when read than what I initially meant - especially since this is not my native language )," i.e., difference in languages, it is but one of the kinds of human difference that we encounter to exercise the depth perception of our hearts. I'm shooting for the ability to do 1,000,000,000,000 reps per day.
 

"There are some emotions in the human heart that are so profoundly ineffable, they have never been named. But if you could talk to 6800 different cultures and ask them their names for the human emotions, you'd have a beginning . . . That quality in human beings to invent a sound next to another sound that represents an image is magic..."


 
I can't resist jumping into this thread. Like Tutor, I'm not young, and I try very hard not to get stuck in my own ruts.

It took the death of my cMP (5,1) to make me re-examine my negative feelings towards (a) external storage, and (b) AIO machines. When I was forced to think hard about what I really needed to get my work done, I realized that (gasp!) the retina iMac and a Thunderbolt storage bay were the best choice. I'm still shaking my head at myself.

Tutor and Gav Mack have been tossing around Feynman quotes. I'm an admirer myself. But the other day I remembered something I read as a graduate student a half-century ago, that's relevant even to technical fields like computing. It's about ecology, but if you let it sink in, you'll see that it's about more than that, including some of what's been discussed in this thread.

" . . . the theoretician deals with all conceivable worlds while the laboratory worker deals with all possible worlds and the field worker is confined to the real world. The laboratory ecologist must ask the theoretician if his possible world is an interesting one, and must ask the field worker if it is at all related to the real one."

(Richard Slobodkin, 1961, American Naturalist 95, 147-153)
 

I loved that quote, and if you juxtapose the words "laboratory worker" and "field worker" with "theoretical physicist" and "experimental physicist", it is even more applicable to physics than to Slobodkin's own field. There have been arguments like this at CERN for 40 years :D

Feynman was both, and a brilliant communicator to boot. In fields like this thread's, I am proud to consider myself a theoretical problem solver only, one who likes to discuss and debate these problems with the experimentalist problem solvers who are the experts here. Not only will we each benefit from one another's expertise; so will others, in the quest to expand all our knowledge by solving just one particular problem that ultimately benefits us all.

Though a 5K Retina iMac with the optional GPU is not compatible with my Macintosh laws of physics, thermodynamics in particular: that 125 W GPU is seemingly right on the edge of the TDP that chassis can handle without the fan getting worked very hard. So yes, you need to shake your head a bit more :D
 