[Image: jump-to-conclusions-mat.jpg]
 
People. Notice that this claim is made without any tests or evidence whatsoever. It's a whim the author has, and it is then turned into assertive statements.

This thread is chock full of FUD and disinformation campaigns. His bogus claim ( TB ports are additive ) lacks any evidence at all.

Thunderbolt controllers have a x4 PCIe v2 connection to the host PCIe subsystem. The throughput can't possibly be higher. TB v2 doesn't change that.

" ... Whereas most Thunderbolt storage devices top out at 800 - 900MB/s, Thunderbolt 2 should raise that to around 1500MB/s (overhead and PCIe limits will stop you from getting anywhere near the max spec).
.... Thunderbolt 2/Falcon Ridge still feed off of the same x4 PCIe 2.0 interface as the previous generation designs. Backwards compatibility is also maintained with existing Thunderbolt devices since the underlying architecture doesn't really change. .. "
http://www.anandtech.com/show/7049/intel-thunderbolt-2-everything-you-need-to-know

That's after Anand sat down and actually talked to Intel's representatives about TB v2.
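For concreteness, the ceiling Anand describes can be checked with back-of-envelope arithmetic (a sketch; the ~1500MB/s practical figure is the article's, the rest is standard PCIe 2.0 math):

```python
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding -> 4 Gb/s of payload,
# i.e. 500 MB/s usable per lane.
PCIE2_LANE_GTPS = 5.0
ENCODING = 8 / 10            # 8b/10b: 8 payload bits per 10 line bits
lane_mb_s = PCIE2_LANE_GTPS * ENCODING * 1000 / 8   # 500.0 MB/s

# A Thunderbolt controller hangs off x4 lanes, so the host link tops out at:
host_link_mb_s = 4 * lane_mb_s                      # 2000.0 MB/s theoretical

print(lane_mb_s, host_link_mb_s)
# Protocol and switching overhead is why real devices land nearer 1500 MB/s.
```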


The Thunderbolt controller is a switch.

[Image: lightridge_thunderbold_inside_600px.png]

http://www.tomshardware.com/reviews/thunderbolt-performance-z77a-gd80,3205-4.html


The two physical sockets share the same switch inside the Thunderbolt controller. The host side of that switch is the single set of x4 PCIe v2 lanes. You cannot derive a switch's max throughput by just adding up the number of physical ports and multiplying by the individual max throughput. The whole point of a switch is to match what comes in to what goes out. You can't just count them all up.
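A toy model of that point (numbers illustrative; a shared ~1500MB/s host uplink is assumed, per the Anandtech figures above):

```python
HOST_UPLINK_MB_S = 1500  # assumed practical ceiling of the single x4 host link

def naive_additive(ports, per_port_mb_s):
    """The fallacy: multiply port count by per-port max."""
    return ports * per_port_mb_s

def switched(port_demands_mb_s):
    """What a switch actually delivers to the host: the sum of the
    ports' demands, capped by the one shared uplink."""
    return min(sum(port_demands_mb_s), HOST_UPLINK_MB_S)

print(naive_additive(2, 1000))   # 2000 -- the "additive ports" claim
print(switched([1000, 1000]))    # 1500 -- capped by the shared uplink
print(switched([900, 0]))        # 900  -- an idle port frees bandwidth
```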


Corner cases could squeeze slightly more out of TB by having different controllers in devices push data to each other (PCIe can do non-host data transfers), but those aren't going to matter much in the context of throughput from multiple peripherals to a single host.
 
PROBLEM: As the datasets and images to be processed grow exponentially, the user needs the fastest way possible to access those datasets and images in a constant read/write fashion TODAY AND IN THE NEAR FUTURE.

what user? who are you talking about?
i thought you said you were switching to windows anyway since you can't do your graphics work on your imac anymore.

(and in case you* can't read between the lines- can you please admit (to yourself) that the macpro is way more computer than you need and that you're just arguing about stuff that doesn't actually matter to 99% of users for the sake of arguing?)

*you meaning - you and at least half the people in this thread
 
This thread is chock full of FUD and disinformation campaigns. His bogus claim ( TB ports are additive ) lacks any evidence at all.

That's hardly what I said. Current v1 Thunderbolt has a 10Gb/s signaling rate, each cable and socket has one channel for data at a 10Gb/s speed, that's the spec.

If what you say is true then that means that each port is only capable of 750MB/s in v2.



FWIW, a disinformation campaign implies that there is an intention to deceive, that's just laughable.
 
I personally can't wait for Apple to announce a price. I want to see how much money I'm going to have to spend.

This thing should make my Realflow and Houdini sims FLY. I also can't wait to use Mari on it!!! That Pixar demo still amazes me. The amount of data per channel that they were painting fluidly was awesome.

Anyone thinking this isn't a "pro" machine is wrong. Good luck finding a faster machine.
 
Awwwwkay... I see you're one of those. Every forum has some. You keep missing the point that we are talking about a SOLUTION, which implies a combination of devices and factors to remedy A PROBLEM.

I see you are one of those... folks who move the goalposts once their assertions turn out to be full of holes and fallacies. Your claim in post 432 was that Thunderbolt wasn't faster than SATA/SAS. It is. TB v2 also has higher throughput than the MP 2013 SSD (that is probably temporary; a future replacement SSD will probably be faster).

Likewise that single-wire (really single-cable) connectivity made a difference. All of these are multiple wires. Multiple SAS ports on a single card or multiple Thunderbolt controllers in a single host don't really amount to a large pragmatic difference.

There may be a sunk-cost factor for some folks, but the speed issue is largely overblown by both sides in this thread. Thunderbolt has limits, and relatively few folks are actually pushing x8 data I/O cards well past x4 on single-person workstations.

The benchmark that was supposed to "prove" your assertion was far more about differences in RAID cards than it was about SAS. There were SAS drives in both the TB and the external SAS enclosure. The bottlenecks were far more the different RAID controllers than any of the connectivity.


PROBLEM: As the datasets and images to be processed grow exponentially, the user needs the fastest way possible to access those datasets and images in a constant read/write fashion TODAY AND IN THE NEAR FUTURE.

Exponentially growing data isn't going to fit inside a single box for long. That is one of the primary areas the new Mac Pro is aiming at. It is also relatively rare that exponentially growing data is confined to just one user's access. Hypergrowth data drives increasing storage subsystem costs. For most businesses, that drives the data storage into centralized (not individual) storage solutions.

The new Mac Pro is exactly aligned with that: minimal internal bulk storage and a flexible, fast external interconnect (via Thunderbolt). Is Thunderbolt the fastest possible? No. Is it faster than what most folks have deployed for SAN/NAS storage? Yes.

Frankly, super-high data growth means that SSDs won't matter as much (they will probably be applied to tiered storage solutions like Apple's "Fusion Drive"). Hot spots are moved to a much smaller number of SSDs (while the bulk resides on a much higher number of HDDs).

Multiple users streaming that data back to their clients is a fit for Thunderbolt. Right now there are a variety of solutions folks use: aggregated 1GbE, FC, 10GbE, etc.


SOLUTION: RAID-based drive array using the fastest affordable controller and enclosure, which incidentally TB isn't.

The operative question is: attached to what? A SAN/NAS head node, or a single-user workstation?


You can't even buy or install a tweaked or better TB controller card in the nMP, while you can shop around for a better performing SAS one. Hell, you can even install more than one in your old MP.

This misses the whole root cause of the performance "problem" in your cited benchmark. The throttling controller was the RAID controller, not the Thunderbolt one. Solutions to problems are derived by finding the root causes of the problem and solving those root causes. The primary issue I'm pointing out in your posts is that your pronouncements MASK the root-cause issues; they don't illuminate them. You are in no way trying to engage in effective problem solving.

Once you identify that the external RAID controller is the problem, you can shop for a different one. You could complain about there not being as many choices. That is partially correct if you limit yourself to RAID controllers directly embedded inside the enclosure.


The ultimate goal of Apple toward TB isn't to produce the best and most performant data transfer technology.

No. It is the best and most performant data transfer technology that most people can afford. Apple isn't particularly interested in $7K workstations with $8K direct-attached storage (DAS) subsystems for single users hanging off them; most folks/organizations don't assign $14-15K to a single user's access.

There are all kinds of corner cases that can be presented as problems. No solution covers all cases. These multiple-thousand-dollar "SAS" solutions are way out of lots of folks' budgets.


It's to have the smallest and most discreet connector capable of doing the job while being nearly invisible, so as not to impact the design too much. Form over function...

It is far more a different function rather than a different form. The function is not aimed at "budget is no object" solutions where you can install anything. The function isn't about installing anything. It is far more aimed at installing what most folks are actually going to use.




And here you have it... It can't do the job that the old model could and there is no way to add an expansion card to do it...

If the ultra-mega "top fuel" bandwidth demographics were buying Mac Pros in significantly large, and growing, numbers, Apple probably would still be selling them. They aren't.

There is unquestionably a subset of the user base that Apple is moving away from. However, fixating on where Apple is not going doesn't really say much of anything about where they are going or about solution-problem matching. Throwing around FUD and disinformation only muddles that.

----------

That's hardly what I said. Current v1 Thunderbolt has a 10Gb/s signaling rate, each cable and socket has one channel for data at a 10Gb/s speed, that's the spec.

The 10Gb/s is for Thunderbolt data; not PCIe data. They are NOT the same thing. Some TB data is encoded PCIe data for transport, but they are in no way equivalent.

If what you say is true then that means that each port is only capable of 750MB/s in v2.

The last part is utter BS and continues your refusal to recognize TB as being a switch. The ports share the bandwidth. If one port is doing nothing, the other can get the 1500MB/s. If you load them both down with only PCIe data, and if the switch can perfectly allocate an even share, each drops down to 750MB/s in v2. Because it is the controller which gates the bandwidth.

In v1 you can get a small additive effect because, again, the controller gates down the PCIe bandwidth on each port to 10Gb/s. That means one port can't completely saturate the part of the switch that passes through to the host's internal network. However, that isn't twice as much as a single port. That is "x4 minus what the other port is doing". Either way the ports don't "add". The max host throughput is gated by the controller's bandwidth to the host; it is not driven by the physical ports. The physical ports drive how it is divided up. Period. They aren't any 'source' of bandwidth.
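The division being described can be sketched as follows (a sketch only, assuming proportional sharing and rounded caps: ~1000MB/s of PCIe payload per per-port gate, ~1500MB/s on the shared host side):

```python
PORT_GATE_MB_S = 1000   # assumed per-port cap (~10 Gb/s of PCIe payload)
HOST_CAP_MB_S = 1500    # assumed shared host-side cap (x4 PCIe 2.0, practical)

def split(demand_a, demand_b):
    """Throughput each port gets: port-gated first, then scaled down
    together if the shared host link is the bottleneck."""
    a = min(demand_a, PORT_GATE_MB_S)
    b = min(demand_b, PORT_GATE_MB_S)
    total = min(a + b, HOST_CAP_MB_S)
    scale = total / (a + b) if a + b else 0.0
    return a * scale, b * scale

print(split(1400, 0))     # (1000.0, 0.0): a lone port is gated by its own link
print(split(1400, 1400))  # (750.0, 750.0): loaded ports split the host link
```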
 
The 10Gb/s is for Thunderbolt data; not PCIe data. They are NOT the same thing. Some TB data is encoded PCIe data for transport, but they are in no way equivalent.

Nope. There are two channels, each rated at 10Gb/s: one is dedicated to data, the other to display signal.

[Image: thunderboltcable_bandwidth_600px.png]



The last part is utter BS and continues your refusal to recognize TB as being a switch. The ports share the bandwidth. If one port is doing nothing, the other can get the 1500MB/s. If you load them both down with only PCIe data, and if the switch can perfectly allocate an even share, each drops down to 750MB/s in v2. Because it is the controller which gates the bandwidth.

1500MB/s adds up to 3 PCIe v2 lanes, not 4.
 
It is not 1500MB/s per port. More like 1500MB/s per controller for TB v2.

Where are you getting the information that the Mac Pro will have only three controllers? I understand consumer devices only having one controller per two TB ports, but that doesn't automatically mean this trend will continue on the Mac Pro. We'll have to wait and see what Apple decides to do.

more like 4,500MB/s (presuming nothing like a 4K display on one of those).

Read the specifications for TB again. Since it uses DP 1.2, it's able to pass a 4K display signal alongside the traditional 20Gbps data channel, provided they are not coming from the same port (4K needs ~17Gbps of raw data on its own, though TB muxes a DP signal into the TB data channels if daisy chaining). Using a 4K monitor on a TB2 chain may degrade data performance more substantially than a current TB monitor on a chain due to the increased usage of data by the 4K signal, though this is likely why Apple is offering six ports (to maximize data throughput while allowing three 4K displays - a rare feat even for professionals).
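That ~17Gbps figure plausibly comes from the DisplayPort 1.2 payload ceiling (my reading, not something stated in the thread):

```python
# DisplayPort 1.2 HBR2: 4 lanes at 5.4 Gb/s each, 8b/10b encoded.
lanes = 4
lane_gb_s = 5.4
payload_gb_s = lanes * lane_gb_s * (8 / 10)
print(payload_gb_s)  # 17.28 Gb/s of maximum video payload
```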

The six ports are more likely a nod to the legacy connections a current Mac Pro user would likely have: 1 or 2 DVI (or perhaps DisplayPort) monitors. That's two ports down (primarily in backwards-compatibility mode). If a user eventually gets one of the new 3rd-party super-duper 4K monitors, that is another port purely in backwards-compatible DisplayPort v1.2 mode (so 2-3 ports down). One or two x4 cards, so 1-2 more ports down for a decent number of users if they minimize transition costs and get chain-enders. So you're left with about 2.

You're grossly overestimating the impact of a display on Thunderbolt's bandwidth. Remember, Thunderbolt has two synchronous 10Gbps channels per cable; Thunderbolt 2 will turn that into a single 20Gbps Data Channel, though Intel claims the ability to transmit display information across a separate channel than the PCIe data. Even Anandtech's benchmarks with a Pegasus RAID and some Thunderbolt chain devices revealed less than a 100MB/s performance hit when using the current Thunderbolt Display alongside the RAID box, and that's performance based around the 2011 implementation of the spec (which has had both host and device controller updates since). Still, let's take your scenario into account, and say that you're using three of the ports strictly for storage, and the other three are a mixed use of 4K Display and Data accessories in a daisy chain: you're still looking at 4500MB/s on the storage alone, plus reduced data performance on the accessories daisy chained on the 4K Display side, though still likely in the 5000MB/s to 7000MB/s range (assuming a much more significant hit to bandwidth due to the use of 4K Displays, and also assuming six controllers). If we reduce controllers to three (one per pair of ports), then we can still achieve 4500MB/s easily while driving the displays on different ports, since the controller won't have to interweave the signals together on the same cable (this also assumes that the new spec won't be able to drive DisplayPort monitors independently of the data channel, which seems unlikely given the synchronous nature of the specification).
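For what it's worth, the aggregate numbers in these scenarios are straightforward multiplication (the ~1500MB/s per-TB2-controller ceiling and the controller counts are the assumptions under debate, not known Mac Pro specs):

```python
TB2_CONTROLLER_MB_S = 1500  # assumed practical per-controller ceiling for TB2

def aggregate(busy_controllers):
    """Peak host throughput with several independent controllers saturated."""
    return busy_controllers * TB2_CONTROLLER_MB_S

print(aggregate(3))  # 4500 MB/s: three controllers on storage
print(aggregate(6))  # 9000 MB/s: one controller per port, all saturated
```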

Apple's engineers are top notch, and they know far more about Mac Pro usage scenarios than we do. If they believe six TB2 ports driven by three controllers is enough, then I trust their judgement on the matter. Given how much data many Mac Pro users eat, though, I wager it's still a 50/50 bet that they'll use independent controllers for each port, to maximize data throughput.
 
I see you are one of those... folks who move the goalposts once their assertions turn out to be full of holes and fallacies. Your claim in post 432 was that Thunderbolt wasn't faster than SATA/SAS. It is. TB v2 also has higher throughput than the MP 2013 SSD (that is probably temporary; a future replacement SSD will probably be faster).

Likewise that single-wire (really single-cable) connectivity made a difference. All of these are multiple wires. Multiple SAS ports on a single card or multiple Thunderbolt controllers in a single host don't really amount to a large pragmatic difference.

There may be a sunk-cost factor for some folks, but the speed issue is largely overblown by both sides in this thread. Thunderbolt has limits, and relatively few folks are actually pushing x8 data I/O cards well past x4 on single-person workstations.

The benchmark that was supposed to "prove" your assertion was far more about differences in RAID cards than it was about SAS. There were SAS drives in both the TB and the external SAS enclosure. The bottlenecks were far more the different RAID controllers than any of the connectivity.




Exponentially growing data isn't going to fit inside a single box for long. That is one of the primary areas the new Mac Pro is aiming at. It is also relatively rare that exponentially growing data is confined to just one user's access. Hypergrowth data drives increasing storage subsystem costs. For most businesses, that drives the data storage into centralized (not individual) storage solutions.

The new Mac Pro is exactly aligned with that: minimal internal bulk storage and a flexible, fast external interconnect (via Thunderbolt). Is Thunderbolt the fastest possible? No. Is it faster than what most folks have deployed for SAN/NAS storage? Yes.

Frankly, super-high data growth means that SSDs won't matter as much (they will probably be applied to tiered storage solutions like Apple's "Fusion Drive"). Hot spots are moved to a much smaller number of SSDs (while the bulk resides on a much higher number of HDDs).

Multiple users streaming that data back to their clients is a fit for Thunderbolt. Right now there are a variety of solutions folks use: aggregated 1GbE, FC, 10GbE, etc.




The operative question is: attached to what? A SAN/NAS head node, or a single-user workstation?




This misses the whole root cause of the performance "problem" in your cited benchmark. The throttling controller was the RAID controller, not the Thunderbolt one. Solutions to problems are derived by finding the root causes of the problem and solving those root causes. The primary issue I'm pointing out in your posts is that your pronouncements MASK the root-cause issues; they don't illuminate them. You are in no way trying to engage in effective problem solving.

Once you identify that the external RAID controller is the problem, you can shop for a different one. You could complain about there not being as many choices. That is partially correct if you limit yourself to RAID controllers directly embedded inside the enclosure.




No. It is the best and most performant data transfer technology that most people can afford. Apple isn't particularly interested in $7K workstations with $8K direct-attached storage (DAS) subsystems for single users hanging off them; most folks/organizations don't assign $14-15K to a single user's access.

There are all kinds of corner cases that can be presented as problems. No solution covers all cases. These multiple-thousand-dollar "SAS" solutions are way out of lots of folks' budgets.




It is far more a different function rather than a different form. The function is not aimed at "budget is no object" solutions where you can install anything. The function isn't about installing anything. It is far more aimed at installing what most folks are actually going to use.






If the ultra-mega "top fuel" bandwidth demographics were buying Mac Pros in significantly large, and growing, numbers, Apple probably would still be selling them. They aren't.

There is unquestionably a subset of the user base that Apple is moving away from. However, fixating on where Apple is not going doesn't really say much of anything about where they are going or about solution-problem matching. Throwing around FUD and disinformation only muddles that.

----------



The 10Gb/s is for Thunderbolt data; not PCIe data. They are NOT the same thing. Some TB data is encoded PCIe data for transport, but they are in no way equivalent.



The last part is utter BS and continues your refusal to recognize TB as being a switch. The ports share the bandwidth. If one port is doing nothing, the other can get the 1500MB/s. If you load them both down with only PCIe data, and if the switch can perfectly allocate an even share, each drops down to 750MB/s in v2. Because it is the controller which gates the bandwidth.

In v1 you can get a small additive effect because, again, the controller gates down the PCIe bandwidth on each port to 10Gb/s. That means one port can't completely saturate the part of the switch that passes through to the host's internal network. However, that isn't twice as much as a single port. That is "x4 minus what the other port is doing". Either way the ports don't "add". The max host throughput is gated by the controller's bandwidth to the host; it is not driven by the physical ports. The physical ports drive how it is divided up. Period. They aren't any 'source' of bandwidth.

Awwwwkay...

Buddy, you are way off the track. Actually, you are about a mile off the track. And frankly you are doing more harm than good with your pedantic rehashing of the same old debunked crap page after page. You may think you sound wise, but in actuality you sound more like a fanboy/troll. Try something else besides Kool-Aid.

As a port, Thunderbolt is a good idea. But it can't replace an add-in card in a pro setting. The few beneficial things that TB brings are offset by the fact that if the port dies, or if its performance drops over time, you are stuck with it.
You can't update/upgrade it, and your production stops until the whole workstation is sent to Apple for repair. The old design didn't have such a limitation. If one of the built-in ports went bad, or if your RAID controller wasn't doing it for you anymore, you could buy a new one and replace it yourself! And yes, people and companies do in fact upgrade/repair their PCs all the time. You don't replace a $3k+ workstation just because a controller goes bad or a HDD is too small.

But, hey, if Apple has decided that they want out of the scientific/engineering field completely, then I'm OK with that; I understand. What they want to sell are CAD/design workstations to rich and trendy production houses, and this is a good thing too, for them. But for my field of work, I'll be moving to PC/Windows/Linux from now on.

I'll still use a Mac as my artistic/hobby platform of choice, but it will be a Mac mini. No way in hell will I pay more than the price of a mini for a closed-up, proprietary system.

----------

what user? who are you talking about?
i thought you said you were switching to windows anyway since you can't do your graphics work on your imac anymore.

(and in case you* can't read between the lines- can you please admit (to yourself) that the macpro is way more computer than you need and that you're just arguing about stuff that doesn't actually matter to 99% of users for the sake of arguing?)

*you meaning - you and at least half the people in this thread

What???

I'm not talking about the iMac, silly...
I have multiple Mac Pros here in the lab.

----------

I personally can't wait for Apple to announce a price. I want to see how much money I'm going to have to spend.

This thing should make my Realflow and Houdini sims FLY. I also can't wait to use Mari on it!!! That Pixar demo still amazes me. The amount of data per channel that they were painting fluidly was awesome.

Anyone thinking this isn't a "pro" machine is wrong. Good luck finding a faster machine.

Go check Boxx, Dell and HP... They all make more powerful workstations, and you get to choose your video card as a bonus, and you can even have more than 2!
 
Nope. There are two channels, each rated at 10Gb/s: one is dedicated to data, the other to display signal.

No. The data on those channels is encoded as Thunderbolt data, not PCIe data. The Thunderbolt data moves at 10Gb/s. That is only how fast the Thunderbolt data moves from device to device. It is not the arrival/departure rate of the native protocols. And that is not the only gating factor as to how fast the PCIe data goes into/out of the Thunderbolt network.

If Thunderbolt is like a freeway/highway, then that is the freeway speed, not the on/off-ramp speed. If you can't get on the highway, it doesn't make a difference what the highway speed is. At the end points the PCIe data is not going to travel any faster than it can be taken off of and put back onto the native protocols.

1500MB/s adds up to 3 PCIe v2 lanes, not 4.

Correct, because there is overhead in sharing/switching and transport. There are also isochronous constraints to meet. In TB v2 you also have to demultiplex the now-shared logical channel. There is likely hardware to make that lightweight, but it isn't going to be free.
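The lane arithmetic behind "3 lanes, not 4" (using the standard 500MB/s of usable data rate per PCIe 2.0 lane):

```python
PCIE2_LANE_MB_S = 500   # usable data rate per PCIe 2.0 lane after 8b/10b
observed_mb_s = 1500    # practical TB2 ceiling discussed above
lanes_worth = observed_mb_s / PCIE2_LANE_MB_S
print(lanes_worth)      # 3.0 lanes' worth out of the 4 provisioned
```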

But the x4 is a theoretical max. You can easily dismiss these "additive port" claims without diving into the minutiae of Thunderbolt's specific switching infrastructure just by pointing out the violation of the conservation-of-flow principle (what goes in has to come out at around the same rate) through the network. You can't have 10Gb/s on one side and 8Gb/s on the other of what is logically a single connection.
 
No. The data on those channels is encoded as Thunderbolt data, not PCIe data. The Thunderbolt data moves at 10Gb/s. That is only how fast the Thunderbolt data moves from device to device. It is not the arrival/departure rate of the native protocols. And that is not the only gating factor as to how fast the PCIe data goes into/out of the Thunderbolt network.

Well, PCIe v2 uses 8b/10b encoding, meaning that a signaling rate of 10Gb/s translates to 1GB/s of data rate. Whether Thunderbolt encapsulates PCIe or re-encodes it, I don't know, but I do know that a data rate of 1GB/s has already been proven over a single Thunderbolt wire in the Anandtech test. So we know that, in practical terms, 1GB/s of data rate is possible, including protocol overhead, in one wire.
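The 8b/10b arithmetic in that paragraph, spelled out:

```python
signaling_gb_s = 10                      # signaling rate of one channel, Gb/s
payload_gb_s = signaling_gb_s * 8 / 10   # 8b/10b: 8 data bits per 10 line bits
payload_GB_s = payload_gb_s / 8          # bits -> bytes
print(payload_gb_s, payload_GB_s)        # 8.0 Gb/s == 1.0 GB/s
```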

----------

Correct, because there is overhead in sharing/switching and transport. There are also isochronous constraints to meet.

Again, 1GB/s of data rate has already been demonstrated over a single-cable Thunderbolt v1, including protocol overhead.
 
I'm not a pro. I do however need storage and am not rich.

hey, i'm not rich either.. over the years, i've accumulated 2.1TB storage (not counting the ssd or portable drive).. so 3 internals and one external drive..

i spent about $900 on those drives.

if i spent the same amount today, i could get 8TB storage via thunderbolt.. so, from where i'm sitting at least, this whole 'thunderbolt is crazystupidexpensive' thing is a fallacy.. i mean, i can -right this minute- go buy twice the storage for half the price of what i've already spent.
you see that, right?


I threw in a $120 sonnet tempo SSD PCIe card and an 840 pro ssd and it flies at full speed so that takes care of the link speed issue.

i've talked about this earlier in the thread as well as multiple times in the past on this forum but nobody ever seems to want to discuss it..

so you have some drive that flies and speeds and etc.. but this speed you're talking about-- does it benefit you, personally, in any (real) way?

does it speed up your work? does it make your computer life any easier? can you model a structure or layout a design or edit a photograph or whatever it is you may do any faster? i didn't think so

that type of speed is irrelevant when it comes to making your life easier and your workload less..

i mean, who cares if i can move a project to backup in 2 minutes instead of 3 minutes.. i sure as hell don't and neither does anybody that just spent 3 weeks designing a structure..

if the speed you're talking about would make my work take 2 weeks instead of 3 weeks then yes, of course without a doubt i need pcie (or whatever).. but it doesn't.. it doesn't speed up my work- at all.

do you understand what i'm saying?

----------

What???

I'm not talking about the iMac, silly...
I have multiple Mac Pros here in the lab.

again, i'm not asking for you to admit to me that you're fos.. that would change nothing because i already know it. i was asking if you could admit it to yourself (but, unfortunately, i already know the answer to that question as well).. you'd have a better day if you did but oh well, i tried
 
hey, i'm not rich either.. over the years, i've accumulated 2.1TB storage (not counting the ssd or portable drive).. so 3 internals and one external drive..

i spent about $900 on those drives.

if i spent the same amount today, i could get 8TB storage via thunderbolt.. so, from where i'm sitting at least, this whole 'thunderbolt is crazystupidexpensive' thing is a fallacy.. i mean, i can -right this minute- go buy twice the storage for half the price of what i've already spent.
you see that, right?




i've talked about this earlier in the thread as well as multiple times in the past on this forum but nobody ever seems to want to discuss it..

so you have some drive that flies and speeds and etc.. but this speed you're talking about-- does it benefit you, personally, in any (real) way?

does it speed up your work? does it make your computer life any easier? can you model a structure or layout a design or edit a photograph or whatever it is you may do any faster? i didn't think so

that type of speed is irrelevant when it comes to making your life easier and your workload less..

i mean, who cares if i can move a project to backup in 2 minutes instead of 3 minutes.. i sure as hell don't and neither does anybody that just spent 3 weeks designing a structure..

if the speed you're talking about would make my work take 2 weeks instead of 3 weeks then yes, of course without a doubt i need pcie (or whatever).. but it doesn't.. it doesn't speed up my work- at all.

do you understand what i'm saying?

----------



again, i'm not asking for you to admit to me that you're fos.. that would change nothing because i already know it. i was asking if you could admit it to yourself (but, unfortunately, i already know the answer to that question as well).. you'd have a better day if you did but oh well, i tried

Better calm yourself buddy. You've been warned once already.

You're the one trying to tell us what a workstation should be when you've just posted that you have a whopping 2.1TB of data on 3 drives... I have 12x as much hooked to my HTPC.

In any case I was talking about the nMP in regard to my work environment. You know, real work... We use them to process cartographic data, not just to paint over pictures in Photoshop.
 
Better calm yourself buddy. You've been warned once already.
huh? you've called people fanboy, troll, one of those(?), kool-aid drinker(?).. and then threaten to narc me out for saying you're fos? go ahead-- i hope it makes you feel better :rolleyes:


You're the one trying to tell us what a workstation should be when you've just posted that you have a whopping 2.1TB of data on 3 drives... I have 12x as much hooked to my HTPC.

i don't have 2.1TB data.. maybe around 1.5.. an entire project folder of mine usually weighs in under a gig..
that's concept AND working drawings, contracts, renderings, estimates, outsource files-- everything.. a few weeks of work.
and even then, a lot of my used storage is things like music and movies..
why would i want or need 25TB of blank disks?

and no, i'm not telling anybody what a workstation should be.. i'm talking about my own personal experience because that's what i know.. and i'm doing that as an example of what (i feel) other people in this thread should be doing because there would be a completely different tune if people talked honestly about their real world usage vs some elusive hypothetical pro who for whatever reason can't A)use a mac pro for their computational needs and B)talk about it themselves so some random interwebbers that they don't know must do it for them..

there was finally one guy in this thread (beaker) that claimed he has real world experience about how his work day will be cut in half by using a windows machine instead of this mac.. when asked for an explanation, well, of course there was no answer given because it's not true..

maybe you, with your real_work_lab_environment_experience, can enlighten me on how that is possible?
 
Better calm yourself buddy. You've been warned once already.

You're the one trying to tell us what a workstation should be when you've just posted that you have a whopping 2.1TB of data on 3 drives... I have 12x as much hooked to my HTPC.

In any case I was talking about the nMP in regard to my work environment. You know, real work... We use them to process cartographic data, not just to paint over pictures in Photoshop.

Can't reason with that one.

Put on your "ignore" list for a happier life.

He is still saying that stuff is "fast enough" and anyone wanting faster is, as he so eloquently puts it, "fos". He also claims to be eager to spend $600 on a TB drive enclosure so he can move the drives out of his 2009 onto his desk next to a 2013, just so MOST of his cables match.

When I pointed out the silliness of his argument, he told me I had a personality disorder.

Walk away.
 
Put on your "ignore" list for a happier life.

You are right... I've put two on ignore and somehow I already feel as if a terrible weight has been lifted from my shoulders... Aaaahhh, sweet liberty, free at last...
 
He is still saying that stuff is "fast enough" and anyone wanting faster is, as he so eloquently puts it "fos".

and, fwiw, no i'm not saying stuff is fast enough.. far far from it.. i can have an idea in a split second but it can take quite a few days to get it into something tangible enough to communicate that idea with another person.

computers, and more importantly, the way we interact with computers is still incredibly slow..

and i don't care how much bandwidth or whatever you may throw at that problem-- the problem will not be solved..

pcie is not even close to being a solution to any sort of meaningful speed improvement.. next to the type of speed improvements i would like to see, moving data around with regular usb is already way (way!) fast enough.
 
there was finally one guy in this thread (beaker) that claimed he has real world experience about how his work day will be cut in half by using a windows machine instead of this mac.. when asked for an explanation, well, of course there was no answer given because it's not true..

LOL.

I didn't reply because I forgot, and proving it to you isn't important. I've got a lot of work to do.

Here though, as I wait for a preview render I've got a couple of minutes, so here are some rough numbers. Most of my work time is spent setting up fluid simulations and Maxwell renders. Both of these scale incredibly well across cores, especially Maxwell. Both take a lot of iteration to get the correct settings. The speed with which this happens is directly correlated to CPU power. For fluids, the large internal storage arrays we use also play a big role.

Anyway, here are some benchmarks for Maxwell

The 12 core 3.06 Mac Pro currently on my desk: 755
The 16 core Sandy Bridge Windows machine on the other side of the desk: 1112

Projected score of the 20 core * 3.4 GHz Ivy Bridge workstation I'm planning to buy: 1631
Projected score of the 12 core * 2.7 GHz Mac Pro: 777

So there you have it. Over twice as fast.

If we switch over to using Arion or Octane, as we do frequently, the difference is going to be comical.
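Taken at face value (and assuming Maxwell's benchmark score scales roughly linearly with usable CPU throughput, which is the premise of the comparison), those numbers can be turned into speedup ratios with a quick script. The machine labels and scores are just the ones quoted above:

```python
# Maxwell benchmark scores quoted above (higher is better).
scores = {
    "12-core 3.06 Mac Pro (current)": 755,
    "16-core Sandy Bridge Windows box": 1112,
    "20-core 3.4 GHz Ivy Bridge (projected)": 1631,
    "12-core 2.7 GHz new Mac Pro (projected)": 777,
}

baseline = scores["12-core 2.7 GHz new Mac Pro (projected)"]
for name, score in scores.items():
    # Ratio relative to the projected new Mac Pro.
    print(f"{name}: {score / baseline:.2f}x")
```

1631 / 777 comes out to about 2.10x, which is where "over twice as fast" comes from.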
 
You are right... I've put two on ignore and somehow I already feel as if a terrible weight has been lifted from my shoulder... Aaaahhh Sweet liberty, free at last...

you know people can see when they've been put on an ignore list by another user, right?
or maybe you're talking about two other people besides me? :/
 
Most of my work time is spent setting up fluid simulations and Maxwell renders.

well cool, you're talking my language though i mostly use indigo as opposed to maxwell but they're both unbiased render engines so..

but the problem with you saying that is that i do in fact know what it's really like, and there's no way that you're doing half the work on windows maxwell vs. mac maxwell.. in fact, you're doing the same amount of work.

copy/paste from a similar subject earlier this year:
https://forums.macrumors.com/posts/16887587/

yours truly said:
look.. i admit, i'm not the best at getting a thought out of my head -> into the keyboard -> out to the webz -> and into your brain but...

you're talking as if the computer is doing all the work and the only limiting factors are how fast it can add 2&2..

but if i calculate the time i spend creating a render, MOST of the time has nothing to do with pushing the make_me_pretty button..

it's the drawing/modeling and texturing which is 90% of the work.. or, the work that i physically do while interacting with the computer.. 10% of the time is me setting up the lights etc, in which i'll often need to run downscaled previews to make sure things are going to look right in the finals.. and sure, having a kickass computer during those times is surely welcome and i'll often network with a laptop or desktop or both just to get more cpus going..

but please try to recognize the point being made.. i could have 500 cores going during that phase, in which case my previews would come back instantaneously.. and what have i actually accomplished? not much.. i shaved 5% off my project time by being the proud owner of the world's fastest hypothetical computer on earth.. on a more realistic level, say i have a 16 core computer which is seemingly so much better to some of you all, i just shaved 3% off the actual time..

most of the workload is simply unaddressed.. you just threw away a lot of effort & money at an attempt to increase efficiency but didn't really solve anything.. that's because clock speed and/or #of cpus are not the problem.. the problem lies elsewhere..


the actual time it takes to complete the final renders.. that's a different story.. while the computer is chugging along, i can be at the beach.. working on other things.. sleeping.. eating.. whatever... and this is when i demand a 'pro' product.. that thing needs to go full speed for a week straight if i so desire and not break.. and in my experience, this is the type of performance i can expect out of a macpro.. they're very well built and generally of high quality throughout.. that's why they're called macpro


but do understand.. renders aren't my final product.. in fact, i personally don't even need them to arrive at my ends.. i use them more for client communication / sales purposes..

so in that effect, i'm by no means a 'professional renderer'

and if i were, i definitely wouldn't be sitting around daydreaming and/or whining about how mac doesn't have 16 core machines but soandso does.. a 16 core machine for someone who makes their living strictly via producing computer renders or animations sounds like a horrible idea to me..

that would be like a carpenter who only has a handsaw.. it's just a stupid idea..

i mean, if it's your job to churn out renders day after day after day, you better have more than a couple of computers linked up.. and in that case, buying five 16-core macpros and assembling them as one just seems ridiculous to me and a waste of money.. you'd only need one macpro, then build or buy the cpuasaurs in a non-workstation train of thought..

but like i said, i already knew that there was no way you were doing half-days simply because you use windows computers instead of macs
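the 5%-and-3% arithmetic in the quote above is essentially amdahl's law: if previews/renders are only ~10% of total project time, speeding up just that slice can never save more than ~10% overall. a quick sketch (the fractions and speedups are the hypothetical ones from the post, not measurements):

```python
# Amdahl's-law style estimate of overall project-time savings.
# Assumes (as in the post above) that rendering previews is ~10% of
# total project time and everything else is human work that faster
# hardware doesn't touch.  These numbers are illustrative, not measured.

def overall_saving(render_fraction, render_speedup):
    """Fraction of total project time saved by speeding up only rendering."""
    new_time = (1 - render_fraction) + render_fraction / render_speedup
    return 1 - new_time

# hypothetical 500 cores -> effectively ~100x on the render slice
print(f"{overall_saving(0.10, 100):.1%}")     # ~9.9% of total time saved
# a 16-core box vs. a 12-core box -> ~1.33x on the render slice
print(f"{overall_saving(0.10, 16 / 12):.1%}")  # ~2.5% of total time saved
```

the first case caps out near the 10% ceiling no matter how many cores you add; the second shows why a 12-core vs. 16-core argument barely moves total project time when most of the work is the human at the keyboard.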
 
but like i said, i already knew that there was no way you were doing half-days simply because you use windows computers instead of macs

So you'd be fine using a 486 right? Because you'd still be at the office the same amount of time? Makes no difference? Even though the same task would take 100x longer or whatever?
 
So you'd be fine using a 486 right? Because you'd still be at the office the same amount of time? Makes no difference?
a 486 won't run my software so- no

Even though the same task would take 100x longer or whatever?

task? i mean, i'm basically begging people in this thread to make the appropriate separation between taskS..

are you talking about your task or the computer's task because they're two completely different things?


but if we're talking about rendering (which i think we are still) i could drive a rendering farm equally as well from my mbp as i do the desktop.. i could use a mac mini (assuming they don't still only have the intel graphics.. not even sure about that but...)..

it doesn't matter what drives the farm.. i still have to do the same amount of work and it will take me the same amount of time.

if you're complaining about only having 12 cpus to render with then sure, that's understandable.. i can make do with 4 (and have even rendered on a single core in the past).. but yeah, i would love to have 100 cores at my disposal during rendering, but i can't justify the cost.. it's available to me on a per project basis but even then, i don't use it because, like i said, i'm not a professional renderer and those types of images are only a small portion of what i actually do..

but what i'm getting at is that a macpro in no tangible way cripples your ability to hook up a hundred nodes.. and having a windows machine in no way speeds up the time you must spend creating a render (unless the software/UI happens to be better on windows).. there are hundreds of cpus at your disposal right now without even leaving your desk, and all that power is available to you wirelessly even.. with the other hardware completely out of sight.

but you already know this..
 
.. and having a windows machine in no way (unless the software/UI happens to be better on windows), speeds up the time you must spend creating a render..

Windows has nothing to do with it.

But yes, having more than double the amount of CPU horsepower does speed up the software.
 