
amassoodapple (macrumors newbie, original poster, Apr 3, 2011)
Hello everyone, I just wanted to ask everyone's opinion on something I have been thinking about for a very long time. Right now I have a 2008 Mac Pro, 8-core, with 8GB RAM, a 2TB HD, and an NVIDIA GeForce 8800 GT. I use the Mac Pro for normal tasks, but also for Final Cut Studio, editing 1080p video.

I also have a 15-inch MBP, I think a 2007 or 2008. It has a 2.4GHz dual-core CPU with 4GB RAM. The screen on it is currently broken and I haven't gotten around to replacing it yet because I'm not sure it's worth it. I also used the laptop for Final Cut editing on the go.

With the release of the 2011 MBPs and their amazing specs, I'm wondering if I should sell my current MP and MBP and get the 17" or 15" quad-core MacBook Pro with the 750GB HDD and 1GB of VRAM.

My question is: IS IT WORTH IT??
Obviously it won't perform as well as my current 2008 Mac Pro, but can it get close? A little less speed is fine, but it can't take something like 15 minutes longer to render. That's what I'm concerned about.

Or should I just stick with my Mac Pro and get the screen on my MBP fixed? One thing I like about the MBP, obviously, is the portability.
 
It would depend on how much editing you do (file sizes play a part in this as well).

If it's here and there, you probably don't really need the power of an MP. But if you're earning a living with it (or learning to do so), then keeping the MP is a good idea IMO, as it takes more RAM (can exceed what's possible in a laptop) and has the additional cores for software that utilizes them for increased performance (when time to complete jobs really matters).
 
If you get a top-specced 17-inch with 8GB RAM, it will probably perform just as well as your MP.

The MacBook has faster RAM and a faster GPU, and although it has half the number of cores, it has SpeedStep and Hyper-Threading.
 
The MacBook has faster RAM and a faster GPU, and although it has half the number of cores, it has SpeedStep and Hyper-Threading.
Not much software can utilize the faster memory or channel configurations of the Nehalem or newer architectures, though (most is written with backwards compatibility in mind, so it isn't optimized to utilize the newer architecture, if it could use it at all).

As for Hyper-Threading, it also requires that the software support it. SpeedStep is useful for single-threaded applications, as those benefit from clock speed more than anything else (within the CPU and memory).
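
To make that concrete, here's a minimal sketch in C (pthreads, POSIX; my own toy example, not from any shipping app) of what "software support" means: the application itself has to split its work across however many logical cores the OS reports, or the extra Hyper-Threading cores just sit idle.

/*
 * Toy sketch: an app only benefits from Hyper-Threading if it spawns
 * enough threads to cover the logical cores the OS reports.
 * Build with: gcc -O2 sum_threads.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define N 8000000

static double data[N];
static double partial[64];          /* one slot per worker */

struct job { long start, end, slot; };

static void *sum_range(void *arg)
{
    struct job *j = arg;
    double s = 0.0;
    for (long i = j->start; i < j->end; i++)
        s += data[i];
    partial[j->slot] = s;
    return NULL;
}

int main(void)
{
    /* Logical cores: a Hyper-Threaded quad core reports 8 here. */
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpu < 1 || ncpu > 64) ncpu = 1;

    for (long i = 0; i < N; i++) data[i] = 1.0;

    pthread_t tid[64];
    struct job jobs[64];
    long chunk = N / ncpu;

    for (long t = 0; t < ncpu; t++) {
        jobs[t].start = t * chunk;
        jobs[t].end   = (t == ncpu - 1) ? N : (t + 1) * chunk;
        jobs[t].slot  = t;
        pthread_create(&tid[t], NULL, sum_range, &jobs[t]);
    }

    double total = 0.0;
    for (long t = 0; t < ncpu; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("%ld threads, sum = %.0f\n", ncpu, total);
    return 0;
}

A single-threaded version of the same loop would run on one logical core no matter how many the CPU exposes, which is the point: the hardware feature does nothing without cooperating software.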

Disk I/O is another matter, and it's the biggest bottleneck of any system as shipped (much slower than RAM or the CPU). Fortunately, it can be addressed on desktop systems, and on some laptops, given sufficient funds.
 
Not much software can utilize the faster memory or channel configurations of the Nehalem or newer architectures, though (most is written with backwards compatibility in mind, so it isn't optimized to utilize the newer architecture, if it could use it at all).

This is totally incorrect. Software doesn't even know about the mem speeds. It's a black box from an app's point of view.
 
This is totally incorrect. Software doesn't even know about the mem speeds. It's a black box from an app's point of view.
This has been covered before in detail.

I'm talking about using flags to influence the order of operations during the compile process. I've been surprised how little this gets utilized in applications that would benefit from the proper use of these settings; backwards compatibility plus too little time are usually the culprits. To a lesser extent (as it's extremely difficult), there's doing it manually when no such flag exists, but that eats an extreme amount of time, which wreaks havoc on deadlines.
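
To illustrate what I mean by flags (GCC's spellings shown purely as an example; ICC and the rest have their own equivalents, and the loop is a made-up stand-in):

/*
 * Illustration only. The same source, two builds:
 *
 *   Baseline, runs on any x86:
 *     gcc -O2 saxpy.c -o saxpy
 *
 *   Tuned to the build machine's CPU (SSE/SSE4 picked automatically,
 *   but the binary may not run on older CPUs -- the backwards
 *   compatibility trade-off mentioned above):
 *     gcc -O3 -march=native -funroll-loops saxpy.c -o saxpy
 */
#include <stdio.h>

#define N 10000000

static float x[N], y[N];

int main(void)
{
    const float a = 2.0f;
    for (long i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* A simple loop like this is exactly what the vectorizer targets. */
    for (long i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);
    return 0;
}

The code doesn't change at all; only the build settings do, which is why skipping them is mostly a scheduling decision rather than a technical one.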

Most of the time, the type of application doesn't benefit. But for, say, Symmetric Multi-Processing (SMP), it can have a significant effect. Especially if the target CPU is something like 6+ years old (newer instructions in the CPU could make a difference with that much time between the current CPU line at the time of deployment and the minimum CPU requirements).

The biggest thing I see from a user standpoint, however, is that they don't understand what their software actually does (not that it's easy to find out), and they end up bottlenecking on disk I/O far more than on RAM or CPU.

So I kept it extremely simple.
 
Personally, I wouldn't want to totally replace my desktop with a laptop as a main editing machine, but that's just me.

Some editors, depending on their workflows, need things like capture and I/O cards, and you obviously can't do that on a laptop. With a laptop, you'd also be limited to two displays (the built-in display plus one external) on a GPU that you can never upgrade. And having three additional bays of internal storage is a big deal to me.

But, to each his own...
 
This is totally incorrect. Software doesn't even know about the mem speeds. It's a black box from an app's point of view.

This is correct (software doesn't know about memory speeds, and can't even get that info).

I'm talking about using flags to influence the order of operations during the compile process. I've been surprised how little this gets utilized in applications that would benefit from the proper use of these settings; backwards compatibility plus too little time are usually the culprits. To a lesser extent (as it's extremely difficult), there's doing it manually when no such flag exists, but that eats an extreme amount of time, which wreaks havoc on deadlines.

I've seen this happen before, but given how simple the process is, I can pretty safely say that developers who skip it probably should not be coding performance-sensitive software. There are a lot of developers who optimize for code size instead of speed, for no apparent reason.

Most of the time, the type of application doesn't benefit. But for, say, Symmetric Multi-Processing (SMP), it can have a significant effect. Especially if the target CPU is something like 6+ years old (newer instructions in the CPU could make a difference with that much time between the current CPU line at the time of deployment and the minimum CPU requirements).

It depends... Newer CPUs have better vector instructions. I've made some very good performance gains in code by using SSE instructions on Intel and the vector instructions on ARM. But these certainly aren't a one-size-fits-all instruction set, and I could only use them in specific circumstances. In addition, the most recent major revision to SSE was SSE4.1, which was introduced with the 2008 Mac Pros. So not much has changed since. As far as I know, Nehalem introduced no major new instructions. (There was SSE4.2, which was not huge in the grand scheme of things.)
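
To give a rough idea of the kind of circumstance I mean, here's a toy sketch (my own example, not production code): summing floats four at a time with SSE1 intrinsics, which any Intel Mac can run.

/*
 * Sketch of the SSE use described above: four floats per instruction.
 * SSE1 intrinsics only, so no new-CPU requirement. Illustrative only.
 */
#include <xmmintrin.h>   /* SSE */
#include <stdio.h>

#define N 1024            /* multiple of 4 keeps the example simple */

static float data[N];

/* Plain scalar version. */
static float sum_scalar(const float *p, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += p[i];
    return s;
}

/* SSE version: four partial sums carried in one 128-bit register. */
static float sum_sse(const float *p, int n)
{
    __m128 acc = _mm_setzero_ps();
    for (int i = 0; i < n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(p + i));

    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    return lanes[0] + lanes[1] + lanes[2] + lanes[3];
}

int main(void)
{
    for (int i = 0; i < N; i++) data[i] = 1.0f;
    printf("scalar = %f, sse = %f\n", sum_scalar(data, N), sum_sse(data, N));
    return 0;
}

This is also why it only works "in specific circumstances": the data has to be laid out so you can walk it four lanes at a time, which plenty of real workloads don't allow.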

-------

Anyway, more to the OP's point... The MacBook Pro may not show significant gains. You're gaining memory bandwidth, but losing half your cores, and you might take a hit in disk I/O.
 
I've seen this happen before, but given how simple the process is, I can pretty safely say that developers who skip it probably should not be coding performance-sensitive software. There are a lot of developers who optimize for code size instead of speed, for no apparent reason.
This is part of my point. Between business decisions (too short a development schedule, for example) and lazy/inept/burnt-out programmers, the end product tends not to work all that well (vs. what it could be if it were optimized properly).

It depends... Newer CPUs have better vector instructions. I've made some very good performance gains in code by using SSE instructions on Intel and the vector instructions on ARM. But these certainly aren't a one-size-fits-all instruction set, and I could only use them in specific circumstances. In addition, the most recent major revision to SSE was SSE4.1, which was introduced with the 2008 Mac Pros. So not much has changed since. As far as I know, Nehalem introduced no major new instructions. (There was SSE4.2, which was not huge in the grand scheme of things.)
There are always dependencies (you need to match the flags and programming techniques to what the application is meant to do).

As for flags, however, I wasn't just thinking of SSE for floating-point calculations, but of others that order the instructions and data in a manner that best suits the application (to reduce or prevent starvation). If we were talking about an Itanium 2, for example, you could use /Qopt-mem-bandwidth<n> (where n = 0 is for serial code, n = 1 for parallel) in an Intel compiler and see if it improves matters vs. the baseline performance. Prefetching can be quite handy too, when viable.
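
As a rough sketch of what prefetching looks like in code (GCC's __builtin_prefetch shown; the distance is a made-up tuning value, and hardware prefetchers already cover a simple stream like this one, so the real wins come on less regular access patterns):

/*
 * Sketch of software prefetching with GCC's __builtin_prefetch.
 * PREFETCH_AHEAD is a tuning knob: too small and the data isn't in
 * cache yet when needed, too large and it's evicted before use.
 */
#include <stdio.h>

#define N 1000000
#define PREFETCH_AHEAD 64        /* elements, i.e. a few cache lines */

static double src[N], dst[N];

int main(void)
{
    for (long i = 0; i < N; i++) src[i] = (double)i;

    for (long i = 0; i < N; i++) {
        if (i + PREFETCH_AHEAD < N)   /* hint: we'll read this soon */
            __builtin_prefetch(&src[i + PREFETCH_AHEAD], 0, 1);
        dst[i] = src[i] * 2.0;
    }

    printf("dst[N-1] = %f\n", dst[N - 1]);
    return 0;
}

As with the other flags, whether it actually helps has to be measured against the baseline; it's application-dependent, which was my point from the start.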
 