Well, for that application, the "lesser" machine may be better, but to my point: how bad was the 12-core on the 2013 and earlier models? I'm betting they were much worse. There will always be apps that find the weakness in a system; it's just a matter of what fits your workload best. That's also why I cancelled my 10-core order and got the base model. For me, I don't think the Vega 64 and extra cores would be worth the extra cash. The RAM would be, but I can upgrade that later. I've been working in a 32GB footprint for a long time, so no biggie there. I was actually close to getting the top-of-the-line iMac 5K for many of the reasons you stated, but decided the extra TB3 ports, 10GbE, faster SSD, GPU, and processor were worth the $1500 or so upgrade, though it wouldn't scale as well to $3000 or so.
There is also the bonus that the brand-new W-series Xeon chips in the iMP aren't on Intel's buggy CPU list. So maybe the i7 chips in everyone else's Macs/PCs will be gimped down to 66-75% of their previous performance with their next OS update, making the iMP much more competitive when using mundane software. :)
 
Apparently the Xeon W is susceptible as well.
 
Lots of speculation there without confirmation, but I'll grant that the W-series not being on the list now doesn't mean it won't be later. Conversely, this exploit has been known about since at least June, and this is the first new series of chips Intel has brought to market since. So it's not inconceivable that they added encryption or AMD-style randomization to the speculation cache, which would stop Meltdown at least... maybe.

Apple's inclusion of the T2 chip (and the T1 in the new MBP) could also have been partly due to a distrust of Intel's CPUs. The T2 handles all onboard disk I/O and stores the encryption keys within itself. So even if the Intel Management Engine is compromised, it may not have access to the unencrypted data that an attacker would want on an iMP or MBP. Any stolen data would be encrypted(?).

The T2 chip originally seemed like paranoid overkill at best, and I wrote it off as a way to eventually kill off the Hackintosh community (which it obviously is). But Apple may also have had this particular set of problems in mind, since Meltdown/Spectre was known about around the time they announced the iMP. If so, they've quietly rolled out a very nice security feature that will set them apart from their competition over the next year, while every other manufacturer is still trying to figure out what to do. If the T2 wasn't already planned for inclusion in every single Mac model released this year, it will be now.

This also explains the bugs I've seen with sleep/wake and networking, since the T2 may still have some problems in how it interacts with the Intel Management Engine it's attempting to firewall.
 
Not speculation, direct from Intel:

https://www.intel.com/content/www/us/en/support/articles/000025619/software.html

I'm certainly not knowledgeable enough to know, but I doubt that the T2 will have anything to do with possible mitigation.
 
Or just sell it when you need to upgrade. Those machines are going to be great second hand units in a couple of years time assuming the insides don't cook themselves.

I just bought a top-of-the-line i7 2014 iMac and it's got plenty of life left in it speed-wise for what I do (music production).

The problem is it's still wishful thinking. Apple has been making multi-core, multi-CPU systems for over a decade. For example, I bought the 2.8GHz 2008 Mac Pro for graphic design and photography, thinking it would be future-proof with its 8 cores.

In reality, by the time programs had caught up, it was way out of date, and software was being written for much more efficient processors with Hyper-Threading and built-in codec support. The 8 cores weren't really taken advantage of in ordinary tasks, only in very specific ones like computation and rendering, where multi-core power could actually be used.

Things are getting better, but they still aren't great. Many programs still benefit most from high-clock single-core performance. Even most of CC is still that way, which is such a ball ache. Most apps still only use multiple cores when rendering.

I would buy for now rather than worry about the future. If you are editing 4k, creating motion graphics, computation etc this will be a good machine.

On the other hand, if you are just transcoding video, the i-series may be better, as it has Quick Sync video encoding and has already been shown to be about 25% faster in the 5K i7 iMac vs. the 10-core iMac Pro. But if you are stabilizing footage, adding multiple filters, and colour correcting, the iMac Pro is hugely faster.

Depends what you are doing.

Things will move on, and you can't upgrade anything in these machines, which makes them paperweights much quicker than modular systems. Even with Thunderbolt: TB2 was hailed as incredible when it came out, and now that TB3 is here, nobody makes TB2 peripherals any more.

I'm still on the fence. It's an incredible machine, and it's not bad money on paper. It's just the BS of an all-in-one: you are spending your money on a beautiful 5K display with the computer bolted to it, and once it's done, it's done. You can't use the screen again.

Buy a 5K monitor and you can put it on any machine you like, and the same goes for the computer. It just seems mad, especially when you can't buy a matching monitor to use as a secondary display.

The Mac Pro is worth waiting for. If people have been getting by, another 6 months isn't going to make a huge difference. Hopefully it will hit a similar price point to the current Mac Pro. I think there will be a lot of annoyed iMac Pro buyers.
 
I guess my answer, from everything else expressed in this thread, is that you should assume that multi-core performance in most apps won't meaningfully improve within the useful life of this computer. For that to happen, a significant number of ordinary Mac and Windows users will have to buy computers with 6-8 core chips. Otherwise, only those companies making (very expensive) software for niche industries like 3D animation/rendering will bother with the optimizations required.

But you don't need much beyond 2-4 cores for office software, web browsing, etc. Not even really for games right now.

So look at the software that you actually use and make a decision to buy (or not) based on whether more cores will let you get your current or near-future work done faster than a computer with a higher clock speed would. Or if you need a better GPU for WORK, and not just playing games.
Your general point about multi-core performance is correct, but the conclusion that "optimizing" software for multi-core is mostly a matter of the developer's "will" to do the work is too simple. The reality is much more complicated.

Most software can't and won't be "optimized" for multi-core, simply because the program's logic and the development tools don't work that way.

Data that can easily be broken into separate chunks, processed individually on separate cores, and recombined at the end into a final result is primed for multi-core. Audio/video encoding was one of the first major areas to benefit from multi-core, because you're transcoding from a "known" (i.e. the source file).
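That split-process-recombine pattern can be sketched in a few lines of Python. This is a hypothetical illustration of the idea (the chunk "work" here is a trivial stand-in, not any real encoder):

```python
from concurrent.futures import ProcessPoolExecutor

def encode_chunk(samples):
    # Stand-in for the real per-chunk work, e.g. compressing one slice
    # of audio. Each chunk is independent of every other chunk.
    return [s * 2 for s in samples]

def parallel_encode(data, n_chunks=4):
    # 1. Split the input into independent chunks.
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # 2. Hand each chunk to a separate process (i.e. a separate core).
    with ProcessPoolExecutor() as pool:
        results = pool.map(encode_chunk, chunks)
    # 3. Recombine the partial results, in order, into the final output.
    out = []
    for partial in results:
        out.extend(partial)
    return out
```

The whole trick only works because step 2's chunks don't depend on one another; when they do, this pattern falls apart, as the next posts explain.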

Where multiple cores wouldn't be helpful is recording a single track of audio: if one core can easily handle recording the one track in real time, additional cores are of no help, because they can't process the "future" of the recording, only the present. It's an obvious example, but that's the concept of why everything isn't just magically "optimized" for multi-core.

Most data needs to be processed sequentially. Take a chained calculation: c = a + b, then e = c + d. There's no way to multi-core that. In order to get "e", you have to compute "c + d", and in order to get "c", you have to compute "a + b" first. If you give one core "a + b" and a second core "c + d", that second core is just going to sit waiting for "c". There is no getting around this. It's a very simplified picture of the issue, but that's been 90% of software since the beginning.

In the larger scheme of things, in most cases, the dependency in the example above is unavoidable. In other cases, the developer can rewrite the software to avoid that chain altogether and find a different way to arrive at "e" that doesn't require computing "c" first. But that's often much easier said than done.
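The two situations can be shown side by side with a toy Python sketch (function names are my own, purely for illustration). The first version is a dependency chain like the a/b/c/e example, where each step needs the previous result; the second exploits the fact that addition is associative to restructure the same computation into independent halves that *could* run on separate cores:

```python
def chain_sum(values):
    # Dependency chain: step k cannot start until step k-1 finishes,
    # so a second core would just sit idle waiting for the running total.
    total = 0
    for v in values:
        total = total + v
    return total

def tree_sum(values):
    # Same answer, restructured: because addition is associative, each
    # half is an independent sub-problem with no shared running total,
    # so the two halves could be computed in parallel and then combined.
    if len(values) == 1:
        return values[0]
    mid = len(values) // 2
    left = tree_sum(values[:mid])    # independent of the right half
    right = tree_sum(values[mid:])   # independent of the left half
    return left + right
```

This restructuring is only possible because addition happens to be associative; most real program logic offers no such rewrite, which is exactly the point above.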

TL;DR: In most cases, it's not the developer's fault that software doesn't use multiple cores... it's literally just the way the universe works.

:)
 
Interesting post about the core based performance.
I was thinking about the 10 core but may opt for the 8 and put the savings toward an external drive.
 