However, I have had no problem maxing out all 10 cores when compiling code or indexing a project in a JetBrains IDE. I am happy to pay another $400 for a 10% performance increase. Why limit myself over $400? If I were looking to cut costs, I would probably have ordered a smaller SSD or perhaps less RAM.

I don’t use JetBrains (I use Xcode and Visual Studio for Mac, or Visual Studio if I’m in Windows, but primarily VS Code because most of what I do is web-based), but I agree: $400 for a 10% performance boost for exporting video or compiling code is something I’ll pay for. Having said that, it wouldn't always be the decision I would make.

The only area I didn’t go all out on was the SSD. I had a 1TB SSD in my 2017 iMac and that was pretty good — especially since so much stuff can be offloaded to the cloud or a NAS — but I like to have breathing room, so I went to 2TB. I really considered going to 4TB, but for $600, I’m just not convinced I’d get the usage out of it needed to make it worth it (especially when there is other stuff I could spend that $600 on, like my desk setup). That’s the only part that sort of gnaws at me. But again, given the NAS situation (and thus the 10GbE), the insane amounts of cloud storage I have personally and through work, and the fact that I’ve amassed a small collection of external 1TB SSDs (and more HDDs than I can count) if I really need to offload or access an occasional project or VM, I think I’m OK.

Whether I'll use the 10 cores was irrelevant to my own calculus. I wanted to go all out, and the 0% interest and cash back on the Apple Card made it a no-brainer (this is also going to be a business expense), but I respect that every person's decision and needs are different.
 
That guy really needs to learn how to present data correctly. The graph he made at 2:20 showing the RED RAW export times makes the difference between the i7 and i9 look like around 20-25%, going by the size of the bars and the scale along the horizontal axis in 0.25 increments. But upon closer inspection the data is measured in minutes and seconds, so the difference is 1 minute 25 seconds vs 1 minute 1 second, which on a graph should look like the i7 bar is roughly 40% longer than the i9's, since the extra ~24 seconds is about 40% of 61 seconds. He's presented 1 minute as 4 x 0.25 segments and the extra time as another 0.25 segment, when it should be almost 0.5 on that scale of his.

In other words, the i7/5500 XT will be roughly 40% slower at exporting RED RAW 4K to ProRes compared to the i9/5700 XT. Good to know, and I would have preferred he ran similar tests using other codecs such as Canon Raw rather than posting loads of graphs of H.264 and H.265 exports, which are handled by the Intel integrated graphics and the T2 chip respectively, therefore a negligible difference since both systems have those.
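For anyone who wants to sanity-check that bar math, here's a quick sketch — just a back-of-the-envelope check using the times as quoted above, nothing more:

```python
# Quick check of the bar-length math, using the export times quoted above.
i9_secs = 1 * 60 + 1    # 1:01 on the i9 + 5700 XT
i7_secs = 1 * 60 + 25   # 1:25 on the i7 + 5500 XT

slowdown = i7_secs / i9_secs - 1
print(f"The i7 bar should be {slowdown:.0%} longer than the i9 bar")  # ~39%
```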

I agree that his graphs might not be the best in all situations, but the performance difference you see in those tests is down to the difference in GPU, not CPU. Which is the entire point.
 
However, I have had no problem maxing out all 10 cores when compiling code or indexing a project in a JetBrains IDE. I am happy to pay another $400 for a 10% performance increase. Why limit myself over $400? If I were looking to cut costs, I would probably have ordered a smaller SSD or perhaps less RAM.

But you are not getting a 10% performance increase. That is the absolute best-case scenario. In reality you might be seeing a few percent in certain scenarios. And again, unless your IDE re-indexes projects with millions of lines of code, the reindex doesn't really take more than 10-30 seconds. If having a 30-second load take 28-29 seconds instead is worth $500 to you, then go ahead.
 
a 10% performance boost for exporting video or compiling code is something I’ll pay for. Having said that, it wouldn't always be the decision I would make.

But you won't see a 10% difference in video encoding. That's been shown in multiple reviews: they are identical if the GPU is the same. Compiling code MIGHT give you a few percent, if the compiler is able to run across all cores.
 
Whether I'll use the 10 cores was irrelevant to my own calculus. I wanted to go all out, and the 0% interest and cash back on the Apple Card made it a no-brainer (this is also going to be a business expense), but I respect that every person's decision and needs are different.

AMEN, but it doesn't matter how many times or how rationally you try to explain your reasoning for getting the i9, someone with the i7 will call your decision stupid as if they know anything about your situation, workflow or needs. I think going forward I will just tell them "because 9 is my favorite number, it's 2 higher than 7" :cool:

i7 user "Give me 1 good reason why you need the i9?"

i9 user "calmly explains their reasoning"

i7 user "Your stupid, give me 5 more reasons you need the i9?"
 
AMEN, but it doesn't matter how many times or how rationally you try to explain your reasoning for getting the i9, someone with the i7 will call your decision stupid as if they know anything about your situation, workflow or needs. I think going forward I will just tell them "because 9 is my favorite number, it's 2 higher than 7" :cool:

i7 user "Give me 1 good reason why you need the i9?"

i9 user "calmly explains their reasoning"

i7 user "Your stupid, give me 5 more reasons you need the i9?"

Calm down. Nobody is calling anybody stupid. Most people carefully consider which upgrades to get and which to pass on, but at the same time a lot of people worry that they might be missing out by not choosing the i9. This fear could push people to spend $500 without any reason. On the other hand you also have people who keep insisting that the i9 is a lot faster than the i7, even though just about every single benchmark proves otherwise.
So instead of seeing a conflict, see a discussion with interested people trying to understand the different options and scenarios, while also trying to help people not waste their money.
 
It's been a while since compilers became multithreaded. If they weren't, we wouldn't be here discussing this 😂😅

I'm well aware, since I've been compiling code for 25+ years. But then you might also be aware that a LOT of software can run on multiple cores, but that a lot of it is not optimized for 8+ cores. Every time you add a core you are not getting a linear increase in performance in most situations.
On top of that, the i7 runs with a higher base clock, and a few reviews have shown it maintains a higher overall clock under load compared to the i9.

Performance drop-off with high core counts is something that is discussed quite a lot among people who work with database optimization. Depending on the specific database, there are points in core count beyond which you only see a tiny increase in performance. As an example, see the attached graph, which is from a benchmark of three different database systems. Notice how two of them not only gain no performance after 16 cores, they actually get slower when you add more. This is just an example and not directly related to the iMac, but it does show that software needs to be highly optimized to really utilize large numbers of cores and threads. And most desktop software has never been optimized for the number of cores we're starting to see in regular computers. Just a few years ago 8+ cores were only seen in servers or high-end workstations, not a regular desktop computer like the iMac.

(Attached graph: db_benchmark.png)


So the point still stands: it's not just a matter of more cores = better performance. Which is the entire reason it's a point worth discussing, and why people really need to consider not only their specific workload, but also the specific software they use, and whether they benefit more from 10 cores at a lower base clock or fewer cores with a higher base clock.
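To put rough numbers on the scaling point, here's a minimal Amdahl's-law sketch. The parallel fractions are illustrative assumptions, not measurements of any particular application:

```python
# Minimal Amdahl's-law sketch: expected gain of 10 cores over 8 cores
# for different parallel fractions p. The p values are illustrative
# assumptions, not measurements of any specific application.

def speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup on n cores when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.80, 0.95, 1.00):
    gain = speedup(p, 10) / speedup(p, 8) - 1.0
    print(f"parallel fraction {p:.0%}: 10-core gain over 8-core = {gain:+.1%}")
```

Even a workload that is 95% parallel only gains about 16% from the two extra cores, 80% parallel drops to around 7%, and that's before accounting for any clock differences between the chips.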
 
But you won't see a 10% difference in video encoding. That's been shown in multiple reviews: they are identical if the GPU is the same. Compiling code MIGHT give you a few percent, if the compiler is able to run across all cores.

It’s really going to depend on the application and how it’s optimized. Some NLEs have better GPU optimization; some are more geared towards CPUs. And increasingly, more apps are being optimized for multi-core (this is less of a macOS-specific thing, but we’ve seen it with the explosion in core counts from AMD in the consumer/prosumer space, thanks to Ryzen and Threadripper — but I’m noting it anyway).

For code compilation, as you point out, it depends on the compiler, although most are multithreaded. Something like Rust or LLVM is going to take advantage of those extra cores even better than some other compilers might. And a few percentage points (I do think, if correctly optimized, it’s likely closer to 10) could be extremely meaningful if you’re talking about large builds.

We could make many of these same arguments about 6-core vs 8-core, in terms of relative performance improvements. Incremental improvements are, well, incremental. Intel CPUs in their current form are really hamstrung by their process-node constraints and we’ve reached the limits of what we can do — this is one of the many reasons Apple is moving to Apple Silicon — so the real-world and theoretical differences are smaller, but that doesn’t mean they don’t exist.

Again, I don’t disagree with the general thesis that for most users — even power/heavy/pro users — the difference in core count between 8 and 10 isn’t going to make a meaningful difference, especially relative to the price — but I push back on the insinuation that there is no difference or that there are no advantages. For certain types of workloads, for apps that take advantage of multithreading, or even for just pure resale value, there are benefits. Whether those benefits are worth the $400 is up to each individual to decide. For me and for some others — it was worth it — whether because we needed it (I’d like to think I can get another VM or two out of it — or at least a few more containers) or because we have gadget lust and always try to buy the most we can (also me), just because. But my decisions are mine and yours are yours.
 
Every time you add a core you are not getting a linear increase in performance in most situations.
Sure. Basics of parallel computing :)


On top of that, the i7 runs with a higher base clock, and a few reviews have shown it maintains a higher overall clock under load compared to the i9.
Normal: the same power budget feeds 8 cores instead of 10.

Performance drop-off with high core counts is something that is discussed quite a lot among people who work with database optimization. Depending on the specific database, there are points in core count beyond which you only see a tiny increase in performance. As an example, see the attached graph, which is from a benchmark of three different database systems. Notice how two of them not only gain no performance after 16 cores, they actually get slower when you add more. This is just an example and not directly related to the iMac, but it does show that software needs to be highly optimized to really utilize large numbers of cores and threads. And most desktop software has never been optimized for the number of cores we're starting to see in regular computers. Just a few years ago 8+ cores were only seen in servers or high-end workstations, not a regular desktop computer like the iMac.
Totally true!
 
Calm down. Nobody is calling anybody stupid. Most people carefully consider which upgrades to get and which to pass on, but at the same time a lot of people worry that they might be missing out by not choosing the i9. This fear could push people to spend $500 without any reason. On the other hand you also have people who keep insisting that the i9 is a lot faster than the i7, even though just about every single benchmark proves otherwise.
So instead of seeing a conflict, see a discussion with interested people trying to understand the different options and scenarios, while also trying to help people not waste their money.

I don’t think anyone in this thread is trying to scare people into spending more money than they need to spend. On the contrary, I think with a few well-documented exceptions, most of us who went i9 are quick to admit it’s probably overkill for our needs. (The same is true for me and RAM. I had 48GB in my last iMac and upgrading to 64GB was probably overkill, at least right now, but I did it anyway, and an Amazon screw-up left me with 128GB of RAM. It’s complete overkill for what I need but I’m delighted to have it as a geek.)

But I do see an awful lot of posters with FOMO rationalizations for why they didn’t upgrade, and some of it (not saying you specifically) seems overly defensive. And look, I get it. I do the same thing. I arguably did the same thing when I talked myself out of getting a Mac Pro and opted for the iMac instead. That doesn’t mean my rationalizations or reasons for deciding not to spend $9000 on a Mac Pro are wrong — in fact, I’m pretty confident in my decision-making process, no matter how great that Mac Pro chassis would look on my desk or how many Thunderbolt 3 ports I could have. But I’m not going to argue with someone who did decide to spend the money on a Mac Pro — for whatever reason — that their decision was wrong. Offer my perspective to someone asking for advice, sure! But if someone has made a decision to buy something, for whatever reason, I’m going to do my best to not let my personal FOMO override the discussion.
 
But you are not getting a 10% performance increase. That is the absolute best-case scenario. In reality you might be seeing a few percent in certain scenarios. And again, unless your IDE re-indexes projects with millions of lines of code, the reindex doesn't really take more than 10-30 seconds. If having a 30-second load take 28-29 seconds instead is worth $500 to you, then go ahead.
First off, best-case linear scaling would be a 25% gain when going from 8 to 10 cores. That's actually quite a bit. There are a lot of scenarios that DO scale pretty close to that: compiling, static analysis, running large test suites, simulation and software-dependent media processing, among them for me.

My main worries about 8 vs 10 cores in the iMac revolved around its previous power limits and thermals. I don't think the thermals have improved that much, but the power limits in the 2020 seem to unlock more of the 10 cores' potential. So I think it can easily be worth it under specific pro "time is money" circumstances.
 
But you are not getting a 10% performance increase. That is the absolute best-case scenario. In reality you might be seeing a few percent in certain scenarios. And again, unless your IDE re-indexes projects with millions of lines of code, the reindex doesn't really take more than 10-30 seconds. If having a 30-second load take 28-29 seconds instead is worth $500 to you, then go ahead.

As pointed out above, the absolute best-case scenario is much closer to a 25% performance increase, for a workload that can run completely independently across 10 cores. Your database example above doesn't really apply to compilation workloads. Database engines have limits to scalability because they have shared data structures.

A build process with hundreds of independent compilation units does not need to share state between compilation processes. I have met some of the people working on the LLVM toolchain used for Swift, C++, Objective-C, etc. They spend a lot of time thinking about ways to optimize the code their compiler generates, and a lot of time optimizing build performance for multiple cores.

Thermals are of course a limitation, but software development usually involves quite a bit of thinking on the part of the developer, so the machine has plenty of time to cool down.
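As a minimal sketch of why that independence scales so well — the compiler invocation and project layout here are hypothetical placeholders, not anyone's actual build:

```python
# Sketch of a parallel build: each translation unit compiles in its own
# process with no shared state, so 10 cores can genuinely run 10 jobs at once.
# "clang" and the src/*.c layout are hypothetical placeholders.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def compile_unit(source: Path) -> int:
    """Compile one independent translation unit to an object file."""
    obj = source.with_suffix(".o")
    return subprocess.call(["clang", "-c", str(source), "-o", str(obj)])

if __name__ == "__main__":
    sources = sorted(Path("src").glob("*.c"))          # hypothetical layout
    with ProcessPoolExecutor(max_workers=10) as pool:  # one job per core
        results = list(pool.map(compile_unit, sources))
    print(f"{results.count(0)}/{len(sources)} units compiled cleanly")
```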
 
You also intend to do some serious gaming on your iMac? 😂😂😂

Keep in mind there are many, many people who use Macs for work or for personal use, and then spend the extra money on GPU upgrades in order to game -as well as their Mac can- because they don't want to spend the money and space buying a separate PC and monitor and peripherals just to play games a few hours a week.

We (people like the poster you're referring to and myself) don't buy Macs -to- play games, we buy Macs -and- play games. 🕹
 
Keep in mind there are many, many people who use Macs for work or for personal use, and then spend the extra money on GPU upgrades in order to game -as well as their Mac can- because they don't want to spend the money and space buying a separate PC and monitor and peripherals just to play games a few hours a week.

We (people like the poster you're referring to and myself) don't buy Macs -to- play games, we buy Macs -and- play games. 🕹

I don't know why it is so hard for some people to understand that other people like to play games in addition to whatever else they use their Mac for.
 
Keep in mind there are many, many people who use Macs for work or for personal use, and then spend the extra money on GPU upgrades in order to game -as well as their Mac can- because they don't want to spend the money and space buying a separate PC and monitor and peripherals just to play games a few hours a week.

We (people like the poster you're referring to and myself) don't buy Macs -to- play games, we buy Macs -and- play games. 🕹
I understand your point and agree with you, up until the poster said "serious gaming." The amount of money that guy was spending to do serious gaming on an iMac was just ridiculous, not to mention the slow 60Hz panel. That was mainly my reference point.
 
First off, best-case linear scaling would be a 25% gain when going from 8 to 10 cores. That's actually quite a bit.

Only if those cores were 100% identical, and they are not. Raw benchmarks have shown around a 10% difference, so that's going to be your maximum in an optimal situation where the software is highly optimized to run with 10 cores/20 threads. In practice it's most likely going to be lower than that. A benchmark of Logic gave around a 3% difference between the i7 and the i9, in what is a purely CPU-driven load. Other tests have shown 100% identical performance in a bunch of tasks as well.

That's the entire point of this discussion. A lot of people are still claiming or expecting the performance to be much better on the i9, just like you are talking about 25%, which is very far from the actual numbers. This causes people to spend money on something and not get what they expect.
If an increase of, most likely, 2-5% in some very specific cases is of great importance to you, then go with the i9. That makes sense if the iMac is your bread-and-butter machine and those few percent will end up saving a substantial amount of time during your workday. And if you want the i9 just to say you got the highest spec possible, then go ahead. But for 99.9% of even power users, the i9 won't make any difference.
 
I’m also just going to drop this video for anyone who is still debating the i7 versus the i9. Like I’ve been saying all along, get the damn i7 and save yourself the money!
 
For an idea of a certain use-case that would seem to greatly benefit from the two extra cores (and correct me if I'm wrong):

I use two pieces of software for a work-related task; one is single-threaded and one is multi-threaded. Due to the nature of the way I use the software, it benefits my productivity greatly to open multiple instances of these programs and automate each of them to "run on their own" for a period of time — in that case, I theoretically should see a much greater performance difference than "3-5%" on a CPU-based task when I'm running X instances of it at one time.

I just got my new machine yesterday, still setting some portions of it up and migrating things from the old machine – I'll report back with some benchmark stuff in this thread as I start working with that automated multi-instance workflow.
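For the curious, the multi-instance pattern I'm describing boils down to something like this sketch — "my_tool" and its arguments are placeholders, not the actual software:

```python
# Sketch of the multi-instance workflow: launch N copies of a
# single-threaded tool and let the OS spread them across cores.
# "my_tool" and its arguments are placeholders, not the real software.
import subprocess

N_INSTANCES = 10  # assumption: one instance per physical core

procs = [subprocess.Popen(["my_tool", "--job", str(i)]) for i in range(N_INSTANCES)]
for p in procs:
    p.wait()
# Throughput should scale with instance count until cores, RAM,
# or disk become the bottleneck.
```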
 
I use two pieces of software for a work-related task; one is single-threaded and one is multi-threaded. Due to the nature of the way I use the software, it benefits my productivity greatly to open multiple instances of these programs and automate each of them to "run on their own" for a period of time — in that case, I theoretically should see a much greater performance difference than "3-5%" on a CPU-based task when I'm running X instances of it at one time.

If the processes are being divided equally among the available cores, then you will see an increase. But since the raw benchmarks so far seem to indicate roughly a 10% difference, that's about the maximum you would see in your process, if it utilizes the cores fully.
The i7 has a higher base clock, so if your workload uses the cores at 100% load and the i9 can't sustain its high turbo clock, the i7 might even be faster. It's really difficult to be very specific without testing the specific software, but you will not see a 25% difference no matter what.

But it would be awesome if you report back with some benchmarks and tests. This helps us all understand the specific differences for many different types of workload. Hopefully I'll be getting my i7-specced machine soon, and then I'll do a lot of benchmarking and testing on it. Comparing it to an i9 directly would be awesome. :)
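To make the clocks-vs-cores trade-off concrete, here's a rough cores x base-clock comparison. The clock figures are the commonly listed base clocks and should be treated as assumptions; sustained clocks under load will differ:

```python
# Rough upper-bound sketch: aggregate throughput as cores x base clock.
# Base clocks are assumed from commonly listed specs; real sustained
# clocks under load differ, so treat the result as a ceiling, not a fact.
i7_cores, i7_ghz = 8, 3.8
i9_cores, i9_ghz = 10, 3.6

agg_i7 = i7_cores * i7_ghz   # 30.4 core-GHz
agg_i9 = i9_cores * i9_ghz   # 36.0 core-GHz
print(f"i9 vs i7 at base clocks: {agg_i9 / agg_i7 - 1:+.1%}")  # ~+18%
```

Even this naive ceiling sits well below the theoretical 25%, before any of the real-world scaling losses discussed above.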
 
If the processes are being divided equally among the available cores, then you will see an increase. But since the raw benchmarks so far seem to indicate roughly a 10% difference, that's about the maximum you would see in your process, if it utilizes the cores fully.
The i7 has a higher base clock, so if your workload uses the cores at 100% load and the i9 can't sustain its high turbo clock, the i7 might even be faster. It's really difficult to be very specific without testing the specific software, but you will not see a 25% difference no matter what.

But it would be awesome if you report back with some benchmarks and tests. This helps us all understand the specific differences for many different types of workload. Hopefully I'll be getting my i7-specced machine soon, and then I'll do a lot of benchmarking and testing on it. Comparing it to an i9 directly would be awesome. :)

Where people are getting 25% from is beyond me. Please tell me how that's possible, anyone?
 
Where people are getting 25% from is beyond me. Please tell me how that's possible, anyone?

I mean this as softly and non-argumentatively as possible, but we aren't all rocket scientists, and how it could be "beyond you" that some people think 10 cores could, in certain situations, be 25% faster than 8 cores should be cleared up by:

8 x 1.25 = 10

So, yes, there are users who aren't educated on the subject and don't understand the intricacies or even the fundamentals of how CPUs and cores, threading, hyperthreading, etc. work (such as myself), but it's not a stretch to see how people could think that a 10-core CPU will be 25% faster than an 8-core CPU in a multi-core workload, given that there are 25% more cores...

Again, and separate from the rest of this post, my calculation (and many others') is: I have workflows that use multiple instances of single-threaded and multi-threaded programs so for $400 or however much it cost –since I'm using this machine to produce work that generates an income– I just pay for more cores and call it a day. Whether I'm wrong or right about how much faster this will "make my stuff go" is offset by the $400 "insurance" that I probably didn't leave performance and efficiency on the table.

I only recently started posting on here because of the situation I was in with the Nano-textured glass and thought I could help some other people who were on the fence by giving them more information, and then I chimed in a time or two about other 'on the fence' things regarding the 2020 iMac, but my natural process of buying computers is to do a little research on the machine before I buy it, check out some benchmarks related to my workflows between different spec options, make a decision based on the information out there and the limited knowledge I have, and then never think about it again and get back to working.

(rambling, 🤮)
 
I mean this as softly and non-argumentatively as possible, but we aren't all rocket scientists, and how it could be "beyond you" that some people think 10 cores could, in certain situations, be 25% faster than 8 cores should be cleared up by:

8 x 1.25 = 10

So, yes, there are users who aren't educated on the subject and don't understand the intricacies or even the fundamentals of how CPUs and cores, threading, hyperthreading, etc. work (such as myself), but it's not a stretch to see how people could think that a 10-core CPU will be 25% faster than an 8-core CPU in a multi-core workload, given that there are 25% more cores...

Again, and separate from the rest of this post, my calculation (and many others') is: I have workflows that use multiple instances of single-threaded and multi-threaded programs so for $400 or however much it cost –since I'm using this machine to produce work that generates an income– I just pay for more cores and call it a day. Whether I'm wrong or right about how much faster this will "make my stuff go" is offset by the $400 "insurance" that I probably didn't leave performance and efficiency on the table.

I only recently started posting on here because of the situation I was in with the Nano-textured glass and thought I could help some other people who were on the fence by giving them more information, and then I chimed in a time or two about other 'on the fence' things regarding the 2020 iMac, but my natural process of buying computers is to do a little research on the machine before I buy it, check out some benchmarks related to my workflows between different spec options, make a decision based on the information out there and the limited knowledge I have, and then never think about it again and get back to working.

(rambling, 🤮)

I viewed them as having a 20% difference, but I guess I'm wrong.
 