It's been a grey or black flicker for me while in Windows. It has only happened a few times while running a game and switching apps. I only came to assume this was the same bug because, when looking into the best 5700XT settings for a game, I saw a year-old post from an early 5700XT buyer complaining about the random flicker. Unfortunately, AMD doesn't seem to care, and a year later it's now on macOS too.
Maybe a little late to the party, but for what it's worth, here are some benchmark numbers for the i7 and the 5700XT in comparison to an older high-end iMac from late 2015.
TL;DR: Perfectly happy with the new iMac. Three-fold improvement in graphics and 3D rendering performance over the late 2015 model. Gaming at 1440p resolution at decent frame rates is definitely possible on the new iMac. Native 5K gaming might even be within reach depending on the game, graphics settings, and personal preferences. Fan noise seems slightly more elevated compared to the older model, but the temperatures are also a bit lower.
These are the specifications of the late 2015 and the new 2020 iMac used in the benchmarks:
iMac 2015: Intel Core i7-6700K quad-core at 4.0GHz base clock, 32GB RAM, 512GB PCI-E SSD, and an AMD Radeon R9 M395X graphics card with 4GB VRAM.
iMac 2020: Intel Core i7-10700K octa-core at 3.8GHz base clock, 32GB RAM, 1TB PCI-E SSD, and an AMD Radeon Pro 5700XT graphics card with 16GB VRAM.
In summary, CPU performance has doubled thanks to 8 cores instead of 4. Although often neglected in other reviews, I consider the 8-core processor in an all-in-one quite a technological feat and a welcome boost for computationally demanding tasks. Single-core performance, on the other hand, only sees a marginal 10% to 20% improvement in the Geekbench scores. In contrast, the computational power of the GPU has doubled in the OpenCL and Metal benchmarks, and rendering performance in Blender and the Unigine benchmarks has nearly tripled.
I don’t know if the extra VRAM on the 5700XT will pay off in the future, but at a marginal increase in cost (+€250) over the 5700, it is nice to have in case games released in the next five years increase their VRAM requirements. The extra VRAM will definitely pay off in 3D modelling with highly detailed models and textures. I did consider a low-end iMac with an eGPU, but from what I’ve read there are still some woes surrounding eGPU support in macOS and Windows, and personally I prefer the integrated all-in-one design of the iMac over an additional, bulky eGPU enclosure (typically several times bigger than the dedicated ZOTAC gaming PC I had, which unfortunately died last month from overheating).
Geekbench
The initial run (#1 in the table below), out of the box with the standard 2x4GB RAM, showed a significantly lower multicore score (6614) than expected from Geekbench scores reported by other users. Note that some iCloud processes were still running in the background, consuming about 15% of the total CPU power. The second run (#2), after upgrading to 4x16GB RAM, followed closely after the first and, despite the iCloud processes still running in the background, saw a remarkable increase in the multicore score (8359). A third run (#3) confirmed the increased multicore score (8575). The third-party RAM modules were confirmed to run at 2666MHz, so no faster than the standard memory. After iCloud synchronisation had settled, the Geekbench runs (#4 and #5) reported a consistent single-core score of 1251 and multicore scores of 8896 and 8675, much more in line with what has already been reported for this Core i7.
Across the five runs, the OpenCL and Metal scores were quite consistent, irrespective of CPU load due to iCloud processes running in the background. OpenCL scored on average 55745 (range from 53430 to 57004), and Metal scored on average 58722 (range from 55608 to 60434).
Fan noise during benchmarking was barely audible, but these benchmarks don’t really stress the system long enough for temperature to rise and cause the fans to speed up much.
Table Geekbench on new iMac (mid 2020)
Score | Run #1 | Run #2 | Run #3 | Run #4 | Run #5 | Avg runs #4 & #5
Single core | 1129 | 1208 | 1194 | 1251 | 1251 | 1251
Multi core | 6614 | 8359 | 8575 | 8896 | 8675 | 8786
OpenCL | 55833 | 57004 | 56367 | 56090 | 53430 | 54760
Metal | 60335 | 60434 | 59755 | 55608 | 57480 | 56544
Table Geekbench on old iMac (late 2015)
Score | Run #1 | Run #2 | Run #3 | Average
Single core | 1043 | 1129 | 1088 | 1087
Multi core | 4342 | 4392 | 4289 | 4341
OpenCL | 27689 | 27501 | 27815 | 27668
Metal | 30726 | 31837 | 31548 | 31370
For single-core performance, there is only a 10% to 20% improvement, whereas multicore performance doubles thanks to 8 instead of 4 cores. The OpenCL and Metal computational throughput also doubles.
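As a quick sanity check on those ratios, here is a small Python calculation using the averaged values from the two Geekbench tables above (a throwaway sketch, nothing more):

```python
# Geekbench averages copied from the two tables above
# (2020 iMac: average of runs #4 and #5; 2015 iMac: average of three runs).
new = {"single": 1251, "multi": 8786, "opencl": 54760, "metal": 56544}
old = {"single": 1087, "multi": 4341, "opencl": 27668, "metal": 31370}

for key in new:
    ratio = new[key] / old[key]
    print(f"{key:>7}: {ratio:.2f}x")

# Output:
#  single: 1.15x
#   multi: 2.02x
#  opencl: 1.98x
#   metal: 1.80x
```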
Here is the list of Geekbench reports for anyone interested in the details:
Unigine Valley benchmarks
My aim is to be able to play games at 1440p with a decent frame rate (at least 30, but preferably anything stable and v-synced at 60 to reduce visual tearing, power consumption, heat and fan noise). For the Unigine Valley benchmark, dropping full-screen anti-aliasing to 2x or even zero provides better performance without much noticeable loss in visual quality, a perfect trade-off for the increased resolution. For science, I also ran the benchmark at low quality without FSAA just to see what kind of performance would be possible if you prefer FPS over visual quality. Note that the benchmark unfortunately reports a minimum FPS rather than a 99th percentile, because there is one brief scene, where the camera turns around a couple of rocks and lightning flashes, that drops the FPS dramatically for a split second. Otherwise, the average FPS is generally quite a good indicator of the performance for most scenes. For comparison, I have added the same benchmarks run on the older high-end late 2015 iMac in a second table.
Notation used in tables: “average fps (minimum fps, maximum fps), benchmark score”
Table Unigine Valley benchmark (2013) on new iMac (mid 2020)
Resolution and quality settings | macOS, OpenGL | Windows 10, OpenGL | Windows 10, DirectX11
1600x900 8xAA windowed, ultra | 108.9 (40.2, 179.1), score 4556 | 92.7 (46.0, 221.2), score 3880 | 120.9 (47.0, 225.0), score 5059
2560x1440 no AA fullscreen, low | 137.8 (43.1, 201.7), score 5766 | 145.0 (56.4, 261.7), score 6065 | 186.5 (36.0, 300.9), score 7802
2560x1440 no AA fullscreen, ultra | 83.3 (11.1, 140.8), score 3483 | 81.5 (40.1, 138.6), score 3410 | 98.2 (42.7, 161.1), score 4111
2560x1440 2xAA fullscreen, ultra | 67.9 (10.9, 117.1), score 2840 | 67.0 (34.6, 112.1), score 2801 | 78.7 (33.3, 134.4), score 3293
2560x1440 4xAA fullscreen, ultra | 61.5 (11.0, 110.0), score 2572 | 59.9 (29.7, 104.1), score 2508 | 69.9 (33.3, 124.5), score 2920
2560x1440 8xAA fullscreen, ultra | 56.8 (10.7, 98.6), score 2375 | 50.3 (22.9, 87.0), score 2105 | 58.4 (34.1, 109.2), score 2445
3840x2160 no AA fullscreen, low | 81.3 (39.4, 142.1), score 3400 | 79.9 (33.0, 140.5), score 3342 | 102.5 (46.5, 182.4), score 4289
Table Unigine Valley benchmark (2013) on older iMac (late 2015)
Resolution and quality settings | macOS, OpenGL | Windows 10, OpenGL | Windows 10, DirectX11
1600x900 8xAA windowed, ultra | 37.5 (21.6, 66.0), score 1570 | 36.8 (18.2, 80.6), score 1834 | 43.8 (25.1, 80.6), score 1834
2560x1440 no AA fullscreen, low | 49.4 (25.4, 87.7), score 2066 | 60.3 (28.0, 127.9), score 2522 | 72.4 (28.4, 143.1), score 3030
2560x1440 no AA fullscreen, ultra | 26.5 (7.5, 44.9), score 1108 | 31.4 (15.7, 54.2), score 1312 | 36.2 (21.1, 61.9), score 1513
2560x1440 2xAA fullscreen, ultra | 22.6 (7.2, 38.5), score 945 | 25.9 (9.0, 44.8), score 1083 | 29.6 (7.5, 53.1), score 1237
2560x1440 4xAA fullscreen, ultra | 19.9 (7.0, 35.2), score 834 | 23.4 (8.7, 41.8), score 978 | 26.8 (7.4, 49.7), score 1120
2560x1440 8xAA fullscreen, ultra | 17.5 (9.9, 30.5), score 731 | 19.3 (7.8, 35.5), score 809 | 22.1 (12.9, 41.1), score 924
3840x2160 no AA fullscreen, low | 24.5 (12.2, 44.0), score 1026 | 30.4 (13.3, 61.0), score 1271 | 34.2 (17.3, 65.8), score 1431
Some additional remarks: the Unigine Valley benchmark detects the 5700XT with 16GB VRAM on macOS, but on Windows it reports only 4GB. It is uncertain whether this affected the benchmark results on Windows; however, the Unigine Valley benchmark only requires about 2.3GB VRAM, and since the numbers between macOS and Windows OpenGL are quite similar, I don’t suspect the lower detected VRAM had any influence on performance. Finally, the Unigine Valley benchmark cannot be run at 5K resolution (the maximum resolution width is 4096), hence it was only tested up to 4K (3840x2160).
The new iMac has up to nearly 3x the frame rate of the old iMac. Scores between OpenGL on macOS and Windows 10 were quite similar, but the DirectX11 version has ~16% higher FPS than OpenGL, and up to ~30% higher when windowed or in low quality mode, so there is definitely some benefit to running a game on Windows if it can take advantage of one of the newer DirectX APIs.
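To put rough numbers on that DirectX11 advantage, here is a short Python calculation over the Windows average FPS values from the new-iMac Valley table above; the preset labels are just my shorthand for the table rows:

```python
# Average FPS on Windows 10 from the Valley table for the 2020 iMac:
# (setting, OpenGL, DirectX11)
runs = [
    ("1600x900 8xAA ultra",   92.7, 120.9),
    ("1440p no AA low",      145.0, 186.5),
    ("1440p no AA ultra",     81.5,  98.2),
    ("1440p 2xAA ultra",      67.0,  78.7),
    ("1440p 4xAA ultra",      59.9,  69.9),
    ("1440p 8xAA ultra",      50.3,  58.4),
    ("4K no AA low",          79.9, 102.5),
]

for name, ogl, dx11 in runs:
    gain = (dx11 / ogl - 1) * 100
    print(f"{name:<22} DX11 is {gain:4.1f}% faster than OpenGL")

# Roughly 16-20% faster at the ultra fullscreen presets,
# and 28-30% faster in the windowed and low quality runs.
```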
During benchmarking, the fans were quite noticeable as they occasionally ran at what was presumably maximum speed. In general, the fans would flare up to high RPM, then settle at some medium speed for a while before the pattern repeated itself. The macOS benchmark reported a temperature for the GPU: it would quickly rise from around 60˚C at idle to 100˚C, at which point the fans would kick in at high speed before settling with a temperature around 90˚C. On Windows, GPU-Z reported similar temperatures, with the GPU hotspot around 90˚C and a maximum of 98˚C. The highest GPU clock was 1470MHz, and the highest memory clock was 1500MHz.
Unigine Superposition benchmarks
The newer Unigine Superposition benchmark is only available on Windows, with a few more options to customise graphics settings. Texture quality (low, medium, high) had a negligible influence on performance on the new iMac (<1fps difference). The motion blur effect had a similarly low impact on frame rate, and the depth of field effect had a small impact (~5fps difference). Most of the impact on performance came from shader quality and resolution, so these two parameters were varied in the tables below. The quality settings notation is: shader quality [low | extr(eme) | 4Kopt(imized) | 8Kopt(imized)] / texture quality [low | high] / depth of field effect [disabled | enabled] / motion blur effect [disabled | enabled]. I have no clue what the difference is between the extreme and 4K optimised shader quality, and couldn't really see much difference, but the optimised shaders definitely run a lot faster.
Notation used in tables: “average fps (minimum fps, maximum fps), benchmark score”
Table Unigine Superposition benchmark (2017) on new iMac (mid 2020)
Resolution and quality settings | Windows 10, DirectX | Windows 10, OpenGL
1600x900 windowed, extr/high/enabled/enabled | 40.30 (30.78, 48.37), score 5388 | 34.32 (26.41, 41.09), score 4588
1920x1080 fullscreen, extr/high/enabled/enabled | 29.02 (23.66, 33.46), score 3880 | 25.50 (20.22, 29.88), score 3409
2560x1440 fullscreen, low/low/disabled/disabled | 142.41 (105.16, 179.34), score 19040 | 102.78 (79.81, 127.29), score 13741
2560x1440 fullscreen, 4Kopt/high/enabled/enabled | 80.70 (67.10, 96.17), score 10789 | 62.78 (53.24, 74.41), score 8393
2560x1440 fullscreen, extr/high/enabled/enabled | 17.22 (14.46, 19.66), score 2302 | 14.59 (12.37, 16.99), score 1951
3840x2160 fullscreen, 4Kopt/high/enabled/enabled | 42.80 (36.90, 50.11), score 5722 | 33.47 (29.67, 39.17), score 4474
5120x2880 fullscreen, low/low/disabled/disabled | 49.78 (39.90, 58.30), score 6655 | 37.24 (30.68, 44.20), score 4979
5120x2880 fullscreen, 4Kopt/high/enabled/enabled | 25.92 (22.57, 29.59), score 3465 | 20.29 (17.89, 23.36), score 2712
7680x4320 fullscreen, 8Kopt/high/disabled/enabled | 14.40 (12.42, 16.29), score 1925 | 14.48 (12.44, 16.41), score 1935
Table Unigine Superposition benchmark (2017) on older iMac (late 2015)
Resolution and quality settings | Windows 10, DirectX | Windows 10, OpenGL
1600x900 windowed, extr/high/enabled/enabled | 15.29 (11.64, 18.57), score 2043 | 11.56 (9.66, 13.02), score 1546
1920x1080 fullscreen, extr/high/enabled/enabled | 11.47 (9.04, 13.74), score 1533 | 8.34 (7.16, 9.51), score 1115
2560x1440 fullscreen, low/low/disabled/disabled | 27.70 (22.55, 33.40), score 3703 | 21.75 (17.76, 26.91), score 2908
2560x1440 fullscreen, 4Kopt/high/enabled/enabled | 29.22 (24.96, 36.84), score 3906 | 23.24 (19.94, 28.08), score 3107
2560x1440 fullscreen, extr/high/enabled/enabled | 5.99 (5.12, 6.86), score 801 | 4.26 (3.89, 4.87), score 569
3840x2160 fullscreen, 4Kopt/high/enabled/enabled | 15.70 (13.66, 18.86), score 2099 | 11.98 (9.19, 14.67), score 1601
5120x2880 fullscreen, low/low/disabled/disabled | 17.33 (14.03, 20.49), score 2317 | 13.62 (11.15, 16.74), score 1820
5120x2880 fullscreen, 4Kopt/high/enabled/enabled | 7.85 (6.93, 10.08), score 1049 | 3.97 (3.51, 4.43), score 530
7680x4320 fullscreen, 8Kopt/high/disabled/enabled | 3.88 (3.22, 5.42), score 518 | would not run
Some additional remarks: running the benchmark at 4K resolution at high quality gives a warning about exceeding the 4GB VRAM of the older iMac, but actual usage stayed just below that limit. Although 5K resolution at high quality did run on the older iMac despite warnings about exceeding the available VRAM, it was not pretty to watch; OpenGL in particular starts to show a lot of artefacts (alternating between rendering the left and right half of the screen).
Again, up to 3x the frame rate of the older iMac, with 5K gaming averaging around 25 fps at high quality and 50 fps at low quality. Depending on the type of game, that sounds very interesting. Though I would probably still opt for 2560x1440 at high quality for ~80fps, probably v-synced to a stable 60fps!
Blender benchmark
The Blender benchmarks were run on Windows 10 in Blender version 2.90.
Table Blender benchmarks
Scene | new iMac (mid 2020) | older iMac (late 2015)
bmw27 | 1m50s | 5m28s
classroom | 3m39s | 12m37s
fishy_cat | 3m10s | 11m46s
koro | 2m12s | 14m16s
pavillon_barcelona | 9m36s | 27m8s
victor | would not run | would not run
Overall, a three-fold improvement in rendering speed. The victor scene could not be run, resulting in an unexpected error on both iMacs. The koro scene is the odd one out on the late 2015 model, taking quite a bit longer to run than other scenes with comparable render times on the newer iMac, but this cannot have been an issue of limited VRAM (~2.8GB total used).
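For anyone who wants to check the per-scene speedup, here is a small Python helper that converts the render times copied from the table above into seconds:

```python
import re

# Render times from the Blender table above: (scene, 2020 iMac, 2015 iMac)
times = [
    ("bmw27",              "1m50s",  "5m28s"),
    ("classroom",          "3m39s", "12m37s"),
    ("fishy_cat",          "3m10s", "11m46s"),
    ("koro",               "2m12s", "14m16s"),
    ("pavillon_barcelona", "9m36s",  "27m8s"),
]

def to_seconds(t: str) -> int:
    # "3m39s" -> 219
    m, s = re.match(r"(\d+)m(\d+)s", t).groups()
    return int(m) * 60 + int(s)

for scene, new, old in times:
    speedup = to_seconds(old) / to_seconds(new)
    print(f"{scene:<20} {speedup:.1f}x faster")

# Prints roughly 2.8x to 3.7x for most scenes; koro is the outlier at ~6.5x.
```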
NovaBench benchmark
An alternative to Geekbench, which is nice because it reports RAM and disk speeds in addition to CPU and GPU performance. However, the GPU test is v-synced to the 60Hz of the display, and its computational test does not show the extra processing power of the 5700XT over the M395X.
Table NovaBench benchmark
Score | new iMac (mid 2020) | older iMac (late 2015)
Total | 2618 | 1858
CPU | 1536 | 864
RAM | 365 [29803 MB/s] | 304 [24459 MB/s]
GPU | 558 [Metal score: 3739 GFLOPS] | 558 [Metal score: 3342 GFLOPS]
Disk | 159 [528 MB/s write, 1763 MB/s read] | 132 [442 MB/s write, 1312 MB/s read]
Not sure what to think of this benchmark suite; it doesn't seem to stress the system to its maximum, especially not the GPU. Double the performance on the CPU, again probably due to 8 instead of 4 cores. RAM speeds are as expected, since the newer iMac runs only slightly faster RAM. Disk speeds look accurate for typical usage scenarios.
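As a rough illustration of that last point about the GPU, comparing NovaBench's reported compute throughput with the Geekbench Metal averages from earlier in this post shows how little of the 5700XT's extra power the NovaBench test captures:

```python
# GPU compute as reported by NovaBench (GFLOPS) vs the averaged Geekbench Metal scores above
novabench_new, novabench_old = 3739, 3342
geekbench_new, geekbench_old = 56544, 31370

print(f"NovaBench GPU ratio:   {novabench_new / novabench_old:.2f}x")  # ~1.12x
print(f"Geekbench Metal ratio: {geekbench_new / geekbench_old:.2f}x")  # ~1.80x
```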
Gaming
I haven’t had much time to play games on the new iMac yet, but my wife has been playing The Sims 4 on macOS at 1440p, everything ultra or maxed, with a pretty smooth frame rate. Compared to the older iMac, where the game would run with slightly lower graphics settings at, I would guess, around 20 to 25 fps with occasional stuttering, the new iMac provides a very pleasant and smooth gameplay experience, maybe even hitting 60 fps (v-synced). Fan noise is audible on both iMacs, but within an acceptable range.
Impression
I'm pretty excited for the weekend, as I hope to have some time to finally play a game on the new iMac. I don't think I have any games with an internal benchmark that will tax the hardware a lot; I might try The Witcher 3 at 5K to see how it runs. Although I am thrilled to see what Apple will bring with their Apple Silicon chips, I am happy to upgrade at this time because most games I play require Windows after all. Many of these games will now likely run at a smooth 50 to 60 fps at high quality in 1440p on this amazing machine, instead of barely managing 20 to 25 fps on the older iMac. There is plenty of overhead for future, more demanding games to run at decent frame rates before the next upgrade, hopefully no sooner than another five years from now. Fan noise is, and always has been, an issue in compact devices. My solution is to play with noise-cancelling headphones; using any type of headphones already gives a more immersive gaming experience. The extra CPU power is a nice addition for my work, and overall it makes the system feel faster and smoother.
The second monitor is 1440p resolution. Nothing is displayed on the monitor at all, just the background, but come on, 20 FPS lost just for having a second monitor connected? I just tried it in Boot Camp and ran the same benchmark with and without the second monitor; there I have no FPS loss.
I cancelled my iMac 2020 5700XT (3.8GHz 8-core i7) order as soon as I read about this graphics glitch issue, which I've noticed has since been reported on many tech websites! It's so frustrating! Yes, it may be a minor issue (to some), but it eradicated my excitement and enthusiasm about getting an iMac for the very first time, knowing it was likely to turn up with the issue. I won't be re-ordering until there's some acknowledgement from Apple and some traction on what is being done to resolve it.
I am tempted to order the 5700 graphics instead, after reviewing some benchmark comparison videos on YouTube which tend to indicate an 8-15% performance drop compared to the 5700XT, but then I'm also giving Apple my money for a less powerful machine, especially as I want to play games like Microsoft Flight Simulator 2020 and Cities Skylines in Boot Camp (Windows), and it also feels like letting Apple off the hook over a QC issue.
I'm also annoyed to read many new iMac 2020 owners saying they can hear the fan even at idle, as opposed to the iMac 2019, which was dead silent at idle. I'm used to my quiet MacBook Pro 13" Early 2015, which is silent 99% of the time! Someone mentioned the fan sounds like someone breathing with their mouth open, even at idle. Can anyone confirm if that seems correct?
Thanks for your reply. Does the fan noise bother you? I suppose you get used to it, right? I'm guessing I could put up with it as long as it doesn't take attention away from what I'm working on.
Coincidentally, it's rated at 13 dB (idle) on the iMac 2020 tech specs page!?
It's hard to say what's acceptable from one person to another. Personally, it doesn't bother me. I alternate between coding (no headphones, quiet), music production and gaming. In any case, it's always way quieter than my previous 2014 iMac, which would overheat and die in Zoom calls.
I'd imagine it would run even faster and smoother in bootcamp (Windows) via DirectX 12?
If only Microsoft Flight Simulator 2020 would run natively on macOS (not needing Parallels that is!!!) In that case, I'd be tempted to rethink my whole iMac strategy and wait for Apple Silicon because running MSFS 2020 and City Skylines in bootcamp (Windows) is one of the few reasons for considering an Intel-based iMac.
I'd also suggest it depends on what experience someone is coming from. For those upgrading from a louder iMac, it's no doubt a better experience. For those coming from a "silent" MacBook Pro (in my case), it's a serious consideration!
Mostly the cost: it's a €500 premium to get two extra cores. It was tempting, but eight cores will do for most of the work I do. Otherwise, the €500 premium would probably be better spent on a small compute node with an AMD Threadripper at home, since I am not necessarily dependent on macOS to run the computations. If I need to run an analysis that requires a lot of computational power, I am better off using the HPC cluster at work for massively parallel computations.
Another argument I've seen is that games do not always take advantage of multiple cores, and that it is better to get the processor with the highest base frequency and turbo boost. Both the i7 and i9 will turbo boost to 5.0GHz (according to Apple), so I didn't see the benefit of moving to an i9, but also no downside except for the extra cost. Interestingly, according to the Intel ARK website [link], the i7-10700K can actually turbo boost to 5.1GHz, and will occasionally do so in my tests below, but it alternates a lot between 5.0 and 5.1GHz.
The heat dissipation of the two additional cores, when used, may also have played a role in my decision. Intel has not been very transparent about the power consumption of their processors. Both are listed at a maximum 125W TDP, but I have heard of Intel desktop processors consuming well beyond their maximum TDP rating. However, in my tests below the maximum power consumption of the processor package is indeed limited to 125W.
Intel Power Gadget tests
Recent versions of Intel Power Gadget have built-in test options for 1, 2, 4, and all-thread loads. Based on the numbers, I assume that 'all threads' means a number of threads equal to the total core count, because loading additional tasks in the background on top of that triggers an increase in power consumption and temperature.
EDIT: upon closer inspection of the figures, it appears that 'all threads' is 4x the CPU load of the '4 threads' test. That suggests 16 threads are running, which should fully saturate all available CPU power, as reported by near 100% core utilisation. So why 'full load', i.e. running the 'all threads' test with additional processes in the background, consumes 1.5x more power when the CPU should already be fully saturated is a mystery to be solved in another episode.
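I don't know what workload Intel Power Gadget actually runs internally, but a minimal sketch of the same idea, spinning up N busy workers and watching the power and frequency graphs while they run, could look something like this in Python (the worker counts and duration are purely illustrative):

```python
import multiprocessing as mp
import time

def busy(stop_at: float) -> None:
    # Simple floating-point busy loop to keep one core saturated.
    x = 0.0001
    while time.time() < stop_at:
        x = (x * x + 1.0) % 1000.0

def load(n_workers: int, seconds: int = 60) -> None:
    # Spawn n_workers processes that spin until the deadline.
    stop_at = time.time() + seconds
    workers = [mp.Process(target=busy, args=(stop_at,)) for _ in range(n_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

if __name__ == "__main__":
    # Mirror the 1 / 2 / 4 / "all threads" pattern while monitoring
    # frequency, package power, and temperature in Intel Power Gadget.
    for n in (1, 2, 4, mp.cpu_count()):
        print(f"Loading {n} worker(s) for 60 seconds...")
        load(n)
```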
Table Intel Power Gadget test
Load | Frequency | Power consumption | Temperature | Fan noise
1 thread | 5.0 GHz | 21 Watt | 54˚C (core max 57˚C) | Inaudible
2 threads | 4.9 GHz | 30 Watt | 62˚C (core max 68˚C) | Inaudible
4 threads | 4.8 GHz | 48 Watt | 70˚C (core max 72˚C) | Inaudible
all threads | 4.7 GHz | 82 Watt | 91˚C (core max 92˚C) | Barely audible
full load | 4.7 GHz | 120 Watt | 97˚C (core max 100˚C) | Very audible (max RPM?)
Numbers in the table are based on the figures below. For the 'full load test', numbers are based on the 'full load #2' in the figure below, when the processor had been running on full load for a little while.
Figure Intel Power Gadget test
The small peaks at the end of the graphs are me using Spotlight to open Grab to take a screenshot, but I waited a couple of seconds before taking it so the statistics shown are an accurate current reading.
Personally, I have never been too worried about very high temperatures on the individual cores. Silicon can handle temperatures much higher than this, so it operates with a large safety margin. The limit is probably there to protect other components in the system from reaching boiling temperatures (liquid capacitors come to mind). I like to compare core temperatures to lighting a match inside a room: the flame of the match reaches high temperatures, but the room temperature is not affected much. During these tests, the SSD temperature (the only other temperature currently reported by iStat Menus) reached a maximum of 36˚C, up from ~28˚C at idle.
Fan noise was basically inaudible during the regular Intel tests (up to 'all threads'). I had to move to the back of the iMac and put my ear near the exhaust to clearly hear the fans running during the 'all threads' test. I don't live in a soundproofed house though; the door to the balcony is ajar, and I can hear birds singing outside and the occasional car driving by, but not the fans of the iMac an arm's length away from me. Ambient temperature is currently about 20˚C, so that may also help a little to keep the iMac cool.
The quiet operation of the iMac has been my experience thus far, also with the late 2015 model. It operates quietly up to some critical point (for the CPU that seems to be >80˚C), then ramps up the fan speed as late as possible, when the CPU nears 90-100˚C, before settling down at moderate fan speeds and stable temperatures. Only when the CPU is fully loaded to the max do the fans keep spinning at maximum speed. Based on this observation, I wonder if it is possible to enable/disable hyper-threading on demand with a tool, similar to how it is possible to limit turbo boost. Personally, I hope that Apple will bring the performance mode (or 'Pro Mode' [link]) with this kind of behaviour to all Macs, including the iMac, for silent operation when performance is not necessary. Power consumption, and the accompanying heat dissipation, scales roughly quadratically with frequency, so limiting turbo boost can drastically reduce temperatures and noise without affecting performance a lot. I can only assume the same holds true for hyper-threading: I doubt performance doubles with hyper-threading enabled (a quick Google reveals a ~15%, max 30% boost in performance), there are only eight physical cores after all, but it does require 1.5x more power (~120 Watt running 'full load' compared to ~80 Watt running the 'all threads' test).
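To make that trade-off concrete with the numbers from the Power Gadget table above, and assuming the ~15-30% hyper-threading gain from that quick Google search (an assumption, not something I measured here), the arithmetic looks like this:

```python
# Package power from the Intel Power Gadget table above
power_all_threads = 82    # Watt, "all threads" test (8 worker threads)
power_full_load   = 120   # Watt, extra processes on top ("full load")

power_ratio = power_full_load / power_all_threads
print(f"Extra power for oversubscribing the cores: {power_ratio:.2f}x")  # ~1.46x

# Assumed hyper-threading throughput gains (not measured here)
for gain in (1.15, 1.30):
    print(f"Perf/Watt with a {gain - 1:.0%} gain: "
          f"{gain / power_ratio:.2f}x of the 8-thread baseline")
# ~0.79x and ~0.89x, i.e. worse efficiency for a modest throughput gain.
```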
Very strange that you get much worse results in Valley than Heaven when Heaven is heavier. My iMac 2011 gets better numbers in Valley at max settings than in Heaven.
I haven't run the Unigine Heaven benchmark. The two tables are comparing the Unigine Valley benchmark on the 2020 and 2015 iMac, later on in the post there is a similar comparison using the Unigine Superposition benchmark.
For a quick comparison using the Heaven benchmark at 2560x1440, 8x FSAA, fullscreen, extreme tessellation, ultra quality on macOS in OpenGL:
Again, about 3x the fps on the new 2020 iMac compared to the older 2015 iMac. Seems to be quite a trend with the Unigine benchmarks, hope this improvement translates to games as well.
And yes, the Valley benchmark obtains slightly higher fps on both iMacs than the Heaven benchmark.
As with many of these benchmarking videos, there's no baseline. Sure it shows the fully specced version is much more powerful than the base model, but I don't need to watch a YouTube video to know that.
By not providing a baseline, there's no context to the figures. He mentions the 16" MBP, so why not provide benchmarks for that too?
Without comparing to some baseline, the figures don't mean a great deal.
You realise there are a lot of other videos that will cover basically any comparison question you have about these models, right? I know no single video or YouTuber can cover 100% of everyone's needs and testing, but the information is out there and available.
The figures make perfect sense. It's a comparison between the base model and the fully specced version, so in that regard you can consider the base model as the baseline. If you want to know how it performs compared to an old computer you might already own, that is not the point of the test. And comparing the 2020 iMac to, say, a MacBook Pro only makes sense for people who might own or be interested in that specific computer.
You will have to do your own benchmarks on your current machine and then compare it to the results from this and many other videos to see how it compares for your specific situation.
I still don't understand why this generation's Geekbench 5 scores differ so much from reviewer to reviewer. I have seen single-core numbers as low as the 1000s and as high as the 1300s, and multicore scores from the 8000s to the 10000s. The only time I have been able to get my single-core score as low as 1080 was when I was running a bunch of apps in macOS and Windows 10 in Parallels with several MS Office apps open in Windows. Most of the time my single-core scores are in the 1350-1390 range and my multicore is in the 9900s (I don't have dual-channel RAM yet, so I have never broken 10K). I am sure a lot of it has to do with how much care is taken when they run their benchmarks, but if everyone runs them differently, then how much stock should we put in them to be accurate?