
Bohemien

macrumors regular
Original poster
Mar 28, 2019
136
81
Germany
So, after more testing, and after installing LR6 on the Mojave side of things too (having noticed this was possible), I can conclude:

For photo editing, the 5K screen of the iMac seems to pose a real challenge when doing local adjustments. Drawing in masks (or drawing the adjustments directly, without displaying the masks) seems to require huge amounts of processing power, and at Retina resolution even the newest machines show a severe lag between moving the brush and seeing the effect.

This might be no surprise for someone coming from a fairly recent machine, but for me, upgrading from a 2011 MBP with a 1920x1200 screen, this is a bit disappointing: now I have this gorgeous machine with its oversized i9 processor and Vega 48 graphics, but the workflow when drawing masks is slower than on my 8-year-old Mac. I completely understand that the new iMac has to lug around six times the pixels of my old one, but still, from a pure "comfort of work" standpoint this is a step back. (Especially since editing my 24MP photos at nearly 100% view makes everything look just GORGEOUS, I mean, wow!)
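
For anyone curious where the "six times" comes from, here's the quick math - a throwaway Python sketch, assuming the nominal panel resolutions:

    # Pixel count: 5K iMac panel vs. my old 2011 MBP panel
    old = 1920 * 1200            # ~2.3 million pixels
    new = 5120 * 2880            # ~14.7 million pixels
    print(round(new / old, 1))   # -> 6.4, so roughly 6x the pixels per redraw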

There are workarounds which reduce the lag, e.g.:
  • open the app in "low res" mode
  • use a second monitor with lower resolution for these tasks
  • reduce the size of the app window on the screen
  • use a different screen scaling (towards "larger text") in the Displays pane of System Preferences

I really don't regret going with the i9/Vega now; at least my machine will go at the task the fastest way currently possible... on a side note: while drawing masks, neither CPU nor GPU operate at their limit, so maybe there is a way of reducing the lag a bit by optimizing the code? Not that I'm an expert in these things, just my observation.

Just posting this to let people who are in the same situation I was in (moving from an older machine to the gorgeous 27" screen) know what to expect. :)
 
Last edited:

PBMB

macrumors 6502
Mar 19, 2015
331
135
... on a side note: while drawing masks, neither CPU nor GPU operate at their limit
You mean that you watched Activity Monitor while performing the previous operations? And if so, what was the CPU usage?
 

Bohemien

macrumors regular
Original poster
Mar 28, 2019
136
81
Germany
You mean that you watched Activity Monitor while performing the previous operations? And if so, what was the CPU usage?

Correct. I repeated the process to show you: I went into C1 with one of the RAWs from the benchmark, window at full screen size, created a new layer with exposure and clarity adjustments, and brushed the adjustment in with a brush of size 1000, hardness 40, flow 100.

I let the system idle for a minute so you can clearly see where it ramps up when I start brushing. I took a screenshot in the middle of the process. Power Gadget showed a CPU utilization around 25-35%; Activity Monitor showed that all threads were active, but not at their maximum, while the Vega was at maybe 25% of its maximum performance.
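
If anyone wants to reproduce the CPU side of this without Power Gadget, here's a rough Python sketch (assuming the third-party psutil package; it can't see the GPU - for that you still need Activity Monitor or Power Gadget). Run it in Terminal, start brushing, and watch whether any core actually pins at 100%:

    # Rough per-core CPU sampler (assumes `pip install psutil`)
    import psutil

    for _ in range(30):                                # sample for ~30 seconds
        cores = psutil.cpu_percent(interval=1.0, percpu=True)
        avg = sum(cores) / len(cores)
        marks = "".join("#" if c > 90 else "." for c in cores)
        print(f"avg {avg:5.1f}%  [{marks}]")           # '#' = a core near its limit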
 

Attachments

  • PowerGadget MaskDrawing.jpg (179.8 KB)
  • ActivityMonitor MaskDrawing.jpg (134.1 KB)

PBMB

macrumors 6502
Mar 19, 2015
331
135
Correct. I repeated the process to show you: I went into C1 with one of the RAWs from the benchmark, window at full screen size, created a new layer with exposure and clarity adjustments, and brushed the adjustment in with a brush of size 1000, hardness 40, flow 100.

I let the system idle for a minute so you can clearly see where it ramps up when I start brushing. I took a screenshot in the middle of the process. Power Gadget showed a CPU utilization around 25-35%; Activity Monitor showed that all threads were active, but not at their maximum, while the Vega was at maybe 25% of its maximum performance.
Well, this is interesting. I don't know whether such operations should take much CPU/GPU power, but the fact that you observe lag while the computational units are under-utilised points clearly to software optimisation issues.
 

smirking

macrumors 68040
Aug 31, 2003
3,917
3,992
Silicon Valley
EDIT #2: The lag when drawing masks seems to be something anyone working on a 5K screen has to live with in 2019 - see my conclusion below, with workarounds to make the lag bearable.

That's a very interesting summary. I'm a Capture One Pro 12 user with a 2018 Vega 20 MBP and I'm having some difficulties. When I'm connected to my LG 5K while using Capture One Pro, my MBP gets scorching hot, to the point that the system will freeze and reboot. The issues my 2018 has with Capture One Pro are significant enough that I judge it to be either no faster or barely faster than the 2016 MBP it replaced.

Now, my performance is only disappointing when connected to my LG 5K monitor. When I'm using the laptop without an external monitor, Capture One Pro performs admirably and is definitely faster on my 2018 than on my 2016. It also doesn't overheat and struggle. There must be something hokey with the way Capture One Pro is trying to drive the 5K monitor.

There are workarounds which reduce the lag, e.g.:
  • open the app in "low res" mode
  • use a second monitor with lower resolution for these tasks
  • reduce the size of the app window on the screen
  • use a different screen scaling (towards "larger text") in the Displays pane of System Preferences
Great suggestions! I never thought of intentionally using a lower-res monitor as a second display. If my issues continue, I just might acquire a lower res monitor for culling purposes. Culling is what causes my machine to catch fire; it edits just fine without any overheating issues.

I first started using Capture One Pro before I got an LG 5K monitor, when I was just using a 2012 Unibody and a 30" Apple Cinema Display. Capture One Pro smoked on that setup. When I upgraded the 2012 Unibody to a 2016 MBP, culling in Capture One Pro was so fast that I wasn't even tempted to adopt Photo Mechanic into my workflow. Then came the 5K monitor and months of slowness. Once they got the software quirks worked out, Capture One Pro was again a capable performer, but never as blazing fast as it was on a standard-resolution monitor.

I've had odd, unexpected performance issues with Capture One Pro before and they eventually fixed them. I'm keeping my fingers crossed that they'll eventually work out better performance on Macs using higher-end GPUs.
 
Last edited:

Bohemien

macrumors regular
Original poster
Mar 28, 2019
136
81
Germany
If my issues continue, I just might acquire a lower res monitor for culling purposes.

The lower-res monitor was proposed by @Mark_EL; I didn't try it myself, but if resizing the window or using a different screen scaling works, this should also do the trick.

FYI: For culling, I use FastRawViewer, which absolutely lives up to its name, even on the 5K display. Plus I could customize it exactly to my preferred workflow, and it comes at a very reasonable price.
 

priitv8

macrumors 601
Jan 13, 2011
4,078
660
Estonia
FYI: For culling, I use FastRawViewer, which absolutely lives up to its name, even on the 5K display. Plus I could customize it exactly to my preferred workflow, and it comes at a very reasonable price.
Another slim Russian piece of software. They have another one, RawDigger, if you are into the intricacies of your raw files, channel clipping and things like that.
https://www.rawdigger.com
 

PBMB

macrumors 6502
Mar 19, 2015
331
135
I want to buy an iMac i5/580X, but this discussion (and other elements from benchmarks as well) makes me wonder whether it would be better to get the Vega 48. I would like to comfortably "drive" this 5K display, although I am not sure the Vega would make a difference since it has the same video memory as the 580X. Will the fact that it is a considerably more powerful GPU make a difference, even with the same VRAM? A difference in everyday tasks through the macOS interface, some video editing (4K in Filmora), photo editing (Pixelmator), and an overpopulated screen with windows from several applications (web browsers, video players, terminal, text/photo/video editors, office applications and whatnot). Oh, and some gaming like the more recent Tomb Raider games (I remember having seen two in the last years but I did not have the opportunity to play any of them).

The benchmarks show little benefit overall, with the exception of specialized software that can leverage the power of Vega. I am wondering if macOS itself is such software too.

On the other hand I cannot justify a 540-euro hike in price just for the GPU o_O :eek:.
 

tomscott1988

macrumors 6502a
Apr 14, 2009
709
683
UK
Lightroom doesn't seem to work perfectly on any machine, but running it in a VM is going to make it worse.

Obviously people will disagree, but I work as a photographer, get through 3,000-5,000 images a week, and have been through a lot of different machines.

From my testing experience, the iMac Pro feels essentially the same as the base 2017 27" iMac in the Develop module. Exporting is the only place where Lightroom seems to use hardware acceleration.

You have to remember that the 5K display is roughly 15 MP, and when you move an adjustment slider Lightroom tries to redraw at 60 fps. Lightroom is written so poorly that I've found 1920x1200 makes it snappier; the difference between using Lightroom on my Mac Pro with the 27" Cinema Display (2560x1440) and on the old 23" is stark, and 5K is more than double that resolution again.
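
To put rough numbers on that - a back-of-the-envelope Python sketch, assuming a full-screen repaint at 8-bit RGBA, which is a simplification (Lightroom's internal pipeline does more than this):

    # Worst-case repaint bandwidth for a 5K panel at 60 fps
    pixels = 5120 * 2880                 # ~14.7 MP
    bytes_per_frame = pixels * 4         # 8-bit RGBA
    print(bytes_per_frame * 60 / 1e9)    # -> ~3.5 GB/s just to repaint the screen

And that is before any of the actual develop math runs on those pixels.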

I had this running with an RX 580 8GB in my Mac Pro; I thought it would make a huge difference over the 10-year-old 5770, but performance in the Develop module was almost exactly the same. Likewise, my iMac has the 570 4GB and there is no difference. Lightroom doesn't really seem to use the GPU to accelerate the Develop UI.

Lightroom 6 is also old and doesn't have the later performance improvements - not that they have made much difference.

It depends on your workflow, but do all the computational things last: sharpening, noise reduction, lens corrections, clarity, texture, etc.

I have a preset set up that I apply at the end to batch-add sharpening, noise reduction, etc.
 

mrvo

macrumors member
Nov 16, 2018
81
18
I want to buy an iMac i5/580X, but this discussion (and other elements from benchmarks as well) makes me wonder whether it would be better to get the Vega 48. I would like to comfortably "drive" this 5K display, although I am not sure the Vega would make a difference since it has the same video memory as the 580X. Will the fact that it is a considerably more powerful GPU make a difference, even with the same VRAM? A difference in everyday tasks through the macOS interface, some video editing (4K in Filmora), photo editing (Pixelmator), and an overpopulated screen with windows from several applications (web browsers, video players, terminal, text/photo/video editors, office applications and whatnot). Oh, and some gaming like the more recent Tomb Raider games (I remember having seen two in the last years but I did not have the opportunity to play any of them).

The benchmarks show little benefit overall, with the exception of specialized software that can leverage the power of Vega. I am wondering if macOS itself is such software too.

On the other hand I cannot justify a 540-euro hike in price just for the GPU o_O :eek:.

https://www.reddit.com/r/mac/commen..._vega_48/?utm_source=share&utm_medium=ios_app
 

PBMB

macrumors 6502
Mar 19, 2015
331
135
Thank you for posting this. It is a very interesting result. Although this holds for the i9-9900K iMac, I guess we can assume that the same holds for the i5-9600K iMac as well.

The next question is how hot these two GPUs run under load. I have not seen concrete comparison data so far. For example, run the same game or GPU-intensive software on two iMacs, one with the Vega and the other with the 580X, and monitor temperature and fan speed, all other parameters kept equal of course.
 
Last edited:

mrvo

macrumors member
Nov 16, 2018
81
18
Thank you for posting this. It is a very interesting result. Although this holds for the i9-9900K iMac, I guess we can assume that the same holds for the i5-9600K iMac as well.

The next question is how hot these two GPUs run under load. I have not seen concrete comparison data so far. For example, run the same game or GPU-intensive software on two iMacs, one with the Vega and the other with the 580X, and monitor temperature and fan speed, all other parameters kept equal of course.

I don't have numbers, but the Vega definitely runs cooler due to its HBM2 memory.

Now, if you plan to use this for more than, say, 3-4 years, then just opt for the Vega; otherwise get the 580X plus an eGPU, or upgrade whenever the line is next refreshed.
 

Bohemien

macrumors regular
Original poster
Mar 28, 2019
136
81
Germany
Will the fact that it is a considerably more powerful GPU make a difference, even with the same VRAM?

From what I've seen so far - my own testing and what people have posted in the various "580X or Vega" threads here - I'd venture to say it depends entirely on how the software utilizes the GPU. Yes, the Vega is potentially more powerful than the 580X, but it doesn't seem to make all GPU-relevant tasks faster per se.

Oh, and some gaming like the more recent Tomb Raider games (I remember having seen two in the last years but I did not have the opportunity to play any of them). On the other hand I cannot justify a 540-euro hike in price just for the GPU o_O :eek:.

Then I think maybe you shouldn't. With the 580X you already get a very powerful GPU, which you can always supplement later with an eGPU that's even more powerful than the Vega 48. I knew I wouldn't do that and wanted the fastest out-of-the-box iMac I could afford, hoping to use it for 8-10 years without wanting anything different. We'll see if this plan works out. ;)

And for even half of 540€ you can get an Xbox One S or PS4 Slim and all these new Tomb Raider games to play on your TV. It's definitely worth it - these games are fantastic (and while you're at it, the latest Assassin's Creed games and Horizon Zero Dawn are absolute must-play titles too, among others :D).

Lightroom doesn't seem to work perfectly on any machine, but running it in a VM is going to make it worse. Obviously people will disagree, but I work as a photographer, get through 3,000-5,000 images a week, and have been through a lot of different machines.

I agree - for me it was just reassuring that I could still open and process my "old" LR-processed photos on my new machine, and that I will still be able to do so when macOS goes 64-bit-only (that's when I'll probably have to run LR in the VM). After getting the Z6, which is not supported by LR 6.14, I switched over to Capture One anyway, as I'm not planning to give in to Adobe's forced subscription if I don't need to. I do miss some functions I had gotten used to in LR (e.g. IMHO the healing brush is implemented much better there than the healing layers in C1), but then C1 has other advantages.

The next question is how hot these two GPUs run under load. I have not seen concrete comparison data so far. For example, run the same game or GPU-intensive software on two iMacs, one with the Vega and the other with the 580X, and monitor temperature and fan speed, all other parameters kept equal of course.

Did you notice this video? I posted it in the 580X vs Vega thread. It's focused on video editing but does exactly what you ask for:

 

PBMB

macrumors 6502
Mar 19, 2015
331
135
From what I've seen so far - my own testing and what people have posted in the various "580X or Vega" threads here - I'd venture to say it depends entirely on how the software utilizes the GPU. Yes, the Vega is potentially more powerful than the 580X, but it doesn't seem to make all GPU-relevant tasks faster per se.
Yes, I understand. It is still not clear to me whether macOS itself, as an operating system, takes advantage of more GPU processing power.

Then I think maybe you shouldn't. With the 580X you already get a very powerful GPU, which you can always supplement later with an eGPU that's even more powerful than the Vega 48. I knew I wouldn't do that and wanted the fastest out-of-the-box iMac I could afford
We are in the same boat, I am afraid. :D Besides, an eGPU is not going to be cheap. But anyhow, isn't an external GPU going to be limited by the port through which it exchanges data with the system? I thought that the internal connection is always faster. Or is it a tradeoff which, even with a speed cap, will result in better performance than the 580X?

And for even half of 540€ you can get an Xbox One S or PS4 Slim and all these new Tomb Raider games to play on your TV. It's definitely worth it - these games are fantastic
Too much of a hassle for me. I am not a gamer and I don't have a TV, but I would occasionally like to play something at good screen resolution and detail. I loved the Tomb Raider games in the past (under Mac OS, the classic one) and I was delighted to see that they are still around. I liked Wolfenstein also, but it seems that RtCW was the last one for the Mac.

(and while you're at it, the latest Assassin's Creed games and Horizon Zero Dawn are absolute must-play titles too, among others :D).
Are those Mac or Windows games?

Did you notice this video? I posted it in the 580X vs Vega thread. It's focused on video editing but does exactly what you ask for:
Yes, I knew about it, and thank you for bringing it up again, because apparently I had missed the few seconds where he talks about fan noise, clearly suggesting that the 580X produces considerably more heat than the Vega 48. This is an important point for me, not only regarding noise but heat stress in general.
I don't have numbers, but the Vega definitely runs cooler due to its HBM2 memory.

Now, if you plan to use this for more than, say, 3-4 years, then just opt for the Vega; otherwise get the 580X plus an eGPU, or upgrade whenever the line is next refreshed.
Indeed, 3-4 years is the minimum for me. Ideally it will be more than 6-7 years, if it does not break.

But why is this GPU upgrade so expensive? The benchmarks do not really justify it. Is it perhaps difficult (technically speaking) to implement in an AIO like the iMac?
 

Bohemien

macrumors regular
Original poster
Mar 28, 2019
136
81
Germany
But anyhow, isn't an external GPU going to be limited by the port through which it exchanges data with the system? I thought that the internal connection is always faster. Or is it a tradeoff which, even with a speed cap, will result in better performance than the 580X?

I'm really no expert in these things - as far as I got it from the various discussions here, of course the internal bus is faster, but if you put a much more powerful card in an external enclosure (like a Vega 64 or a Radeon VII), it will still be faster than the internal Vega 48.

Too much of a hassle for me. I am not a gamer and I don't have a TV, but I would occasionally like to play something at good screen resolution and detail. I loved the Tomb Raider games in the past (under Mac OS, the classic one) and I was delighted to see that they are still around. I liked Wolfenstein also, but it seems that RtCW was the last one for the Mac.

The Tomb Raider series had a complete reboot, with the latest trilogy showing how Lara became the adventurer she is. A bit different from the older games, with partly rather dark stories and a lot of fighting alongside the exploring, these games are definitely worth playing. Wolfenstein also had a reboot a while ago, the new games likewise having a very interesting story, with the main character being more than just a "killing machine".

All in all, for someone who likes games with a strong, interesting story and multi-layered characters, recent years have brought several really brilliant games with amazing settings, deep storytelling and stunningly beautiful sceneries to play in.

Are those Mac or Windows games?

I only play on consoles, so I don't really know - I'm pretty sure the Assassin's Creed games are also available for Windows, but Horizon Zero Dawn (probably THE best game I've played in a long time IMHO, with a mind-bending story and a huge, interesting world to explore) is a PS4 exclusive. BTW, you can plug these consoles into a monitor or projector, too. :D (But each to his/her liking, of course!)

But why is this GPU upgrade so expensive? The benchmarks do not really justify it. Is it perhaps difficult (technically speaking) to implement in an AIO like the iMac?

I'd suspect it's purely "Apple tax", I'm afraid. o_O
 

tomscott1988

macrumors 6502a
Apr 14, 2009
709
683
UK
What an external GPU gets you depends on which display you are running it through.

If you are accelerating the internal display, you have to remember that the data must pass back and forth through the cable, limiting performance due to bandwidth; in this case the internal GPU will be quicker.

If you are accelerating an external display, the data passes straight through, meaning performance on a secondary display would be greater, but you lose about 10% to bandwidth compared to the same card in an internal slot. In this case the point of the 5K iMac is its display, and really the Vega 56 and 64 are more suitable for more demanding tasks. The RX 580 and Vega 48 are still mid-range cards, roughly equivalent to an Nvidia 1060; a 1080 or 2080 is vastly more powerful... depending on what you're using it for.
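
Rough numbers on the bandwidth side, if anyone wants them - a quick Python sketch assuming Thunderbolt 3's nominal 40 Gbit/s link with roughly 32 Gbit/s usable for PCIe traffic (often less in practice), against a full internal PCIe 3.0 x16 slot:

    # Link bandwidth: internal PCIe 3.0 x16 slot vs. Thunderbolt 3 eGPU
    pcie3_per_lane = 7.88                  # Gbit/s per lane after 128b/130b encoding
    internal_x16 = 16 * pcie3_per_lane     # ~126 Gbit/s
    tb3_usable = 32.0                      # ~32 Gbit/s of PCIe traffic over TB3
    print(internal_x16 / tb3_usable)       # -> roughly a 4x gap for the eGPU link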

If you are specifically using Apple software (FCPX etc.), iMacs will run flawlessly with pretty much any spec - even a base MacBook is pretty smooth - as these apps are designed to take advantage of the hardware, and usually the results are better than the sum of the parts.

If you are using software like Adobe CC, things are far more blurry: acceleration is a massive mixed bag, and in most cases performance is slower with the 5K display. A couple to mention: Premiere, Lightroom, After Effects, even Illustrator.

If you use Adobe on a PC it's again a mixed bag; you can take advantage of CUDA if you're using an Nvidia graphics card. In all honesty, I have a PC at work and it's a mixed bag compared to my iMac. Neither is particularly quick, and the issue is optimisation. We are basically waiting for these companies to write better software that will use the hardware more efficiently.

In the meantime, from my experience - for what I do anyway, which is full-time photography and graphic design - the base models seem to perform almost identically to the higher-end models in the UI of apps like Lightroom; the difference shows when exporting.

For me it doesn't really matter: I don't have critical deadlines to meet, so when I export I just go make a cup of tea.

It seems most of these options are there for the spec sheet's sake... everything is within 10%, and then you hit throttling and the benefits are moot again.

For example, my conclusion on the 2017 line-up was that the base 2017 iMac was the best bang for the buck: it's within 10% of the other CPUs, holds its turbo, and doesn't throttle. On paper the i7-7700K destroys it, but in real life, inside the iMac chassis, it throttles so quickly that it performed almost identically in my use case. The CPU was a £250 upgrade for similar performance, and it made the machine sound like a plane most of the time.

The main difference is in SSD speed: add an NVMe drive and they fly. The Fusion Drives are great to start with but slow down so quickly that they are IMO useless. The NVMe is where I would spend my money.
 
Last edited:

smirking

macrumors 68040
Aug 31, 2003
3,917
3,992
Silicon Valley
I'm really no expert in these things - as far as I got it from the various discussions here, of course the internal bus is faster, but if you put a much more powerful card in an external enclosure (like a Vega 64 or a Radeon VII), it will still be faster than the internal Vega 48.

There's a significant penalty for moving the GPU into its own enclosure. When the first Blackmagic eGPU came out, I bought it and paired it with my 2016 MBP in hopes of speeding up Capture One Pro. It actually ran slower with the eGPU, so I returned it. The latency of having to make a round trip out to the eGPU was very noticeable in something like Capture One Pro when browsing photos. When exporting, the eGPU had a more notable benefit, but nowhere near what I'd expect from the card that was in it.

eGPUs are a nice option for boosting an underpowered setup, but they're not ideal.
 

Bohemien

macrumors regular
Original poster
Mar 28, 2019
136
81
Germany
The RX 580 and Vega 48 are still mid-range cards, roughly equivalent to an Nvidia 1060; a 1080 or 2080 is vastly more powerful... depending on what you're using it for.

In the Helicon Focus benchmark I posted above, the Vega 48 sits in the middle between the GeForce GTX 1080 Ti and the RTX 2080 Ti, while the RX 580 (non-X) is below the 1080. I realize there are different versions of each of these cards, though. Not trying to argue, just for the sake of completeness! But I'd say this also shows that processing time depends strongly on the software used?

In the meantime, from my experience - for what I do anyway, which is full-time photography and graphic design - the base models seem to perform almost identically to the higher-end models in the UI of apps like Lightroom; the difference shows when exporting.

Yes, and that's what annoys me a bit - while the i9/Vega combination could maybe apply local adjustments faster than other machines, apparently it doesn't use its full potential here. And this is a task I do far more often than exporting, where I could just get another cup of coffee if it takes a minute longer. Of course, maybe the problem is down to the algorithm used for brushing; maybe that just can't be optimized any further.

When the first Blackmagic eGPU came out, I bought it and paired it with my 2016 MBP in hopes of speeding up Capture One Pro. It actually ran slower with the eGPU, so I returned it.

I'm more and more happy with my decision to just get the fastest I could in-machine. :D
 

nihil0

macrumors 6502
May 19, 2016
459
375
Yes, and that's what annoys me a bit - while the i9/Vega combination could maybe apply local adjustments faster than other machines, apparently it doesn't use its full potential here. And this is a task I do far more often than exporting, where I could just get another cup of coffee if it takes a minute longer. Of course, maybe the problem is down to the algorithm used for brushing; maybe that just can't be optimized any further.


So if we apply this to ANY software that works on the same basis as Lr (be it Capture One or Luminar 3) - will it always struggle with masking, no matter what CPU and GPU are used? Or is it simply lazy programming because the devs don't use GPU acceleration?
 

Bohemien

macrumors regular
Original poster
Mar 28, 2019
136
81
Germany
So if we apply this to ANY software that works on the same basis as Lr (be it Capture One or Luminar 3) - will it always struggle with masking, no matter what CPU and GPU are used? Or is it simply lazy programming because the devs don't use GPU acceleration?

Capture One does use GPU acceleration, but not for everything. I guess only a programmer could really answer that question... then again, masking as a technique has been around since the last century ;), so I'd guess they could have optimized it by now; maybe it's really just down to how many pixels per second you're trying to process.
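
I'm not a programmer, but here's a toy sketch of roughly what a single soft-brush dab has to touch at 5K - assuming a NumPy float mask and a simple linear falloff; real editors run this in GPU shaders with their own falloff curves:

    # Toy soft-brush dab into a full-screen mask (assumes NumPy is installed)
    import numpy as np

    mask = np.zeros((2880, 5120), dtype=np.float32)    # 5K-sized mask

    def dab(mask, cx, cy, radius=500, hardness=0.4, flow=1.0):
        y0, y1 = max(cy - radius, 0), min(cy + radius, mask.shape[0])
        x0, x1 = max(cx - radius, 0), min(cx + radius, mask.shape[1])
        yy, xx = np.mgrid[y0:y1, x0:x1]
        dist = np.hypot(yy - cy, xx - cx) / radius      # 0 at centre, 1 at rim
        falloff = np.clip((1.0 - dist) / (1.0 - hardness), 0.0, 1.0)
        region = mask[y0:y1, x0:x1]
        np.maximum(region, falloff * flow, out=region)  # stamp the dab in place

    dab(mask, 2560, 1440)        # one dab of a size-1000 brush in screen centre
    print((mask > 0).sum())      # -> ~785,000 mask pixels touched by a single dab

One dab already touches roughly 0.8 million mask pixels, a stroke fires many dabs per second, and every change means re-rendering the adjustment under the mask - so the pixels-per-second figure climbs very fast.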

LR 6 under Mojave is similarly sluggish to C1 in that respect. I haven't really tried On1 yet; I'll report back if it's better.
 
Last edited:

tomscott1988

macrumors 6502a
Apr 14, 2009
709
683
UK
I think the reason for the slowness is that it is all computational rather than simple pixel pushing.

That being said, Lightroom CC is rapid in comparison, but it doesn't have half the features.
 