To keep it simple, would it be safe to say the upgrade from the m380 to the m390 will see the largest percentage increase in performance? As opposed to 390-->395 or 395-->395x
 
Is the 380 the same as, or even a step down from, the base 2014 model that had trouble rendering the UI smoothly? Or was that Yosemite related?
In 2014, Apple introduced one iMac: 3.5 GHz i5, m290x, Fusion Drive. You could upgrade it to an m295x, an i7, and an SSD if you wanted to. The m290x had problems with Exposé stuttering, though it turned out that this was a software problem.
In mid-2015, Apple introduced an additional iMac: slower i5, m290, spinning rust, probably to entice institutional buyers with a firm $2,000 budget.

In Late 2015, Apple tossed those two models and replaced them with the current three models.

3.2 GHz i5, m380, hard drive (slower than the m290)
3.2 GHz i5, m390, baby Fusion Drive (probably on par with the m290), can be upgraded to an i7
3.3 GHz i5, m395, Fusion Drive (faster than the m290x), can be upgraded to an m395x (which is on par with an m295x), can be upgraded to an i7

Ignore the leading "3". It doesn't really mean much. Maybe AMD tweaked power consumption. Maybe it took old products and just rebranded them.

So. Yes. The m380 is not one, but two steps down from the 2014 base model. It's also several hundred dollars cheaper.
 
To keep it simple, would it be safe to say the upgrade from the m380 to the m390 will see the largest percentage increase in performance? As opposed to 390-->395 or 395-->395x
Depends what you expect. I mean, there are games that require multiple top-of-the-line, non-mobile cards to look their best and still play at 60 fps. The m395x will only get you part way there, but it's still incrementally better than the m395. The other two cards? Not worth looking at.

And then there are games that play at 1440p at 60 frames per second on an m390, and anything more than that is essentially wasted, unless you hack the game to run at 2880p, in which case more power might actually be kind of nice... Those games are generally somewhat older.

I suppose you could find even older stuff that will run at 5k on a m380, but the visual payoff is somewhat limited because the textures are so low res.
 
To keep it simple, would it be safe to say the upgrade from the m380 to the m390 will see the largest percentage increase in performance? As opposed to 390-->395 or 395-->395x

I'd like to add to jerwin's reply above that, if you aren't a gamer, the m390 really is a good choice, IMHO.
 
It's interesting that on Apple's power consumption of iMacs page, the "role model" they chose for the 27" is "27-inch Retina 5K display, 4.0 GHz Intel quad-core Core i7, 32GB 1866 MHz DDR3L SDRAM, 3TB Fusion Drive, AMD Radeon R9 M390 with 2GB GDDR5".
I don't think they just randomly selected this model or configuration; it seems to me that they consider it a good value for the money, giving you the most for your dollars as a general-purpose, high-performance computer.
Of course you can get the same quality 32GB RAM at OWC for far less cost.
Here is the link: https://support.apple.com/en-il/HT201918
 
Or is that the model where the m390 and i7 can run without throttling, since they stay within the cooling constraints?
 
Those games are generally somewhat older.

Just want to clarify and emphasize the word "generally". There are new games that are quite playable without monstrous GPU performance. Rocket League and Cities: Skylines are two examples of that. It all depends on which games, what level of detail in the game settings, and what resolution you wish to play at. 1440p isn't THAT demanding, especially if you dial down the settings and perhaps don't play with everything set to "Ultra".

Note that this comment wasn't really directed towards you, it was just an addition to your comment.
 
Most of this has been posted above, but to sum up:

[Image: 2015 iMac GPUs 1.2.png — performance data chart]


[Image: 2015 iMac GPUs 2.2.png — informational chart]


Update: It looks like the R9 M390 is only Neptune PRO, not the full XT. I also added core clocks for the M380 and M390 from some LuxMark testing, thanks to Rob-ART over at BareFeats.

Update 2: I added a bunch more columns and it started to get unruly, so I broke it into two charts. The first one has all the performance data and the second is more informational. All the calculated columns are based on what we actually know about the GPUs in the new iMacs, not just copy / pasted from Wikipedia.

Update 3: Added Device and Revision IDs for R9 M380, thanks to huffy15, and note about lossless delta color compression.

Update 4: Added list price of R9 380X, now that it has been officially released, and memory clocks for R9 M390 from Notebookcheck.

Looks like the last blank left to fill in is the memory clock from GPU-Z for the R9 M380. Anyone?
 
Neptune XT in the 8970M is OpenCL 2.0 conformant, which requires GCN 1.1.

It'd be far more accurate if someone could find the entries for Tobago and Trinidad for this table.

https://forum.beyond3d.com/posts/1877668/

Ah, but it doesn't. According to AMD's site:
The AMD OpenCL 2.0 driver is compatible with AMD graphics products based on GCN first generation products or higher.

And yes, I do realize that the language on that page previously read:
The AMD OpenCL 2.0 driver is compatible with AMD graphics products based on GCN 1.1 Architecture or higher.

I'm guessing that although AMD didn't support pre-GCN 1.1 GPUs at first, they were able to build in at least partial support for GPUs like Pitcairn during the 3.5 years since that chip's initial launch.

The table I came up with was based on a desire to be able to compare the relative capabilities of the GPU hardware in these new iMacs at a glance. The RADEON Rx M3xxx marketing names were completely meaningless when these machines launched because nobody knew what the heck they were referring to. However, if you know which chip you're dealing with, the core configuration, memory bus width, and clocks, you're pretty much all set. The performance on paper should be a fairly accurate indicator of what to expect from the benchmarks.
 
Ah, but it doesn't. According to AMD's site:

The AMD OpenCL 2.0 driver is compatible with AMD graphics products based on GCN first generation products or higher.

And yes, I do realize that the language on that page previously read:

The AMD OpenCL 2.0 driver is compatible with AMD graphics products based on GCN 1.1 Architecture or higher.

I'm guessing that although AMD didn't support pre-GCN 1.1 GPUs at first, they were able to build in at least partial support for GPUs like Pitcairn during the 3.5 years since that chip's initial launch.

The table I came up with was based on a desire to be able to compare the relative capabilities of the GPU hardware in these new iMacs at a glance. The RADEON Rx M3xxx marketing names were completely meaningless when these machines launched because nobody knew what the heck they were referring to. However, if you know which chip you're dealing with, the core configuration, memory bus width, and clocks, you're pretty much all set. The performance on paper should be a fairly accurate indicator of what to expect from the benchmarks.

Dave Baumann confirmed it only recently. The driver being compatible doesn't mean much; the SDK runs even on VLIW cards.

And if your speculation were true, then we'd see other 2xx/7xxx cards on this list alongside the Hawaii-based 290/X, which is confirmed GCN 1.1.


OpenCL™ 2.0 conformance logs submitted (pending ratification) for: AMD Radeon HD 7790, AMD Radeon HD 8770, AMD Radeon HD 8500M/8600M/8700M/8800M/8900M Series, AMD Radeon R5 M240, AMD Radeon R7 200 Series, AMD Radeon R9 290, AMD Radeon R9 290X, A-Series AMD Radeon R4 Graphics, A-Series AMD Radeon R5 Graphics, A-Series AMD Radeon R6 Graphics, A-Series AMD Radeon R7 Graphics, AMD FirePro W5100, AMD FirePro W9100, AMD FirePro S9150

http://developer.amd.com/tools-and-...sdk/system-requirements-driver-compatibility/

See here too,

https://forums.macrumors.com/thread...the-nvidia-750m.1883810/page-12#post-21358516

So it's at least GCN 1.1, and I have strong suspicions that it's GCN 1.2 too.
 
Back when I was considering a Mac Pro, I learned something interesting about how the Graphics Core Next cards implement double precision math. Certain cards were much more efficient; Wikipedia notes as much in its coverage of the AMD Radeon 200 cards, and this is apparently why the D500, otherwise practically indistinguishable from the D300, was sold: better performance on double precision.

So, anybody up for a round of GpuTest benchmarking?

http://www.geeks3d.com/gputest/download/

I suggest posting the results from PixMark Julia FP32 and PixMark Julia FP64. These settings are the default.
[Image: Screen Shot 6.png — default test settings]

Nothing too fancy.
[Image: Screen Shot 7.png — benchmark result]
[Image: Screen Shot 8.png — benchmark result]

As you can see, the FP64 performance is one quarter that of FP32.
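
If you'd rather poke at this outside of GpuTest, here's a rough pyopencl sketch in the same spirit. It is not the actual PixMark Julia kernel, just a comparable float-versus-double loop, and it assumes you have pyopencl and numpy installed:

```python
# A minimal sketch, assuming pyopencl and numpy are installed. This is NOT the
# PixMark Julia kernel, just a comparable Julia-style iteration run once in
# single precision and once in double precision on the first OpenCL device.
import time
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void iterate_f32(__global float *out, const int iters) {
    int gid = get_global_id(0);
    float x = 0.001f * gid, y = 0.002f * gid;
    for (int i = 0; i < iters; ++i) {       // Julia-style iteration, FP32
        float xt = x * x - y * y - 0.8f;
        y = 2.0f * x * y + 0.156f;
        x = xt;
    }
    out[gid] = x + y;
}
#ifdef cl_khr_fp64
#pragma OPENCL EXTENSION cl_khr_fp64 : enable
__kernel void iterate_f64(__global double *out, const int iters) {
    int gid = get_global_id(0);
    double x = 0.001 * gid, y = 0.002 * gid;
    for (int i = 0; i < iters; ++i) {       // same loop, FP64
        double xt = x * x - y * y - 0.8;
        y = 2.0 * x * y + 0.156;
        x = xt;
    }
    out[gid] = x + y;
}
#endif
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
dev = ctx.devices[0]
prog = cl.Program(ctx, KERNEL).build()

n, iters = 1 << 20, 2000

def run(kernel_name, dtype):
    out = np.empty(n, dtype=dtype)
    buf = cl.Buffer(ctx, cl.mem_flags.WRITE_ONLY, out.nbytes)
    t0 = time.time()
    getattr(prog, kernel_name)(queue, (n,), None, buf, np.int32(iters))
    queue.finish()
    cl.enqueue_copy(queue, out, buf)
    return time.time() - t0

print(dev.name)
t32 = run("iterate_f32", np.float32)
print("FP32: %.3f s" % t32)
if "cl_khr_fp64" in dev.extensions:
    t64 = run("iterate_f64", np.float64)
    print("FP64: %.3f s (%.1fx the FP32 time)" % (t64, t64 / t32))
else:
    print("Device does not advertise cl_khr_fp64")
```

The timing includes the kernel launch and the copy back, so treat the ratio as a rough indicator rather than a pure ALU number.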
 
You can break up longer blocks of frames and have a CPU thread work on each one, but within a group of frames they must be done sequentially.
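
To make that concrete, here's a toy sketch of the pattern. The encode_frame / encode_chunk functions are made-up placeholders rather than a real codec API; the point is only the structure, sequential within a chunk and parallel across chunks:

```python
# Toy sketch only: encode_frame / encode_chunk are hypothetical placeholders,
# not a real codec API. Frames inside a chunk are processed strictly in order
# (each depends on the previous ones), but whole chunks run in parallel.
from concurrent.futures import ProcessPoolExecutor

GOP_SIZE = 250  # pretend there's a keyframe every 250 frames

def encode_frame(frame, state):
    # stand-in for real intra/inter encoding; `state` models the dependency on prior frames
    return ("packet", frame), state + 1

def encode_chunk(frames):
    state = 0
    packets = []
    for frame in frames:                      # sequential within the chunk
        packet, state = encode_frame(frame, state)
        packets.append(packet)
    return packets

def encode_video(frames):
    chunks = [frames[i:i + GOP_SIZE] for i in range(0, len(frames), GOP_SIZE)]
    with ProcessPoolExecutor() as pool:       # independent chunks run in parallel
        results = pool.map(encode_chunk, chunks)
    return [pkt for chunk in results for pkt in chunk]

if __name__ == "__main__":
    print(len(encode_video(list(range(1000)))))  # 1000 dummy "frames" -> 1000 packets
```

In a real encoder the chunk boundaries would line up with keyframes so each worker can start cleanly.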

The x264 developers have been working on OpenCL H.264 encoding with some success, though the design of the codec doesn't make it easy. I've heard that H.265 is much more amenable to parallel computing environments...

 
So it's at least GCN1.1, I've strong suspicions that it's GCN1.2 too.

I'm not sure you mean the same thing as I do when I refer to GCN 1.x. I read AnandTech pretty regularly, and they started using those terms as a way to differentiate the GCN generations, but pointed out that they were just making it up due to the lack of any official nomenclature from AMD. Most tech blogs took a similar approach, but might not always agree on what distinguishes a particular GCN generation.

Pitcairn was released in March of 2012 as the Radeon HD 7850 and Radeon HD 7870 GHz Edition. The Radeon R9 M390 is the same TSMC 28nm part with the same hardware features. AMD did not tape out a whole new chip with an identical layout but a newer microarchitecture, or add in fixed function hardware that wasn't in the original. They've been using that same 0x6819 device ID the whole time. Pitcairn, despite the half-dozen other names AMD has given it, is still Southern Islands and is first generation GCN.
 
The x.264 developers have been working on opencl h.264 compression with some success, though the design of the codec doesn't make it easy. I've heard that h.265 is much more amenable to parallel computing environments...

There are dozens of academic papers where the smartest researchers have tried to apply GPU acceleration to H264 encoding. In general they have not been successful. The lead X264 developer in the video referred to this:

"...Incredibly difficult...countless people have failed over the years. nVidia engineers, Intel engineers...have all failed....Existing GPU encoders typically have worse compression...and despite the power of the GPU, still slower....
The main problem is [H264] video encoding is an inherently linear process. The only way to make it not linear is using different algorithms [which entail] significant sacrifices."

The X264 team only used GPU acceleration on one component of X264 called Lookahead, which comprises only about 10-25% of the total H264 runtime. This was the only part which could be feasibly accelerated, and the benefits were limited (though useful in some cases). Their benchmarks tested the GPU-boosted version of X264 on a fast AMD machine against a middle-tier mobile Intel CPU. It did fairly well against that mediocre competition, but it would not rank even that well against a current high-end desktop Intel CPU. If you study the academic literature this is a common scenario. The various researchers working on GPU acceleration compare their results to a lower-end Intel CPU, not a higher-end version typically used in video editing.
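
To put that 10-25% figure in perspective, a quick Amdahl's law estimate (my numbers, using the percentages quoted above) shows why even a perfect, free GPU lookahead can't transform the total encode time:

```python
# Rough Amdahl's law estimate using the 10-25% lookahead share quoted above:
# even if the GPU made lookahead take zero time, the whole encode barely speeds up.
def max_speedup(accelerated_fraction):
    return 1.0 / (1.0 - accelerated_fraction)

for frac in (0.10, 0.25):
    print(f"lookahead = {frac:.0%} of runtime -> at most {max_speedup(frac):.2f}x overall")
# lookahead = 10% of runtime -> at most 1.11x overall
# lookahead = 25% of runtime -> at most 1.33x overall
```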

I downloaded and tested the latest release and nightly builds of Handbrake on both OS X and Windows. The OS X version apparently does not have GPU or Quick Sync acceleration and those features also don't work in the Windows version on a MacBook, despite running Windows.

X264 is very fast for a software algorithm, and I laud that team for doing such a great job. However my tests show it is nearly twice as slow as FCP X at H264 encoding on an iMac and 50% slower in a 2015 MacBook Pro at similar quality levels. This is likely due to FCP X supporting Quick Sync.
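
For anyone who wants to reproduce a rough software-versus-hardware comparison themselves, something along these lines works if you have an ffmpeg build with libx264 and VideoToolbox support (input.mov is a placeholder clip, and the two encoders won't hit identical quality at the same bitrate, so treat the timings as ballpark only):

```python
# Ballpark timing of software x264 vs Apple's hardware H.264 encoder via ffmpeg.
# Assumes an ffmpeg build with libx264 and VideoToolbox support; "input.mov" is
# a placeholder clip. This only compares speed, not quality.
import subprocess
import time

SRC = "input.mov"

def encode(codec, out_name):
    t0 = time.time()
    subprocess.run(
        ["ffmpeg", "-y", "-i", SRC, "-c:v", codec, "-b:v", "10M", "-an", out_name],
        check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return time.time() - t0

print("libx264:           %.1f s" % encode("libx264", "out_software.mp4"))
print("h264_videotoolbox: %.1f s" % encode("h264_videotoolbox", "out_hardware.mp4"))
```

That isn't FCP X, of course, but as far as I know VideoToolbox is the route to the same hardware encoder on a Mac.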

I'd like to test the GPU accelerated version of Handbrake but it apparently doesn't work on a MacBook, even running Windows.

If anyone is interested in this area I highly recommend watching the video you posted, it is very good. The lead X264 developer conveys very well the complexity and challenges.
 
I'm not sure you mean the same thing as I do when I refer to GCN 1.x. I read AnandTech pretty regularly, and they started using those terms as a way to differentiate the GCN generations, but pointed out that they were just making it up due to the lack of any official nomenclature from AMD. Most tech blogs took a similar approach, but might not always agree on what distinguishes a particular GCN generation.

Pitcairn was released in March of 2012 as the Radeon HD 7850 and Radeon HD 7870 GHz Edition. The Radeon R9 M390 is the same TSMC 28nm part with the same hardware features. AMD did not tape out a whole new chip with an identical layout but a newer microarchitecture, or add in fixed function hardware that wasn't in the original. They've been using that same 0x6819 device ID the whole time. Pitcairn, despite the half-dozen other names AMD has given it, is still Southern Islands and is first generation GCN.

The official nomenclature (again from Dave Baumann) is GCN2 and GCN3. They are most probably using the 'gfx' part from those IP tables to differentiate between different GCN chips.

It's not Pitcairn, and it's GCN 1.1/1.2 if it's OpenCL 2.0 conformant. What has changed specifically in these chips would be known accurately if we had more information from those IP tables.

I'm not sure about the device ID part, though both Neptune XT and Venus XT have different device IDs than the chips they're thought to be rebrands of.

https://pci-ids.ucw.cz/read/PC/1002

The best question for answering whether it is 1.1 or 1.0 is this: does the M390 have FreeSync?

If it has it MUST be GCN 1.1.

Edit: http://products.amd.com/en-ca/searc...Radeon™-R9-M300-Series/AMD-Radeon™-R9-M390/53

Nope, it's 1.0.

Not really, that's the programmable display controller and it doesn't have to do much with the underlying architecture.
 
For the last week I've been playing with a $3500 configuration in my basket (i7 + m395x + 1TB SSD) to replace my 2012, and I can't seem to pull the trigger, as I'm realizing I'm paying this much for a 3+ year old GPU...

Go ahead and pull the trigger. The Radeon R9 M395X is Tonga, which was first released September 2, 2014. That GPU is only 1 year old.

@koyoot & @jerwin: I have no idea what you all are on about. Bonaire was the first GCN 1.1 GPU, and Tonga heralded in GCN 1.2.

@gamervivek Yeah, we're talking about totally different things. Dave Baumann's GCN2 and GCN3 have no particular correlation to Ryan Smith's GCN 1.0, 1.1 and 1.2.
 
Bonaire was the first GCN 1.1 GPU, and Tonga heralded in GCN 1.2.

Merely pointing out that the Bonaire mobile GPU (the m385, not used in any Mac) also does not support FreeSync. Really, you shouldn't put much stock in numbers, particularly marketing numbers, unless you interrogate the actual hardware.

LuxMark tells me that my own video card (m290x) is an OpenCL 1.2 device.
And this site
http://www.mac4ever.com/dossiers/105298_test-des-imac-4k-et-imac-5k-2015

says that the m390 and m395 are also OpenCL 1.2 devices.

A driver issue or a basic hardware issue? Post a screenshot of a Mac LuxMark test that claims OpenCL 2.0 and we'll talk. Yeah, I'm using LuxMark as a Mac version of "GPU-Z." It seems to work.
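
For what it's worth, you don't strictly need LuxMark for that part. If you have pyopencl installed, a few lines will dump the same device strings; of course this only tells you what the driver reports, which is exactly the caveat in question:

```python
# A few lines of pyopencl (assuming it's installed) that report the same strings
# LuxMark shows; note this is still just what the driver claims, not the silicon.
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        print(dev.name)
        print("  OpenCL version: ", dev.version)            # e.g. "OpenCL 1.2"
        print("  Compute units:  ", dev.max_compute_units)
        print("  Max clock (MHz):", dev.max_clock_frequency)
        print("  Global mem (MB):", dev.global_mem_size // (1024 * 1024))
        print("  cl_khr_fp64:    ", "cl_khr_fp64" in dev.extensions)
```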
 
It's not Pitcairn, and it's GCN 1.1/1.2 if it's OpenCL 2.0 conformant. What has changed specifically in these chips would be known accurately if we had more information from those IP tables.

I'm not sure about the device ID part, though both Neptune XT and Venus XT have different device IDs than the chips they're thought to be rebrands of.

Here's the R9 M390 from the teardown photos on OWC's blog:

[Image: iMac27inch-5k-late2015-114.jpg — OWC teardown photo of the R9 M390]


Compare to any photo of Pitcairn or its many rebadges. That is, beyond a shadow of a doubt, Pitcairn.

You can give similar hardware new device IDs as you please, but if you give dissimilar hardware the same device ID, things go pear shaped.

Really, you shouldn't put much stock in numbers, particularly marketing numbers, unless you interrogate the actual hardware.

That was the whole point of this exercise for me. Everything in the table I posted is based on what we can conclusively determine about the underlying hardware. The core clocks and compute units as reported by LuxMark appear to be reliable and fill in most of the blanks for the 2 devices we don't have GPU-Z data for yet. Initially, all I had to go by were the Device and Revision IDs reported by OS X System Information for the models I could get my hands on. When I asked the friendly folks at the local Apple Store more about the AMD GPUs in the new iMacs, they suggested I should try inquiring at the Microsoft Store down the way since they didn't really deal with AMD processors much.
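
In case anyone else wants to grab those IDs without clicking through System Information, here's roughly what I mean (macOS only; the field labels are what system_profiler typically prints, so adjust if yours differ):

```python
# Pulls the GPU entries out of `system_profiler SPDisplaysDataType` instead of
# clicking through System Information. macOS only; the field labels below are
# the ones system_profiler typically prints, so adjust if yours differ.
import subprocess

report = subprocess.run(
    ["system_profiler", "SPDisplaysDataType"],
    capture_output=True, text=True, check=True,
).stdout

for line in report.splitlines():
    line = line.strip()
    if line.startswith(("Chipset Model:", "Device ID:", "Revision ID:", "VRAM")):
        print(line)
```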
 
When I asked the friendly folks at the local Apple Store more about the AMD GPUs in the new iMacs, they suggested I should try inquiring at the Microsoft Store down the way since they didn't really deal with AMD processors much.

roflmao

Next time, bring a copy of LuxMark on a flash drive.
 