Some examples from working with Nick and my WX 4150 MXM card (in an Xserve3,1).


Example of the loop to check ioreg (without fan speeds, since the MXM card has no fans):
Code:
$ while sleep 2; do ioreg -l |grep \"PerformanceStatistics\" | cut -d '{' -f 2 | tr '|' ',' | tr -d '}' | tr ',' '\n'|grep 'Temp\|Clock'; done|paste - - -
"Core Clock(MHz)"=1016    "Memory Clock(MHz)"=300    "Temperature(C)"=63
"Core Clock(MHz)"=964    "Memory Clock(MHz)"=1500    "Temperature(C)"=63
"Core Clock(MHz)"=998    "Memory Clock(MHz)"=1500    "Temperature(C)"=63
"Core Clock(MHz)"=1025    "Memory Clock(MHz)"=1500    "Temperature(C)"=63
"Core Clock(MHz)"=1008    "Memory Clock(MHz)"=1500    "Temperature(C)"=63

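If the card does have fans, the same loop can be widened to also capture GPU activity and fan readings. A sketch of the same ioreg trick - the extra key names ('GPU Activity', 'Fan Speed') are taken from the RX 580 output further down the thread and may vary per card, so a separator is printed per sample instead of using paste:
Code:
$ while sleep 2; do ioreg -l |grep \"PerformanceStatistics\" | cut -d '{' -f 2 | tr '|' ',' | tr -d '}' | tr ',' '\n'|grep 'Temp\|Clock\|Activity\|Fan'; echo '----'; done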

I tried a few of his ROMs; some of them resulted in the GPU clock being stuck at 214MHz (the original ROM's idle speed was 300MHz), although memory could now hit 1750MHz (vs. the previous cap of 1500MHz). That, of course, killed the Metal benchmark, dropping it back to ~4k from nearly 14k.
I also found Windows compute numbers for the WX 4150: they seem to land around 15k-17k in OpenCL, so getting 13k-14k here seems like a decent number considering the PCIe 2.0 bus and a slow 2.26GHz E5520 CPU.

Based on all of that, I'd believe it may get truly stuck at 300MHz depending on what was tweaked or not loaded properly.
We used the very same script to check the performance of the cards, and we had the same problem with the WX4150/WX4170 cards sticking at 214MHz, while some WX7100 cards were locked to 300MHz until we found the setting @roscho just published again. We decided to boil RadeonBoost down to a simple PolarisBoost that just injects these settings.
In May @Nick [D]vB sent me an optimised version for the WX4150/WX4170 running at 1250MHz core and 1750MHz memory; unfortunately it could not boot Catalina, only Mojave.
 
With the ...FORCE... keys added after PolarisBoost, the Core Clock rises to 1243MHz and OpenCL performance is significantly better (GB4 45000 vs. 35000).

...

Code:
            <key>IOProviderMergeProperties</key>
            <dict>
                <key>ATY,EFIVersionB</key>
                <string>PolarisBoost</string>
                <key>CFG,CFG_NVV</key>
                <integer>2</integer>
                <key>CFG,CFG_PTPL2_TBL</key>
                <data>ggAAAHwAAAB2AAAAcAAAAGoAAABkAAAAXgAAAFgAAABSAAAATAAAAEYAAABAAAAAOgAAADQAAAAuAAAAKAAAAA==</data>
                <key>CFG,CFG_USE_CP2</key>
                <true/>
                <key>PP,PP_EnableLoadFalconSmcFirmware</key>
                <integer>1</integer>
                <key>PP,PP_Falcon_QuickTransition_Enable</key>
                <integer>1</integer>
                <key>PP,PP_WorkLoadPolicyMask</key>
                <integer>16</integer>
                <key>CFG,CFG_FORCE_MAX_DPS</key>
                <true/>
                <key>CFG,CFG_FORCEMAXDPM</key>
                <true/>
            </dict>
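
One way to double-check that these IOProviderMergeProperties were actually merged onto the GPU controller after boot is to look for the keys in ioreg. A minimal sketch, assuming the keys above:
Code:
# if the injection worked, the keys (and the PolarisBoost version string)
# should show up on the controller's IORegistry entry
$ ioreg -l | grep -E 'ATY,EFIVersionB|CFG_FORCEMAXDPM|CFG_FORCE_MAX_DPS|PP_WorkLoadPolicyMask'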

Please be aware that the card consumes significantly more power, runs hotter, and therefore ages faster when forced to run at max SCLK & MCLK. So this should only be used as a workaround for cards/BIOSes/configs where PowerPlay does not work correctly out of the box...
 
In May @Nick [D]vB sent me an optimised version for the WX4150/WX4170 running at 1250MHz core and 1750MHz memory; unfortunately it could not boot Catalina, only Mojave.
Hmm, makes me wonder if I could get mine running at 1250/1750. I'm not sure I'll put Catalina on the Xserve, but there have also been some other differences in iMac vs. Xserve MXM card behaviour, so I wonder whether it could do Catalina with that ROM.
 
These two settings make my card perform at max all the time. Without them you first have to send the system to sleep once; after that, performance with GeekBench 5 Metal is near 19k. Before, I only got around 12k directly after boot. Perfect!! Will add this to the config files on the Catalina Loader - we should probably move over to our thread :) Thanks!
 
These are my temperatures and clock speeds at the end of a LuxMark test (28850):
Code:
"Core Clock(MHz)"=1266    "Memory Clock(MHz)"=2000    "Temperature(C)"=83
"Core Clock(MHz)"=1266    "Memory Clock(MHz)"=2000    "Temperature(C)"=81
"Core Clock(MHz)"=1266    "Memory Clock(MHz)"=2000    "Temperature(C)"=83
"Core Clock(MHz)"=1266    "Memory Clock(MHz)"=2000    "Temperature(C)"=81
"Core Clock(MHz)"=1266    "Memory Clock(MHz)"=2000    "Temperature(C)"=83
"Core Clock(MHz)"=1238    "Memory Clock(MHz)"=2000    "Temperature(C)"=78
with:
Code:
<key>CFG,CFG_FORCE_MAX_DPS</key>
<false/>
<key>CFG,CFG_FORCEMAXDPM</key>
<false/>
What are these 2 settings for?
 
These two help me, with the existing BIOS version in the iMacs, to get the same level of performance directly after boot. Otherwise I had to send the system to sleep once first; after that the drivers were initialised in the correct way. This is just a description of a problem we had and the cure we found.

In GeekBench 5 Metal terms: 12k before the first sleep, 19k after the first sleep. With these settings, 19k all the time with a WX4130 (which is a 2GB version of the WX4150).
 
Perfect!! Will add this to the config files on the Catalina Loader - we should probably move over to our thread :) Thanks!

Absolutely - I just wanted to point out that, apparently, for some of the AMD9500Controller.kext variables (aty_config and aty_properties) the sequence/timing of the injection seems to matter.

Note that RadeonBoost and its derivative PolarisBoost are virtually the same; the main point about the latter is that IOPCIPrimaryMatch now contains the WX7100 mobile ID, e.g. 0x67C01002 (and likely also the smaller WX cards, though I did not check), plus some cosmetic touches such as setting ATY,EFIVersionB. The other parameters under IOProviderMergeProperties are literally the same ones CMMChris introduced for RadeonBoost and resemble the Orinoco FB parameters.

Sleep/wake didn't work for my card anyway. Only by adding CFG_FORCE_MAX_DPS could I literally force my SCLK up; the memory clocks were already right before that.

Anyway, I wonder whether it is the right long-term strategy to use our MXM cards with a PowerPlay table originally intended for the Radeon Pro 580X MPX module in the 2019 Mac Pro. If you look at the BIOSes on TechPowerUp, the TDP etc. is quite different. Unfortunately I had no success with cfg_ptpl2_min/max, as this seems to work only for some newer cards supported by AMD10000Controller.kext.
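
For anyone curious what is actually inside that CFG_PTPL2_TBL blob: it is just base64-wrapped 32-bit little-endian integers, so Terminal can decode it. A quick sketch - reading the descending values as a power-limit ladder is my interpretation of the TDP discussion above, not anything documented:
Code:
# decode the injected CFG_PTPL2_TBL data into its 32-bit entries (printed four per line)
$ echo 'ggAAAHwAAAB2AAAAcAAAAGoAAABkAAAAXgAAAFgAAABSAAAATAAAAEYAAABAAAAAOgAAADQAAAAuAAAAKAAAAA==' | base64 -D | od -An -t u4
# -> 130 124 118 112 106 100 94 88 82 76 70 64 58 52 46 40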

Code:
<key>CFG,CFG_FORCE_MAX_DPS</key>
<false/>
<key>CFG,CFG_FORCEMAXDPM</key>
<false/>
What are these 2 settings for?

For nothing, as long as you set them to <false/>.

If your card already clocks up correctly under load, it won't help either, at least not in summer.
During winter it might heat up your room, as the card will run at maximum core and memory clocks all the time.

This setting is only useful for debugging, or as a workaround in case of broken/unknown settings, as Ausdauersportler wrote.
 
With only ioreg running, I get this with the RX 580:

"Core Clock(MHz)"=1366
"Memory Clock(MHz)"=2000
"GPU Activity(%)"=0
"Fan Speed(RPM)"=775
"Temperature(C)"=54
"Fan Speed(%)"=17

Should I safely assume the GPU is not really stressed?
 
Stress the GPU. If the numbers change, then yes.
 
Keeping the GPU clock speed at max with no load won't stress the GPU; it just makes it draw much more power at idle and run warmer.

E.g. if the GPU only needs 15W at idle when it is allowed to drop its clock speed to 300MHz, it may now need 50W to stay at 1366MHz even at idle.

Also, the idle temperature may jump from, say, 40C to 55C.

But that's still far away from "under stress", which you can easily observe from the GPU temperature.

If you run FurMark now, you will see the GPU temperature rocket up within a few seconds.

And it will cool down again if you leave it idle for a while.

It's similar to connecting multiple monitors to the same AMD GPU: the clock speed will stay high, but that doesn't mean the GPU is under stress.
 
joevt and h9826790

Thanks for your replies! Without modifying the plist the idle temperature is 49C, so I guess the jump to 54C (57C now) is nothing to worry about. However, these are the measurements with LuxMark running:

"Core Clock(MHz)"=1366
"Memory Clock(MHz)"=2000
"GPU Activity(%)"=100
"Fan Speed(RPM)"=1424
"Temperature(C)"=74
"Fan Speed(%)"=33
 
Hi. Any particular reason I can only get around half the normal score running RadeonBoost 1.6 / OC 0.5.9? All my GeekBench scores seem pretty dismal.
 

Run the test after a forced sleep/wake cycle.
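
If you want to trigger that from Terminal rather than the Apple menu, pmset can put the machine to sleep on demand; wake it from the keyboard afterwards and re-run the benchmark. Just a sketch:
Code:
# send the machine to sleep immediately; after waking, check the clocks with
# the ioreg loop from earlier in the thread, then re-run GeekBench
$ pmset sleepnow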
 
what is the output from kextstat | grep -v com.apple

~ % kextstat | grep -v com.apple

Index Refs Address Size Wired Name (Version) UUID <Linked Against>
43 1 0xffffff7f84df6000 0x29000 0x29000 as.vit9696.Lilu (1.4.5) E42CE60E-EC0B-33AE-A513-5383B81BF165 <8 6 5 3 2 1>
44 0 0xffffff7f84e1f000 0x6f000 0x6f000 as.vit9696.WhateverGreen (1.4.0) 28229092-CBB6-30FD-8954-4E887FAD958D <43 13 8 6 5 3 2 1>
87 0 0xffffff7f80f10000 0x185000 0x185000 at.obdev.nke.LittleSnitch (5430) 7462BC7A-1330-3F92-A73F-3FBFE331C74A <8 6 5 3 1>
~ %

I guess no mention of Radeonboost? But in System Info it lists:

Radeon RX 580:

Chipset Model: Radeon RX 580
Type: GPU
Bus: PCIe
Slot: Slot-1
PCIe Lane Width: x16
VRAM (Total): 8 GB
Vendor: AMD (0x1002)
Device ID: 0x67df
Revision ID: 0x00e7
VBIOS Version: RadeonBoost
Metal: Supported, feature set macOS GPUFamily2 v1
 
It is a codeless kext, so it won't show up there. Try these additions to the OpenCore config.plist. You may need to change CFG,CFG_FB_LIMIT to 05 or 06 depending on your card's output ports, and disable the RadeonBoost kext.

Code:
AMD Radeon RX 580:

  Chipset Model:    AMD Radeon RX 580
  Type:    GPU
  Bus:    PCIe
  Slot:    Slot-1
  PCIe Lane Width:    x16
  VRAM (Total):    8 GB
  Vendor:    AMD (0x1002)
  Device ID:    0x67df
  Revision ID:    0x00c7
  EFI Driver Version:    01.01.183
  Metal:    Supported
 
  AMD Radeon RX 580:

  Name:    ATY,Orinoco
  Type:    Display Controller
  Driver Installed:    Yes
  MSI:    Yes
  Bus:    PCI
  Slot:    Slot-1
  Vendor ID:    0x1002
  Device ID:    0x67df
  Subsystem Vendor ID:    0x1028
  Subsystem ID:    0x1701
  Revision ID:    0x00c7
  Link Width:    x16
  Link Speed:    5.0 GT/s
 

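Since a codeless kext never appears in kextstat, a quicker check than opening System Information is to ask ioreg whether the injected strings made it onto the card. A small sketch based on the properties used above (the exact key names depend on which injector you use):
Code:
# RadeonBoost/PolarisBoost and the DeviceProperties above set ATY,EFIVersion(B);
# if injection worked, the key shows up on the GPU's PCI device entry
$ ioreg -l | grep ATY,EFIVersion
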
Attachments

  • Orinoco.plist.zip
Thanks startergo. I think things just got a bit out of my depth there! Not sure how to add that to the config, or what you mean by "You may need to change CFG,CFG_FB_LIMIT to 05 or 06 depending on your card output ports" - I am using the mini DisplayPort (single monitor). Does that code just go at the end of the config file? I assume from the <key> to the second-last </dict>, or is it from <dict> to the last </dict>?

In the meantime I will remove RadeonBoost.
 
You need to add this part. Save your file and try to open it with ProperTree. There should not be any errors.
This part:
Code:
<key>DeviceProperties</key>
    <dict>
        <key>Add</key>
        <dict>
            <key>PciRoot(0x0)/Pci(0x3,0x0)/Pci(0x0,0x0)</key>
            <dict>
                <key>@0,name</key>
                <data>QVRZLE9yaW5vY28=</data>
                <key>ATY,EFIVersion</key>
                <data>MDEuMDEuMTgz</data>
                <key>CFG,CFG_FB_LIMIT</key>
                <data>BA==</data>
                <key>CFG,CFG_PTPL2_TBL</key>
                <data>ggAAAHwAAAB2AAAAcAAAAGoAAABkAAAAXgAAAFgAAABSAAAATAAAAEYAAABAAAAAOgAAADQAAAAuAAAAKAAAAA==</data>
                <key>PP,PP_PowerPlayEnabled</key>
                <data>AQAAAA==</data>
                <key>PP,PP_WorkLoadPolicyMask</key>
                <data>CA==</data>
                <key>agdpmod</key>
                <data>cGlrZXJhAA==</data>
                <key>model</key>
                <data>QU1EIFJhZGVvbiBSWCA1ODA=</data>
                <key>rebuild-device-tree</key>
                <data>AA==</data>
                <key>shikigva</key>
                <data>kA==</data>
            </dict>
        </dict>
        <key>Delete</key>
        <dict>
            <key>PciRoot(0x0)/Pci(0x3,0x0)/Pci(0x0,0x0)</key>
            <array>
                <string>ATY,EFIVersion</string>
            </array>
        </dict>
    </dict>
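
Besides opening the edited file with ProperTree, plutil (which ships with macOS) can confirm the XML is still well-formed. The path below is just the usual mounted-EFI location; adjust it to wherever your config.plist actually lives:
Code:
# prints "<path>: OK" if the plist still parses cleanly after the edit
$ plutil -lint /Volumes/EFI/EFI/OC/config.plist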
Not sure how to add that to the config and what you mean by "You may need to change CFG,CFG_FB_LIMIT to 05 or 06 depending on your card output ports"
It means how many video outputs your video card has, i.e. how many monitors you can theoretically connect to the card at once.
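
Since OpenCore stores CFG,CFG_FB_LIMIT as a one-byte <data> value (the BA== above is 0x04), switching it to 05 or 06 means swapping in the matching base64 string. A quick way to generate it - the byte values are what startergo described, the encoding step is just plain base64:
Code:
# one-byte <data> encodings: 0x04 -> BA== , 0x05 -> BQ== , 0x06 -> Bg==
$ echo 05 | xxd -r -p | base64
BQ==
$ echo 06 | xxd -r -p | base64
Bg==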
 
OK. Thanks very much for the explanations. I did all that, and the config file opened with ProperTree OK, but with or without the RadeonBoost kext (and the corresponding entry in the config) the result is pretty much identical - 27641 :(
 
Under PCI in System Report is it loading Orinoco?
 
Hi startergo - it responded with:

nvram 4D1FDA02-38C7-4A6A-9CC6-4BCCA8B30102:opencore-version
4D1FDA02-38C7-4A6A-9CC6-4BCCA8B30102:opencore-version    REL-059-2020-06-01

/$HOME/Downloads/gfxutil/gfxutil -v -f GFX0
0b:00.0 1002:67df /PCI0@0/IOU0@3/GFX0@0 = PciRoot(0x0)/Pci(0x3,0x0)/Pci(0x0,0x0)
 
Let me look at your config file.
 