
f54da:
Here's my investigation into what it would take to effectively run in an "open" clamshell mode: connected to an external display, with the laptop lid open, but the internal display disabled. And before you say "iog=0x0", there's a section below about that, why you're wrong (it's actually 0x2), and how it's not a perfect solution anyway.

## Motivation

Laptop run hot, fan go brr.

More formally, connecting an external display requires use of the discrete GPU, because in an absolutely boneheaded cost-saving decision Apple didn't wire the gmux to the external ports. If I'd known this beforehand I'd have gotten the BTO iGPU-only version, but oh well. This saving makes sense from a power standpoint (if you're using an external display you presumably have AC power available), but it's madness from a temperature standpoint: the Nvidia GPU has a much higher power draw than the iGPU, and that higher draw translates to about 20F higher temperatures, as quantitatively shown in this amazing regression analysis [1].

So what can we do to prevent the laptop from becoming a space heater? You can play with fan controls, but that just patches over the symptom (and I hate fan noise). You can disable turbo boost and enforce CPU power limits, but that only buys you some headroom while the GPU keeps cranking away.

## Control GPU P-State

The hackintosh guys (and 2010 MBP owners) are probably familiar with the AppleGraphicsPowerManagement (AGPM) kext, which has configurations for GPU p-states and vp-states. There are entries for the retina MBPs as well; for the discrete GPU they look something like

Code:
<key>BoostPState</key>
<array>
    <integer>6</integer>
    <integer>14</integer>
    <integer>14</integer>
    <integer>6</integer>
</array>
<key>BoostTime</key>
<array>
    <integer>1</integer>
    <integer>1</integer>
    <integer>1</integer>
    <integer>1</integer>
</array>


followed by some "heuristics" that probably define when to switch to which p-state:

Code:
<key>MinP0P1</key>
<integer>10</integer>
<key>MinP5</key>
<integer>14</integer>
<key>MinP8</key>
<integer>15</integer>
<key>MinVP0</key>
<integer>10</integer>
<key>MinVP1</key>
<integer>21</integer>
<key>MinVP5</key>
<integer>28</integer>
<key>MinVP8</key>
<integer>29</integer>



and then a table of numbers 0-29, which I think defines the possible values for VPState and PState.

It's conceivable that by modifying these you could force the GPU to remain at the lowest power level (which is actually the highest numerical PState value, i.e. P0 is actually max power consumption), but I'm really not sure what the 6/14/14/6 corresponds to, nor what the actual values in MinVP* represent.

If someone has ideas here, please chime in.

## Disable Internal Display

An orthogonal approach to reducing temps is to run with the laptop open to allow better dissipation. Normally, opening the lid undoes "clamshell mode" and the internal display activates as well, but as some are aware, there is the supposed "iog=0x0" boot-arg trick to restore the pre-Lion behavior that allowed opening the laptop while in clamshell mode.

Now, the actual invocation for 10.9 should be "iog=0x2": if you read the IOFramebuffer source [2], you see that lid-open behavior is controlled by the lowest bit, and the default is 0x3. Setting 0x0 also unsets FBVBLThrottle, whatever that does (probably nothing good?).
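For reference, the boot-arg handling in [2] boils down to roughly the following (paraphrased from memory, so treat the exact constant names as assumptions):

Code:
// Paraphrase of the iog boot-arg parsing in IOFramebuffer.cpp [2].
enum {
    kIOGDbgLidOpen     = 0x00000001,  // bit 0: new (Lion+) lid-open behavior
    kIOGDbgVBLThrottle = 0x00000002,  // bit 1: FBVBLThrottle
};

uint32_t iog = kIOGDbgLidOpen | kIOGDbgVBLThrottle;  // default 0x3
PE_parse_boot_argn("iog", &iog, sizeof(iog));
gIOFBLidOpenMode = (0 != (iog & kIOGDbgLidOpen));
gIOFBVBLThrottle = (0 != (iog & kIOGDbgVBLThrottle));
// => iog=0x2 clears only the lid-open bit; iog=0x0 also kills VBL throttling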

What does "unsetting" gIOFBLidOpenMode actually do? Again, reading the IOFramebuffer source we see that it influences whether "desktopMode" is set, and also influences whether a reprobe is issued upon "clamshell reset". "DesktopMode" seems to be used to set an IOPM bit that controls whether closing a laptop allows it go into sleep. It should be noted that I think there are kexts (Insomnia) that use the same mechanism to prevent the laptop from sleeping on lid close.

This is great and almost what we want, except that when waking from sleep the internal display gets re-enabled. Now, you may be aware of tools like SwitchResX that are able to disable the internal display. These make use of the private CGSSetDisplayEnabled call (same as the open-source tools on github), but they are not a 100% complete solution, as you can see from the SwitchResX FAQ. I haven't checked with IORegistryExplorer, but I suspect the display will still be attached in the tree. I guess it satisfies the condition that no resources are wasted rendering to it, but it's too much hassle when unplugging displays, since it's not seamless.

By contrast, when the display is disabled in clamshell mode, it is completely gone from the ioreg hierarchy. I'm still not sure exactly how this works, since as far as I can see the internal display is disabled on lid close via `fb->deliverFramebufferNotification(kIOFBNotifyClamshellChange, clamshellProperty);`, and then IOBacklightDisplay changes its power state to 0. I also need to explore whether this is reflected in gIOFBBacklightDisplayCount being 0.

Upon wake, a kIOFBEventResetClamshell event is posted, which is handled via



Code:
if ((gIOFBLidOpenMode && (gIOFBCurrentClamshellState != gIOFBLastReadClamshellState))
        // clamshell changed since last probe
     || (!gIOFBLidOpenMode && (gIOFBBacklightDisplayCount && gIOFBLastReadClamshellState && !gIOFBCurrentClamshellState)))
        // clamshell was closed during last probe, now open => reprobe
    {
        DEBG1("S", " clamshell caused reprobe\n");
        events |= kIOFBEventProbeAll;
        OSBitOrAtomic(kIOFBEventProbeAll, &gIOFBGlobalEvents);
    }
    else
    {
        AbsoluteTime deadline;
        clock_interval_to_deadline(kIOFBClamshellEnableDelayMS, kMillisecondScale, &deadline );
        thread_call_enter1_delayed(gIOFBClamshellCallout,
                                    (thread_call_param_t) kIOFBEventEnableClamshell, deadline );
    }


so if IOFramebuffer still retains a reference to the display (via gIOFBBacklightDisplayCount), then it's the reprobe causing the internal display to be re-activated. Of course, maybe (even most likely) the internal display is activated on wake regardless of whether a reprobe is issued (I'm not sure where that would be handled, but I assume it's treated like a new display being powered and coming online), in which case patching this wouldn't do anything.

The other way to do this would be to get a reference to gAllFramebuffers in kernel memory and then just issue a kIOFBNotifyClamshellChange for all the active framebuffers. That should "simulate" the laptop going to sleep, and allow you to manually disable the internal display.
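As a purely hypothetical sketch of that idea (gAllFramebuffers and deliverFramebufferNotification aren't exported, so the pointers below are stand-ins you'd have to resolve yourself; the raw call leans on the x86-64 ABI passing `this` as the first argument):

Code:
#include <IOKit/graphics/IOFramebuffer.h>
#include <libkern/c++/OSArray.h>

// Signature shape of IOFramebuffer::deliverFramebufferNotification,
// called through a raw resolved address.
typedef IOReturn (*DeliverFn)(IOFramebuffer *that, IOIndex event, void *info);

static void simulateClamshell(OSArray *allFramebuffers, DeliverFn deliver, bool closed)
{
    for (unsigned int i = 0; i < allFramebuffers->getCount(); i++) {
        IOFramebuffer *fb = (IOFramebuffer *) allFramebuffers->getObject(i);
        // same notification the lid switch generates
        deliver(fb, kIOFBNotifyClamshellChange, (void *) closed);
    }
}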


[1] https://apple.stackexchange.com/que...s-a-mid-2015-retina-macbook-pro-with-only-int


[2] https://github.com/st3fan/osx-10.9/blob/master/IOGraphics-468/IOGraphicsFamily/IOFramebuffer.cpp
 

f54da:
Ok, I was able to play around with this, and merely patching out the kIOFBEventProbeAll is not sufficient: the internal screen still lights up upon wake from sleep. I wonder what the purpose of the resetClamshell in the post-wake handler is then, since it seems completely useless if it's going to reprobe anyway.

I guess what you'd need to do is create a kext that exposes some /dev file descriptor you can write to (Linux-style) to toggle the internal display, and then the kext would issue the kIOFBNotifyClamshellChange after finding the address of the global FB array in kernel memory. I'm not sure of the best way to find the address of the IOGraphics kext in kernel memory; looking at Lilu, it seems to rely on hooking some kernel kmod-load functions that tell it when a new kext is loaded. Another approach I've seen basically punts this to userspace, which runs kextstat and communicates the address back to the kext via an ioctl or something.
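The /dev part would look roughly like this (untested sketch against the BSD KPI, so the kext needs com.apple.kpi.bsd; the "dispctl" name and the clamshell hook it would call are made up):

Code:
#include <sys/types.h>
#include <sys/conf.h>
#include <sys/proc.h>
#include <sys/uio.h>
#include <sys/errno.h>
#include <miscfs/devfs/devfs.h>

static int   g_major = -1;
static void *g_node  = NULL;

static int dispctl_open(dev_t dev, int flags, int devtype, struct proc *p)  { return 0; }
static int dispctl_close(dev_t dev, int flags, int devtype, struct proc *p) { return 0; }

static int dispctl_write(dev_t dev, struct uio *uio, int ioflag)
{
    char c = 0;
    int err;
    if (uio_resid(uio) < 1) return EINVAL;
    if ((err = uiomove(&c, 1, uio)) != 0) return err;
    // '0'/'1' => deliver kIOFBNotifyClamshellChange closed/open to all FBs
    // (i.e. the simulateClamshell() sketch from the first post)
    return 0;
}

static struct cdevsw dispctl_cdevsw = {
    .d_open     = dispctl_open,
    .d_close    = dispctl_close,
    .d_read     = eno_rdwrt,
    .d_write    = dispctl_write,
    .d_ioctl    = eno_ioctl,
    .d_stop     = eno_stop,
    .d_reset    = eno_reset,
    .d_select   = eno_select,
    .d_mmap     = eno_mmap,
    .d_strategy = eno_strat,
    .d_getc     = eno_getc,
    .d_putc     = eno_putc,
};

// In the kext's start routine:
//   g_major = cdevsw_add(-1, &dispctl_cdevsw);   // -1 = any free slot
//   g_node  = devfs_make_node(makedev(g_major, 0), DEVFS_CHAR,
//                             0, 0, 0600, "dispctl");  // root:wheel
// Then toggle from userspace with: echo 0 > /dev/dispctl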

Also note that if you click "Detect Displays" in System Preferences while using the iog=0x2 trick with the lid open, the internal display comes back online. This shows that reprobing is indeed sufficient to activate the internal display, and I bet a reprobe is issued whenever any port is powered or unpowered.

Or you could just use the magnet trick... idk. Probably will just do that since I'm too lazy to write and test this.
 

f54da:
Also, unrelated: did something in external-display rendering change after 10.9? Comparing two different machines, one running 10.9 and a newer M1 running whatever the latest is, the graphics on the external monitor under 10.9 are really terrible by comparison.

Neither is running HiDPI, and both are at the same resolution, but on 10.9 the output just doesn't look crisp. Maybe newer OSes have better support for fractional scaling, or it's the combination of the new font + grayscale antialiasing...
 

f54da:
Aha, I think we can actually simplify all of this and avoid hooking symbols by controlling the backlight power state directly. Since setPowerState seems to be a public method on all IOServices, it seems all we need is to attach ourselves to the AppleBacklightDisplay, and we'll have a handle to it that we can use for our purposes.

Edit: This was simple enough that I couldn't resist trying it. I first tried to create an IOKit extension, but I had no idea how to do the matching (matching on the provider being "AppleBacklightDisplay" didn't work, and of course all this has zero documentation, so idk).

But luckily, after some digging I found that the kernel gives you a way to get a handle to any IOService based on the class name alone, and using this I was able to get a pointer to AppleBacklightDisplay. I tried setting its power state to 0, and while that disabled the backlight (effectively turning the brightness to 0), the computer still recognized it as a display. I also tried terminating the AppleBacklightDisplay entirely, which does remove it from the ioregistry hierarchy, but that didn't work either: the "nub" is still present (I see a "display0" entry, even though nothing hangs off it). The nub class is DisplayConnect, which is handled by IODisplayWrangler. There's an IODisplayWrangler::destroyDisplayConnects which seems promising; let me try tracing that.

So I suspect it's actually somewhere in the framebuffer that this is handled, and the AppleBacklightDisplay class really only exists for niceties like setting brightness. I guess I can read more of IOFramebuffer... I still don't know where it decides whether or not to create a framebuffer for the internal display. This must be done based on lid state (or desktopMode status), but I have no idea where.

----

Update: Ugh, I tried poking at all the things in the framebuffer, no luck. And I don't want to poke via private methods, since I don't want to leave the FB in an inconsistent state. The closest I got was something like

Code:
#include <mach/mach_types.h>
#include <IOKit/IOLib.h>
#include <IOKit/IOMessage.h>
#include <IOKit/graphics/IODisplay.h>
#include <IOKit/graphics/IOFramebuffer.h>

kern_return_t DisableInternalDisplay_start(kmod_info_t *ki, void *d) {
  IOLog("DisableInternalDisplay loaded\n");
  // Look up the internal panel's IODisplay by class name.
  OSDictionary *backlight = IOService::serviceMatching("AppleBacklightDisplay");
  if (backlight != NULL) {
    IOService *serv = IOService::copyMatchingService(backlight);
    if (serv != NULL) {
      IOLog("Got service %s\n", serv->getName());
      IODisplay *iodisp = OSDynamicCast(IODisplay, serv);
      IOLog("Cast to IODisplay: %s\n", iodisp == NULL ? "NULL" : iodisp->getName());
      if (iodisp != NULL) {
        /* Things that did NOT work:
         * IODisplayWrangler::destroyDisplayConnects(
         *     iodisp->getConnection()->getFramebuffer());
         * iodisp->setPowerState(0, NULL);
         * iodisp->terminate();
         */
        // Set the backing framebuffer's power state to 0...
        iodisp->getConnection()->getFramebuffer()->setPowerState(0, NULL);
        // ...and tell its clients, else you panic on connection switch (see below).
        iodisp->getConnection()->getFramebuffer()->messageClients(
            kIOMessageServiceIsSuspended, (void *)true);
      }
    }
  }
  return KERN_SUCCESS;
}

where you basically set the framebuffer power state to 0. This works, but userspace still thinks the display is connected, and you get a kernel panic on connection switch, maybe because it's still mapped in memory and something tries to blit to it. I also tried disassembling the subclass of the framebuffer, NVDAResman or whatever, and couldn't find anything specific to clamshell state there. To avoid the panic you have to tell clients to update state via messageClients, but the problem is that sending kIOMessageServiceIsSuspended, which normally tells clients to update, also ends up reprobing the framebuffer, which restores power to it.

I'm beginning to think that disabling the internal display when the laptop is closed may not be handled by IOFramebuffer at all. It could be the AppleMuxControl kext, AppleGraphicsControlBacklight, or maybe even hardware. I see some methods in both that take an IOFramebuffer* parameter, but I couldn't find anything definitive.

===

Ok, last time noodling around. I'm fairly certain that sending \0igr (kConnectionIgnore) to the framebuffer and then messaging the clients to update state works, but when the displays are mirrored it ends up giving a black screen on both. Even the userspace DisableMonitor, which does it via CGSSetDisplayEnabled, runs into the same issue, so I'm thinking CoreGraphics doesn't like it when it has mirrored monitors and one of them goes offline. But I don't want to retry to see if that will indeed work.

For future reference, what might possibly work is doing iodisp->getConnection()->getFramebuffer()->getAttributeForConnectionExt(0, '\0igr', 1 << 31) followed by getting 'prob'. The 1 << 31 should "mute" the framebuffer; it's something that gets sent by AppleMuxControl, though I'm not sure if it sends 1 << 31 or not.

then do iodisp->getConnection()->getFramebuffer()->messageClients(kIOMessageServiceIsSuspended, (void*) true).
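Putting that recipe together as untested code (reusing `iodisp` from the earlier snippet; kConnectionIgnore = '\0igr', kConnectionProbe = 'prob', and the pointer-argument cast mirrors the raw 1 << 31 payload described above, which I believe is passed as an immediate rather than a real pointer):

Code:
IOFramebuffer *fb = iodisp->getConnection()->getFramebuffer();

// "Mute" the framebuffer; 1 << 31 smuggled through the value argument.
fb->getAttributeForConnectionExt(0, kConnectionIgnore, (uintptr_t *) (1UL << 31));

// Then get 'prob'.
uintptr_t probed = 0;
fb->getAttributeForConnectionExt(0, kConnectionProbe, &probed);

// Finally tell clients to update state.
fb->messageClients(kIOMessageServiceIsSuspended, (void *) true);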

Another thing to try is to go down to the NDRV level, where there's some constant you can send to deactivate a connection. But I don't know if it's worth it; maybe just doing it in userspace like DisableMonitor is OK, since I think it achieves the same goal?
 

f54da:
Btw, I figured out why the external display didn't look good: it appears to be the well-known issue where the laptop detects the external monitor as a "TV" and doesn't output RGB. There's an EDID override fix that works for this, and now things look great.

You can query the power draw of the GPU by looking at the PG0R SMC key (iStat Menus didn't show this for me, I don't know why). It seems that the userspace DisableMonitor solution does not actually reduce power draw, maybe because the display is still treated as connected for power-control purposes? With the clamshell technique, though, the draw idles at 4W, which is the same as what you get with the internal display only (on the discrete GPU).

-----

But that aside: unfortunately I tried setting connection disable at the NDRV level, and it didn't work (it exposes a SetMultiConnection but not SetConnection, and only the non-multi methods work, since the Mac uses one FB for each separate output). And just changing the power level of the framebuffer panics, even though that seems to be what AppleBacklightDisplay does during sleep. [Edit: This is not the case. Even though the source has this surrounded by an ifdef LCM_HWSLEEP, disassembling the IOGraphics kext shows that no framebuffer power-state change is issued on sleep. So this makes the question of how exactly the framebuffer gets suspended even more confusing. Disassembling a newer Radeon GPU kext shows various things related to lid change and clamshell in there, so I guess it's possible that it's the one doing it?]

All this leads me to think that the Apple gmux is probably the one that kills the internal display when sleeping with the lid closed? But then some hackintosh people report that clamshell mode works for them, which doesn't jibe with this. That, or maybe it's not done in the IOFramebuffer superclass but handled directly in the Nvidia/Intel drivers?
 

f54da:
@MultiFinder17 because then I could no longer use it as a laptop. If I did that, I might as well get a desktop Mac? I think the better workaround is the magnet trick (use a pair of magnets to trick it into thinking the lid is closed). That's clearly what any sane person would do, but it's annoying to me that I can't figure out where exactly the codepath to disable the internal display lives. My two leading guesses are that it's either implemented in the probe method of the IOFramebuffer subclasses (e.g. IntelAzulFramebuffer or NVDAResman), or it's controlled by the Apple GPU Control (AGC) kext. Both would match all my observations so far. I've patched other annoyances in macOS before, both in userspace and kernelspace, so I don't like being stopped by this...

But I've given up for now. If it's in the probe function of an IOFramebuffer subclass, then there's no easy way to enable/disable the internal display (maybe by patching code you could always disable internal with an external display connected, but my static-analysis RE skills are not good enough to figure out where to patch). There is also some clamshell-handling code in the kext (I at least see the hex sequence 'clam', and a log statement about clamshell state).

If it's done by AGC, then it's either implemented by talking to the framebuffer and telling it to ignore the connection, or AGC sets some register in the mux to disconnect the display at the hardware level. If it's the latter, that's also too difficult for me to RE. If it's the former, it's more promising, since it implies we can send the same message (probably via setAttribute). But the only message that seemed relevant was "\0igr" (kConnectionIgnore), which looks like what we want, except nothing happened when I sent it (I was able to set the data payload to 1 << 31 to mute all FBs entirely, but that takes down the external display as well). Besides, since I didn't find any clamshell-handling code in the AGC kext, and I know that reprobing with the lid closed does not detect the internal display while reprobing with it open does, I'm less confident that it's being done by AGC.

Thread can be closed, I suppose, unless anyone else has ideas or wants to try something. As an orthogonal thing, I'd still like to know more about how the AppleGraphicsPowerManagement p-state plist can be tweaked, but it looks like even the hackintosh guys just copy existing plists, so maybe no one outside Apple/Nvidia knows exactly what those keys represent? (I mean, clearly I know they represent boost power states, with 0 being highest, but why specify four of them {6, 14, 14, 6}? If it corresponds to four boost power-state levels, why is it not monotonically increasing?)


=====

Edit: More interesting findings. I said I'd given up trying to patch it, but that doesn't mean I can't still black-box debug this to get more info :)

* I had said before that performing a reprobe while in lid-open-clamshell-via-"iog=0x2" is sufficient to bring the internal display back online. The reverse is also true. Normally (with the default "iog=0x3"), closing the lid will put you into clamshell mode, but with "iog=0x2" it puts the computer to sleep instead. If you prevent sleep on clamshell close (via "caffeinate -i -s") and then close the lid, the internal display remains online until you issue a reprobe, at which point it goes offline. (Note that by "issue a reprobe" I mean clicking Detect Displays in System Preferences. This causes a reprobe request to be issued to the framebuffer, which I verified by manually sending kConnectionProbe. Importantly, it suffices to send the probe request to the framebuffer, NOT the gmux, even though the probeAll code in IOFramebuffer.cpp also queries the gmux. A minimal userspace sketch of issuing such a reprobe follows this list.)

* If you use a virtual framebuffer driver (for instance, ScreenRecycler^ by the JollyVNC guys) to simulate a connected display, clamshell mode does not work. This definitely rules out any userspace-side code being responsible for taking displays offline.
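Here's what I believe the "Detect Displays" reprobe boils down to in userspace, as a minimal untested sketch (the exact header providing kIOFBUserRequestProbe is an assumption on my part):

Code:
// Ask every IOFramebuffer (and subclasses) to reprobe its connections.
#include <IOKit/IOKitLib.h>
#include <IOKit/graphics/IOGraphicsTypes.h>  // kIOFBUserRequestProbe (assumption)

int main(void)
{
    io_iterator_t iter;
    if (IOServiceGetMatchingServices(kIOMasterPortDefault,
            IOServiceMatching("IOFramebuffer"), &iter) != KERN_SUCCESS)
        return 1;

    io_service_t fb;
    while ((fb = IOIteratorNext(iter)) != IO_OBJECT_NULL) {
        IOServiceRequestProbe(fb, kIOFBUserRequestProbe);  // same as Detect Displays
        IOObjectRelease(fb);
    }
    IOObjectRelease(iter);
    return 0;
}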


Put together, this means I'm almost certain that it's the framebuffer subclass that handles this on a reprobe event. That gives me enough to begin poking around a bit more... Unfortunately, Nvidia's graphics drivers are a mess; they're wrapped on top of the legacy "NDRV" graphics drivers.


^ Incidentally, ScreenRecycler is neat in concept, but it never really worked well for me, since the connection kept glitching out every half minute or so. When it was working, though, the latency was close to imperceptible; definitely neat tech. There are a few open-source virtual framebuffers, but I don't know if any of them work post-10.6.
 

f54da:
Alright, I did it! Turned out to be really straightforward; I guess all I needed was the assurance that it's done by the driver-specific IOFramebuffer subclasses after all.

If your platform uses the NVDAResman kext (check in IOReg to see which provider is responsible for the framebuffer), then you should grep for 0x636c616d (that's the byte sequence for 'clam'). At least on my version of 10.9.5 (the latest), it was at

Code:
int sub_1ebc1e(int arg0) {
    rdi = arg0;
    if ((*(int8_t *)(rdi + 0x5ced) == 0x0) || (*(int8_t *)(rdi + 0x178) == 0x0)) goto loc_1ebc53;


loc_1ebc38:
    rax = _VSLGestalt(0x636c616d, &var_4);
    if (rax == 0x0) goto loc_1ebc5b;


loc_1ebc4a:
    *(int32_t *)_assert_count = *(int32_t *)_assert_count + 0x1;
    goto loc_1ebc53;


loc_1ebc53:
    rax = 0x0;
    return rax;


loc_1ebc5b:
    rax = var_4 & 0x1;
    return rax;
}

The code here is pretty straightforward: it calls into _VSLGestalt to look up the 'clam' attribute, does error checking, and returns the low byte. If you hardcode this to return 0x1, the internal display will shut off whenever you connect an external display, regardless of whether the laptop is closed or not. Moreover, it persists across sleep. After patching, don't forget to set proper owner/permissions and rebuild the kextcache; I always forget that step and wonder why things aren't working...

You might think this would affect things when no display is connected, but luckily they seem to check for this case and don't shut off the display then. (I was able to verify this before doing the fix by VNC'ing in while the laptop was in clamshell mode, and saw that the internal display was still connected.) (Additionally, the iGPU will usually be in use then, so it doesn't really matter.)

Now, tbh I didn't trace through all the callers of sub_1ebc1e to see if this has any bad side effects. (In my few hasty minutes of playing around with it, things seemed fine, but I can't promise anything.) I suspect that if you go up through all the callers you'll eventually arrive at something relating to probing (the NDRV equivalent is handling "cscProbeConnection").

If you wanted to toggle this on/off dynamically, I think the best option would be a kext that patches the NDRV kext (similar to what Lilu does): upon kextload, save the bytes at that address and replace them with "return 1"; upon kextunload, put the saved bytes back. Then you just have to issue a reprobe (I'm sure there's a command-line way to send "Detect Displays", or do it via CGS services, or worst case do it in the kext by sending kConnectionProbe).
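A very rough sketch of that hot-patch, assuming you've already resolved the kernel address of sub_1ebc1e; the CR0 helpers are left hypothetical (this is the usual Lilu-style trick, and a real implementation would also disable interrupts around the write):

Code:
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define CR0_WP (1ULL << 16)

// Hypothetical helpers -- XNU has these privately; Lilu rolls its own asm.
extern uintptr_t read_cr0(void);
extern void write_cr0(uintptr_t value);

static uint8_t savedBytes[6];
static const uint8_t retOne[6] = { 0xB8, 0x01, 0x00, 0x00, 0x00, 0xC3 }; // mov eax, 1; ret

static void patchClamCheck(uint8_t *clamAddr, bool enable)
{
    uintptr_t cr0 = read_cr0();
    write_cr0(cr0 & ~CR0_WP);  // drop write protection on kernel text
    if (enable) {
        memcpy(savedBytes, clamAddr, sizeof(savedBytes)); // save for kextunload
        memcpy(clamAddr, retOne, sizeof(retOne));         // force "clamshell closed"
    } else {
        memcpy(clamAddr, savedBytes, sizeof(savedBytes)); // restore original bytes
    }
    write_cr0(cr0);            // restore write protection
}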

I will let someone else do this, though, because I don't have much use for it (and I can always put back the old kext if I ever need to). Also, sorry if you only have the iGPU; I don't know offhand the exact place to patch (and even if I did, I have no way of testing). I see AzulFramebuffer has a similar check for 'clam', though, in `AppleIntelFramebuffer::GetOnlineInfo`, which is probably the right spot:

Code:
loc_d8a6:
    var_B0 = 0x0;
    var_E8 = r14;
    rax = *qword_70090;
    rax = (*(rax + 0xa40))(r13, 0x636c616d, &var_B0, 0x0, 0x80);
    COND = var_B0 == 0x0;
    *(int8_t *)(*(r13 + 0x1d0) + 0x1a62) = COND_BYTE_SET(NE);
    if (COND) goto loc_e0b5;

loc_d8e4:
    *0x79418 = *0x79418 + 0x1;
    if (rax != 0x0) goto loc_e0b5;

loc_d8f4:
    *0x79420 = *0x79420 + 0x1;
    *0x79410 = *0x79410 + 0x1;
    _kprintf("Clamshell is closed\n");
    rax = *(r13 + 0x1d0);
    if (*(int8_t *)(rax + 0xf8e) < 0x2) goto loc_e0b5;

I'm fairly certain var_B0 here represents the result of the clamshell check, while rax is just the status bit, so you could probably do the same there.

Hope this is helpful. Again, as far as I can see this is the first instance of this being discussed, and previous solutions like DisableMonitor don't actually seem to kick the GPU into the lower power state.
 

f54da:
Some more bonus things:

Note that there is one small glitch: after a fresh reboot, if the discrete GPU becomes active while no external display is plugged in (either forced via gfxCardStatus or simply due to something needing the discrete GPU), your framebuffer will freeze (the computer remains active, i.e. you can ssh into it and there's no message printed in the console; it's just that the framebuffer is frozen). However, if you plug into an external display once and then unplug, no issues occur (you can force the discrete GPU on the internal display without it freezing). Interestingly, gfxCardStatus behaves differently in that it no longer receives the GPU-switch callbacks, even though the switches do take place. Another hitch is that if you have display mirroring enabled and then try plugging in an external display, the framebuffer will freeze. I saw this behavior with DisableMonitor as well, so it's probably some unanticipated state here. I'll probably end up writing a more targeted hotpatch kext in the future to avoid these.

The other thing is that I dug into the AGPM (AppleGraphicsPowerManagement) code, and I think I have a decent idea of what the config represents. From what I can see, AGPM itself only concerns itself with switching between four power states: P0/P1, P5, and P8. I guess these roughly correspond to idle, low-work, medium-work, and high-work. It switches between them based on heuristics, of which there are various types: the older heuristic (as used in the 2010 MBP fix) is well known, so I won't go into it here; it works off GPU utilization, with thresholds set in the config file. The newer one (heuristic type "4") seems to map these AGPM p-states directly onto the GPU's p-states (I wonder if this is because newer GPUs have smarter internal power management?). For instance, MinP5 = 14 means that when AGPM's state gets set to what it calls fP5, it will make sure the GPU's p-state is at least 14 (so either 14 or 15). In other words, it sets a floor below which the numerical GPU p-state cannot drop. MinVP5 does the equivalent for the vp-state.

Another thing is the separate P0/P1 scale: it seems that when the workload gets high enough, it switches from the P1 to the P0 table for the vp-states. I guess it's because we need a finer scale there?

Finally, there's also BoostPState. A boost can be manually requested by an IOUserClient of AGPM, and it will temporarily put the p-state at the numbers defined there. I still don't know if the exact ordering matters, but e.g. with {6, 14, 14, 6} you can observe that the p-state is boosted from idle (15) to 6, then slowly drops to 14, then back to 15. There are different boost patterns (5 of them); I don't know what they all do. One example of when a boost happens is during a Mission Control launch. Below I show how to connect to AGPM and observe this.

This means that you can theoretically set MinP5=15 and MinVP5=29 as well to try to reduce temps a bit (same for P0/P1). But I don't really think this will make a difference for moderate workloads, because as I show below, the p-state already stays at 15 most of the time.
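Concretely, that tweak would look like this in the AGPM plist entry for your board-id (same format as the snippets in the first post; whether it actually helps is untested):

Code:
<key>MinP5</key>
<integer>15</integer>
<key>MinVP5</key>
<integer>29</integer>
<!-- likewise MinP0P1 = 15 and MinVP0/MinVP1 = 29 if you want to floor those too -->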

Here is example code showing how to connect to AGPM and query the p-state:


C++:
#import <Foundation/Foundation.h>
#import <IOKit/IOKitLib.h>

// Global scratch buffer for IORegistry entry names.
io_name_t name;
const char *getName(io_registry_entry_t entry) {
    IORegistryEntryGetName(entry, name);
    return name;
}


#define GET_PSTATE 0x1c87
#define GET_MAX_PSTATE 0x1c88
#define GET_CONTROL_STATE 0x1c8a
#define SET_LOG_BITS 0x1c8c
#define GET_VENDOR_POWER 0x1c8e
#define GET_PSTATE_OCCUPANCY 0x1c96
#define SET_BOOST  0x1c90


// Does not work on 10.9: the set* selectors are locked (see notes below)
int setIntAttr(io_connect_t conn, uint32_t selector) {
    uint32_t outputScalarCnt = 1;
    uint64_t outputScalar[16] = {0, 0, 0, 0};


    uint64_t input[2] = {1, 1};


    kern_return_t result = IOConnectCallScalarMethod(
        conn,
        selector, // getPowerState_unsigned_int_ selector index found in Ghidra
        input /* input scalar ptr */,
        2 /* input scalar count*/,
        outputScalar,
        &outputScalarCnt);


    if (result != KERN_SUCCESS){
        printf("IOConnectCall error: %x\n", result);
        return -1;
    }


    printf("%d\n", outputScalarCnt);


    printf("%x %x %x %x", outputScalar[0], outputScalar[1], outputScalar[2], outputScalar[3]);


    return 0;


}




int getIntAttr(io_connect_t conn, uint32_t selector) {
    uint32_t outputScalarCnt = 1;
    uint64_t outputScalar[16];


    kern_return_t result = IOConnectCallScalarMethod(
        conn,
        selector, // getPowerState_unsigned_int_ selector index found in Ghidra
        NULL /* input scalar ptr*/,
        0 /* input scalar count*/,
        outputScalar,
        &outputScalarCnt);


    if (result != KERN_SUCCESS){
        printf("IOConnectCall error: %x\n", result);
        return -1;
    }


    return outputScalar[0];


}


int main(int argc, char *argv[]) {
    mach_port_t masterPort;
    io_iterator_t iterator;


    IOMasterPort(MACH_PORT_NULL, &masterPort);


    CFMutableDictionaryRef matchingDictionary = IOServiceMatching("AGPM");
    kern_return_t result = IOServiceGetMatchingServices(masterPort, matchingDictionary, &iterator);
    if (result != kIOReturnSuccess) {
        printf("Error: IOServiceGetMatchingServices() = %08x\n", result);
        return 0;
    }


    io_service_t device = IOIteratorNext(iterator);
    io_registry_entry_t parent;
    IORegistryEntryGetParentEntry(device, kIOServicePlane, &parent);
    // There is one AGPM instance per GPU; if the first match isn't the
    // discrete NVDA one, advance to the next.
    if (strcmp(getName(parent), "NVDA") != 0) {
        IOObjectRelease(device);
        device = IOIteratorNext(iterator);
        IORegistryEntryGetParentEntry(device, kIOServicePlane, &parent);
    }


    printf("%s\n", getName(parent));
    IOObjectRelease(iterator);


    io_connect_t conn;
    result = IOServiceOpen(device, mach_task_self(), 0, &conn);
    if (result != kIOReturnSuccess) {
        printf("Error: IOServiceOpen() = %08x\n", result);
    }
    printf("%s\n", getName(device));




    for (int i = 0; i < 100; i++) {
        printf("P-state %d, Vendor-Pstate %d, Max-pstate %d, Control state %d\n",
            getIntAttr(conn, GET_PSTATE),
            getIntAttr(conn, GET_VENDOR_POWER),
            getIntAttr(conn, GET_MAX_PSTATE),
            getIntAttr(conn, GET_CONTROL_STATE));
        sleep(1);
    }




    IOServiceClose(conn);
    IOObjectRelease(device);


}

There are also SetPstate and SetLogBits selectors, but these are unfortunately locked. What I mean is that for SetLogBits, for example, setting anything other than 0 will throw an error (it actually takes 2 args and does "bits = (bits & ~arg2) | (arg1 & arg2)"; if arg2 is anything other than 0, it makes the extra verification call). I investigated why, and assuming I got the vtable indices matched correctly, for some reason it tries to make sure that the AGPM provider of the user client is not open when setting log bits (or the p-state). I don't understand why; it doesn't make sense... maybe I didn't match the vtable numbers correctly, so IOService::isOpen isn't actually the function it tries to call (I'm not really used to statically reversing C++ code). More interestingly, this check is not there on newer versions of OS X: there you should be able to call the set* functions without issue, but then again, on newer versions you don't need them, because you can just directly set the nvram boot-args agpmloglevel=0x5 and agpmlogbits (unused).
 