ignatius345

macrumors 604
Aug 20, 2015
7,608
13,015
One important caveat for Onyx. Each OS will have its own version of the app, and they do not release the new version until after the latest OS has been officially released. In other words, the most recent version that was built for Sonoma will not work on either developer or public betas of Sequoia.
I do appreciate how careful he is with that. Looks like he does have a beta up that works with Sequoia. (Running beta utility software on a beta OS is too rich for my blood, but it's there...)
 
  • Like
Reactions: Geoff777

fibercut

Suspended
Aug 1, 2024
29
9
Technically the "cache" is cleared (including all temp files) on a power cycle. That being said, except for system updates I never shut down my Mac. There is probably some "housekeeping" during normal operation of a Mac, but I can't say for certain.
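
If you want to drop the disk caches without a full power cycle, macOS ships a purge command (I believe it lives in /usr/sbin; on some older versions it came with the Xcode Command Line Tools):

    # Force the disk cache to be flushed and emptied - roughly the
    # cached-file-data state you'd get from a fresh boot:
    sudo purge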

I'd say in your case you could use something like the free OnyX and run it once a month, or just do that once in a while!
 
  • Like
Reactions: Geoff777

JPBoney71

macrumors newbie
Oct 2, 2021
15
18
Ripon, CA
I've been using Macs for a long time, and back then the consensus was that it was a good idea to shut the Mac down about once a week.
The premise was that it did a more thorough tidying of files etc. in the system and put the system into better condition than just putting it to sleep did.
I don't know if that was true, but it certainly seemed to me that when you shut it down, it took quite a bit longer to power off than when you did a restart, so I figured it probably was.

I'm now using an Apple silicon Mac, and powering down takes about 9 seconds during a restart and about 9 seconds for a full shutdown.

So is that old advice I've been following still valid, or completely out of date?
I don't believe that shutting it down religiously once a day or once a week is as crucial as it once was, but I do shut mine down for a few hours every single day to clear the cache, clear the RAM, and all that (I shut it off before I leave the house, and then turn it back on once I get to work). Shutting it down once per day as I do is definitely not going to hurt the machine in any way, but again, it's not as crucial as it once was, so you don't necessarily have to.
This is something I have chosen to do with each Mac I have owned, 2015 onward (FWIW, I now have an M3 max MBP).
I feel like it preemptively addresses a multitude of potential problems that may or may not happen down the road.
Maybe it actually does, or maybe I'm just superstitious. Maybe a blend of both… Lol.

As far as how long it takes to actually shut down versus how long it takes to power back up, I haven't noticed any concerning issues in my experience with my machine.

I guess the long and short of it is, if you've grown comfortable with shutting the machine down once per week and it's already your habit, I don't see anything necessarily wrong with it.

If you feel like it's taking longer to boot back up than it should, you could experiment with shutting it down once per day for about a week and see if there's a difference (good or bad) in both performance and boot time.
 
  • Like
Reactions: Geoff777

Bungaree.Chubbins

macrumors regular
Jun 7, 2024
171
287
Anecdotally, I seem to remember having to do a lot more restarts on Intel Macs than on the Apple Silicon ones out now.
My experience is mixed. My 2012 11” i7 MacBook Air was solid as a rock. I used iStat Menus to set more aggressive fan control as, left stock, it would throttle under load, but I never needed to restart it.

The 2011 i5 Mac mini, though, barely made it a day without something going glitchy and needing a restart. It is curious that since becoming a server it hardly needs restarting at all; it just sits there, chugging along.

My M3 Pro is living up to the legend my MacBook Air set, so far, touch wood.
 

henrikhelmers

macrumors regular
Nov 22, 2017
179
276
The premise was that it did a more thorough tidying of files etc. in the system and put the system into better condition than just putting it to sleep did.
I think that restarting once a week is ok. As far as I am aware there is no benefit to powering down over just restarting. Maybe the advice was related to HDDs.
 
  • Like
Reactions: Geoff777

bryo

macrumors regular
Apr 6, 2021
102
169
I basically never restart my Apple silicon Mac. The experience feels so similar to a phone nowadays that I don't really think it's that necessary. But I could see an argument for a weekly reboot if say, you use a lot of third party apps/services that constantly run in the background that may not be very optimized.
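
If you're curious what's actually running in the background on your machine, the stock launchctl tool will list it. A rough sketch (output format varies by macOS version):

    # launchd jobs for the current user, hiding Apple's own:
    launchctl list | grep -v com.apple
    # And the system-level daemons:
    sudo launchctl list | grep -v com.apple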
 
  • Like
Reactions: Geoff777

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
With due respect, can't we, at least for once, get some authoritative answer to this issue? It surfaces too frequently to let it pass. I understand personal experience is what counts most. Still, I'd like to hear some expert opinion---please?
I've never been able to find any authoritative statement on this, nor any authoritative explanation for why a machine that can consistently run without issue when the uptime is kept <= 3 days will consistently run into a problem when the uptime is increased to 2 weeks. So I don't know if such an answer is available.

The closest I got was the following quasi-official opinion—and when I called back to get more info, the new AC+ person couldn't find any record of which AC+ engineer said it, because the original AC+ person didn't add it to their notes:

[I did once ask AppleCare about this, and they eventually escalated it to the engineering team. The AC support person said the engineer recommended rebooting Macs about once per week to ensure stable operation.]

This is certainly not something Apple will ever officially address. There's no way they are going to recommend their machines be occasionally shut down to avoid instability.

Plus it's too system-dependent for anyone to recommend a meaningful interval. Each user needs to determine what shutdown interval is necessary for their needs. At the same time, an explanation of what is going on would be nice, but I've never heard anyone give one. It may be that the causes are multifarious.

In my case, the required interval is every few days. However, as I mentioned in my earlier post, this may be due to interactions between the OS and the various kernel extensions some of my apps install, rather than the OS itself. When Sequoia reaches ≈v3 I'll probably do a clean install where I endeavour to disallow any kernel extensions, and see if that changes things.
 
  • Like
Reactions: Geoff777

Confused-User

macrumors 6502a
Oct 14, 2014
850
984
I've never been able to find any authoritative statement on this, nor any authoritative explanation for why a machine that can consistently run without issue when the uptime is kept <= 3 days will consistently run into a problem when the uptime is increased to 2 weeks. So I don't know if such an answer is available.
Well, I can give you a *correct* answer. It's not the only possible answer, but it likely covers most cases.

First, as you pointed out, if you're running third-party kernel modules, they can be the source of all sorts of issues. But most people are not these days, and over time that practice will end entirely as Apple migrates more and more stuff into user space (and, who knows, maybe virtualized domains at some point in the future, like Xen's "stub domains"). So I'll assume there aren't any of those.

The thing that kills OSes over time - not just MacOS - is generally resource starvation, due to resource leaks. You could just say "memory" and leave it at that, but for the OS there are many more specific possibilities. I/O descriptors, network packet structs (mbuf/sk_buff/whatever your OS calls it), swap space, thread and process descriptors, etc. etc. ad nauseam. Ultimately these all come down to memory too, whether the resource has fixed initial limits, or can grow over time, but there are a lot of them.

There are similar but not identical issues - for example, in the old days, you could easily lock up an OS's network stack by flooding it with bogus SYNs. There the resource starvation wasn't due to a leak, but rather an attack that simply used up all available resources (half-open TCP connections, in this case), of which there were (as shipped by OS manufacturers, way back when) a stupidly low fixed number.
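
You can still see that particular resource on a modern Mac with the stock BSD tools. A sketch (the sysctl names vary between macOS versions):

    # Count TCP connections sitting half-open in SYN_RCVD:
    netstat -an -p tcp | grep -c SYN_RCVD
    # Kernel TCP tunables, including any SYN-related defenses:
    sysctl net.inet.tcp | grep -i syn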

Anyway, most of the time, those leaks are triggered by user software. The more time it's actually running and calling the kernel, the more the leaks will happen. So for some Mac somewhere, it runs software that tickles those bugs often enough that after 3 days it's doing fine, but after 2 weeks... not so much. But there are also *many* Macs where uptime of months is easily achieved and sustainable. It depends on what's running.

Relatedly, you can have kernel bugs that produce a broken pointer into kernel space. Use-after-free, double-free, bad pointer math, bad boundary checks, and other less common possibilities. If these trigger only rarely, or only after a certain amount of resource allocation, that can explain why a system seems to crash in certain ranges of days of uptime.

And lastly, all this stuff is more or less true about some other parts of the OS, like WindowServer. If that gets hosed you're not likely to be able to tell the difference between that and an actual crash/lockup, unless you have remote SSH logins set up and working, so you can figure it out and then tell it to kill and restart WindowServer or whatever else is dead.
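
For what it's worth, if you do have SSH working, the usual move is something like this (killing WindowServer ends the GUI session and logs out whoever is at the console; launchd respawns it automatically):

    # Check whether WindowServer is still alive:
    ps aux | grep "[W]indowServer"
    # Force launchd to respawn it; this logs out the console user:
    sudo killall WindowServer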
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
First, as you pointed out, if you're running third-party kernel modules, they can be the source of all sorts of issues. But most people are not these days, and over time that practice will end entirely as Apple migrates more and more stuff into user space (and, who knows, maybe virtualized domains at some point in the future, like Xen's "stub domains"). So I'll assume there aren't any of those.
My university requires I use an antivirus and encryption-checking software (from Palo Alto Networks), which has installed both a system extension and a kernel extension. And in order to program my Logitech mouse I need Logitech's software, which has installed numerous kernel extensions.

Other extensions, for apps I could probably do without, are a system extension from Malwarebytes (it runs a daily scan, though the extension is not currently loaded), and kernel extensions for Parallels, my old Canon printer, Intel's Power Metrics app, and extensions from HP and Silicon Labs that I don't recognize.

These have accumulated over the years as I've upgraded from computer to computer using Migration Assistant—you can see why I've decided it's time to do a clean install.
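
For anyone who wants to take a similar inventory, the stock tools can list what's loaded. A rough sketch (kextstat is deprecated in favor of kmutil on Big Sur and later, but both still work as of this writing):

    # Loaded kernel extensions that didn't come from Apple:
    kextstat | grep -v com.apple
    # The newer equivalent:
    kmutil showloaded | grep -v com.apple
    # System extensions, the user-space replacements for kexts:
    systemextensionsctl list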

Is it possible that Sequoia will force all these apps to operate in a less system-invasive way?

The thing that kills OSes over time - not just MacOS - is generally resource starvation, due to resource leaks. You could just say "memory" and leave it at that, but for the OS there are many more specific possibilities. I/O descriptors, network packet structs (mbuf/sk_buff/whatever your OS calls it), swap space, thread and process descriptors, etc. etc. ad nauseam. Ultimately these all come down to memory too, whether the resource has fixed initial limits, or can grow over time, but there are a lot of them.

There are similar but not identical issues - for example, in the old days, you could easily lock up an OS's network stack by flooding it with bogus SYNs. There the resource starvation wasn't due to a leak, but rather an attack that simply used up all available resources (half-open TCP connections, in this case), of which there were (as shipped by OS manufacturers, way back when) a stupidly low fixed number.

Anyway, most of the time, those leaks are triggered by user software. The more time it's actually running and calling the kernel, the more the leaks will happen. So for some Mac somewhere, it runs software that tickles those bugs often enough that after 3 days it's doing fine, but after 2 weeks... not so much. But there are also *many* Macs where uptime of months is easily achieved and sustainable. It depends on what's running.

Relatedly, you can have kernel bugs that produce a broken pointer into kernel space. Use-after-free, double-free, bad pointer math, bad boundary checks, and other less common possibilities. If these trigger only rarely, or only after a certain amount of resource allocation, that can explain why a system seems to crash in certain ranges of days of uptime.

And lastly, all this stuff is more or less true about some other parts of the OS, like WindowServer. If that gets hosed you're not likely to be able to tell the difference between that and an actual crash/lockup, unless you have remote SSH logins set up and working, so you can figure it out and then tell it to kill and restart WindowServer or whatever else is dead.
Can the resource starvation occur within any level of memory? E.g., could you have plenty of RAM, but run into issues because there is a leak that's using up, say, L2 cache?

And is it more fine-grained than the total amount of available memory of the type the OS is running out of? E.g., could you have a part of the OS that is only allocated a portion of the available RAM such that, when that runs out due to a leak, you can have issues even if the total amount of free RAM remains large?

I ask because if it's just about running out of total RAM, that wouldn't apply to me, since I've got 128 GB, and I run into the instability issues even when I'm only doing routine office work and my "Swap Used" remains at 0.

I note that you are writing with confidence about this, suggesting this is generally known within the CS field. If so, can you point to any published literature that discusses this?
 
Last edited:
  • Like
Reactions: Geoff777

apostolosdt

macrumors 6502
Dec 29, 2021
322
284
Well, I can give you a *correct* answer. It's not the only possible answer, but it likely covers most cases. [...] The thing that kills OSes over time - not just MacOS - is generally resource starvation, due to resource leaks. You could just say "memory" and leave it at that, but for the OS there are many more specific possibilities. I/O descriptors, network packet structs (mbuf/sk_buff/whatever your OS calls it), swap space, thread and process descriptors, etc. etc. ad nauseam. Ultimately these all come down to memory too, whether the resource has fixed initial limits, or can grow over time, but there are a lot of them. [...]

And lastly, all this stuff is more or less true about some other parts of the OS, like WindowServer. If that gets hosed you're not likely to be able to tell the difference between that and an actual crash/lockup, unless you have remote SSH logins set up and working, so you can figure it out and then tell it to kill and restart WindowServer or whatever else is dead.
Beyond my level as regards details, but overall readable; thank you, appreciated.

If one looks into the preceding posts, one concludes that rebooting the Mac now and then is beneficial, as it makes the OS run more bug-free. OK, but one implied issue in the OP was whether we should shut down/power off the computer, not merely reboot it. To my eyes, that question hasn't been answered yet.
 
  • Like
Reactions: Geoff777

Confused-User

macrumors 6502a
Oct 14, 2014
850
984
Can the resource starvation occur within any level of memory? E.g., could you have plenty of RAM, but run into issues because there is a leak that's using up, say, L2 cache?

And is it more fine-grained than the total amount of available memory of the type the OS is running out of? E.g., could you have a part of the OS that is only allocated a portion of the available RAM such that, when that runs out due to a leak, you can have issues even if the total amount of free RAM remains large?

I ask because if it's just about running out of total RAM, that wouldn't apply to me, since I've got 128 GB, and I run into the instability issues even when I'm only doing routine office work and my "Swap Used" remains at 0.

I note that you are writing with confidence about this, suggesting this is generally known within the CS field. If so, can you point to any published literature that discusses this?
It's not about "levels of memory". The operation of the various CPU memory caches (L1, L2, SLC) is generally transparent, though CPUs will generally allow you to interact with them in various ways if you choose to (like prefetching data from RAM, hinting that some memory should not be cached, etc.). Most code won't. You can't run out of cache, as data can always be evicted - things just get slower the more they get thrashed.

It's about software resources, like the kinds I mentioned previously. (mbufs/sk_buffs, various types of descriptors, etc.) They are all kernel data structures that live in memory, and the limits on those resources may in part depend on available memory, but they often have very strict or constrained limits regardless of overall system memory. Often that's because by design they can't get used up - but bugs, or unanticipated problems (like attacks), break those designs. Sometimes resources are not guaranteed to be available and you have to code for that possibility, but those code paths aren't sufficiently tested in all cases. There are many ways these things can go wrong, but what many of them have in common is creeping effects that grow over time. If you're leaking mbufs, you won't notice a problem... until you do, when your network stack totally locks up. If you're leaking FDs, your fd table (or equivalent) will eventually fill up and suddenly you won't be able to open any more files - especially problematic if your error handler needs to open a file to log the issue (that particular mistake is unlikely, but illustrative).
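
If you want to watch some of these on macOS specifically, the stock tools expose a fair amount. A sketch, not an exhaustive list:

    # Kernel mbuf usage; look for "requests for memory denied":
    netstat -m
    # Open-file counts per process, biggest first (can be slow):
    sudo lsof | awk '{print $1}' | sort | uniq -c | sort -rn | head
    # System-wide and per-process file descriptor ceilings:
    sysctl kern.maxfiles kern.maxfilesperproc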

All this is very old news in CS. I mean, it's barely a topic in CS; it's more like a working system administrator's bread and butter.

EDIT: Sorry, offhand I don't have any cites but you can probably find *hundreds* of articles related to this on lwn.net alone, if you spend enough time looking. For more general (non-Linux-specific) info you could try googling for something like "kernel resource exhaustion".
 
Last edited:

Confused-User

macrumors 6502a
Oct 14, 2014
850
984
Beyond my level as regards details, but overall readable; thank you, appreciated.

If one looks into the preceding posts, one concludes that rebooting the Mac now and then is beneficial, as it makes the OS run more bug-free. OK, but one implied issue in the OP was whether we should shut down/power off the computer, not merely reboot it. To my eyes, that question hasn't been answered yet.
So... The answer to that is *almost* clear-cut... because there is *almost* no difference between the two.

There is some:
- Hardware devices like drives and PCIe add-in cards may initialize differently on power-on, though in most cases they don't, or not meaningfully.
- Memory may be tested, and in any case will contain garbage, on power-on. It may retain contents over a reboot, though the OS should ("should"!) zero pages before giving them to user processes.
- preboot code is fairly mysterious everywhere, and I know very very little about iBoot specifically. But really, other than device states, what can this do that's felt post-boot, other than touching memory (which, again, should be zeroed by the OS)?
- if you've been bootkitted (there's no evidence that such even exists for Macs, so far), there could be a difference, but why would there be? A malicious actor would want the same control whether you'd cold booted or warm booted.

The only other difference is that an off computer is a cold computer. In the old days, powering off for a while could cool down the circuit board enough to unwarp it, improving contact at bad solder joints. At least for a little while after powering on again. (Yes, I have an OG Apple II like that packed away in a box somewhere.) Eventually you'd be back to status quo ante as the system heated up. But if that's your issue you'll need to service your device anyway.

To summarize - there is basically no difference. While it's conceivable that there might be, in a few cases, they're vanishingly rare. You're not likely to see one. If you have attached devices, your odds might go up for seeing a difference with those devices.
 
  • Like
Reactions: Geoff777 and jchap

jchap

macrumors 6502a
Sep 25, 2009
636
1,164
I'm "tech support" for an elderly relative, and took a look at her MacBook Air's uptime out of curiousity. She lets it sit at a desk and just folds the screen down when she's not using it. Her uptime was like 18 months and I think it would've been a whole lot longer if I hadn't installed a software update on a previous visit.
Sounds familiar. My mother did the same with her old MacBook white model from 2010.

Actually, she would never quit apps she'd opened but wasn't using, either—I remember that she was having some instability problems with Mail some time back. I found that she had not quit Mail or shut down her Mac for months. There were thousands of unread e-mails in her inbox, as I recall. Once we sorted them all properly into folders, set some automated Mail rules up for her to weed out the junk and marketing stuff, massively thinned out her inbox by deleting the rest and actually quit the Mail app (ha!), everything worked fine afterwards.

Guess that some people have better things to do in life than maintain their inboxes, or turn off their computers.
 
  • Like
Reactions: Geoff777

jchap

macrumors 6502a
Sep 25, 2009
636
1,164
So... The answer to that is *almost* clear-cut... because there is *almost* no difference between the two.

There is some:
- Hardware devices like drives and PCIe add-in cards may initialize differently on power-on, though in most cases they don't, or not meaningfully.
I've noticed that some third-party Bluetooth devices may not connect automatically upon boot from full shutdown, but those same devices may reconnect upon "warm" restart from a power-on state. I'm not sure why this is the case, whether it's a failure of the respective Bluetooth devices to adhere to certain specs, a driver problem, a limitation of Bluetooth itself, or a problem with macOS and how it handles Bluetooth device connections on start/restart.
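
One workaround that's sometimes suggested, for what it's worth: restart just the Bluetooth daemon instead of the whole machine. launchd relaunches it automatically, and stubborn devices will often reconnect afterwards (the daemon has been called bluetoothd for some years now, though that could change):

    # Kill the Bluetooth daemon; launchd respawns it:
    sudo pkill bluetoothd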
 
  • Like
Reactions: Geoff777

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
EDIT: Sorry, offhand I don't have any cites but you can probably find *hundreds* of articles related to this on lwn.net alone, if you spend enough time looking. For more general (non-Linux-specific) info you could try googling for something like "kernel resource exhaustion".
Thanks, with that search term I was able to find this article by Howard Oakley, which makes it more concrete for me:

Oakley mentions if you have a kernel panic you can check your logs for a "kernel zone_map_exhaustion" message. But what about more routine wonky behavior, which doesn't involve a kernel panic? If it's indeed due to insufficient zone memory, would that also show up as a "kernel zone_map_exhaustion" message?

I'm specifically wondering if the routine, and less catastrophic, wonky behavior I typically see with extended uptime could be due to 3rd-party apps running out of zone memory instead of the OS, and whether that would generate a different message. Alternately, since even app zone memory is allocated by the OS, would zone memory insufficiency, regardless of whether it's memory for an app or memory for the OS, always generate a "kernel zone_map_exhaustion" message?
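
I assume one could at least search for that message in the unified log with something like the following, though I'm not sure whether panic text lands there or only in the panic reports under /Library/Logs/DiagnosticReports:

    # Search the last day of the unified log for zone-exhaustion messages:
    log show --last 24h --predicate 'eventMessage CONTAINS "zone_map_exhaustion"'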

Oakley also writes:
"You can inspect the kalloc.n zones with the command sudo zprint kalloc which should make it clear which are growing to a dangerous size. zprint also supports options which allow you to monitor changing allocations, which could help."

There is this option:
-s: sorts the zones, showing the zone wasting the most memory first.

But I'm not sure what that means. I would think it would be more interesting to sort them by the zone that is closest to its maximum, and/or approaching its maximum most quickly.

It would be cool if there were some third-party utility that could monitor and summarize this for us—maybe something akin to Activity Monitor's "memory pressure" indicator, but instead displaying a graphic that communicates when the "zone memory pressure" for any zone is getting excessive.
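
Until something like that exists, I suppose a crude approximation is possible with zprint itself plus standard shell tools. A sketch (zprint's output format varies across macOS versions, so the diff is only a rough signal):

    # Snapshot the kalloc zones once a minute and diff successive
    # snapshots to spot zones that only ever grow:
    while true; do
      sudo zprint kalloc > /tmp/zones.new
      diff /tmp/zones.old /tmp/zones.new 2>/dev/null | tail -n 20
      mv /tmp/zones.new /tmp/zones.old
      sleep 60
    done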

Indeed, why doesn't MacOS have a built-in ability to monitor this and restart any programs whose zone memory use threatens system stability, clearing the zone in the process? In cases where the restart could affect the user, it could simply flash a warning saying "MacOS needs to restart program X because of a memory leak. Please save your work and press OK when you are ready."
 
Last edited:
  • Like
Reactions: Geoff777

Confused-User

macrumors 6502a
Oct 14, 2014
850
984
Thanks, with that search term I was able to find this article by Howard Oakley, which makes it more concrete for me:

Oakley mentions if you have a kernel panic you can check your logs for a "kernel zone_map_exhaustion" message. But what about more routine wonky behavior, which doesn't involve a kernel panic? If it's indeed due to insufficient zone memory, would that also show up as a "kernel zone_map_exhaustion" message?

I'm specifically wondering if the routine, and less catastrophic, wonky behavior I typically see could be due to 3rd-party apps running out of zone memory instead of the OS, and whether that would generate a different message. Alternately, since even app zone memory is allocated by the OS, would zone memory insufficiency, regardless of whether it's memory for an app or memory for the OS, always generate a "kernel zone_map_exhaustion" message?
App bugs, 3rd-party or otherwise, cannot cause zone alloc failures. That is exclusively a kernel issue, as only the kernel (and kernel extensions) can kalloc(). Of course, apps (even without bugs) can trigger kernel bugs that leak memory, eventually leading to zone failures.

Your questions show that you don't know enough about kernel isolation from user processes to have a good conversation about this yet. And I'm sorry I can't give you good pointers to resources on the topic. I will note that high-quality info on Linux or other Unix-likes is likely to carry over to MacOS in general outline, if not in specifics. So there is quite a lot of material out there.

The answer to your last question, however, is a straight "no". There are many types of resource exhaustion, as I have tried to say a couple of times now. The followup question is likely to be "well, what kinds of log messages will I see then?", and the answer is, "A whole bunch of different possibilities". There is nothing simple, no short list of errors that I can point you to and say "look for this".

EDIT to add: You haven't shown evidence that you're actually having a kernel issue. There are other components, as I've previously mentioned, that are not part of the kernel and can cause all sorts of weirdness. WindowServer and Dock.app come immediately to mind, but there are more.
 
Last edited:
  • Like
Reactions: Geoff777

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
Your questions show that you don't know enough about kernel isolation from user processes to have a good conversation about this yet.
That's certainly possible!

Alternately, it might also be the case that you don't know enough about the art of teaching to give a technical explanation to someone who doesn't know as much as you.

I see this all the time on Chemistry StackExchange: Students will post questions about thermodynamics, and the other experts will close the question because they think the student doesn't know enough to be asking it. Then I'll take a look at the question and think "wait a sec, I think this will provide the insight the student needs", ask them to reopen the question, and provide an explanation that gets the poster to say 'thanks, that's exactly what I was looking for!'.

So this may be my limitation, it may be your limitation, or it may be both of ours.
 
Last edited:
  • Like
Reactions: Geoff777 and jchap

Geoff777

macrumors regular
Original poster
Jun 17, 2020
228
144
Thanks very much to all of you for contributing, there's some very helpful information there.
Bottom line for me is that it's not necessary to shut down regularly but it doesn't do any harm either.
I'm an old dog, but can learn new tricks, so I'll only shut down when going away for a while or when things start to look a bit flaky!
 
  • Like
Reactions: Bungaree.Chubbins

theluggage

macrumors G3
Jul 29, 2011
8,009
8,443
The only other difference is that an off computer is a cold computer. In the old days, powering off for a while could cool down the circuit board enough to unwarp it, improving contact at bad solder joints. At least for a little while after powering on again.

Yes, but that's just delaying the inevitable - the flipside is that such heating/cooling cycles can cause bad solder joints in the first place.

I suppose I was counting on them lasting longer through lower thermal fatigue,

Thermal fatigue is certainly a thing - I believe that's what caused the infamous 2011 MBP GPU failures - but I'm skeptical about thermal fatigue from powering off at night being significant with 21st century gear...

An old school, good old days computer was either on, with everything running hot, or off and cooling to room temperature. A modern computer is loaded with power management features that turn off or slow down components that aren't being used... so your processor might be running at 30°C while you are text editing and then shoot up to near-100°C when you start rendering graphics, then down again... if the hardware can cope with those sorts of thermal stresses (which are probably what did for the 2011 GPUs) then cycling from room temperature to idle should not bother it! Even the few remaining moving parts - fans, mechanical HDs if you still have them - will be constantly starting and stopping.

Of course, that also reduces the need to shut down to save power - but if you really aren't using the computer overnight then even 12 hours at a few Watts - at current energy prices - is money that you don't need to spend, even if it's not going to save the world...

The other factor is that now everything is SSD, so gone are the days of 2 minutes of disc thrashing every time you boot, and the time saving of using sleep instead is pretty minimal. For my desktop system, the bottleneck is the displays waking up and being recognised (and throwing the occasional hissy fit), not the few seconds it takes MacOS to boot...
 
  • Like
Reactions: Geoff777

Confused-User

macrumors 6502a
Oct 14, 2014
850
984
That's certainly possible!

Alternately, it might also be the case that you don't know enough about the art of teaching to give a technical explanation to someone who doesn't know as much as you.

I see this all the time on Chemistry StackExchange: Students will post questions about thermodynamics, and the other experts will close the question because they think the student doesn't know enough to be asking it. Then I'll take a look at the question and think "wait a sec, I think this will provide the insight the student needs", ask them to reopen the question, and provide an explanation that gets the poster to say 'thanks, that's exactly what I was looking for!'.

So this may be my limitation, it may be your limitation, or it may be both of ours.
Well, it's correct that I didn't make enough of an effort to teach you enough about this to have the conversation. I am unwilling to do so - it would be a significant effort and I didn't sign up for that. I'm sure (from seeing some of your other posts here) that you're more than capable of figuring it all out if you do enough reading. I invite you to do so, and in the meantime, I've offered some brief guidance.
 
  • Like
Reactions: Geoff777