I've never been able to find any authoritative statement on this, nor any authoritative explanation for why a machine that can consistently run without issue when the uptime is kept <= 3 days will consistently run into a problem when the uptime is increased to 2 weeks. So I don't know if such an answer is available.
Well, I can give you a *correct* answer. It's not the only possible answer, but it likely covers most cases.
First, as you pointed out, if you're running third-party kernel modules, they can be the source of all sorts of issues. But most people are not these days, and over time that practice will end entirely as Apple migrates more and more stuff into user space (and, who knows, maybe virtualized domains at some point in the future, like Xen's "stub domains"). So I'll assume there aren't any of those.
The thing that kills OSes over time - not just macOS - is generally resource starvation, due to resource leaks. You could just say "memory" and leave it at that, but for the OS there are many more specific possibilities: I/O descriptors, network packet structs (mbuf/sk_buff/whatever your OS calls it), swap space, thread and process descriptors, etc., ad nauseam. Ultimately these all come down to memory too, whether the resource has fixed initial limits or can grow over time, but there are a lot of them.
There are similar but not identical issues. For example, in the old days you could easily lock up an OS's network stack by flooding it with bogus SYNs. There the resource starvation wasn't due to a leak, but rather an attack that simply used up all available resources (half-open TCP connections, in this case), of which there were (as shipped by OS manufacturers, way back when) a stupidly low fixed number.
Anyway, most of the time, those leaks are triggered by user software. The more time it's actually running and calling the kernel, the more the leaks will happen. So for some Mac somewhere, it runs software that tickles those bugs often enough that after 3 days it's doing fine, but after 2 weeks... not so much. But there are also *many* Macs where uptime of months is easily achieved and sustainable. It depends on what's running.
Relatedly, you can have kernel bugs that produce a broken pointer into kernel space: use-after-free, double-free, bad pointer math, bad boundary checks, and other less common possibilities. If these trigger only rarely, or only after a certain amount of resource allocation, that can explain why a system tends to crash only within a certain range of uptime.
And lastly, all this stuff is more or less true of other parts of the OS, like the WindowServer. If that gets hosed, you're not likely to be able to tell the difference between that and an actual crash/lockup - unless you have remote SSH logins set up and working, so you can get in, figure out what happened, and kill and restart the WindowServer (or whatever else is dead).