I can now confirm that the trim messages showing trim counts do NOT appear when using an older USB 2.0 SATA adapter (which also doesn't cause any filesystem corruption).
Instead, messages like the following show up every time the disk is plugged in and mounted:
2022-11-01 07:33:44.511273-0400 0x6673a Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk3 scan took 0.004934 s (no trims)
2022-11-01 07:33:49.257875-0400 0x66793 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk3 scan took 0.005193 s (no trims)
2022-11-01 07:39:14.650951-0400 0x67709 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk3 scan took 0.005184 s (no trims)
2022-11-01 07:39:25.747366-0400 0x67785 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk3 scan took 0.079850 s (no trims)
2022-11-01 07:41:39.043031-0400 0x67e80 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk3 scan took 0.005413 s (no trims)
2022-11-01 07:42:03.210066-0400 0x67fa4 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk3 scan took 0.077970 s (no trims)
2022-11-01 07:43:00.660717-0400 0x68351 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk3 scan took 0.004405 s (no trims)
2022-11-01 07:43:18.837811-0400 0x68413 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk3 scan took 0.077847 s (no trims)
2022-11-01 07:45:35.271865-0400 0x68b3d Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk3 scan took 0.037514 s (no trims)
2022-11-01 07:45:53.127851-0400 0x68c6f Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk3 scan took 0.492142 s (no trims)
2022-11-01 07:46:52.751895-0400 0x69102 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk3 scan took 0.038380 s (no trims)
2022-11-01 07:47:00.524165-0400 0x6918a Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk3 scan took 0.493337 s (no trims)
Whereas when the same disk is plugged in with a USB 3.0 adapter (whether or not that adapter causes the corruption, which depends on its chipset), the following shows up immediately after mounting the filesystem:
2022-11-01 07:48:44.382547-0400 0x6971a Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3154: disk3 scan took 1.663866 s, trims took 1.578495 s
2022-11-01 07:48:44.382559-0400 0x6971a Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3164: disk3 488032117 blocks trimmed in 157 extents (10054 us/trim, 99 trims/s)
2022-11-01 07:48:44.382564-0400 0x6971a Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3167: disk3 trim distribution 1:12 2+:11 4+:4 16+:4 64+:0 256+:126
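As a sanity check, the per-trim numbers in that second log line can be reproduced from the first line's total trim time (this is just arithmetic on the logged values, not anything pulled from the APFS source):

```python
# Cross-check the reported trim stats: 157 extents trimmed in 1.578495 s
# should yield the logged "10054 us/trim" and "99 trims/s".
trims = 157            # extents trimmed, from the log
trim_time_s = 1.578495 # "trims took" duration, from the log

us_per_trim = trim_time_s / trims * 1e6
trims_per_s = trims / trim_time_s

print(int(us_per_trim))  # 10054
print(int(trims_per_s))  # 99
```

So the kernel is reporting one UNMAP per free-space extent, averaged over the whole scan.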
So the problem seems to be related to trimming. As of Monterey, trimming is apparently enabled for all external USB drives whose adapters support it, which means any UASP-capable USB adapter or external drive can potentially show this problem if anything goes wrong with trimming. "trimforce disable" does not prevent trimming from occurring on these disks, and some USB adapters are not handling the trimming correctly.
I'm not yet sure why this would only occur with drives larger than 2TB. On one of my adapters, that makes some sense: my Sabrent/JMicron (152d:1561, v2.04) presents a 4K sector size instead of 512 bytes for the 4TB drive, but not for the 2TB drive, so some bug is being exposed by 4K sectors. It's not clear whether that's a bug in macOS (less likely, since the internal drives are all 4K as well) or in the adapter, but more likely something is going wrong in the adapter. Perhaps it's passing through the UASP UNMAP command without translating the LBAs from 4K to 512-byte sectors.
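To illustrate what that missing translation would do, here's a hypothetical sketch (not taken from any real adapter firmware) of the difference between rescaling UNMAP ranges from 4K-based to 512-byte-based LBAs versus forwarding them unchanged:

```python
# Hypothetical model of a USB-SATA bridge that presents 4K logical sectors
# to the host but whose backing SATA drive uses 512-byte LBAs.
SCALE = 4096 // 512  # 8 drive sectors per host sector

def translate_unmap(lba_4k: int, count_4k: int) -> tuple[int, int]:
    """Correct behavior: rescale both the starting LBA and the length."""
    return lba_4k * SCALE, count_4k * SCALE

def buggy_passthrough(lba_4k: int, count_4k: int) -> tuple[int, int]:
    """Suspected bug: forward the 4K-based values to the drive unchanged."""
    return lba_4k, count_4k

# Host asks to trim 1 GiB of free space starting 3 TiB into the disk.
lba_4k = (3 * 2**40) // 4096
count_4k = (1 * 2**30) // 4096

print(translate_unmap(lba_4k, count_4k))    # trims the intended region
print(buggy_passthrough(lba_4k, count_4k))  # lands 8x closer to LBA 0,
                                            # trimming live data instead
```

In the buggy case the trim would land at the 384 GiB mark rather than 3 TiB, wiping allocated blocks, which would look exactly like random filesystem corruption after mounting.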
The other failing adapter (Sabrent/VLI, 2109:0715, v0.00) still presents 512-byte sectors, yet it also fails on the 4TB drive. It doesn't fail immediately during format, but a brief speed test afterwards writes enough data to cause the failure (or perhaps it had already failed right after the format; I need to pin down the moment of failure more carefully).
But trims have occurred by the time the format completes, so the corruption most likely started at that point. The filesystem trims again on every remount, so a good disk effectively "goes bad" after being mounted through this adapter.
Since there's again the 2TB vs. 4TB difference, where it only fails on drives larger than 2TB, I'm wondering if the adapter has a bug passing the higher-order bits of the LBA through to the drive's TRIM command.
(Either it gets the LBA wrong when translating UASP UNMAP to SATA TRIM, or it's doing a UASP SCSI passthrough and not passing those bits through, or the drive itself (a 4TB MX500) has some problem when the command is passed through that way.)
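The 2TB boundary fits that theory neatly: with 512-byte sectors, LBAs first exceed 32 bits exactly at the 2 TiB mark. Here's the back-of-the-envelope arithmetic (my speculation about the failure mode, not a confirmed adapter bug):

```python
SECTOR = 512

# The highest address reachable with a 32-bit LBA and 512-byte sectors:
print(2**32 * SECTOR)  # 2199023255552 bytes = 2 TiB

def truncate_32(lba: int) -> int:
    """Model an adapter that drops LBA bits above bit 31."""
    return lba & 0xFFFFFFFF

# A free-space extent 3 TiB into a 4TB drive needs a 33-bit LBA...
lba_3tib = (3 * 2**40) // SECTOR
# ...and truncation wraps it down to the 1 TiB mark, so the trim
# would erase live data in the lower half of the disk.
print(truncate_32(lba_3tib) * SECTOR)  # 1099511627776 bytes = 1 TiB
```

A drive of exactly 2TB never generates LBAs above 32 bits, so such a bug would be completely invisible there, which matches what I'm seeing.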
For the record, I'm seeing working trims in the logs and no corruption so far with the same 4TB SSD on a different Sabrent/JMicron adapter (152d:0576, v12.14), so it's definitely possible to pass the UASP UNMAP to the drive correctly and have it work. I'm not sure whether that adapter converts it to a SATA TRIM or passes it through as a SCSI UNMAP. If anyone has a SATA protocol analyzer and could check, that would be good knowledge to have. Likewise, if anyone has a USB 3.0 protocol analyzer, it would be good to know whether the UASP UNMAP looks valid in the 4K-sector case, although I suspect it does, and that the adapter or the SSD is what's botching the resulting TRIM/UNMAP.
To summarize: I think the root of this problem is that Monterey has enabled TRIM (via UASP UNMAP) for APFS universally across all external USB drives and adapters that allow it, and bugs are getting exposed as a result. Those bugs are most likely in the adapters or drives, but they were never exposed under Big Sur, even with "trimforce enable" turned on. Unfortunately, people are finding this out the hard way when they upgrade: an adapter or external disk that seemed to work just fine before ends up corrupted, and they completely lose a 4TB or larger filesystem.