I just hit this issue (which I unfortunately did not know about previously) with a 4TB external Crucial MX500 SSD connected via a Sabrent USB-to-SATA adapter, on a 2015 iMac after an upgrade from Big Sur to Monterey. The drive must have briefly worked after the restart, since I saw it showing up as mounted, but it showed no files or directories anymore. (The machine had been sitting for a while after the upgrade.) As soon as I saw the empty directory, I unplugged the drive to keep it from persisting the empty state back to disk, hoping it would mount normally after reconnecting; instead, the APFS partition could no longer be read at all. Examining the raw data, I found that the first 105 sectors (at the 4K sector size) of that partition, which is the 2nd partition on the disk, are now all zero.
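In case anyone wants to check their own drive for the same thing, here's a rough Python sketch of that kind of raw check. The /dev/rdisk4s2 device name is just an example (find yours with diskutil list), and it needs sudo; it only reads, but double-check the device name anyway:

```python
# Rough sketch: count how many of the first 105 4K sectors of a partition
# are all zero. "/dev/rdisk4s2" is an example device name -- find yours
# with `diskutil list` and run with sudo. This only reads the device.
SECTOR = 4096
COUNT = 105

with open("/dev/rdisk4s2", "rb") as dev:
    data = dev.read(SECTOR * COUNT)

zeroed = sum(1 for i in range(COUNT) if not any(data[i * SECTOR:(i + 1) * SECTOR]))
print(f"{zeroed} of {COUNT} sectors are all zero")
```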
I have a 2nd external drive, also APFS and also an MX500 but only 2TB, hooked up via a similar Sabrent USB adapter, and it seems to be okay.
One big difference between the SSDs (besides 4TB vs. 2TB) is that the 4TB SSD was using a 4K "native" sector size with that Sabrent controller, presumably emulated by joining eight 512-byte sectors and presenting them as one 4K sector over the USB interface. I had already run into issues with that in the past: it meant I couldn't move that SSD to a different USB controller (without reformatting) unless I rewrote the partition table to convert the sector offsets and sizes by the factor of 8. I discovered this last year, because I had originally formatted and populated the drive from a different USB controller, so its partition table was in 512b sectors. When the drive seemed unreadable on the Sabrent and I noticed it was showing up as a 4K device, I rewrote the partition table with the correct offsets and sizes for 4K instead of 512b, and it had been working fine ever since under Big Sur.
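For anyone who hits the same thing: as long as the partitions are 4K-aligned (which macOS partitioning normally guarantees), the conversion is just dividing every LBA by 8. A sketch of the arithmetic, with made-up partition values:

```python
# Sketch of converting GPT partition entries from 512-byte to 4K sector
# units. The LBA values are hypothetical, just to show the arithmetic.
OLD, NEW = 512, 4096
SCALE = NEW // OLD  # 8

# (first_lba, last_lba) inclusive, in 512-byte sectors
partitions_512 = [
    (40, 409639),          # EFI system partition
    (409640, 7814035455),  # APFS partition (example end LBA for a 4TB disk)
]

for first, last in partitions_512:
    size = last - first + 1
    # Lossless only if start and size are multiples of 8 (i.e. 4K-aligned).
    assert first % SCALE == 0 and size % SCALE == 0, "partition not 4K-aligned"
    new_first = first // SCALE
    new_last = new_first + size // SCALE - 1
    print(f"512b LBAs {first}-{last}  ->  4K LBAs {new_first}-{new_last}")
```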
I can see from the system logs that a bunch of TRIMs did get issued to the drives. (I believe I had trimforce enabled prior to the upgrade, and presumably it stayed enabled.) There were some TRIM errors, though; see the logs at the end of this post.
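If anyone wants to pull similar lines from their own machine, something like the sketch below should work; log show and its --predicate/--last flags are standard macOS, but the exact predicate and the one-day window here are just examples:

```python
# Sketch: pull apfs TRIM-related lines from the macOS unified log via
# `log show` (a standard macOS tool). The 1-day window and the predicate
# are just examples; widen them as needed.
import subprocess

cmd = [
    "log", "show", "--last", "1d",
    "--predicate",
    'process == "kernel" AND '
    '(eventMessage CONTAINS "spaceman_scan_free_blocks" '
    'OR eventMessage CONTAINS[c] "trim")',
]
print(subprocess.run(cmd, capture_output=True, text=True).stdout)
```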
So I'm wondering whether Monterey now has some problem with TRIM on external SSDs that present a 4K sector size (or, more specifically, ones that may really be 512b natively but whose controller presents them as 4K).
If the TRIM commands are ending up at the wrong offset or with the wrong length, that might explain why I'm seeing a large zeroed region right at the start of the APFS partition.
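To make that concrete (and this is pure speculation on my part, not anything I've confirmed about Apple's driver stack), here's the kind of unit mismatch I mean, with made-up numbers: the same extent interpreted in 512b units instead of 4K units lands at 1/8 the byte offset with 1/8 the length, and the reverse direction lands at 8x:

```python
# Pure speculation about the failure mode, with made-up numbers: what
# happens if a TRIM extent computed for one sector size is interpreted
# under the other. Not based on Apple's actual code.
SECTOR_512, SECTOR_4K = 512, 4096

extent_lba, extent_len = 51205 + 1000, 105  # start LBA and length, 4K units

def byte_range(lba, length, sector):
    start = lba * sector
    return (start, start + length * sector)

print("intended (4K units):  bytes %d..%d" % byte_range(extent_lba, extent_len, SECTOR_4K))
print("misfired (512 units): bytes %d..%d" % byte_range(extent_lba, extent_len, SECTOR_512))
```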
I was using encrypted APFS, so I'm not sure there's any way to recover the rest of the data once the start of the partition is zeroed. I also don't know whether zeroing happened at random spots throughout the rest of the partition, if it really was trimming the wrong places.
TRIM logs:
2022-10-24 18:14:06.034369-0400 0x5552 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk4 scan took 0.002236 s (no trims)
2022-10-24 18:14:06.034379-0400 0x5552 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3154: disk4 scan took 0.000006 s, trims took 0.000000 s
2022-10-24 18:14:07.092788-0400 0x561c Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk5 scan took 0.000679 s (no trims)
2022-10-24 18:14:07.092797-0400 0x561c Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3154: disk5 scan took 0.000005 s, trims took 0.000000 s
2022-10-24 18:14:07.147691-0400 0x55c1 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk4 scan took 0.779199 s (no trims)
2022-10-24 18:14:12.921254-0400 0x55c1 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3154: disk4 scan took 5.773553 s, trims took 4.995204 s
2022-10-24 18:14:12.921264-0400 0x55c1 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3164: disk4 60485175 blocks trimmed in 150672 extents (33 us/trim, 30163 trims/s)
2022-10-24 18:14:12.921268-0400 0x55c1 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3167: disk4 trim distribution 1:58638 2+:34318 4+:27923 16+:11442 64+:12070 256+:6281
2022-10-24 18:14:15.704651-0400 0x5a99 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk5 scan took 0.953756 s (no trims)
2022-10-24 18:15:02.820673-0400 0x5a99 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3154: disk5 scan took 47.116016 s, trims took 46.245133 s
2022-10-24 18:15:02.820678-0400 0x5a99 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3164: disk5 668705095 blocks trimmed in 588 extents (78648 us/trim, 12 trims/s)
2022-10-24 18:15:02.820680-0400 0x5a99 Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3167: disk5 trim distribution 1:378 2+:42 4+:6 16+:0 64+:0 256+:162
2022-10-24 18:15:02.820682-0400 0x5a99 Default 0x0 0 0 kernel: (apfs) nx_mount_trim_thread:874: disk5 *** error trim'ing free blocks: 92
2022-10-24 20:23:50.631094-0400 0x1a3ce Default 0x0 0 0 kernel: (apfs) _vnode_dev_unmap_flush_and_unlock:1624: disk5 trim'ing 7 blocks from trim_list failed w/: 6 (entry 0:0 ; 0:0)
2022-10-24 20:23:50.631118-0400 0x1a3ce Default 0x0 0 0 kernel: (apfs) _vnode_dev_unmap_flush_and_unlock:1624: disk5 trim'ing 7 blocks from trim_list failed w/: 6 (entry 0:0 ; 0:0)
2022-10-24 20:25:14.474095-0400 0x1a79c Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3172: disk3 scan took 0.722715 s (no trims)
2022-10-24 20:25:21.077005-0400 0x1a79c Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3154: disk3 scan took 6.602901 s, trims took 5.778378 s
2022-10-24 20:25:21.077012-0400 0x1a79c Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3164: disk3 60498708 blocks trimmed in 149263 extents (38 us/trim, 25831 trims/s)
2022-10-24 20:25:21.077014-0400 0x1a79c Default 0x0 0 0 kernel: (apfs) spaceman_scan_free_blocks:3167: disk3 trim distribution 1:57710 2+:33877 4+:27849 16+:11443 64+:12083 256+:6301