Posters here seem to be stuck on the old figure that fingerprint recognition is 1:50,000. That number is based on the capacitive 2D, 500 DPI fingerprint sensor Apple introduced in 2013.
Current 3D ultrasonic and optical scanners look beyond the fingerprint surface and into the blood capillaries beneath it to prevent spoofing, and the DPI has doubled.
As I just noted, adding a third dimension improves accuracy and reduces false positives. That's true whether we're assessing faces or fingerprints.
It's not just that the DPI has doubled; it's how that greater resolution is used. How many measurements are taken, not just how precise each one is. You could make 500 measurements with a 500 DPI scanner, or 250 measurements with a 1,000 DPI scanner. Arguably, 500 points of comparison would be superior to 250 points of comparison, even if those 250 points are measured at greater resolution. Of course, ongoing increases in computing power make it likely that a higher-resolution scan would also come with more points of comparison, but I think you get my point.
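To make that "more points vs. finer points" trade-off concrete, here's a rough back-of-the-envelope sketch in Python. It's a toy model, not how any real matcher works: it pretends each comparison point is independent and that a random impostor matches a single point with some made-up probability (higher when the point is measured coarsely). The point counts and probabilities are all invented for illustration.

```python
# Toy model (not real biometric math): treat a scan as k independent
# comparison points, each of which a random impostor matches with
# probability p. The chance of a full false match is then roughly p**k.
# All numbers below are invented purely to illustrate the trade-off.

def false_match_probability(points: int, per_point_match_prob: float) -> float:
    """Probability that a random impostor happens to match every point."""
    return per_point_match_prob ** points

# Hypothetical lower-resolution scanner: more points, but each point is
# coarser, so an impostor matches any single point more easily.
coarse = false_match_probability(points=500, per_point_match_prob=0.85)

# Hypothetical higher-resolution scanner: fewer points, each measured
# more finely, so a single-point match is less likely.
fine = false_match_probability(points=250, per_point_match_prob=0.75)

print(f"500 coarse points: ~{coarse:.1e}")   # ~5e-36
print(f"250 fine points:   ~{fine:.1e}")     # ~6e-32
```

With these invented numbers, the 500 coarser points come out ahead by a few orders of magnitude; pick different numbers and the result can flip, which is really the point: the count and distinctiveness of the comparison points matter at least as much as raw DPI.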
The accuracy of any biometric system stands or falls on how many measurements it can take and how distinctive the points it measures are. I doubt that horse race is over. My appreciation for Face ID has less to do with statistics (which can change with every new iteration of a system) and more to do with functionality: "touch vs. glance." I don't have to touch anything to be recognized. Like voice control vs. mouse-and-keyboard control, glancing and speaking can be done while your hands are otherwise occupied.