There are a few long threads on the subject. Here’s one. Do a search for more, but please don’t tangle up every thread on Face ID with the safety conversation.

Yeah, that one’s just as inconclusive as this one. Please don’t tell everyone that a potential health risk is absolutely, 100% nothing to worry about when you have no more knowledge on the subject than anyone else.

And if you don’t like the subject being raised in this thread, why are you putting your opinion on it across anyway?
 
Nicely done, and very interesting! You can probably get this photo published in a Wired or Verge piece.

The Verge already showcased this in their full review of the X using a video camcorder.
 
Random? Who said random? These are (now very clearly) patterned squares, each with a differing intensity apparently. I would venture that it's one patterned substrate duplicated through some sort of rectilinear (straight and gridded, rather than spherical) bug's-eye sort of lens.

View attachment 735098
Cool! If I cross my eyes, I can see a dolphin!
 
Yeah, that one’s just as inconclusive as this one. Please don’t tell everyone that a potential health risk is absolutely, 100% nothing to worry about when you have no more knowledge on the subject than anyone else.

And if you don’t like the subject being raised in this thread, why are you putting your opinion on it across anyway?
I didn’t say anything about the potential health effects. Why don’t you figure out to whom you’re responding before you pop off like a jerk?

I’m simply asking for common forum etiquette, which might be beyond you, so excuse me for treating you like a reasonable adult.
 
Hi! First post on the forum.

I'm a photographer by trade and got my iPhone X today.

I used a 720nm infrared camera and my iPhone X 256GB (using Animoji to ensure continuous read of Face ID dot matrix projection) to capture the image on the right. The image on the left is just an iPhone X selfie for reference. :)

It really shows the layout, size and accuracy of the dots... I thought it was impressive!

Hope you enjoy!

View attachment 731829

Excellent work. Thank you very much!

What would be interesting is a small video or gif to show if the dots are moving.
 
Uhm, now I see why hair is a bit of a problem with the front camera Portrait mode (according to various reviews)
 
Y’all can joke about it all you like, but the fact remains: this is a focused laser hitting your eyes multiple times a day. It’s absolutely not the same thing as ambient IR. There are several studies into this kind of thing that are worth looking at before presenting an off-the-top-of-your-head opinion as ‘fact’.

That's why the iPhone adapts to your changing appearance, when your face starts to melt :)
 
I didn’t say anything about the potential health effects. Why don’t you figure out to whom you’re responding before you pop off like a jerk?

I’m simply asking for common forum etiquette, which might be beyond you, so excuse me for treating you like a reasonable adult.

Sorry – my bad. I mistakenly assumed your post was a reply from Haruhiko.
 
Hi! First post on the forum.

I'm a photographer by trade and got my iPhone X today.

I used a 720nm infrared camera and my iPhone X 256GB (using Animoji to ensure continuous read of Face ID dot matrix projection) to capture the image on the right. The image on the left is just an iPhone X selfie for reference. :)

It really shows the layout, size and accuracy of the dots... I thought it was impressive!

Hope you enjoy!

View attachment 731829

Didn’t Apple say that with Face ID you are bombarded with 30,000 dots on your face? Your picture doesn’t show 30,000 dots on your face, but it’s interesting.
 
Uhm, now I see why hair is a bit of a problem with the front camera Portrait mode (according to various reviews)

Yeah, hair doesn't really reflect IR. The Portrait Mode camera just sort of sees what's close and what's far and applies a gaussian blur to the data in between. As someone with well-defined hair (tight curls), I find it really messes this up. It'd be less noticeable on someone with short, straight hair, or on a girl with smoother hair that extends below her jawline.

Didn’t Apple say that with Face ID you are bombarded with 30,000 dots on your face? Your picture doesn’t show 30,000 dots on your face, but it’s interesting.

Hahaha well "bombarding you with 30,000 dots" and "bombarding your face with 30,000 dots" are different, I guess!



Okay I counted the dots!

My photo doesn't show the entire map, but it's pretty close. Numbers are approximate but give a good idea.

I counted 138 dots per rectangle * 94 ~identical rectangles = 12,972 dots captured in my photo. Roughly.

Okay my counting could be off, and sure the photo doesn't show ALL of the dots, but 12,972 is only 43% of 30,000. I don't think I somehow more than halved the number of dots. Apple is exaggerating. Haha
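For anyone checking the arithmetic, here's the count as a quick sketch (the 138-per-tile and 94-tile figures are the approximate counts from the photo, not official numbers):

```python
# Rough dot-count estimate from the IR photo (all counts are approximate).
dots_per_rectangle = 138      # dots counted in one repeated tile
rectangles = 94               # near-identical tiles visible in the photo

captured = dots_per_rectangle * rectangles
print(captured)                         # 12972
print(round(100 * captured / 30000))    # 43 (% of Apple's quoted 30,000)
```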
 
Infrared Lasers. Popping them retinas since 2017!

Where is that tinfoil hat when I need one......
By the way, the sun has probably been doing a billion times more harm to your eyes, but it's probably not worth arguing this with you, by the look of your signature.

People nowadays :(
 
Okay I counted the dots!

My photo doesn't show the entire map, but it's pretty close. Numbers are approximate but give a good idea.

I counted 138 dots per rectangle * 94 ~identical rectangles = 12,972 dots captured in my photo. Roughly.

Okay my counting could be off, and sure the photo doesn't show ALL of the dots, but 12,972 is only 43% of 30,000. I don't think I somehow more than halved the number of dots. Apple is exaggerating. Haha
You said that you took the photo using Animoji. Are we sure the level of precision for that application is the same as when authenticating with Face ID? Maybe the accuracy is cut in half when using the TrueDepth camera for other applications. :D
 
Didn’t Apple say that with Face ID you are bombarded with 30,000 dots on your face? Your picture doesn’t show 30,000 dots on your face, but it’s interesting.

It's typical Apple PR speak designed to make something sound more than it is. Apple didn't actually claim that 30,000 dots hit our face... but only that that many were projected :)

Looks like there's (much) less than a thousand dots involved. Which is apparently enough.

I wonder what the significance of the asymmetrical alignment of the dots is, if significant at all for Face ID.

The original 2010 Kinect had the same type of pattern. Which is not a surprise, as it was licensed technology from a company called Prime Sense, which Apple later bought in 2013.

From what I gather, the primary reason for the speckled pattern is that it's easier to recognize which particular dot you're looking at, from the pattern of its surrounding dots. It's a self-marking scheme.
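The "self-marking" idea can be sketched roughly like this: if the pattern is locally unique, any observed dot can be identified purely from the quantized offsets to its nearest neighbors. Everything here (the random pattern, the fingerprint scheme, the grid resolution) is illustrative, not PrimeSense's actual encoding:

```python
# Sketch of a self-marking dot pattern: each dot is identified by a
# fingerprint of its nearby neighbors, so the pattern must be locally unique.
import random

random.seed(1)
# A pseudo-random projected pattern: dot id -> (x, y) in a unit square.
pattern = {i: (random.random(), random.random()) for i in range(200)}

def signature(dot_id, dots, k=4):
    """Quantized offsets to the k nearest neighbors: a local fingerprint."""
    x, y = dots[dot_id]
    nearest = sorted(
        (j for j in dots if j != dot_id),
        key=lambda j: (dots[j][0] - x) ** 2 + (dots[j][1] - y) ** 2,
    )[:k]
    return tuple(sorted(
        (round((dots[j][0] - x) * 100), round((dots[j][1] - y) * 100))
        for j in nearest
    ))

# Build a lookup table: neighborhood fingerprint -> dot id.
lookup = {signature(i, pattern): i for i in pattern}

# With a random-looking pattern, nearly every neighborhood is unique,
# so a dot seen by the camera can be named from its local context alone.
print(len(lookup))
```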

--

The original worked something like this: during assembly, the subsystem is forced to project the dot pattern at a test target, and the device-specific pattern viewed by the camera is burned into the iPhone as a flat template to check against. (The Kinect pattern used different size dots as well, meant for different depth fields.)

When later the system is trying to recognize you, it searches for key dots. Their X offset is compared against the stored flat template to determine primary depth markers. (Apparently you can ignore Y offsets from depth. Go figure.)

Then, using the assumption that surrounding dots cannot be too different in depth, it works its way outward from the primary dots to their surrounding dots, "growing" more depth info as it goes.
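The two steps described above, X-disparity against the factory template and then outward "growing", can be sketched like this. The triangulation formula is the standard structured-light one; the focal length, baseline, and tolerance numbers are made up for illustration, not PrimeSense calibration values:

```python
from collections import deque

def depth_from_disparity(disparity_px, ref_depth_m=1.0,
                         focal_px=580.0, baseline_m=0.075):
    """Triangulate depth from a dot's horizontal shift against the stored
    flat template: d = f*b*(1/z_ref - 1/z), solved for z. The projector and
    camera are offset horizontally, which is why only X shifts carry depth."""
    return 1.0 / (1.0 / ref_depth_m - disparity_px / (focal_px * baseline_m))

def grow_depths(seed_depths, neighbors, disparities, max_jump_m=0.05):
    """Breadth-first growth from confidently matched key dots: a neighbor's
    depth is accepted only if it is close to an already-solved dot."""
    depths = dict(seed_depths)
    queue = deque(depths)
    while queue:
        dot = queue.popleft()
        for nb in neighbors.get(dot, []):
            if nb in depths:
                continue
            z = depth_from_disparity(disparities[nb])
            if abs(z - depths[dot]) <= max_jump_m:  # neighbors can't differ much
                depths[nb] = z
                queue.append(nb)
    return depths
```

With zero disparity a dot sits at the reference depth, and each newly solved dot lets the search spread to its own neighbors, exactly the outward-growing behavior described above.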
 
  • Like
Reactions: HeadphoneAddict
The original worked something like this: during assembly, the subsystem is forced to project the dot pattern at a test target, and the device-specific pattern viewed by the camera is burned into the iPhone as a flat template to check against. (The Kinect pattern used different size dots as well, meant for different depth fields.)

When later the system is trying to recognize you, it searches for key dots. Their X offset is compared against the stored flat template to determine primary depth markers. (Apparently you can ignore Y offsets from depth. Go figure.)

Then, using the assumption that surrounding dots cannot be too different in depth, it works its way outward from the primary dots to their surrounding dots, "growing" more depth info as it goes.

I thought from reading some of the PrimeSense whitepapers (or research, I don't remember) that it was something about measuring the diffraction caused by subtle movements of the subject, or the camera, to determine depth. But what you say makes more sense...

The dot projector pulses too, so they could be using time of flight to determine distance for the dots.
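If the pulsing were used for time-of-flight ranging (just a guess in the line above; the numbers below are purely illustrative), the distance math would be simple round-trip timing, which also shows why ToF is hard at arm's length:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    # The pulse travels out and back, so halve the round-trip time.
    return C * round_trip_s / 2.0

# At typical Face ID range (~0.5 m) the round trip is only a few nanoseconds,
# so millimeter depth resolution would need picosecond-class timing.
round_trip = 2 * 0.5 / C
print(round_trip)   # ≈ 3.34e-9 seconds
```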

EDIT: Here is how it actually works. This is about primesense tech in the Kinect, but TrueDepth is actually the same tech since Apple acquired them.

https://courses.engr.illinois.edu/c... 25 - How the Kinect Works - CP Fall 2011.pdf
 
You said that you took the photo using Animoji. Are we sure the level of precision for that application is the same as when authenticating with Face ID? Maybe the accuracy is cut in half when using the TrueDepth camera for other applications. :D

Okay, here's a shot from using Face ID. Not as pretty as the others, but it demonstrates that no, Face ID does not use a different projection from Animoji.

This was before I used Animoji (which gives a continuous dot projection rather than just one or two; at this point I was locking the phone, hitting unlock, and quickly spamming the shutter on the IR DSLR, hoping I got the timing right).

DSC_6699.jpg

It's typical Apple PR speak designed to make something sound more than it is. Apple didn't actually claim that 30,000 dots hit our face... but only that that many were projected :)

Looks like there's (much) less than a thousand dots involved. Which is apparently enough.

I counted the dots. There are roughly 15,000 in total.
 
It's typical Apple PR speak designed to make something sound more than it is. Apple didn't actually claim that 30,000 dots hit our face... but only that that many were projected :)

Looks like there's (much) less than a thousand dots involved. Which is apparently enough.



The original 2010 Kinect had the same type of pattern. Which is not a surprise, as it was licensed technology from a company called Prime Sense, which Apple later bought in 2013.

From what I gather, the primary reason for the speckled pattern is that it's easier to recognize which particular dot you're looking at, from the pattern of its surrounding dots. It's a self-marking scheme.

--

The original worked something like this: during assembly, the subsystem is forced to project the dot pattern at a test target, and the device-specific pattern viewed by the camera is burned into the iPhone as a flat template to check against. (The Kinect pattern used different size dots as well, meant for different depth fields.)

When later the system is trying to recognize you, it searches for key dots. Their X offset is compared against the stored flat template to determine primary depth markers. (Apparently you can ignore Y offsets from depth. Go figure.)

Then, using the assumption that surrounding dots cannot be too different in depth, it works its way outward from the primary dots to their surrounding dots, "growing" more depth info as it goes.

Kind of what I thought. Sort of like an earth-based person looking at the sky for constellations.
 
EDIT: Here is how it actually works. This is about primesense tech in the Kinect, but TrueDepth is actually the same tech since Apple acquired them.

https://courses.engr.illinois.edu/cs498dh/fa2011/lectures/Lecture 25 - How the Kinect Works - CP Fall 2011.pdf

Yep, that's how it worked for Kinect gaming.

For Face ID, it's a bit different: instead of being concerned with figuring out the body and arm positions of multiple users, it's about 3D-mapping a single face.

I counted the dots. There are roughly 15,000 in total.

I meant the number of dots on just the face.

Cheers!
 
Could it be doing multiple scans? E.g., doing two flashes with the dot positions moved between captures, to allow a higher effective resolution with a lower number of actual dots?
 
Could it be doing multiple scans? E.g., doing two flashes with the dot positions moved between captures, to allow a higher effective resolution with a lower number of actual dots?

Yep, it totally could. But then you need the ability to project an A/B set of dots, or even more. The fact that they fit 15,000 dots into one LED/laser behind gorilla glass in the notch is impressive enough. I think verification only uses one projection, while registration, where you rotate your face around twice, would actually produce a very high-resolution map. So it might have 10,000 data points about your face to compare against, and as long as most of the ~1,000 that end up on your face in any one Face ID scan match up somewhere on that 10,000-point map inside the Secure Enclave, you're good.
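That sparse-scan-against-dense-enrollment idea can be sketched as a simple fraction-matched check. The tolerance, the point counts, and the whole comparison scheme here are made up for illustration; the real Secure Enclave matching is obviously far more sophisticated:

```python
import math

def face_id_match(scan_points, enrolled_map,
                  tolerance=0.005, required_fraction=0.8):
    """Pass if enough of the sparse scan's 3-D points land near *some* point
    of the dense enrolled map (alignment is assumed to be handled already)."""
    matched = sum(
        1 for p in scan_points
        if any(math.dist(p, q) < tolerance for q in enrolled_map)
    )
    return matched / len(scan_points) >= required_fraction

# A dense enrolled map (think ~10,000 points) vs a sparse live scan (~1,000):
enrolled = [(x * 0.01, y * 0.01, 0.0) for x in range(30) for y in range(30)]
live_scan = enrolled[::9]                    # sparse subset of the same surface
print(face_id_match(live_scan, enrolled))    # True
```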
 