ipedro
Original poster

I've been following disparate technologies for years that, combined with the direction Apple has taken with public products like iPhone (Face ID, LiDAR), Watch (miniaturization, display, battery efficiency), Mac (Apple silicon) and iOS (AR), are finally coming together into a product unlike anything on the market today. Very much like iPhone did in 2007.

Predictions:

These are going to be smaller than most people here are expecting. Apple will do away with traditional optical approaches and deliver sharpness in software.

TrueDepth sensors, made commonplace with Face ID, will measure the distance from the display to the eye and the shape of the eye. An infrared camera will measure the direction and dilation of the pupil to determine where the user is looking, including in three dimensions, since pupil dilation reflects focus and depth perception. Knowing the exact distance enables offsetting distortion in the opposite direction of a user's vision error.
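To illustrate how those measurements might combine, here's a toy sketch. The types, the constants and the pupil-to-depth mapping are all made up for illustration, not any real Apple API:

[CODE=swift]
import simd

// Toy types standing in for the measurements described above.
struct EyeSample {
    let eyeToDisplayMeters: Float    // distance from display to eye (TrueDepth-style sensor)
    let gazeDirection: simd_float3   // unit vector from the IR pupil camera
    let pupilDiameterMM: Float       // pupil dilation
}

// Maps pupil diameter to a rough focal distance, taking the post's premise at
// face value (dilation tracks focus). The constants are invented for illustration.
func estimatedFocalDepthMeters(from sample: EyeSample) -> Float {
    let nearPupil: Float = 2.0   // mm, assumed "focused up close"
    let farPupil: Float = 6.0    // mm, assumed "relaxed, focused far away"
    let t = min(max((sample.pupilDiameterMM - nearPupil) / (farPupil - nearPupil), 0), 1)
    return 0.3 + t * (10.0 - 0.3)   // interpolate between 0.3 m and ~10 m
}

// The 3D point a renderer would treat as "where the user is looking".
func gazePoint(from sample: EyeSample) -> simd_float3 {
    sample.gazeDirection * estimatedFocalDepthMeters(from: sample)
}
[/CODE]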

There'll be a standard eye test at setup. A slider, perhaps controlled by an Apple Watch-like Digital Crown, will be adjusted until the user can see every line of the chart sharply. I expect this technology to appear on iPhones and iPads (and future Macs with a TrueDepth array).
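A minimal sketch of what that setup flow could look like, assuming a hypothetical correction measured in diopters and a Digital Crown-style dial (invented names, not a real API):

[CODE=swift]
struct VisionProfile {
    var correctionDiopters: Float = 0.0
}

final class EyeTestCalibration {
    private(set) var profile = VisionProfile()

    // Each detent of the dial nudges the software correction by 0.25 diopters,
    // the same step size used for eyeglass prescriptions.
    func crownDidTurn(byDetents detents: Int) {
        profile.correctionDiopters += Float(detents) * 0.25
    }

    // Called when the user confirms every line of the chart looks sharp.
    func finish() -> VisionProfile {
        profile
    }
}
[/CODE]

The stored value would then feed whatever per-frame correction the renderer applies.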

This digital optical variability will enable dynamic adjustments many times per second, solving the edge blur that existing VR headsets suffer as they shift on your face. It may also address the motion sickness some users experience with VR headsets. Both are important problems to solve if Apple is going to push this segment into the mainstream.

Removing the unnecessary glass and mechanical adjustments traditionally needed to account for vision differences will enable a shorter profile ahead of the user and a much more lightweight assembly. The fact that an external power source is expected reinforces this: they're going for small and lightweight. This leads me to believe that it won't have a strap, but instead typical stems that sit on your ears.

All of the processing and a short-term battery will reside in the stems to keep the forward frame as streamlined as possible. For context on how this is possible, an Apple Watch's entire System in Package would fit along a single stem, leaving room for batteries in the tips of the stems, the farthest point from the face, for better weight distribution. Multiply that by two stems, each running an M-class chip, and you have a powerful computer on your face. Notably, Apple silicon has been designed to work in modules that can be connected to one another.

I'm not confident Apple is going to introduce external cameras, given the "Glasshole" trap Google stepped into. The same spatial awareness required for AR can be accomplished with LiDAR, developed on iPad and iPhone, which measures the external world as topography rather than capturing images of it. LiDAR can map hands in real time, doing away with the need for controllers. If they decide to include an external camera for photography and video, it'll be hidden behind mirrored or variable-opacity glass.
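To show that a depth map alone could, in principle, drive hand input, here's a deliberately crude toy algorithm that just isolates the nearest surface in front of the user. It's a sketch, not ARKit or any real hand-tracking API:

[CODE=swift]
struct DepthMap {
    let width: Int
    let height: Int
    let metersAtPixel: [Float]   // row-major depth values from the sensor
}

// Returns the centroid (in pixel coordinates) of everything closer than
// handRangeMeters, treating that nearest blob as the user's hand.
func roughHandPosition(in depth: DepthMap, handRangeMeters: Float = 0.6) -> (x: Float, y: Float)? {
    var sumX = 0.0
    var sumY = 0.0
    var count = 0
    for y in 0..<depth.height {
        for x in 0..<depth.width {
            let d = depth.metersAtPixel[y * depth.width + x]
            if d > 0, d < handRangeMeters {
                sumX += Double(x)
                sumY += Double(y)
                count += 1
            }
        }
    }
    guard count > 0 else { return nil }
    return (x: Float(sumX / Double(count)), y: Float(sumY / Double(count)))
}
[/CODE]

A real system would segment fingers and recognize pinches, but the point is that it can be done from topography alone.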

Because I don't think Apple is all that invested in full immersion, and Extended/Mixed Reality is the future they're headed towards, these glasses will have variable opacity, from fully opaque to tinted like sunglasses. To enable this, the displays will either have to be transparent OLEDs that you can see through when the front glass is transparent, or Apple has made advancements in retinal projection – projectors shooting light directly onto your retina to produce images.

This technology is going to disrupt opticians for the most common eye tests. Apple is getting into this ahead of Extended Reality glasses that look like standard glasses and use retinal projectors instead of displays. When this first generation morphs into full XR that you can wear around in public, every device will have to be custom made for each user's eyesight. Warby Parker has those logistics figured out; so will Apple. Having your iPhone's TrueDepth system perform the eye test would simplify the process, enabling mass-market scaling.

I've gone into the software (Memoji, ARKit) and real-world use cases in other threads, and those too will have been developed in secret, out in the open, as Apple has done extensively for decades now. New devices are unveiled, yet they feel familiar because we've already seen their components spread out amongst different features and technologies on existing devices before they're brought together in a single device that makes the experience cohesive.
 
Good point that they’ve already miniaturized and implemented [in iPhone] some of the technologies that will make their way into this product. I wonder if the authentication will be named something like “EyeID” and instead of moving your head around, you will move your eyes around in a circular motion (if only for knowing the shape of the eye for customization).
 

IrisID maybe.

I don't think you'd need to move your eye around. Unlike a face, which has protruding elements, an eye is spherical, and only the front of the eye would be needed to capture the iris. It wouldn't be using the topography of the eye; it would be capturing the iris pattern, which is far more distinctive than a face and even more so than a fingerprint. The IR camera could capture it in an instant snapshot, generate a unique code, and then use that code to unlock your device to your profile.
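Roughly how "snapshot, code, unlock" works in existing iris-recognition systems. This is a simplified stand-in, not Apple's design; the 0.32 threshold is a commonly cited ballpark from that literature, used only for illustration:

[CODE=swift]
struct IrisCode {
    let bits: [Bool]   // pattern extracted from an IR snapshot of the iris
}

// Fraction of bits that differ between two codes (normalized Hamming distance).
func hammingDistance(_ a: IrisCode, _ b: IrisCode) -> Double {
    precondition(a.bits.count == b.bits.count, "codes must be the same length")
    let differing = zip(a.bits, b.bits).filter { $0.0 != $0.1 }.count
    return Double(differing) / Double(a.bits.count)
}

// Unlock if a fresh capture is close enough to the enrolled template.
func shouldUnlock(enrolled: IrisCode, captured: IrisCode) -> Bool {
    hammingDistance(enrolled, captured) < 0.32
}
[/CODE]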

The TrueDepth array would be used for eyelid and eyebrow captures to determine emotion, which could be translated into a Memoji or Animoji.
 
Strong authentication via the iris could have major implications for viewing media. Any content provider would be virtually guaranteed that only the person who paid to view the content (movies, concerts, etc.) will be able to view it, with no worries about sharing with others and losing that revenue. So we could see better content as a result, because there would be a sustainable income stream instead of the content showing up on YouTube for free.
 
I think most of your predictions are incorrect. Here are a couple of the more egregious ones:
If you just had a transparent OLED screen in front of your eye, you couldn't focus on it. And if you put a lens in front of the screen so you could focus on it, you could no longer focus on the real world behind it. Transparent OLED technology is completely irrelevant to VR.
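A rough back-of-the-envelope with typical textbook values (not measurements of any actual device) shows the scale of the problem:

[CODE=latex]
% Accommodation needed to focus on a bare display ~3 cm from the eye:
A_{\text{needed}} \approx \frac{1}{d} = \frac{1}{0.03\ \text{m}} \approx 33\ \text{D}
% versus a typical adult accommodation amplitude of only about 4--10 D
% (declining with age), so some lens between eye and screen is unavoidable.
[/CODE]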
iPhone LIDAR doesn't work well in the dark, last time I checked. It can give you a sparse depth map, but it can't be used for SLAM... it can't be used to map the space or to figure out where you are in that space. And because this will almost certainly be an opaque device, cameras will be needed to show the real world to the user.
 

On the focus point: Apple is allegedly testing software-adjusted sharpness for personalized vision acuity on iPhone – something that is of course perfectly suited for extended reality glasses.

For this to work, you need to know a) precisely how far the eye is from the screen, b) the user's depth of field at any moment, and c) the user's vision acuity. The first can be accomplished with the TrueDepth array used for Face ID, the second can be tracked by measuring the pupil with an IR camera, and the third would be tuned at setup. It's entirely possible given that all these technologies exist, are proven in real-world use, are already in the iPhone, and can be ported to the glasses. All of them are relevant to both AR and VR.
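Tying a), b) and c) together, a minimal sketch of the kind of per-frame value a renderer could consume; the names and the arithmetic are assumptions for illustration, not a known Apple pipeline:

[CODE=swift]
struct FrameEyeState {
    let eyeToDisplayMeters: Float        // (a) from the TrueDepth-style sensor
    let focusDepthMeters: Float          // (b) inferred from the IR pupil camera
    let acuityCorrectionDiopters: Float  // (c) from the setup eye test
}

// Net correction, in diopters, to bake into the rendered image for this frame:
// the user's prescription plus the difference between where the content should
// appear to be and where the physical display actually sits.
func renderCorrection(for state: FrameEyeState) -> Float {
    let displayDemand = 1.0 / max(state.eyeToDisplayMeters, 0.01)
    let sceneDemand = 1.0 / max(state.focusDepthMeters, 0.01)
    return state.acuityCorrectionDiopters + (sceneDemand - displayDemand)
}
[/CODE]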

An opacity-adjustable layer of glass on the far side of the screen could darken the area behind the screen to produce an isolated VR environment, or turn transparent to see out into the world with superimposed images for AR, their depth of field adjusted to match that of the world outside.


As for LiDAR in the dark: LiDAR works in pitch darkness. It doesn't rely on visible light.
 
I worded that poorly.
Yes, the LIDAR projector and sensor work in the dark, but LIDAR-equipped Apple devices can't do SLAM (simultaneous localization and mapping) in the dark. In other words, they can't build a 3D model of the environment and track their own position within it.
The visible-light cameras are a more important part of that process than the LIDAR sensor, which is why Apple devices without LIDAR can do AR, but devices with LIDAR can't do AR tracking in the dark.
 